
AVEVA™ Unified Engineering

Hyperparameter values for predAi ML.NET algorithms

  • Last Updated: Feb 04, 2026
  • 3 minute read

Non-Tree Based Algorithms

Non-tree-based algorithms are machine learning approaches that operate without decision trees. Instead, they learn by optimizing a mathematical function, for example, finding the best line or curve that fits the data. Examples include linear and logistic regression.
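
To illustrate "finding the best line that fits the data", here is a minimal least-squares sketch in NumPy (purely illustrative; not part of predAi):

```python
import numpy as np

# Fit y = a*x + b by least squares: the "best line" is the one that
# minimizes the sum of squared residuals.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0                            # noise-free points on the line y = 2x + 1

A = np.column_stack([x, np.ones_like(x)])    # design matrix [x, 1]
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(a, b)                                  # recovers slope 2 and intercept 1
```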

The non-tree-based algorithms are LBFGS and SDCA. Their hyperparameters are listed below.

LBFGS (Limited-memory Broyden–Fletcher–Goldfarb–Shanno)

| Name | Value | Description | Source | Range |
|---|---|---|---|---|
| HistorySize | 20 | Determines how many previous steps the algorithm retains to improve the current update. | Value taken from ML.NET DLL | [10, 50] |
| NumberOfThreads | CPU cores | Specifies how many processor cores are used in parallel to speed up training. | Value taken from ML.NET DLL | [1, CPU cores] |
| L1Regularization | 1F | Controls how strongly the model is encouraged to push certain values to zero for simplicity. | Value taken from ML.NET DLL | [0, 10] |
| L2Regularization | 1F | Determines the degree to which the model keeps values small and balanced. | Value taken from ML.NET DLL | [0, 10] |
| OptimizationTolerance | 1e-7 | Defines how close the algorithm aims to get to the optimal solution before stopping. | Value taken from ML.NET DLL | [1e-7, 1e-4] |
| EnforceNonNegativity | false | Specifies whether the model is constrained to use only non-negative values. | Value taken from ML.NET DLL | false (best) |
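
The same knobs appear in SciPy's L-BFGS-B implementation, where `maxcor` plays the role of HistorySize and `ftol` that of OptimizationTolerance. A sketch of the mapping (illustrative only, not the predAi code):

```python
import numpy as np
from scipy.optimize import minimize

# Simple convex objective: f(x) = ||x - 3||^2, minimized at x = [3, 3, 3].
def f(x):
    return np.sum((x - 3.0) ** 2)

res = minimize(
    f,
    x0=np.zeros(3),
    method="L-BFGS-B",
    options={
        "maxcor": 20,  # history size: number of past updates kept (cf. HistorySize = 20)
        "ftol": 1e-7,  # stop when the relative change in f is this small (cf. OptimizationTolerance)
    },
)
print(res.x)           # converges to approximately [3, 3, 3]
```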

SDCA (Stochastic Dual Coordinate Ascent)

| Name | Value | Description | Source | Range |
|---|---|---|---|---|
| L2Regularization | 0.001 | Regulates the degree to which the model keeps values small and well balanced to prevent overfitting. | ElasticNet — scikit-learn 1.7.1 documentation | [0.001, 10] |
| L1Regularization | 0.5 | Controls the degree to which the model pushes its values toward zero for a cleaner solution. | ElasticNet — scikit-learn 1.7.1 documentation | [0, 10] |
| NumberOfThreads | 1 | Specifies how many processor cores are used in parallel to speed up training. | Standard | [1] |
| ConvergenceTolerance | 0.1F | Defines the level of accuracy the algorithm must reach before it stops. | Value taken from ML.NET DLL | [1e-6, 0.1] |
| ConvergenceCheckFrequency | CPU cores | Specifies how frequently the algorithm checks whether it is close enough to the optimal solution. | Value taken from ML.NET DLL | [1, CPU cores] |
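
The Source column above cites scikit-learn's ElasticNet, which, like this trainer, combines an L1 and an L2 penalty with a convergence tolerance. A minimal cyclic coordinate-descent sketch in plain NumPy shows how those hyperparameters interact (an illustration of the idea only, not ML.NET's implementation; the names `l1`, `l2`, `tol`, and `check_every` are hypothetical stand-ins for the table's entries):

```python
import numpy as np

def soft_threshold(rho, l1):
    """L1 proximal step: shrink rho toward zero by l1."""
    return np.sign(rho) * max(abs(rho) - l1, 0.0)

def elastic_net_cd(X, y, l1=0.5, l2=0.001, tol=1e-6, check_every=1, max_sweeps=1000):
    """Cyclic coordinate descent for 0.5/n*||y - Xw||^2 + l1*||w||_1 + 0.5*l2*||w||^2."""
    n, d = X.shape
    w = np.zeros(d)
    col_norm = (X ** 2).sum(axis=0) / n          # per-coordinate curvature, precomputed
    for sweep in range(max_sweeps):
        w_old = w.copy()
        for j in range(d):
            resid = y - X @ w + X[:, j] * w[j]   # residual ignoring coordinate j
            rho = X[:, j] @ resid / n
            w[j] = soft_threshold(rho, l1) / (col_norm[j] + l2)
        # Analogues of ConvergenceTolerance and ConvergenceCheckFrequency:
        if (sweep + 1) % check_every == 0 and np.max(np.abs(w - w_old)) < tol:
            break
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([2.0, -1.0, 0.0])               # true weights: [2, -1, 0]
w = elastic_net_cd(X, y, l1=0.001, l2=0.001)     # w is approximately [2, -1, 0]
```

Larger `l1`/`l2` values shrink the recovered weights further; a looser `tol` stops the sweeps earlier.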

Tree-Based Algorithms

The tree-based algorithms are FastTree, FastForest, and LightGBM.

FastTree

| Name | Value | Description | Source | Range |
|---|---|---|---|---|
| FeatureFraction | 1.0 | Specifies the fraction of features used to build each tree; with 1.0, all features are included every time. | Value taken from ML.NET DLL | [0.5, 1] |
| FeatureFirstUsePenalty | 0.1 | Penalty applied when a feature is used for the first time, to encourage feature reuse. | Standard | [0, 5] |
| NumberOfTrees | 100 | Specifies the number of decision trees the model builds; more trees can improve accuracy but increase training time. | Value taken from ML.NET DLL | [50, 1000] |
| NumberOfLeaves | 20 | Sets the maximum number of leaves allowed per tree, which controls the tree's complexity. | Value taken from ML.NET DLL | [10, 500] |
| MinimumExampleCountPerLeaf | 10 | Minimum number of data samples required in a leaf, to avoid overfitting. | Value taken from ML.NET DLL | [1, 50] |
| Seed | 42 | Seeds the random number generator to ensure reproducible results. | Value taken from ML.NET DLL | 42 |
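
To make NumberOfTrees, learning-rate shrinkage, and MinimumExampleCountPerLeaf concrete, here is a toy gradient-boosting sketch using depth-1 "stumps" on 1-D data (plain NumPy; illustrative only, not FastTree's actual boosted-tree implementation):

```python
import numpy as np

def fit_stump(x, y, min_leaf=10):
    """Best single-threshold split by squared error; each leaf keeps >= min_leaf samples."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best = None
    for i in range(min_leaf, len(xs) - min_leaf + 1):
        left, right = ys[:i].mean(), ys[i:].mean()
        err = ((ys[:i] - left) ** 2).sum() + ((ys[i:] - right) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, xs[i], left, right)
    _, thr, left, right = best
    return thr, left, right

def stump_predict(stump, x):
    thr, left, right = stump
    return np.where(x < thr, left, right)

def boost_fit(x, y, n_trees=100, learning_rate=0.1, min_leaf=10):
    """Gradient boosting for squared loss: each stump fits the current residuals."""
    base = y.mean()
    pred = np.full_like(y, base, dtype=float)
    stumps = []
    for _ in range(n_trees):
        stump = fit_stump(x, y - pred, min_leaf)      # fit the residuals
        pred += learning_rate * stump_predict(stump, x)
        stumps.append(stump)
    return base, stumps

def boost_predict(model, x, learning_rate=0.1):
    base, stumps = model
    out = np.full_like(x, base, dtype=float)
    for s in stumps:
        out += learning_rate * stump_predict(s, x)
    return out

x = np.linspace(0.0, 1.0, 200)
y = (x > 0.5).astype(float)     # a step function that a single split can capture
model = boost_fit(x, y)         # 100 stumps, shrinkage 0.1, min 10 samples per leaf
pred = boost_predict(model, x)  # training error shrinks geometrically with each tree
```

More trees (NumberOfTrees) drive the residual down further, while `min_leaf` (MinimumExampleCountPerLeaf) prevents any leaf from memorizing a handful of points.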

LightGBM (Light Gradient Boosting Machine)

| Name | Value | Description | Source | Range |
|---|---|---|---|---|
| NumberOfLeaves | 31 | Sets the maximum number of leaves a tree can have, limiting how complex it can become. | Parameters — LightGBM 4.6.0.99 documentation | [31, 512] |
| LearningRate | 0.1F | Specifies the step size used to update model weights; smaller values mean slower but more precise learning. | Parameters — LightGBM 4.6.0.99 documentation | [0.01, 0.3] |
| L2CategoricalRegularization | 10 | Applies a penalty to categorical features to help reduce overfitting. | Value taken from ML.NET DLL | [0, 10] |
| NumberOfThreads | CPU cores | Determines the number of processor cores running simultaneously to accelerate training. | Standard | [1, CPU cores] |
| MaximumBinCountPerFeature | 255 | Sets the maximum number of bins used to bucket continuous features for splitting. | Value taken from ML.NET DLL | [32, 512] |
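
For reference, the native parameter names in the LightGBM documentation cited above map onto the table as follows (a sketch of a config fragment; predAi itself sets these values through ML.NET):

```python
# Mapping of the table's hyperparameters onto LightGBM's native parameter names.
lgbm_params = {
    "num_leaves": 31,      # NumberOfLeaves
    "learning_rate": 0.1,  # LearningRate
    "cat_l2": 10,          # L2CategoricalRegularization
    "num_threads": 0,      # NumberOfThreads; 0 lets LightGBM use all available cores
    "max_bin": 255,        # MaximumBinCountPerFeature
}
```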

The tables above list the hyperparameters for each algorithm, including their default values, descriptions, sources, and valid ranges. Hyperparameters taken from standard online references carry a link in the Source column; all other values reflect the default settings used in the Microsoft ML.NET framework.
