The main parameters in XGBoost and their effects on model performance

The learning rate (eta) controls how much each new tree contributes to the ensemble: the predictions of each tree are scaled by eta before being added to the model, so it acts as a shrinkage factor. A smaller eta makes each boosting round more conservative, which typically improves generalization but requires more rounds to converge; a larger eta trains faster but gives each tree more influence, making the model more prone to overfitting. It is common to start with a moderately high value and then reduce it, for example starting at eta = 0.1 and decreasing it by a factor of 0.1 every 10 rounds. However, setting eta too small leads to slow convergence (and can underfit if the number of boosting rounds is capped), while setting it too high can cause overfitting or prevent the model from converging to a good solution.
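The shrinkage effect of eta can be illustrated with a toy boosting loop. This is a hypothetical sketch in plain Python, not XGBoost's actual implementation: the "weak learner" here is just the mean residual, and eta scales its contribution each round, so a small eta visibly needs many more rounds to fit the same target.

```python
# Toy gradient-boosting sketch (NOT XGBoost itself) showing how eta
# shrinks each round's contribution to the ensemble prediction.

def boost(y, eta, n_rounds):
    """Fit a constant 'stump' (the mean residual) each round,
    scaled by the learning rate eta."""
    pred = [0.0] * len(y)
    for _ in range(n_rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        step = sum(residuals) / len(residuals)  # weak learner: mean residual
        pred = [p + eta * step for p in pred]   # eta shrinks the update
    return pred

y = [3.0, 3.0, 3.0]  # target values

# With the same number of rounds, a larger eta gets close to the target,
# while a very small eta has barely moved toward it.
fast = boost(y, eta=0.5, n_rounds=20)
slow = boost(y, eta=0.01, n_rounds=20)
```

After 20 rounds, `fast` is essentially at the target while `slow` is still far from it, which is why a small eta must be paired with a larger number of boosting rounds (in XGBoost, the `num_boost_round` / `n_estimators` setting).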

Tags: XGBoost