How does gradient boosting work?


Gradient boosting is a machine learning boosting technique. It relies on the intuition that the best possible next model, when combined with the previous models, minimizes the overall prediction error. The key idea is to set the target outcomes for this next model so that adding it reduces the error as much as possible.
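To make the idea concrete, here is a minimal sketch of gradient boosting for regression with squared error, written from scratch on top of scikit-learn decision trees. It is illustrative only: names such as gradient_boost_fit, n_estimators, and learning_rate are our own rather than any library's API, and X and y are assumed to be NumPy arrays.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_boost_fit(X, y, n_estimators=100, learning_rate=0.1, max_depth=3):
    # Start from a constant prediction (the mean minimizes squared error).
    init = np.mean(y)
    pred = np.full_like(y, init, dtype=float)
    trees = []
    for _ in range(n_estimators):
        # For squared error, the negative gradient is simply the residual:
        # the "target outcome" that most reduces the current error.
        residuals = y - pred
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residuals)
        # Each new tree nudges the ensemble prediction toward those targets.
        pred += learning_rate * tree.predict(X)
        trees.append(tree)
    return init, trees

def gradient_boost_predict(X, init, trees, learning_rate=0.1):
    pred = np.full(X.shape[0], init, dtype=float)
    for tree in trees:
        pred += learning_rate * tree.predict(X)
    return pred

Each iteration fits a small tree to the current residuals and adds a scaled version of its predictions to the ensemble, which is the essence of the "next model that minimizes the overall error."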

Gradient Boosting is our go-to technique for solving general classification and regression problems.

This post also explains, in a few sentences, the two main tree-growing strategies used in Gradient Boosting Decision Trees: level-wise and leaf-wise.

Level-wise strategy:

– Grow the tree level by level. 

– The data is split prioritizing the nodes closer to the tree root.

– Better for smaller datasets.

Leaf-wise strategy:

– Grow the tree asymmetrically.

– The data is split at the nodes with the highest loss change.

– Better for larger datasets.

LightGBM uses the leaf-wise strategy by default, whereas XGBoost uses the level-wise strategy by default and offers an option to grow trees leaf-wise.
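The sketch below shows how these strategies are typically selected through the scikit-learn wrappers of LightGBM and XGBoost. Treat it as an illustration assuming recent versions of both packages; parameter defaults and accepted combinations can vary across releases.

from lightgbm import LGBMRegressor
from xgboost import XGBRegressor

# LightGBM grows trees leaf-wise by default; num_leaves caps tree complexity.
lgbm_model = LGBMRegressor(num_leaves=31, learning_rate=0.1, n_estimators=200)

# XGBoost defaults to the level-wise ("depthwise") growth policy...
xgb_levelwise = XGBRegressor(grow_policy="depthwise", max_depth=6,
                             learning_rate=0.1, n_estimators=200)

# ...but can switch to leaf-wise growth with grow_policy="lossguide",
# typically used together with the histogram tree method.
xgb_leafwise = XGBRegressor(tree_method="hist", grow_policy="lossguide",
                            max_leaves=31, learning_rate=0.1, n_estimators=200)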

Why do we use gradient boosting?

In conclusion, gradient boosting is generally useful when we want to decrease the bias error. It applies to regression as well as classification problems. In regression problems, the cost function is typically mean squared error (MSE), whereas in classification problems the cost function is log-loss.
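As a rough illustration of those two cost functions, the snippet below writes out MSE and binary log-loss together with their negative gradients, which serve as the targets each new tree is fit to. The function names are our own, pred is assumed to be the raw model score (log-odds in the classification case), and constant factors in the gradients are dropped.

import numpy as np

def mse_loss(y, pred):
    return np.mean((y - pred) ** 2)

def mse_negative_gradient(y, pred):
    # The residual: the target for the next regression tree.
    return y - pred

def log_loss(y, pred):
    # Binary log-loss on raw scores; p is the predicted probability.
    p = 1.0 / (1.0 + np.exp(-pred))
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def log_loss_negative_gradient(y, pred):
    # Label minus predicted probability plays the role of the residual.
    p = 1.0 / (1.0 + np.exp(-pred))
    return y - p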
