The goal of ensemble methods is to combine the predictions of several base estimators built with a given learning algorithm in order to improve generalizability / robustness over a single estimator.
Two families of ensemble methods are usually distinguished:

In averaging methods, the driving principle is to build several estimators independently and then to average their predictions. On average, the combined estimator is usually better than any of the single base estimators because its variance is reduced. Examples: bagging methods, forests of randomized trees.

By contrast, in boosting methods, base estimators are built sequentially and one tries to reduce the bias of the combined estimator. The motivation is to combine several weak models to produce a powerful ensemble. Examples: AdaBoost, gradient tree boosting.
1.11.1. Bagging meta-estimator

In ensemble algorithms, bagging methods form a class of algorithms which build several instances of a black-box estimator on random subsets of the original training set and then aggregate their individual predictions to form a final prediction. These methods are used as a way to reduce the variance of a base estimator (e.g., a decision tree), by introducing randomization into its construction procedure and then making an ensemble out of it. In many cases, bagging methods constitute a very simple way to improve with respect to a single model, without making it necessary to adapt the underlying base algorithm. As they provide a way to reduce overfitting, bagging methods work best with strong and complex models (e.g., fully developed decision trees), in contrast with boosting methods which usually work best with weak models (e.g., shallow decision trees). Bagging methods come in many flavours but mostly differ from each other by the way they draw random subsets of the training set:

When random subsets of the dataset are drawn as random subsets of the samples, this method is known as Pasting. When samples are drawn with replacement, the method is known as Bagging. When random subsets of the dataset are drawn as random subsets of the features, the method is known as Random Subspaces. Finally, when base estimators are built on subsets of both samples and features, the method is known as Random Patches.
In scikit-learn, bagging methods are offered as a unified BaggingClassifier meta-estimator (resp. BaggingRegressor), taking as input a user-specified estimator along with parameters specifying the strategy to draw random subsets. In particular, max_samples and max_features control the size of the subsets (in terms of samples and features), while bootstrap and bootstrap_features control whether samples and features are drawn with or without replacement. When using a subset of the available samples, the generalization accuracy can be estimated with the out-of-bag samples by setting oob_score=True. As an example, the snippet below shows how to instantiate a bagging ensemble of KNeighborsClassifier estimators, each built on random subsets of 50% of the samples and 50% of the features:
>>> from sklearn.ensemble import BaggingClassifier
>>> from sklearn.neighbors import KNeighborsClassifier
>>> bagging = BaggingClassifier(KNeighborsClassifier(),
...                             max_samples=0.5, max_features=0.5)

1.11.2. Forests of randomized trees

The sklearn.ensemble module includes two averaging algorithms based on randomized decision trees: the RandomForest algorithm and the Extra-Trees method. Both algorithms are perturb-and-combine techniques specifically designed for trees. This means a diverse set of classifiers is created by introducing randomness in the classifier construction. The prediction of the ensemble is given as the averaged prediction of the individual classifiers.
As other classifiers, forest classifiers have to be fitted with two arrays: a sparse or dense array X of shape (n_samples, n_features) holding the training samples, and an array Y of shape (n_samples,) holding the target values (class labels) for the training samples:

>>> from sklearn.ensemble import RandomForestClassifier
>>> X = [[0, 0], [1, 1]]
>>> Y = [0, 1]
>>> clf = RandomForestClassifier(n_estimators=10)
>>> clf = clf.fit(X, Y)

Like decision trees, forests of trees also extend to multi-output problems (if Y is an array of shape (n_samples, n_outputs)).

1.11.2.1. Random Forests

In random forests (see the RandomForestClassifier and RandomForestRegressor classes), each tree in the ensemble is built from a sample drawn with replacement (i.e., a bootstrap sample) from the training set. Furthermore, when splitting each node during the construction of a tree, the best split is found either from all input features or a random subset of size max_features.

The purpose of these two sources of randomness is to decrease the variance of the forest estimator. Indeed, individual decision trees typically exhibit high variance and tend to overfit. The injected randomness in forests yields decision trees with somewhat decoupled prediction errors. By taking an average of those predictions, some errors can cancel out. Random forests achieve a reduced variance by combining diverse trees, sometimes at the cost of a slight increase in bias. In practice the variance reduction is often significant, hence yielding an overall better model.

In contrast to the original publication [B2001], the scikit-learn implementation combines classifiers by averaging their probabilistic predictions, instead of letting each classifier vote for a single class.

1.11.2.2. Extremely Randomized Trees

In extremely randomized trees (see the ExtraTreesClassifier and ExtraTreesRegressor classes), randomness goes one step further in the way splits are computed. As in random forests, a random subset of candidate features is used, but instead of looking for the most discriminative thresholds, thresholds are drawn at random for each candidate feature and the best of these randomly-generated thresholds is picked as the splitting rule. This usually allows the variance of the model to be reduced a bit more, at the expense of a slightly greater increase in bias:
>>> from sklearn.model_selection import cross_val_score
>>> from sklearn.datasets import make_blobs
>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.ensemble import ExtraTreesClassifier
>>> from sklearn.tree import DecisionTreeClassifier

>>> X, y = make_blobs(n_samples=10000, n_features=10, centers=100,
...                   random_state=0)

>>> clf = DecisionTreeClassifier(max_depth=None, min_samples_split=2,
...                              random_state=0)
>>> scores = cross_val_score(clf, X, y, cv=5)
>>> scores.mean()
0.98...

>>> clf = RandomForestClassifier(n_estimators=10, max_depth=None,
...                              min_samples_split=2, random_state=0)
>>> scores = cross_val_score(clf, X, y, cv=5)
>>> scores.mean()
0.999...

>>> clf = ExtraTreesClassifier(n_estimators=10, max_depth=None,
...                            min_samples_split=2, random_state=0)
>>> scores = cross_val_score(clf, X, y, cv=5)
>>> scores.mean() > 0.999
True

1.11.2.3. Parameters
The main parameters to adjust when using these methods are n_estimators and max_features. The former is the number of trees in the forest: the larger the better, but also the longer the computation will take; in addition, results will stop getting significantly better beyond a critical number of trees. The latter is the size of the random subsets of features to consider when splitting a node: the lower the value, the greater the reduction of variance, but also the greater the increase in bias.

Note: The size of the model with the default parameters is \(O(M \cdot N \cdot \log(N))\), where \(M\) is the number of trees and \(N\) is the number of samples. In order to reduce the size of the model, you can change parameters such as min_samples_split, max_leaf_nodes, max_depth and min_samples_leaf.

1.11.2.4. Parallelization

Finally, this module also features the parallel construction of the trees and the parallel computation of the predictions through the n_jobs parameter.

1.11.2.5. Feature importance evaluation

The relative rank (i.e. depth) of a feature used as a decision node in a tree can be used to assess the relative importance of that feature with respect to the predictability of the target variable. Features used at the top of the tree contribute to the final prediction decision of a larger fraction of the input samples. The expected fraction of the samples they contribute to can thus be used as an estimate of the relative importance of the features. In scikit-learn, the fraction of samples a feature contributes to is combined with the decrease in impurity from splitting them to create a normalized estimate of the predictive power of that feature.

By averaging the estimates of predictive ability over several randomized trees, one can reduce the variance of such an estimate and use it for feature selection. This is known as the mean decrease in impurity, or MDI. Refer to [L2014] for more information on MDI and feature importance evaluation with Random Forests.

Warning: The impurity-based feature importances computed on tree-based models suffer from two flaws that can lead to misleading conclusions. First, they are computed on statistics derived from the training dataset and therefore do not necessarily inform us about which features are most important to make good predictions on a held-out dataset. Secondly, they favor high-cardinality features, that is, features with many unique values. Permutation feature importance is an alternative to impurity-based feature importance that does not suffer from these flaws. These two methods of obtaining feature importance are explored in: Permutation Importance vs Random Forest Feature Importance (MDI).

The following example shows a color-coded representation of the relative importances of
each individual pixel for a face recognition task using an ExtraTreesClassifier model. In practice those estimates are stored as an attribute named feature_importances_ on the fitted model. This is an array with shape (n_features,) whose values are positive and sum to 1.0: the higher the value, the more important the contribution of the matching feature to the prediction function.
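As a minimal, hedged sketch (on a synthetic dataset rather than the face-recognition example above), the attribute can be inspected directly on any fitted forest:

>>> from sklearn.datasets import make_classification
>>> from sklearn.ensemble import RandomForestClassifier
>>> X, y = make_classification(n_samples=1000, n_features=10,
...                            n_informative=3, random_state=0)
>>> forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
>>> forest.feature_importances_.shape  # one normalized score per feature
(10,)
>>> most_important = forest.feature_importances_.argmax()  # index of the top feature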
1.11.2.6. Totally Random Trees Embedding

RandomTreesEmbedding implements an unsupervised transformation of the data. Using a forest of completely random trees, it encodes the data by the indices of the leaves a data point ends up in. This index is then encoded in a one-of-K manner, leading to a high-dimensional, sparse binary coding. This coding can be computed very efficiently and can then be used as a basis for other learning tasks. The size and sparsity of the code can be influenced by choosing the number of trees and the maximum depth per tree.

As neighboring data points are more likely to lie within the same leaf of a tree, the transformation performs an implicit, non-parametric density estimation.

See also: Manifold learning techniques can also be useful to derive non-linear representations of feature space, but these approaches focus on dimensionality reduction.

1.11.3. AdaBoost

The module sklearn.ensemble includes the popular boosting algorithm AdaBoost, introduced in 1995 by Freund and Schapire.

The core principle of AdaBoost is to fit a sequence of weak learners (i.e., models that are only slightly better than random guessing, such as small decision trees) on repeatedly modified versions of the data. The predictions from all of them are then combined through a weighted majority vote (or sum) to produce the final prediction. The data modifications at each so-called boosting iteration consist of applying weights \(w_1\), \(w_2\), …, \(w_N\) to each of the training samples. Initially, those weights are all set to \(w_i = 1/N\), so that the first step simply trains a weak learner on the original data. For each successive iteration, the sample weights are individually modified and the learning algorithm is reapplied to the reweighted data. At a given step, those training examples that were incorrectly predicted by the boosted model induced at the previous step have their weights increased, whereas the weights are decreased for those that were predicted correctly. As iterations proceed, examples that are difficult to predict receive ever-increasing influence. Each subsequent weak learner is thereby forced to concentrate on the examples that are missed by the previous ones in the sequence [HTF].

AdaBoost can be used both for classification and regression problems: for multi-class classification, AdaBoostClassifier implements AdaBoost.SAMME, while for regression, AdaBoostRegressor implements AdaBoost.R2.
1.11.3.1. Usage

The following example shows how to fit an AdaBoost classifier with 100 weak learners:

>>> from sklearn.model_selection import cross_val_score
>>> from sklearn.datasets import load_iris
>>> from sklearn.ensemble import AdaBoostClassifier

>>> X, y = load_iris(return_X_y=True)
>>> clf = AdaBoostClassifier(n_estimators=100)
>>> scores = cross_val_score(clf, X, y, cv=5)
>>> scores.mean()
0.9...

The number of weak learners is controlled by the parameter n_estimators. The learning_rate parameter controls the contribution of the weak learners in the final combination. By default, weak learners are decision stumps. The main parameters to tune to obtain good results are n_estimators and the complexity of the base estimators (e.g., their depth or the minimum required number of samples to consider a split).

1.11.4. Gradient Tree Boosting

Gradient Tree Boosting or Gradient Boosted Decision Trees (GBDT) is a generalization of boosting to arbitrary differentiable loss functions, see the seminal work of [Friedman2001]. GBDT is an accurate and effective off-the-shelf procedure that can be used for both regression and classification problems in a variety of areas including Web search ranking and ecology.

The module sklearn.ensemble provides methods for both classification and regression via gradient boosted decision trees, namely GradientBoostingClassifier and GradientBoostingRegressor. The usage and the parameters of these two estimators are described below; the two most important parameters are n_estimators and learning_rate.

1.11.4.1. Classification

GradientBoostingClassifier supports both binary and multi-class classification. The following example shows how to fit a gradient boosting classifier with 100 decision stumps as weak learners:
>>> from sklearn.datasets import make_hastie_10_2
>>> from sklearn.ensemble import GradientBoostingClassifier

>>> X, y = make_hastie_10_2(random_state=0)
>>> X_train, X_test = X[:2000], X[2000:]
>>> y_train, y_test = y[:2000], y[2000:]

>>> clf = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0,
...     max_depth=1, random_state=0).fit(X_train, y_train)
>>> clf.score(X_test, y_test)
0.913...

The number of weak learners (i.e. regression trees) is controlled by the parameter n_estimators; the size of each tree can be controlled either by setting the tree depth via max_depth or by setting the number of leaf nodes via max_leaf_nodes. The learning_rate is a hyper-parameter in the range (0.0, 1.0] that controls overfitting via shrinkage.

Note: Classification with more than 2 classes requires the induction of n_classes regression trees at each iteration, thus the total number of induced trees equals n_classes * n_estimators. For datasets with a large number of classes we strongly recommend to use HistGradientBoostingClassifier as an alternative to GradientBoostingClassifier.

1.11.4.2. Regression

GradientBoostingRegressor supports a number of different loss functions for regression which can be specified via the argument loss; the default loss function for regression is squared error ('squared_error'):
>>> import numpy as np
>>> from sklearn.metrics import mean_squared_error
>>> from sklearn.datasets import make_friedman1
>>> from sklearn.ensemble import GradientBoostingRegressor

>>> X, y = make_friedman1(n_samples=1200, random_state=0, noise=1.0)
>>> X_train, X_test = X[:200], X[200:]
>>> y_train, y_test = y[:200], y[200:]

>>> est = GradientBoostingRegressor(
...     n_estimators=100, learning_rate=0.1, max_depth=1, random_state=0,
...     loss='squared_error'
... ).fit(X_train, y_train)
>>> mean_squared_error(y_test, est.predict(X_test))
5.00...

The figure below shows the results of applying GradientBoostingRegressor with squared-error loss and 500 base learners to the diabetes dataset. The plot shows the train and test error at each iteration: the train error at each iteration is stored in the train_score_ attribute of the gradient boosting model, while the test error at each iteration can be obtained via the staged_predict method, which returns a generator yielding the predictions at each stage. Plots like these can be used to determine the optimal number of trees (i.e. n_estimators) by early stopping.
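For illustration, the per-iteration test error mentioned above can be computed with staged_predict; this short, hedged sketch reuses est, X_test and y_test from the snippet above:

>>> import numpy as np
>>> # one MSE value per boosting iteration, computed on the held-out set
>>> test_errors = [mean_squared_error(y_test, y_pred)
...                for y_pred in est.staged_predict(X_test)]
>>> best_iteration = int(np.argmin(test_errors)) + 1  # iteration with the lowest test MSE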
1.11.4.3. Fitting additional weak-learners

Both GradientBoostingRegressor and GradientBoostingClassifier support warm_start=True, which allows you to add more estimators to an already fitted model:
>>> _ = est.set_params(n_estimators=200, warm_start=True)  # set warm_start and a new number of trees
>>> _ = est.fit(X_train, y_train)  # fit 100 additional trees to est
>>> mean_squared_error(y_test, est.predict(X_test))
3.84...

1.11.4.4. Controlling the tree size
The size of the regression tree base learners defines the level of variable interactions that can be captured by the gradient boosting model. In general, a tree of depth h can capture interactions of order h. There are two ways to control the size of the individual regression trees.

If you specify max_depth=h then complete binary trees of depth h will be grown. Such trees will have (at most) 2**h leaf nodes and 2**h - 1 split nodes.

Alternatively, you can control the tree size by specifying the number of leaf nodes via the parameter max_leaf_nodes. In this case, trees will be grown using best-first search, where nodes with the highest improvement in impurity are expanded first. A tree with max_leaf_nodes=k has k - 1 split nodes and thus can model interactions of up to order max_leaf_nodes - 1.

We found that max_leaf_nodes=k gives comparable results to max_depth=k-1 but is significantly faster to train, at the expense of a slightly higher training error.

1.11.4.5. Mathematical formulation

We first present GBRT for regression, and then detail the classification case.

1.11.4.5.1. Regression

GBRT regressors are additive models whose prediction \(\hat{y}_i\) for a given input \(x_i\) is of the following form:

\[\hat{y}_i = F_M(x_i) = \sum_{m=1}^{M} h_m(x_i)\]
where the \(h_m\) are estimators called weak learners in the context of boosting. Gradient Tree Boosting uses decision tree regressors of fixed size as weak learners. The constant M corresponds to the n_estimators parameter.

Similar to other boosting algorithms, a GBRT is built in a greedy fashion:

\[F_m(x) = F_{m-1}(x) + h_m(x),\]
where the newly added tree \(h_m\) is fitted in order to minimize a sum of losses \(L_m\), given the previous ensemble \(F_{m-1}\):

\[h_m = \arg\min_{h} L_m = \arg\min_{h} \sum_{i=1}^{n} l(y_i, F_{m-1}(x_i) + h(x_i)),\]
where \(l(y_i, F(x_i))\) is defined by the loss parameter, detailed in the section on loss functions below.

By default, the initial model \(F_{0}\) is chosen as the constant that
minimizes the loss: for a least-squares loss, this is the empirical mean of the target values. The initial model can also be specified via the init argument.

Using a first-order Taylor approximation, the value of \(l\) can be approximated as follows:

\[l(y_i, F_{m-1}(x_i) + h_m(x_i)) \approx l(y_i, F_{m-1}(x_i)) + h_m(x_i) \left[ \frac{\partial l(y_i, F(x_i))}{\partial F(x_i)} \right]_{F=F_{m - 1}}.\]
Note: Briefly, a first-order Taylor approximation says that \(l(z) \approx l(a) + (z - a) \frac{\partial l(a)}{\partial a}\). Here, \(z\) corresponds to \(F_{m - 1}(x_i) + h_m(x_i)\), and \(a\) corresponds to \(F_{m-1}(x_i)\).

The quantity \(\left[ \frac{\partial l(y_i, F(x_i))}{\partial F(x_i)} \right]_{F=F_{m - 1}}\) is the derivative of the loss with respect to its second parameter, evaluated at \(F_{m-1}(x)\). It is easy to compute for any given \(F_{m - 1}(x_i)\) in closed form since the loss is differentiable. We will denote it by \(g_i\). Removing the constant terms, we have:

\[h_m \approx \arg\min_{h} \sum_{i=1}^{n} h(x_i) g_i\]
This is minimized if \(h(x_i)\) is fitted to predict a value that is proportional to the negative gradient \(-g_i\). Therefore, at each iteration, the estimator \(h_m\) is fitted to predict the negative gradients of the samples. The gradients are updated at each iteration. This can be considered as a form of gradient descent in a functional space.

Note: For some losses, e.g. the least absolute deviation (LAD) where the gradients are \(\pm 1\), the values predicted by a fitted \(h_m\) are not accurate enough: the tree can only output integer values. As a result, the leaf values of the tree \(h_m\) are modified once the tree is fitted, such that the leaf values minimize the loss \(L_m\). The update is loss-dependent: for the LAD loss, the value of a leaf is updated to the median of the samples in that leaf.

1.11.4.5.2. Classification

Gradient boosting for classification is very similar to the regression case. However, the sum of the trees \(F_M(x_i) = \sum_m h_m(x_i)\) is not homogeneous to a prediction: it cannot be a class, since the trees predict continuous values. The mapping from the value \(F_M(x_i)\) to a class or a probability is loss-dependent. For the log-loss, the probability that \(x_i\) belongs to the positive class is modeled as \(p(y_i = 1 | x_i) = \sigma(F_M(x_i))\) where \(\sigma\) is the sigmoid or expit function. For multiclass classification, K trees (for K classes) are built at each of the \(M\) iterations. The probability that \(x_i\) belongs to class k is modeled as a softmax of the \(F_{M,k}(x_i)\) values.

Note that even for a classification task, the \(h_m\) sub-estimator is still a regressor, not a classifier. This is because the sub-estimators are trained to predict (negative) gradients, which are always continuous quantities.

1.11.4.6. Loss Functions

The following loss functions are supported and can be specified using the parameter loss: for regression, 'squared_error', 'absolute_error', 'huber' and 'quantile'; for classification, 'log_loss' and 'exponential' (the latter recovers the AdaBoost algorithm).
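To make the formulation above concrete, here is a minimal, illustrative sketch (not the scikit-learn implementation) of gradient boosting with the squared-error loss, for which the negative gradient \(-g_i\) is simply the residual \(y_i - F_{m-1}(x_i)\):

>>> import numpy as np
>>> from sklearn.tree import DecisionTreeRegressor
>>> rng = np.random.RandomState(0)
>>> X = rng.uniform(size=(100, 1))
>>> y = np.sin(6 * X[:, 0]) + 0.1 * rng.normal(size=100)
>>> F = np.full_like(y, y.mean())  # F_0: the constant minimizing the squared loss
>>> for m in range(50):
...     residuals = y - F  # negative gradient -g_i for the squared-error loss
...     h_m = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
...     F = F + 0.1 * h_m.predict(X)  # F_m = F_{m-1} + nu * h_m, with shrinkage nu = 0.1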
1.11.4.7. Shrinkage via learning rate

[Friedman2001] proposed a simple regularization strategy that scales the contribution of each weak learner by a constant factor \(\nu\):

\[F_m(x) = F_{m-1}(x) + \nu h_m(x)\]

The parameter \(\nu\)
is also called the learning rate because it scales the step length of the gradient descent procedure; it can be set via the learning_rate parameter.

The parameter learning_rate strongly interacts with the parameter n_estimators, the number of weak learners to fit. Smaller values of learning_rate require larger numbers of weak learners to maintain a constant training error. Empirical evidence suggests that small values of learning_rate favor better test error. [HTF] recommend to set the learning rate to a small constant (e.g. learning_rate <= 0.1) and choose n_estimators by early stopping.

1.11.4.8. Subsampling

[Friedman2002] proposed stochastic gradient boosting, which combines gradient boosting with bootstrap averaging (bagging). At each iteration the base classifier is trained on a fraction subsample of the available training data. The subsample is drawn without replacement; a typical value is subsample=0.5.

The figure below illustrates the effect of shrinkage and subsampling on the goodness-of-fit of the model. We can clearly see that shrinkage outperforms no-shrinkage. Subsampling with shrinkage can further increase the accuracy of the model. Subsampling without shrinkage, on the other hand, does poorly.

Another strategy to reduce the variance is to subsample the features, analogous to the random splits in Random Forests (via the max_features parameter).
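As a hedged illustration, these regularization knobs can be combined when constructing the estimator (the parameter values below are arbitrary choices, not recommendations):

>>> from sklearn.ensemble import GradientBoostingClassifier
>>> stochastic_gbdt = GradientBoostingClassifier(
...     n_estimators=500,     # more learners to compensate for the small learning rate
...     learning_rate=0.1,    # shrinkage
...     subsample=0.5,        # stochastic gradient boosting: 50% of the samples per tree
...     max_features='sqrt',  # feature subsampling, analogous to random forests
...     random_state=0)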
Note: Using a small max_features value can significantly decrease the runtime.

Stochastic gradient boosting allows computing out-of-bag estimates of the test deviance by
computing the improvement in deviance on the examples that are not included in the bootstrap sample (i.e. the out-of-bag examples). The improvements are stored in the attribute oob_improvement_: oob_improvement_[i] holds the improvement in terms of the loss on the OOB samples if you add the i-th stage to the current predictions. Out-of-bag estimates can be used for model selection, for example to determine the optimal number of iterations.

1.11.4.9. Interpretation with feature importance

Individual decision trees can be interpreted easily by simply visualizing the tree structure. Gradient boosting models, however, comprise hundreds of regression trees, thus they cannot be easily interpreted by visual inspection of the individual trees. Fortunately, a number of techniques have been proposed to summarize and interpret gradient boosting models.

Often features do not contribute equally to predicting the target response; in many situations the majority of the features are in fact irrelevant. When interpreting a model, the first question usually is: what are those important features and how do they contribute to predicting the target response?

Individual decision trees intrinsically perform feature selection by selecting appropriate split points. This information can be used to measure the importance of each feature; the basic idea is: the more often a feature is used in the split points of a tree, the more important that feature is. This notion of importance can be extended to decision tree ensembles by simply averaging the impurity-based feature importance of each tree (see Feature importance evaluation for more details).

The feature importance scores of a fitted gradient boosting model can be accessed via the feature_importances_ property:

>>> from sklearn.datasets import make_hastie_10_2
>>> from sklearn.ensemble import GradientBoostingClassifier

>>> X, y = make_hastie_10_2(random_state=0)
>>> clf = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0,
...     max_depth=1, random_state=0).fit(X, y)
>>> clf.feature_importances_
array([0.10..., 0.10..., 0.11..., ...

Note that this computation of feature importance is based on entropy, and it is distinct from sklearn.inspection.permutation_importance, which is based on permutation of the features.
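As a short, hedged sketch, the permutation-based alternative can be computed with sklearn.inspection.permutation_importance, reusing clf, X and y from the snippet above (ideally it would be computed on held-out data):

>>> from sklearn.inspection import permutation_importance
>>> result = permutation_importance(clf, X, y, n_repeats=5, random_state=0)
>>> result.importances_mean.shape  # one mean importance per feature
(10,)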
1.11.5. Histogram-Based Gradient Boosting

Scikit-learn 0.21 introduced two new implementations of gradient boosting trees, namely HistGradientBoostingClassifier and HistGradientBoostingRegressor, inspired by LightGBM. These histogram-based estimators
can be orders of magnitude faster than GradientBoostingClassifier and GradientBoostingRegressor when the number of samples is larger than tens of thousands. They also have built-in support for missing values, which avoids the need for an imputer. These fast estimators first bin the input samples X into integer-valued bins (typically 256 bins), which tremendously reduces the number of splitting points to consider and allows the algorithm to leverage integer-based data structures (histograms) instead of relying on sorted continuous values when building the trees.

1.11.5.1. Usage

Most of the parameters are unchanged from GradientBoostingClassifier and GradientBoostingRegressor. One exception is the max_iter parameter that replaces n_estimators and controls the number of iterations of the boosting process:

>>> from sklearn.ensemble import HistGradientBoostingClassifier
>>> from sklearn.datasets import make_hastie_10_2

>>> X, y = make_hastie_10_2(random_state=0)
>>> X_train, X_test = X[:2000], X[2000:]
>>> y_train, y_test = y[:2000], y[2000:]

>>> clf = HistGradientBoostingClassifier(max_iter=100).fit(X_train, y_train)
>>> clf.score(X_test, y_test)
0.8965

Available losses for regression are 'squared_error', 'absolute_error', which is less sensitive to outliers, and
'poisson', which is well suited to model counts and frequencies. For classification, 'log_loss' is the only option. For binary classification it uses the binary log loss, also known as binomial deviance or binary cross-entropy. For more than two classes it uses the multi-class log loss, with multinomial deviance and categorical cross-entropy as alternative names.

The size of the trees can be controlled through the max_leaf_nodes, max_depth, and min_samples_leaf parameters.

The number of bins used to bin the data is controlled with the max_bins parameter. Using fewer bins acts as a form of regularization; it is generally recommended to use as many bins as possible, which is the default.

The l2_regularization parameter acts as a regularizer on the loss function.

Note that early-stopping is enabled by default if the number of samples is larger than 10,000. The early-stopping behaviour is controlled via the early_stopping, scoring, validation_fraction, n_iter_no_change, and tol parameters.
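For illustration, a hedged sketch of an explicit early-stopping configuration (the values below are arbitrary, not recommendations):

>>> from sklearn.ensemble import HistGradientBoostingClassifier
>>> clf = HistGradientBoostingClassifier(
...     max_iter=1000,            # upper bound on the number of boosting iterations
...     early_stopping=True,      # force early stopping (the default is 'auto')
...     scoring='loss',           # monitor the loss rather than a scorer
...     validation_fraction=0.1,  # fraction of the training data held out for validation
...     n_iter_no_change=10,
...     tol=1e-7)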
1.11.5.2. Missing values support

HistGradientBoostingClassifier and HistGradientBoostingRegressor have built-in support for missing values (NaNs). During training, the tree grower learns at each split point whether samples with missing values should go to the left or right child, based on the potential gain. When predicting, samples with missing values are assigned to the left or right child accordingly:

>>> from sklearn.ensemble import HistGradientBoostingClassifier
>>> import numpy as np

>>> X = np.array([0, 1, 2, np.nan]).reshape(-1, 1)
>>> y = [0, 0, 1, 1]

>>> gbdt = HistGradientBoostingClassifier(min_samples_leaf=1).fit(X, y)
>>> gbdt.predict(X)
array([0, 0, 1, 1])

When the missingness pattern is predictive, the splits can be performed on whether the feature value is missing or not:

>>> X = np.array([0, np.nan, 1, 2, np.nan]).reshape(-1, 1)
>>> y = [0, 1, 0, 0, 1]

>>> gbdt = HistGradientBoostingClassifier(min_samples_leaf=1,
...                                       max_depth=2,
...                                       learning_rate=1,
...                                       max_iter=1).fit(X, y)
>>> gbdt.predict(X)
array([0, 1, 0, 0, 1])

If no missing values were encountered for a given feature during training, then samples with missing values are mapped to whichever child has the most samples.

1.11.5.3. Sample weight support

HistGradientBoostingClassifier and HistGradientBoostingRegressor support sample weights during fit.
The following toy example demonstrates how the model ignores the samples with zero sample weights:

>>> X = [[1, 0],
...      [1, 0],
...      [1, 0],
...      [0, 1]]
>>> y = [0, 0, 1, 0]

>>> # ignore the first 2 training samples by setting their weight to 0
>>> sample_weight = [0, 0, 1, 1]
>>> gb = HistGradientBoostingClassifier(min_samples_leaf=1)
>>> gb.fit(X, y, sample_weight=sample_weight)
HistGradientBoostingClassifier(...)
>>> gb.predict([[1, 0]])
array([1])
>>> gb.predict_proba([[1, 0]])[0, 1]
0.99...
As you can see, the [1, 0] sample is predicted as class 1, since the first two samples are ignored due to their sample weights.

Implementation detail: taking sample weights into account amounts to multiplying the gradients (and the hessians) by the sample weights. Note that the binning stage (specifically the quantile computation) does not take the weights into account.

1.11.5.4. Categorical Features Support

HistGradientBoostingClassifier and HistGradientBoostingRegressor have native support for categorical features: they can consider splits on non-ordered, categorical data.
For datasets with categorical features, using the native categorical support is often better than relying on one-hot encoding
(OneHotEncoder), because one-hot encoded features require more tree depth to achieve equivalent splits. It is also usually better to rely on the native categorical support rather than treating categorical features as continuous (ordinal), since categories are nominal quantities where order does not matter.

To enable categorical support, a boolean mask can be passed to the categorical_features parameter, indicating which features are categorical. In the following, the first feature will be treated as categorical and the second feature as numerical:

>>> gbdt = HistGradientBoostingClassifier(categorical_features=[True, False])

Equivalently, one can pass a list of integers indicating the indices of the categorical features:

>>> gbdt = HistGradientBoostingClassifier(categorical_features=[0])

The cardinality of each categorical feature should be less than the max_bins parameter, and each categorical feature is expected to be encoded as integers in [0, max_bins - 1].
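If the raw categories are strings, they first need to be encoded as such integers; a minimal, hedged sketch (with a hypothetical toy dataset) uses an OrdinalEncoder in a pipeline:

>>> import numpy as np
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import OrdinalEncoder
>>> from sklearn.ensemble import HistGradientBoostingClassifier
>>> # toy data: a single string-valued categorical feature
>>> X = np.array([['cat'], ['dog'], ['cat'], ['bird'], ['dog'], ['bird']])
>>> y = [0, 1, 0, 1, 1, 0]
>>> model = make_pipeline(
...     OrdinalEncoder(),  # maps categories to integers 0 .. n_categories - 1
...     HistGradientBoostingClassifier(categorical_features=[0],
...                                    min_samples_leaf=1))
>>> model = model.fit(X, y)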
If there are missing values during training, the missing values will be treated as a proper category. If there are no missing values during training, then at prediction time, missing values are mapped to the child node that has the most samples (just like for continuous features). When predicting, categories that were not seen during fit time will be treated as missing values. Split finding with categorical features: The canonical way of considering categorical splits in a tree is to consider all of the \(2^{K - 1} - 1\) partitions, where \(K\) is the number of categories. This can quickly become prohibitive when \(K\) is large. Fortunately, since gradient boosting
trees are always regression trees (even for classification problems), there exists a faster strategy that can yield equivalent splits. First, the categories of a feature are sorted according to the variance of the target for each category. Once the categories are sorted, one can consider continuous partitions, i.e. treat the categories as if they were ordered continuous values. As a result, only \(K - 1\) splits need to be considered instead of \(2^{K - 1} - 1\).

1.11.5.5. Monotonic Constraints

Depending on the problem at hand, you may have prior knowledge indicating that a given feature should in general have a positive (or negative) effect on the target value. For example, all else being equal, a higher credit score should increase the probability of getting approved for a loan. Monotonic constraints allow you to incorporate such prior knowledge into the model.

A positive monotonic constraint is a constraint of the form: \(x_1 \leq x_1' \implies F(x_1, x_2) \leq F(x_1', x_2)\), where \(F\) is the predictor with two features. Similarly, a negative monotonic constraint is of the form: \(x_1 \leq x_1' \implies F(x_1, x_2) \geq F(x_1', x_2)\).

Note that monotonic constraints only constrain the output "all else being equal". Indeed, the following relation is not enforced by a positive constraint: \(x_1 \leq x_1' \implies F(x_1, x_2) \leq F(x_1', x_2')\).

You can specify a monotonic constraint on each feature using the monotonic_cst parameter:

>>> from sklearn.ensemble import HistGradientBoostingRegressor
... # positive, negative, and no constraint on the 3 features
>>> gbdt = HistGradientBoostingRegressor(monotonic_cst=[1, -1, 0])

In a binary classification context, imposing a monotonic constraint means that the feature is supposed to have a positive / negative effect on the probability of belonging to the positive class. Monotonic constraints are not supported for multiclass classification.

Note: Since categories are unordered quantities, it is not possible to enforce monotonic constraints on categorical features.

1.11.5.6. Low-level parallelism
HistGradientBoostingClassifier and HistGradientBoostingRegressor use OpenMP for parallelization through Cython. The following parts are parallelized: mapping samples from real values to integer-valued bins (finding the bin thresholds is however sequential); building histograms; finding the best split point at a node; mapping samples into the left and right children once a split has been found; gradient and hessian computations; and prediction.
1.11.5.7. Why it’s faster

The bottleneck of a gradient boosting procedure is building the decision trees.
Building a traditional decision tree (as in the other GBDTs GradientBoostingClassifier and GradientBoostingRegressor) requires sorting the samples at each node (for each feature). Sorting is needed so that the potential gain of a split point can be computed efficiently. Splitting a single node thus has a complexity of \(O(n_\text{features} \times n \log(n))\), where \(n\) is the number of samples at the node.

HistGradientBoostingClassifier and HistGradientBoostingRegressor, in contrast, do not require sorting the feature values and instead use a data structure called a histogram, where the samples are implicitly ordered. Building a histogram has a \(O(n)\) complexity, so the node splitting procedure has a \(O(n_\text{features} \times n)\) complexity, much smaller than the previous one. In addition, instead of considering \(n\) split points, only max_bins split points are considered.
In order to build histograms, the input data X needs to be binned into integer-valued bins. This binning procedure does require sorting the feature values, but it only happens once at the very beginning of the boosting process (not at each node, as in GradientBoostingClassifier and GradientBoostingRegressor). Finally, many parts of
the implementation of HistGradientBoostingClassifier and HistGradientBoostingRegressor are parallelized.

1.11.6. Voting Classifier

The idea behind the VotingClassifier is to combine conceptually different machine learning classifiers and use a majority vote or the average predicted probabilities (soft vote) to predict the class labels. Such a classifier can be useful for a set of equally well performing models in order to balance out their individual weaknesses.

1.11.6.1. Majority Class Labels (Majority/Hard Voting)

In majority voting, the predicted class label for a particular sample is the class label that represents the majority (mode) of the class labels predicted by each individual classifier. E.g., if the prediction for a given sample is

classifier 1 -> class 1
classifier 2 -> class 1
classifier 3 -> class 2
the VotingClassifier (with voting='hard') would classify the sample as "class 1" based on the majority class label.

In the case of a tie, the VotingClassifier will select the class based on the ascending sort order. E.g., in the following scenario

classifier 1 -> class 2
classifier 2 -> class 1
the class label 1 will be assigned to the sample.

1.11.6.2. Usage

The following example shows how to fit the majority rule classifier:

>>> from sklearn import datasets
>>> from sklearn.model_selection import cross_val_score
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.naive_bayes import GaussianNB
>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.ensemble import VotingClassifier

>>> iris = datasets.load_iris()
>>> X, y = iris.data[:, 1:3], iris.target

>>> clf1 = LogisticRegression(random_state=1)
>>> clf2 = RandomForestClassifier(n_estimators=50, random_state=1)
>>> clf3 = GaussianNB()

>>> eclf = VotingClassifier(
...     estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)],
...     voting='hard')

>>> for clf, label in zip([clf1, clf2, clf3, eclf], ['Logistic Regression', 'Random Forest', 'naive Bayes', 'Ensemble']):
...     scores = cross_val_score(clf, X, y, scoring='accuracy', cv=5)
...     print("Accuracy: %0.2f (+/- %0.2f) [%s]" % (scores.mean(), scores.std(), label))
Accuracy: 0.95 (+/- 0.04) [Logistic Regression]
Accuracy: 0.94 (+/- 0.04) [Random Forest]
Accuracy: 0.91 (+/- 0.04) [naive Bayes]
Accuracy: 0.95 (+/- 0.04) [Ensemble]

1.11.6.3. Weighted Average Probabilities (Soft Voting)

In contrast to majority voting (hard voting), soft voting returns the class label as the argmax of the sum of predicted probabilities. Specific weights can be assigned to each classifier via the weights parameter. When weights are provided, the predicted class probabilities for each classifier are collected, multiplied by the classifier weight, and averaged. The final class label is then derived from the class label with the highest average probability.

To illustrate this with a simple example, let's assume we have 3 classifiers and a 3-class classification problem where we assign equal weights to all classifiers: w1=1, w2=1, w3=1. The weighted average probabilities for a sample would then be calculated as follows:

classifier        class 1    class 2    class 3
classifier 1      w1 * 0.2   w1 * 0.5   w1 * 0.3
classifier 2      w2 * 0.6   w2 * 0.3   w2 * 0.1
classifier 3      w3 * 0.3   w3 * 0.4   w3 * 0.3
weighted average  0.37       0.40       0.23
Here, the predicted class label is 2, since it has the highest average probability.

The following example illustrates how the decision regions may change when a soft VotingClassifier is trained based on a decision tree, a k-nearest neighbors classifier, and an SVC:

>>> from sklearn import datasets
>>> from sklearn.tree import DecisionTreeClassifier
>>> from sklearn.neighbors import KNeighborsClassifier
>>> from sklearn.svm import SVC
>>> from itertools import product
>>> from sklearn.ensemble import VotingClassifier

>>> # Loading some example data
>>> iris = datasets.load_iris()
>>> X = iris.data[:, [0, 2]]
>>> y = iris.target

>>> # Training classifiers
>>> clf1 = DecisionTreeClassifier(max_depth=4)
>>> clf2 = KNeighborsClassifier(n_neighbors=7)
>>> clf3 = SVC(kernel='rbf', probability=True)
>>> eclf = VotingClassifier(estimators=[('dt', clf1), ('knn', clf2), ('svc', clf3)],
...                         voting='soft', weights=[2, 1, 2])

>>> clf1 = clf1.fit(X, y)
>>> clf2 = clf2.fit(X, y)
>>> clf3 = clf3.fit(X, y)
>>> eclf = eclf.fit(X, y)

1.11.6.4. Using the VotingClassifier with GridSearchCV

The VotingClassifier can also be used together with GridSearchCV in order to tune the hyperparameters of the individual estimators:

>>> from sklearn.model_selection import GridSearchCV
>>> clf1 = LogisticRegression(random_state=1)
>>> clf2 = RandomForestClassifier(random_state=1)
>>> clf3 = GaussianNB()
>>> eclf = VotingClassifier(
...     estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)],
...     voting='soft'
... )

>>> params = {'lr__C': [1.0, 100.0], 'rf__n_estimators': [20, 200]}

>>> grid = GridSearchCV(estimator=eclf, param_grid=params, cv=5)
>>> grid = grid.fit(iris.data, iris.target)

1.11.6.5. Usage

In order to
predict the class labels based on the predicted class probabilities (scikit-learn estimators in the VotingClassifier must support predict_proba), set voting='soft':

>>> eclf = VotingClassifier(
...     estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)],
...     voting='soft'
... )

Optionally, weights can be provided for the individual classifiers:

>>> eclf = VotingClassifier(
...     estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)],
...     voting='soft', weights=[2, 5, 1]
... )

1.11.7. Voting Regressor

The idea behind the VotingRegressor is to combine conceptually different machine learning regressors and return the average predicted values. Such a regressor can be useful for a set of equally well performing models in order to balance out their individual weaknesses.

1.11.7.1. Usage

The following example shows how to fit the VotingRegressor:

>>> from sklearn.datasets import load_diabetes
>>> from sklearn.ensemble import GradientBoostingRegressor
>>> from sklearn.ensemble import RandomForestRegressor
>>> from sklearn.linear_model import LinearRegression
>>> from sklearn.ensemble import VotingRegressor

>>> # Loading some example data
>>> X, y = load_diabetes(return_X_y=True)

>>> # Training regressors
>>> reg1 = GradientBoostingRegressor(random_state=1)
>>> reg2 = RandomForestRegressor(random_state=1)
>>> reg3 = LinearRegression()
>>> ereg = VotingRegressor(estimators=[('gb', reg1), ('rf', reg2), ('lr', reg3)])
>>> ereg = ereg.fit(X, y)

1.11.8. Stacked generalization

Stacked generalization is a method for combining estimators to reduce their biases [W1992] [HTF]. More precisely, the predictions of each individual estimator are stacked together and used as input to a final estimator to compute the prediction. This final estimator is trained through cross-validation.

The StackingClassifier and StackingRegressor provide such strategies, which can be applied to classification and regression problems.

The estimators parameter corresponds to the list of estimators which are stacked together in parallel on the input data. It should be given as a list of names and estimators:

>>> from sklearn.linear_model import RidgeCV, LassoCV
>>> from sklearn.neighbors import KNeighborsRegressor
>>> estimators = [('ridge', RidgeCV()),
...               ('lasso', LassoCV(random_state=42)),
...               ('knr', KNeighborsRegressor(n_neighbors=20,
...                                           metric='euclidean'))]

The final_estimator will use the predictions of the estimators as input. It needs to be a classifier or a regressor when using StackingClassifier or StackingRegressor, respectively:
>>> from sklearn.ensemble import GradientBoostingRegressor
>>> from sklearn.ensemble import StackingRegressor
>>> final_estimator = GradientBoostingRegressor(
...     n_estimators=25, subsample=0.5, min_samples_leaf=25, max_features=1,
...     random_state=42)
>>> reg = StackingRegressor(
...     estimators=estimators,
...     final_estimator=final_estimator)

To train the estimators and final_estimator, the fit method needs to be called on the training data:

>>> from sklearn.datasets import load_diabetes
>>> X, y = load_diabetes(return_X_y=True)
>>> from sklearn.model_selection import train_test_split
>>> X_train, X_test, y_train, y_test = train_test_split(X, y,
...                                                     random_state=42)
>>> reg.fit(X_train, y_train)
StackingRegressor(...)

During training, the estimators are fitted on the whole training data X_train. They will be used when calling predict or predict_proba. To generalize and avoid over-fitting, the final_estimator is trained on out-of-sample predictions, using sklearn.model_selection.cross_val_predict internally.

For StackingClassifier, note that the output of the estimators is controlled by the parameter stack_method. This parameter is either a string naming an estimator method, or 'auto', which will automatically pick an available method in order of preference: predict_proba, decision_function and predict.
A StackingRegressor or StackingClassifier can be used as any other regressor or classifier, exposing a predict, predict_proba, or decision_function method, e.g.:

>>> y_pred = reg.predict(X_test)
>>> from sklearn.metrics import r2_score
>>> print('R2 score: {:.2f}'.format(r2_score(y_test, y_pred)))
R2 score: 0.53

Note that it is also possible to get the output of the stacked estimators using the transform method:

>>> reg.transform(X_test[:5])
array([[142..., 138..., 146...],
       [179..., 182..., 151...],
       [139..., 132..., 158...],
       [286..., 292..., 225...],
       [126..., 124..., 164...]])

In practice, a stacking predictor predicts as well as the best predictor of the base layer and sometimes even outperforms it by combining the different strengths of these predictors. However, training a stacking predictor is computationally expensive.

Note: For StackingClassifier, when the stack method resolves to predict_proba, the first probability column of each estimator is dropped for binary classification problems, since the two columns are perfectly collinear.

Note: Multiple stacking layers can be achieved by assigning final_estimator to a StackingClassifier or StackingRegressor:

>>> final_layer_rfr = RandomForestRegressor(
...     n_estimators=10, max_features=1, max_leaf_nodes=5, random_state=42)
>>> final_layer_gbr = GradientBoostingRegressor(
...     n_estimators=10, max_features=1, max_leaf_nodes=5, random_state=42)
>>> final_layer = StackingRegressor(
...     estimators=[('rf', final_layer_rfr),
...                 ('gbrt', final_layer_gbr)],
...     final_estimator=RidgeCV()
... )
>>> multi_layer_regressor = StackingRegressor(
...     estimators=[('ridge', RidgeCV()),
...                 ('lasso', LassoCV(random_state=42)),
...                 ('knr', KNeighborsRegressor(n_neighbors=20,
...                                             metric='euclidean'))],
...     final_estimator=final_layer
... )
>>> multi_layer_regressor.fit(X_train, y_train)
StackingRegressor(...)

>>> print('R2 score: {:.2f}'
...       .format(multi_layer_regressor.score(X_test, y_test)))
R2 score: 0.53