Learning the parameters of a prediction function and testing it on the same data is a methodological mistake: a model that would just repeat the labels of the samples that it has just seen would have a perfect score but would fail to predict anything useful on yet-unseen data. This situation is called overfitting. To avoid it, it is common practice when performing
a (supervised) machine learning experiment to hold out part of the available data as a test set X_test, y_test.
In scikit-learn a random split into training and test sets can be quickly computed with the train_test_split helper function. Let's load the iris data set to fit a linear support vector machine on it:
>>> import numpy as np
>>> from sklearn.model_selection import train_test_split
>>> from sklearn import datasets
>>> from sklearn import svm

>>> X, y = datasets.load_iris(return_X_y=True)
>>> X.shape, y.shape
((150, 4), (150,))

We can now quickly sample a training set while holding out 40% of the data for testing (evaluating) our classifier:

>>> X_train, X_test, y_train, y_test = train_test_split(
...     X, y, test_size=0.4, random_state=0)

>>> X_train.shape, y_train.shape
((90, 4), (90,))
>>> X_test.shape, y_test.shape
((60, 4), (60,))

>>> clf = svm.SVC(kernel='linear', C=1).fit(X_train, y_train)
>>> clf.score(X_test, y_test)
0.96...
When evaluating different settings (“hyperparameters”) for estimators, such as the C setting that must be manually set for an SVM, there is still a risk of overfitting on the test set because the parameters can be tweaked until the estimator performs optimally. This way, knowledge about the test set can “leak” into the model and evaluation metrics no longer report on generalization performance. To solve this problem, yet another part of the dataset can be held out as a so-called “validation set”: training proceeds on the training set, after which evaluation is done on the validation set, and when the experiment seems to be successful, final evaluation can be done on the test set.

However, by partitioning the available data into three sets, we drastically reduce the number of samples which can be used for learning the model, and the results can depend on a particular random choice for the pair of (train, validation) sets.

A solution to this problem is a procedure called cross-validation (CV for short). A test set should still be held out for final evaluation, but the validation set is no longer needed when doing CV. In the basic approach, called k-fold CV, the training set is split into k smaller sets (other approaches are described below, but generally follow the same principles). The following procedure is followed for each of the k “folds”:

- A model is trained using k - 1 of the folds as training data;
- the resulting model is validated on the remaining part of the data (i.e., it is used as a test set to compute a performance measure such as accuracy).
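To make this procedure concrete, here is a minimal hand-written sketch of the loop on the iris data loaded above; the particular KFold settings (shuffle=True, random_state=0) and variable names are illustrative, and the cross_val_score helper introduced below automates exactly this:

>>> from sklearn.model_selection import KFold
>>> kf = KFold(n_splits=5, shuffle=True, random_state=0)
>>> fold_scores = []
>>> for train_index, test_index in kf.split(X):
...     # train on k - 1 folds, evaluate on the held-out fold
...     model = svm.SVC(kernel='linear', C=1).fit(X[train_index], y[train_index])
...     fold_scores.append(model.score(X[test_index], y[test_index]))
...
>>> len(fold_scores)
5

The reported cross-validation performance is then the average of fold_scores.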
The performance measure reported by k-fold cross-validation is then the average of the values computed in the loop. This approach can be computationally expensive, but does not waste too much data (as is the case when fixing an arbitrary validation set), which is a major advantage in problems such as inverse inference where the number of samples is very small.

3.1.1. Computing cross-validated metrics

The simplest way to use cross-validation is to call the cross_val_score helper function on the estimator and the dataset.

The following example demonstrates how to estimate the accuracy of a linear kernel support vector machine on the iris dataset by splitting the data, fitting a model and computing the score 5 consecutive times (with different splits each time):

>>> from sklearn.model_selection import cross_val_score
>>> clf = svm.SVC(kernel='linear', C=1, random_state=42)
>>> scores = cross_val_score(clf, X, y, cv=5)
>>> scores
array([0.96..., 1. , 0.96..., 0.96..., 1. ])

The mean score and the standard deviation are hence given by:

>>> print("%0.2f accuracy with a standard deviation of %0.2f" % (scores.mean(), scores.std()))
0.98 accuracy with a standard deviation of 0.02

By default, the score computed at each CV iteration is the score method of the estimator. It is possible to change this by using the scoring parameter:

>>> from sklearn import metrics
>>> scores = cross_val_score(
...     clf, X, y, cv=5, scoring='f1_macro')
>>> scores
array([0.96..., 1. ..., 0.96..., 0.96..., 1. ])

See The scoring parameter: defining model evaluation rules for details. In the case of the Iris dataset, the samples are balanced across target classes hence the accuracy and the F1-score are almost equal.

When the cv argument is an integer, cross_val_score uses the KFold or StratifiedKFold strategies by default, the latter being used if the estimator derives from ClassifierMixin.

It is also possible to use other cross validation strategies by passing a cross validation iterator instead, for instance:

>>> from sklearn.model_selection import ShuffleSplit
>>> n_samples = X.shape[0]
>>> cv = ShuffleSplit(n_splits=5, test_size=0.3, random_state=0)
>>> cross_val_score(clf, X, y, cv=cv)
array([0.977..., 0.977..., 1. ..., 0.955..., 1. ])

Another option is to use an iterable yielding (train, test) splits as arrays of indices, for example:

>>> def custom_cv_2folds(X):
...     n = X.shape[0]
...     i = 1
...     while i <= 2:
...         idx = np.arange(n * (i - 1) / 2, n * i / 2, dtype=int)
...         yield idx, idx
...         i += 1
...
>>> custom_cv = custom_cv_2folds(X)
>>> cross_val_score(clf, X, y, cv=custom_cv)
array([1. , 0.973...])

3.1.1.1. The cross_validate function and multiple metric evaluation

The cross_validate function differs from cross_val_score in two ways:

- It allows specifying multiple metrics for evaluation.
- It returns a dict containing fit-times, score-times (and optionally training scores and fitted estimators) in addition to the test score.
For single metric evaluation, where the scoring parameter is a string, callable or None, the keys will be - ['test_score', 'fit_time', 'score_time']

And for multiple metric evaluation, the return value is a dict with the following keys - ['test_<scorer1_name>', 'test_<scorer2_name>', 'test_<scorer...>', 'fit_time', 'score_time']

return_train_score is set to False by default to save computation time. To evaluate the scores on the training set as well you need to set it to True.
You may also retain the estimator fitted on each training set by setting return_estimator=True.

The multiple metrics can be specified either as a list, tuple or set of predefined scorer names:

>>> from sklearn.model_selection import cross_validate
>>> from sklearn.metrics import recall_score
>>> scoring = ['precision_macro', 'recall_macro']
>>> clf = svm.SVC(kernel='linear', C=1, random_state=0)
>>> scores = cross_validate(clf, X, y, scoring=scoring)
>>> sorted(scores.keys())
['fit_time', 'score_time', 'test_precision_macro', 'test_recall_macro']
>>> scores['test_recall_macro']
array([0.96..., 1. ..., 0.96..., 0.96..., 1. ])

Or as a dict mapping scorer name to a predefined or custom scoring function:

>>> from sklearn.metrics import make_scorer
>>> scoring = {'prec_macro': 'precision_macro',
...            'rec_macro': make_scorer(recall_score, average='macro')}
>>> scores = cross_validate(clf, X, y, scoring=scoring,
...                         cv=5, return_train_score=True)
>>> sorted(scores.keys())
['fit_time', 'score_time', 'test_prec_macro', 'test_rec_macro',
 'train_prec_macro', 'train_rec_macro']
>>> scores['train_rec_macro']
array([0.97..., 0.97..., 0.99..., 0.98..., 0.98...])

Here is an example of cross_validate using a single metric:

>>> scores = cross_validate(clf, X, y,
...                         scoring='precision_macro', cv=5,
...                         return_estimator=True)
>>> sorted(scores.keys())
['estimator', 'fit_time', 'score_time', 'test_score']

3.1.1.2. Obtaining predictions by cross-validation

The function cross_val_predict has a similar interface to cross_val_score, but returns, for each element in the input, the prediction that was obtained for that element when it was in the test set. Only cross-validation strategies that assign all elements to a test set exactly once can be used (otherwise, an exception is raised).

Warning: Note on inappropriate usage of cross_val_predict

The result of cross_val_predict may be different from those obtained using cross_val_score as the elements are grouped in different ways. The function cross_val_score takes an average over cross-validation folds, whereas cross_val_predict simply returns the labels (or probabilities) from several distinct models undistinguished. Thus, cross_val_predict is not an appropriate measure of generalization error.

cross_val_predict is appropriate for:

- Visualization of predictions obtained from different models.
- Model blending: when predictions of one supervised estimator are used to train another estimator in ensemble methods.
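As a small usage sketch (reusing the clf and the iris X, y from above), the per-sample predictions returned by cross_val_predict can be compared against the true labels, keeping the warning above in mind:

>>> from sklearn.model_selection import cross_val_predict
>>> from sklearn import metrics
>>> predicted = cross_val_predict(clf, X, y, cv=10)
>>> predicted.shape
(150,)
>>> metrics.accuracy_score(y, predicted)
0.97...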
The available cross validation iterators are introduced in the following section.

3.1.2. Cross validation iterators

The following sections list utilities to generate indices that can be used to generate dataset splits according to different cross validation strategies.

3.1.2.1. Cross-validation iterators for i.i.d. data

Assuming that some data is Independent and Identically Distributed (i.i.d.) is making the assumption that all samples stem from the same generative process and that the generative process is assumed to have no memory of past generated samples. The following cross-validators can be used in such cases.

Note: While i.i.d. data is a common assumption in machine learning theory, it rarely holds in practice. If one knows that the samples have been generated using a time-dependent process, it is safer to use a time-series aware cross-validation scheme. Similarly, if we know that the generative process has a group structure (samples collected from different subjects, experiments, measurement devices), it is safer to use group-wise cross-validation.

3.1.2.1.1. K-fold

KFold divides all the samples in k groups of samples, called folds (if k = n, this is equivalent to the Leave One Out strategy), of equal sizes (if possible). The prediction function is learned using k - 1 of the folds, and the fold left out is used for test.
Example of 2-fold cross-validation on a dataset with 4 samples:

>>> import numpy as np
>>> from sklearn.model_selection import KFold

>>> X = ["a", "b", "c", "d"]
>>> kf = KFold(n_splits=2)
>>> for train, test in kf.split(X):
...     print("%s %s" % (train, test))
[2 3] [0 1]
[0 1] [2 3]

Here is a visualization of the cross-validation behavior. Note that KFold is not affected by classes or groups.
Each fold is constituted by two arrays: the first one is related to the training set, and the second one to the test set. Thus, one can create the training/test sets using numpy indexing:

>>> X = np.array([[0., 0.], [1., 1.], [-1., -1.], [2., 2.]])
>>> y = np.array([0, 1, 0, 1])
>>> X_train, X_test, y_train, y_test = X[train], X[test], y[train], y[test]

3.1.2.1.2. Repeated K-Fold

RepeatedKFold repeats K-Fold n times. It can be used when one requires to run KFold n times, producing different splits in each repetition.
Example of 2-fold K-Fold repeated 2 times:

>>> import numpy as np
>>> from sklearn.model_selection import RepeatedKFold
>>> X = np.array([[1, 2], [3, 4], [1, 2], [3, 4]])
>>> random_state = 12883823
>>> rkf = RepeatedKFold(n_splits=2, n_repeats=2, random_state=random_state)
>>> for train, test in rkf.split(X):
...     print("%s %s" % (train, test))
...
[2 3] [0 1]
[0 1] [2 3]
[0 2] [1 3]
[1 3] [0 2]

Similarly, RepeatedStratifiedKFold repeats Stratified K-Fold n times with different randomization in each repetition.
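As a small sketch (the toy data and parameter values here are illustrative), RepeatedStratifiedKFold produces n_splits * n_repeats splits, each preserving the class proportions:

>>> from sklearn.model_selection import RepeatedStratifiedKFold
>>> X = np.ones(8)
>>> y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
>>> rskf = RepeatedStratifiedKFold(n_splits=2, n_repeats=2, random_state=0)
>>> for train, test in rskf.split(X, y):
...     print(np.bincount(y[test]))
[2 2]
[2 2]
[2 2]
[2 2]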
3.1.2.1.3. Leave One Out (LOO)

LeaveOneOut (or LOO) is a simple cross-validation. Each learning set is created by taking all the samples except one, the test set being the sample left out. Thus, for n samples, we have n different training sets and n different test sets. This cross-validation procedure does not waste much data as only one sample is removed from the training set:
>>> from sklearn.model_selection import LeaveOneOut

>>> X = [1, 2, 3, 4]
>>> loo = LeaveOneOut()
>>> for train, test in loo.split(X):
...     print("%s %s" % (train, test))
[1 2 3] [0]
[0 2 3] [1]
[0 1 3] [2]
[0 1 2] [3]

Potential users of LOO for model selection should weigh a few known caveats. When compared with \(k\)-fold cross validation, one builds \(n\) models from \(n\) samples instead of \(k\) models, where \(n > k\). Moreover, each is trained on \(n - 1\) samples rather than \((k-1) n / k\). In both ways, assuming \(k\) is not too large and \(k < n\), LOO is more computationally expensive than \(k\)-fold cross validation.

In terms of accuracy, LOO often results in high variance as an estimator for the test error. Intuitively, since \(n - 1\) of the \(n\) samples are used to build each model, models constructed from folds are virtually identical to each other and to the model built from the entire training set.

However, if the learning curve is steep for the training size in question, then 5- or 10-fold cross validation can overestimate the generalization error.

As a general rule, most authors, and empirical evidence, suggest that 5- or 10-fold cross validation should be preferred to LOO.

3.1.2.1.4. Leave P Out (LPO)

LeavePOut is very similar to LeaveOneOut as it creates all the possible training/test sets by removing \(p\) samples from the complete set. For \(n\) samples, this produces \({n \choose p}\) train-test pairs. Unlike LeaveOneOut and KFold, the test sets will overlap for \(p > 1\).
Example of Leave-2-Out on a dataset with 4 samples:

>>> from sklearn.model_selection import LeavePOut

>>> X = np.ones(4)
>>> lpo = LeavePOut(p=2)
>>> for train, test in lpo.split(X):
...     print("%s %s" % (train, test))
[2 3] [0 1]
[1 3] [0 2]
[1 2] [0 3]
[0 3] [1 2]
[0 2] [1 3]
[0 1] [2 3]

3.1.2.1.5. Random permutations cross-validation a.k.a. Shuffle & Split

The ShuffleSplit iterator will generate a user defined number of independent train / test dataset splits. Samples are first shuffled and then split into a pair of train and test sets.
It is possible to control the randomness for reproducibility of the results by explicitly seeding the random_state pseudo random number generator.

Here is a usage example:

>>> from sklearn.model_selection import ShuffleSplit
>>> X = np.arange(10)
>>> ss = ShuffleSplit(n_splits=5, test_size=0.25, random_state=0)
>>> for train_index, test_index in ss.split(X):
...     print("%s %s" % (train_index, test_index))
[9 1 6 7 3 0 5] [2 8 4]
[2 9 8 0 6 7 4] [3 5 1]
[4 5 1 0 6 9 7] [2 3 8]
[2 7 5 8 0 3 4] [6 1 9]
[4 1 0 6 8 9 3] [5 2 7]

Here is a visualization of the cross-validation behavior. Note that ShuffleSplit is not affected by classes or groups.

ShuffleSplit is thus a good alternative to KFold cross validation that allows a finer control on the number of iterations and the proportion of samples on each side of the train / test split.
3.1.2.2. Cross-validation iterators with stratification based on class labels

Some classification problems can exhibit a large imbalance in the distribution of the target classes: for instance there could be several times more negative samples than positive samples. In such cases it is recommended to use stratified sampling as implemented in StratifiedKFold and StratifiedShuffleSplit to ensure that relative class frequencies are approximately preserved in each train and validation fold.

3.1.2.2.1. Stratified k-fold

StratifiedKFold is a variation of k-fold which returns stratified folds: each set contains approximately the same percentage of samples of each target class as the complete set.
Here is an example of stratified 3-fold cross-validation on a dataset with 50 samples from two unbalanced classes. We show the number of samples in each class and compare with KFold.

>>> from sklearn.model_selection import StratifiedKFold, KFold
>>> import numpy as np
>>> X, y = np.ones((50, 1)), np.hstack(([0] * 45, [1] * 5))
>>> skf = StratifiedKFold(n_splits=3)
>>> for train, test in skf.split(X, y):
...     print('train - {} | test - {}'.format(
...         np.bincount(y[train]), np.bincount(y[test])))
train - [30 3] | test - [15 2]
train - [30 3] | test - [15 2]
train - [30 4] | test - [15 1]
>>> kf = KFold(n_splits=3)
>>> for train, test in kf.split(X, y):
...     print('train - {} | test - {}'.format(
...         np.bincount(y[train]), np.bincount(y[test])))
train - [28 5] | test - [17]
train - [28 5] | test - [17]
train - [34] | test - [11 5]

We can see that StratifiedKFold preserves the class ratios (approximately 1 / 10) in both train and test datasets.
Here is a visualization of the cross-validation behavior.
3.1.2.2.2. Stratified Shuffle Split

StratifiedShuffleSplit is a variation of ShuffleSplit which returns stratified splits, i.e. it creates splits by preserving the same percentage for each target class as in the complete set.
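As a minimal usage sketch (the toy dataset and parameter values here are illustrative), the class proportions are preserved in every split:

>>> import numpy as np
>>> from sklearn.model_selection import StratifiedShuffleSplit
>>> X = np.ones(10)
>>> y = np.hstack(([0] * 8, [1] * 2))
>>> sss = StratifiedShuffleSplit(n_splits=3, test_size=0.5, random_state=0)
>>> for train, test in sss.split(X, y):
...     print('train - {} | test - {}'.format(
...         np.bincount(y[train]), np.bincount(y[test])))
train - [4 1] | test - [4 1]
train - [4 1] | test - [4 1]
train - [4 1] | test - [4 1]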
Here is a visualization of the cross-validation behavior.

3.1.2.3. Cross-validation iterators for grouped data

The i.i.d. assumption is broken if the underlying generative process yields groups of dependent samples. Such a grouping of data is domain specific. An example would be when there is medical data collected from multiple patients, with multiple samples taken from each patient. And such data is likely to be dependent on the individual group. In our example, the patient id for each sample will be its group identifier.

In this case we would like to know if a model trained on a particular set of groups generalizes well to the unseen groups. To measure this, we need to ensure that all the samples in the validation fold come from groups that are not represented at all in the paired training fold.

The following cross-validation splitters can be used to do that. The grouping identifier for the samples is specified via the groups parameter.

3.1.2.3.1. Group k-fold

GroupKFold is a variation of k-fold which ensures that the same group is not represented in both testing and training sets. For example, if the data is obtained from different subjects with several samples per subject, and if the model is flexible enough to learn from highly person specific features, it could fail to generalize to new subjects. GroupKFold makes it possible to detect this kind of overfitting situation.
Imagine you have three subjects, each with an associated number from 1 to 3:

>>> from sklearn.model_selection import GroupKFold

>>> X = [0.1, 0.2, 2.2, 2.4, 2.3, 4.55, 5.8, 8.8, 9, 10]
>>> y = ["a", "b", "b", "b", "c", "c", "c", "d", "d", "d"]
>>> groups = [1, 1, 1, 2, 2, 2, 3, 3, 3, 3]
>>> gkf = GroupKFold(n_splits=3)
>>> for train, test in gkf.split(X, y, groups=groups):
...     print("%s %s" % (train, test))
[0 1 2 3 4 5] [6 7 8 9]
[0 1 2 6 7 8 9] [3 4 5]
[3 4 5 6 7 8 9] [0 1 2]

Each subject is in a different testing fold, and the same subject is never in both
testing and training. Notice that the folds do not have exactly the same size due to the imbalance in the data. If class proportions must be balanced across folds, StratifiedGroupKFold is a better option.

Here is a visualization of the cross-validation behavior. Similar to KFold, the test sets from GroupKFold will form a complete partition of all the data.

3.1.2.3.2. StratifiedGroupKFold

StratifiedGroupKFold is a cross-validation scheme that combines both StratifiedKFold and GroupKFold. The idea is to try to preserve the distribution of classes in each split while keeping each group within a single split. That might be useful when you have an unbalanced dataset so that using just GroupKFold might produce skewed splits.
Example:

>>> from sklearn.model_selection import StratifiedGroupKFold
>>> X = list(range(18))
>>> y = [1] * 6 + [0] * 12
>>> groups = [1, 2, 3, 3, 4, 4, 1, 1, 2, 2, 3, 4, 5, 5, 5, 6, 6, 6]
>>> sgkf = StratifiedGroupKFold(n_splits=3)
>>> for train, test in sgkf.split(X, y, groups=groups):
...     print("%s %s" % (train, test))
[ 0 2 3 4 5 6 7 10 11 15 16 17] [ 1 8 9 12 13 14]
[ 0 1 4 5 6 7 8 9 11 12 13 14] [ 2 3 10 15 16 17]
[ 1 2 3 8 9 10 12 13 14 15 16 17] [ 0 4 5 6 7 11]

Implementation notes:

- With the current implementation, a full shuffle is not possible in most scenarios. When shuffle=True, whole groups are shuffled, but only groups with the same class distribution are effectively swapped, which can be useful when each group contains only a single class.
- The algorithm greedily assigns each group to one of the n_splits test sets, choosing the test set that minimises the variance in class distribution across test sets; groups with a higher variance of class frequencies are assigned first.
- The split is suboptimal in the sense that it might produce imbalanced splits even when perfect stratification is possible. If the class distribution is relatively close in each group, using GroupKFold is better.
Here is a visualization of cross-validation behavior for uneven groups:

3.1.2.3.3. Leave One Group Out

LeaveOneGroupOut is a cross-validation scheme which holds out the samples according to a third-party provided array of integer groups. This group information can be used to encode arbitrary domain specific pre-defined cross-validation folds.
Each training set is
thus constituted by all the samples except the ones related to a specific group. This is basically the same as LeavePGroupsOut with n_groups=1 and the same as GroupKFold with n_splits equal to the number of unique labels passed to the groups parameter.

For example, in the cases of multiple experiments, LeaveOneGroupOut can be used to create a cross-validation based on the different experiments: we create a training set using the samples of all the experiments except one:
>>> from sklearn.model_selection import LeaveOneGroupOut

>>> X = [1, 5, 10, 50, 60, 70, 80]
>>> y = [0, 1, 1, 2, 2, 2, 2]
>>> groups = [1, 1, 2, 2, 3, 3, 3]
>>> logo = LeaveOneGroupOut()
>>> for train, test in logo.split(X, y, groups=groups):
...     print("%s %s" % (train, test))
[2 3 4 5 6] [0 1]
[0 1 4 5 6] [2 3]
[0 1 2 3] [4 5 6]

Another common application is to use time information: for instance the groups could be the year of collection of the samples and thus allow for cross-validation against time-based splits.

3.1.2.3.4. Leave P Groups Out

LeavePGroupsOut is similar to LeaveOneGroupOut, but removes samples related to \(P\) groups for each training/test set. All possible combinations of \(P\) groups are left out, meaning test sets will overlap for \(P > 1\).
Example of Leave-2-Group Out:

>>> from sklearn.model_selection import LeavePGroupsOut

>>> X = np.arange(6)
>>> y = [1, 1, 1, 2, 2, 2]
>>> groups = [1, 1, 2, 2, 3, 3]
>>> lpgo = LeavePGroupsOut(n_groups=2)
>>> for train, test in lpgo.split(X, y, groups=groups):
...     print("%s %s" % (train, test))
[4 5] [0 1 2 3]
[2 3] [0 1 4 5]
[0 1] [2 3 4 5]

3.1.2.3.5. Group Shuffle Split

The GroupShuffleSplit iterator behaves as a combination of ShuffleSplit and LeavePGroupsOut, and generates a sequence of randomized partitions in which a subset of groups is held out for each split.

Here is a usage example:

>>> from sklearn.model_selection import GroupShuffleSplit

>>> X = [0.1, 0.2, 2.2, 2.4, 2.3, 4.55, 5.8, 0.001]
>>> y = ["a", "b", "b", "b", "c", "c", "c", "a"]
>>> groups = [1, 1, 2, 2, 3, 3, 4, 4]
>>> gss = GroupShuffleSplit(n_splits=4, test_size=0.5, random_state=0)
>>> for train, test in gss.split(X, y, groups=groups):
...     print("%s %s" % (train, test))
...
[0 1 2 3] [4 5 6 7]
[2 3 6 7] [0 1 4 5]
[2 3 4 5] [0 1 6 7]
[4 5 6 7] [0 1 2 3]

Here is a visualization of the cross-validation behavior. This class is useful when the behavior of LeavePGroupsOut is desired, but the number of groups is large enough that generating all possible partitions with \(P\) groups withheld would be prohibitively expensive. In such a scenario, GroupShuffleSplit provides a random sample (with replacement) of the train / test splits generated by LeavePGroupsOut.
3.1.2.4. Predefined Fold-Splits / Validation-Sets

For some datasets, a pre-defined split of the data into training- and validation fold or into several cross-validation folds already exists. Using PredefinedSplit it is possible to use these folds e.g. when searching for hyperparameters. For example, when using a validation set, set the test_fold to 0 for all samples that are part of the validation set, and to -1 for all other samples.
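As a minimal sketch (the toy arrays here are illustrative), samples marked -1 in test_fold are always kept in training, while the other values define the pre-defined test folds:

>>> import numpy as np
>>> from sklearn.model_selection import PredefinedSplit
>>> X = np.array([[1, 2], [3, 4], [1, 2], [3, 4]])
>>> y = np.array([0, 0, 1, 1])
>>> test_fold = [0, 1, -1, 1]
>>> ps = PredefinedSplit(test_fold)
>>> ps.get_n_splits()
2
>>> for train_index, test_index in ps.split():
...     print("%s %s" % (train_index, test_index))
[1 2 3] [0]
[0 2] [1 3]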
3.1.2.5. Using cross-validation iterators to split train and test

The above group cross-validation functions may also be useful for splitting a dataset into training and testing subsets. Note that the convenience function train_test_split is a wrapper around ShuffleSplit and thus only allows for stratified splitting (using the class labels) and cannot account for groups.
To perform the train and test split, use the indices for the train and test subsets yielded by the generator output by the split() method of the cross-validation splitter:

>>> import numpy as np
>>> from sklearn.model_selection import GroupShuffleSplit

>>> X = np.array([0.1, 0.2, 2.2, 2.4, 2.3, 4.55, 5.8, 0.001])
>>> y = np.array(["a", "b", "b", "b", "c", "c", "c", "a"])
>>> groups = np.array([1, 1, 2, 2, 3, 3, 4, 4])
>>> train_indx, test_indx = next(
...     GroupShuffleSplit(random_state=7).split(X, y, groups)
... )
>>> X_train, X_test, y_train, y_test = \
...     X[train_indx], X[test_indx], y[train_indx], y[test_indx]
>>> X_train.shape, X_test.shape
((6,), (2,))
>>> np.unique(groups[train_indx]), np.unique(groups[test_indx])
(array([1, 2, 4]), array([3]))

3.1.2.6. Cross validation of time series data

Time series data is characterized by the correlation between observations that are near in time (autocorrelation). However, classical cross-validation techniques such as KFold and ShuffleSplit assume the samples are independent and identically distributed, and would result in unreasonable correlation between training and testing instances (yielding poor estimates of generalization error) on time series data. Therefore, it is very important to evaluate our model for time series data on the "future" observations least like those that are used to train the model. To achieve this, one solution is provided by TimeSeriesSplit.
3.1.2.6.1. Time Series Split

TimeSeriesSplit is a variation of k-fold which returns the first \(k\) folds as train set and the \((k+1)\)-th fold as test set. Note that unlike standard cross-validation methods, successive training sets are supersets of those that come before them. Also, it adds all surplus data to the first training partition, which is always used to train the model.
This class can be used to cross-validate time series data samples that are observed at fixed time intervals.

Example of 3-split time series cross-validation on a dataset with 6 samples:

>>> from sklearn.model_selection import TimeSeriesSplit

>>> X = np.array([[1, 2], [3, 4], [1, 2], [3, 4], [1, 2], [3, 4]])
>>> y = np.array([1, 2, 3, 4, 5, 6])
>>> tscv = TimeSeriesSplit(n_splits=3)
>>> print(tscv)
TimeSeriesSplit(gap=0, max_train_size=None, n_splits=3, test_size=None)
>>> for train, test in tscv.split(X):
...     print("%s %s" % (train, test))
[0 1 2] [3]
[0 1 2 3] [4]
[0 1 2 3 4] [5]

Here is a visualization of the cross-validation behavior.

3.1.3. A note on shuffling

If the data ordering is not arbitrary (e.g. samples with the same class label are contiguous), shuffling it first may be essential to get a meaningful cross-validation result. However, the opposite may be true if the samples are not independently and identically distributed. For example, if samples correspond to news articles, and are ordered by their time of publication, then shuffling the data will likely lead to a model that is overfit and an inflated validation score: it will be tested on samples that are artificially similar (close in time) to training samples.

Some cross validation iterators, such as KFold, have a built-in option to shuffle the data indices before splitting them. Note that:

- This consumes less memory than shuffling the data directly.
- By default no shuffling occurs, including for the (stratified) K fold cross-validation performed by specifying cv=some_integer to cross_val_score, grid search, etc. Keep in mind that train_test_split still returns a random split.
- The random_state parameter defaults to None, meaning that the shuffling will be different every time KFold(..., shuffle=True) is iterated. However, GridSearchCV will use the same shuffling for each set of parameters validated by a single call to its fit method.
- To get identical results for each split, set random_state to an integer.
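For instance, here is a small sketch (the array and seed are illustrative) showing that fixing random_state makes the shuffled splits reproducible across calls:

>>> import numpy as np
>>> from sklearn.model_selection import KFold
>>> X = np.arange(10)
>>> splits_a = list(KFold(n_splits=5, shuffle=True, random_state=0).split(X))
>>> splits_b = list(KFold(n_splits=5, shuffle=True, random_state=0).split(X))
>>> all(np.array_equal(a[0], b[0]) for a, b in zip(splits_a, splits_b))
True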
For more details on how to control the randomness of cv splitters and avoid common pitfalls, see Controlling randomness.

3.1.4. Cross validation and model selection

Cross validation iterators can also be used to directly perform model selection using Grid Search for the optimal hyperparameters of the model. This is the topic of the next section: Tuning the hyper-parameters of an estimator.

3.1.5. Permutation test score

permutation_test_score offers another way to evaluate the performance of classifiers. It provides a permutation-based p-value, which represents how likely an observed performance of the classifier would be obtained by chance. The null hypothesis in this test is that the classifier fails to leverage any statistical dependency between the features and the labels to make correct predictions on left-out data. permutation_test_score generates a null distribution by calculating n_permutations different permutations of the data: in each permutation the labels are randomly shuffled, thereby removing any dependency between the features and the labels. The p-value output is the fraction of permutations for which the average cross-validation score obtained by the model is better than the cross-validation score obtained by the model using the original data.
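A minimal usage sketch on the iris data (the estimator, cv and n_permutations values here are illustrative):

>>> from sklearn import datasets, svm
>>> from sklearn.model_selection import permutation_test_score
>>> X, y = datasets.load_iris(return_X_y=True)
>>> clf = svm.SVC(kernel='linear', C=1)
>>> score, perm_scores, pvalue = permutation_test_score(
...     clf, X, y, scoring="accuracy", cv=5, n_permutations=100)
>>> score > perm_scores.mean()  # original labels score far above the null distribution
True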
A low p-value provides evidence that the dataset contains real dependency between features and labels and the classifier was able to utilize this to obtain good results. A high p-value could be due to a lack of dependency between features and labels (there is no difference in feature values between the classes) or because the classifier was not able to use the dependency in the data. In the latter case, using a more appropriate classifier that is able to utilize the structure in the data would result in a lower p-value.

Cross-validation provides information about how well a
classifier generalizes, specifically the range of expected errors of the classifier. However, a classifier trained on a high dimensional dataset with no structure may still perform better than expected on cross-validation, just by chance. This can typically happen with small datasets with less than a few hundred samples.
It is important to note that this test has been shown to produce low p-values even if there is only weak structure in the data because in the corresponding permuted datasets there is absolutely no structure. This test is therefore only able to show when the model reliably outperforms random guessing.

Finally, permutation_test_score is computed using brute force and internally fits (n_permutations + 1) * n_cv models. It is therefore only tractable with small datasets for which fitting an individual model is very fast.