How do you find the accuracy of a python model using sklearn?

sklearn.metrics.accuracy_score(y_true, y_pred, *, normalize=True, sample_weight=None)

Accuracy classification score.

In multilabel classification, this function computes subset accuracy: the set of labels predicted for a sample must exactly match the corresponding set of labels in y_true.

Read more in the User Guide.

Parameters:

y_true : 1d array-like, or label indicator array / sparse matrix

Ground truth (correct) labels.

y_pred : 1d array-like, or label indicator array / sparse matrix

Predicted labels, as returned by a classifier.

normalize : bool, default=True

If False, return the number of correctly classified samples. Otherwise, return the fraction of correctly classified samples.

sample_weight : array-like of shape (n_samples,), default=None

Sample weights.

Returns:

score : float

If normalize == True, return the fraction of correctly classified samples (float), else returns the number of correctly classified samples (int).

The best performance is 1 with normalize == True and the number of samples with normalize == False.

See also

balanced_accuracy_score

Compute the balanced accuracy to deal with imbalanced datasets.

jaccard_score

Compute the Jaccard similarity coefficient score.

hamming_loss

Compute the average Hamming loss or Hamming distance between two sets of samples.

zero_one_loss

Compute the zero-one classification loss. By default, the function returns the fraction of imperfectly predicted subsets.

Notes

With the default normalize=True, accuracy_score is the complement of zero_one_loss, i.e. accuracy_score(y_true, y_pred) == 1 - zero_one_loss(y_true, y_pred). Note that accuracy is in general not equal to jaccard_score, which ignores true negatives.
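As a quick sanity check, with the default normalize=True the accuracy is the complement of the zero-one loss, which can be verified directly:

```python
from sklearn.metrics import accuracy_score, zero_one_loss

y_true = [0, 1, 2, 3]
y_pred = [0, 2, 1, 3]

# with normalize=True (the default), accuracy = 1 - zero-one loss
assert accuracy_score(y_true, y_pred) == 1 - zero_one_loss(y_true, y_pred)
```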

Examples

>>> from sklearn.metrics import accuracy_score
>>> y_pred = [0, 2, 1, 3]
>>> y_true = [0, 1, 2, 3]
>>> accuracy_score(y_true, y_pred)
0.5
>>> accuracy_score(y_true, y_pred, normalize=False)
2

In the multilabel case with binary label indicators:

>>> import numpy as np
>>> accuracy_score(np.array([[0, 1], [1, 1]]), np.ones((2, 2)))
0.5


Measuring the performance of your model using the correct metric is a very important step in the data science process. In this tutorial, we’ll look at how to compute the accuracy of your predictions from scratch and with sklearn in Python.

What is accuracy?

Accuracy is one of the most common metrics used to judge the performance of classification models. Accuracy tells us the fraction of labels correctly classified by our model. For example, if our model correctly classified 70 out of 100 labels, we say that the model has an accuracy of 0.70.


Accuracy score in Python from scratch

Let’s write a function in Python from scratch that computes the accuracy, given the true labels and the predicted labels.

def compute_accuracy(y_true, y_pred):
    correct_predictions = 0
    # iterate over each label and check
    for true, predicted in zip(y_true, y_pred):
        if true == predicted:
            correct_predictions += 1
    # compute the accuracy
    accuracy = correct_predictions/len(y_true)
    return accuracy

The above function takes the true labels and the predicted labels as arguments and returns the accuracy score. We count the number of correct predictions by iterating over each true and predicted label pair in parallel, then divide the count of correct predictions by the total number of labels.

Let’s try the above function on an example.

# sample labels
y_true = [1, 0, 0, 1, 1]
y_pred = [1, 1, 1, 1, 1]
# get the accuracy
compute_accuracy(y_true, y_pred)

Output:

0.6

We get 0.6 as the accuracy because three out of five predictions are correct.

Note that the above function can be sped up by vectorizing the equality computation with NumPy arrays.
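For instance, a vectorized version might look like the following (the name compute_accuracy_np is chosen here purely for illustration):

```python
import numpy as np

def compute_accuracy_np(y_true, y_pred):
    # element-wise equality gives a boolean array; its mean is the accuracy
    return np.mean(np.asarray(y_true) == np.asarray(y_pred))

compute_accuracy_np([1, 0, 0, 1, 1], [1, 1, 1, 1, 1])  # 0.6
```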

Accuracy using Sklearn’s accuracy_score()

You can also get the accuracy score in Python using the accuracy_score() function from sklearn.metrics, which takes the true labels and the predicted labels as arguments and returns the accuracy as a float. sklearn.metrics comes with a number of useful functions for computing common evaluation metrics. For example, let’s compute the accuracy score on the same set of values as above, this time with sklearn’s accuracy_score() function.

from sklearn.metrics import accuracy_score
accuracy_score(y_true, y_pred)

Output:

0.6

You can see that we get an accuracy of 0.6, the same as what we got above using the scratch function. It is recommended that you use sklearn’s function, as it is not only optimized for performance but also comes with additional parameters that can be helpful.
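As a sketch of those extra parameters: normalize=False returns a raw count of correct predictions instead of a fraction, and sample_weight lets individual samples contribute more or less to the score.

```python
from sklearn.metrics import accuracy_score

y_true = [1, 0, 0, 1, 1]
y_pred = [1, 1, 1, 1, 1]

# raw count of correct predictions instead of a fraction
accuracy_score(y_true, y_pred, normalize=False)  # 3

# weight the first sample 3x: weighted accuracy = (3 + 1 + 1) / (3 + 1 + 1 + 1 + 1) = 5/7
accuracy_score(y_true, y_pred, sample_weight=[3, 1, 1, 1, 1])
```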

For more on the sklearn’s accuracy_score() function, refer to its documentation.

With this, we come to the end of this tutorial. The code examples and results presented in this tutorial were implemented in a Jupyter Notebook with a Python 3.8.3 kernel, using scikit-learn version 0.23.1.



  • Piyush is a data scientist passionate about using data to understand things better and make informed decisions. In the past, he's worked as a Data Scientist for ZS and holds an engineering degree from IIT Roorkee. His hobbies include watching cricket, reading, and working on side projects.


How does python calculate accuracy in Sklearn?

Here we can use scikit-learn's accuracy_score for calculating the accuracy. y_pred = [0, 5, 2, 4] is the predicted value, y_true = [0, 1, 2, 3] is the given true value, and accuracy_score(y_true, y_pred) computes the accuracy of the predicted value against the true value.
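In code, that example looks like:

```python
from sklearn.metrics import accuracy_score

y_true = [0, 1, 2, 3]
y_pred = [0, 5, 2, 4]

# positions 0 and 2 match, so 2 of 4 predictions are correct
accuracy_score(y_true, y_pred)  # 0.5
```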

How do you check accuracy in python?

Train/Test is a method to measure the accuracy of your model. It is called Train/Test because you split the data set into two sets: a training set and a testing set, for example 80% for training and 20% for testing. You train the model using the training set and measure accuracy on the testing set.
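A minimal sketch of that workflow, using the built-in iris dataset and a logistic regression model purely for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# hold out 20% of the data for testing
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# fit on the training set, score on the held-out test set
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
```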

How do you find the accuracy of a classification model in python?

In machine learning, accuracy is one of the most important performance evaluation metrics for a classification model. The mathematical formula for calculating the accuracy of a machine learning model is 1 – (Number of misclassified samples / Total number of samples).
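A quick check of that formula on a toy set of labels:

```python
y_true = [0, 1, 2, 3, 4]
y_pred = [0, 1, 2, 0, 0]

# two of the five predictions are wrong
n_misclassified = sum(t != p for t, p in zip(y_true, y_pred))
accuracy = 1 - n_misclassified / len(y_true)  # 1 - 2/5 = 0.6
```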

How do you find the accuracy of a model?

We calculate accuracy by dividing the number of correct predictions by the total number of samples. For a multiclass problem, the correct predictions are the entries on the diagonal of the confusion matrix, so accuracy is the sum of the diagonal divided by the sum of all entries.
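A sketch of deriving accuracy from the confusion-matrix diagonal, checked against accuracy_score:

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]

cm = confusion_matrix(y_true, y_pred)
# the diagonal counts correctly classified samples per class
acc = np.trace(cm) / cm.sum()
assert np.isclose(acc, accuracy_score(y_true, y_pred))
```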