Once a predictive model has been built, the most important question is: how good is it? Does it predict well? Two key metrics, precision and recall, help answer that question, and in this article we will look at both of them briefly.

Evaluating the model is one of the most important tasks in a data science project: it tells us how good the predictions are. For classification problems we very often look at metrics called precision and recall. To define them in detail, let's first quickly introduce the confusion matrix.

**A confusion matrix** is a table used to measure the performance of a machine learning classification model (typically for supervised learning; in the case of unsupervised learning it is usually called a matching matrix) where the output can be two or more classes. Each row of the **confusion matrix** represents the instances in a predicted class, while each column represents the instances in an actual class, or vice versa.
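
As a quick sketch of this layout, the snippet below builds a confusion matrix with scikit-learn from a small set of toy labels (the labels themselves are made up for illustration). Note that scikit-learn follows one of the two conventions mentioned above: rows are actual classes and columns are predicted classes.

```
from sklearn.metrics import confusion_matrix

# Toy labels, just to illustrate the layout
y_true = ["Yes", "No", "Yes", "Yes", "No", "Yes"]
y_pred = ["Yes", "No", "No",  "Yes", "Yes", "Yes"]

# scikit-learn's convention: rows = actual classes, columns = predicted classes.
# Passing labels=["No", "Yes"] fixes the row/column order explicitly.
cm = confusion_matrix(y_true, y_pred, labels=["No", "Yes"])
print(cm)
```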

**A confusion matrix** is also known as an **error matrix**.


In this article, we will be dealing with the various parameters of the confusion matrix and the information that we can extract from it. The structure of the confusion matrix is as shown in the figure below.

Now let’s understand what are **TP, FP, FN, TN**.

Here we have two classes, *Yes* and *No*.

- **TP – True positive**: you predicted the *Yes* class and its actual class is also *Yes*.
- **TN – True negative**: you predicted the *No* class and its actual class is also *No*.
- **FP – False positive**: you predicted the *Yes* class but it actually belongs to the *No* class. It is also called a *type I error*.
- **FN – False negative**: you predicted the *No* class but it actually belongs to the *Yes* class. It is also called a *type II error*.
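
These four definitions can be counted directly, which the short sketch below does for a handful of made-up binary labels (positive class = *Yes*):

```
# Counting TP, TN, FP, FN by hand for a binary problem; toy labels for illustration
y_true = ["Yes", "No", "Yes", "No", "Yes"]
y_pred = ["Yes", "Yes", "No", "No", "Yes"]

tp = sum(t == "Yes" and p == "Yes" for t, p in zip(y_true, y_pred))
tn = sum(t == "No"  and p == "No"  for t, p in zip(y_true, y_pred))
fp = sum(t == "No"  and p == "Yes" for t, p in zip(y_true, y_pred))  # type I error
fn = sum(t == "Yes" and p == "No"  for t, p in zip(y_true, y_pred))  # type II error
print(tp, tn, fp, fn)  # 2 1 1 1
```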

So, what are the classification performance metrics that we can calculate from the confusion matrix? Let’s see.

By observing the confusion matrix we can calculate **Accuracy**, **Precision**, **Recall**, and the **F1 score**.

Information we obtain from the above confusion matrix:

- There are altogether 165 data points (i.e. observations or objects) and they are classified into two classes, *Yes* and *No*.
- Our classification model predicted *Yes* 110 times and *No* 55 times, but according to the actual classification there are altogether 105 *Yes* and 60 *No* labels.

The confusion matrix including the above calculations is as given below.

Once you understand the confusion matrix, calculating precision and recall is easy.

**Precision** – is the ratio of correctly predicted positive observations to the total predicted positive observations, or what percent of positive predictions were correct?

Precision = TP / (TP + FP)

**Recall** – also called sensitivity, is the ratio of correctly predicted positive observations to all observations in the actual class *Yes*; in other words, what percent of the positive cases did you catch?

Recall = TP / (TP + FN)
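
The totals in the worked example (110 predicted *Yes*, 105 actual *Yes*, 165 points) do not pin down the four cells uniquely, so the sketch below assumes one consistent assignment, TP = 100, FP = 10, FN = 5, TN = 50, purely for illustration, and plugs it into the two formulas:

```
# Hypothetical cell counts, chosen to be consistent with the example totals
# (110 predicted Yes, 105 actual Yes, 165 data points in all)
TP, FP, FN, TN = 100, 10, 5, 50

precision = TP / (TP + FP)  # 100 / 110
recall = TP / (TP + FN)     # 100 / 105
print(f"precision = {precision:.3f}")  # 0.909
print(f"recall    = {recall:.3f}")     # 0.952
```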

There are also two more useful metrics that come from the confusion matrix: **Accuracy**, the ratio of correctly predicted observations to the total observations, Accuracy = (TP + TN) / (TP + TN + FP + FN), and the **F1 score**, the harmonic mean of precision and recall, F1 = 2 · (Precision · Recall) / (Precision + Recall). Although it is not as intuitive to understand as accuracy, the F1 score is usually more useful, especially if you have an uneven class distribution.
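
Assuming, again purely for illustration, cell counts of TP = 100, FP = 10, FN = 5, TN = 50 (one assignment consistent with the example totals), both metrics work out as:

```
# Hypothetical cell counts consistent with the worked example's totals
TP, FP, FN, TN = 100, 10, 5, 50

accuracy = (TP + TN) / (TP + TN + FP + FN)          # 150 / 165
precision = TP / (TP + FP)
recall = TP / (TP + FN)
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
print(f"accuracy = {accuracy:.3f}")  # 0.909
print(f"f1       = {f1:.3f}")        # 0.930
```

Note that accuracy and F1 happen to be close here; with a heavily imbalanced class distribution they can diverge sharply, which is when F1 earns its keep.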

Example Python Code to get Precision and Recall:

```
from sklearn.linear_model import LogisticRegression
from sklearn import datasets
from sklearn.model_selection import train_test_split  # cross_validation was removed in newer scikit-learn
from sklearn.metrics import precision_recall_fscore_support as score

# Load the iris dataset and split it into train and test sets
data = datasets.load_iris()
X = data['data']
y = data['target']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Fit a logistic regression model (max_iter raised so the solver converges)
model = LogisticRegression(max_iter=200)
model.fit(X_train, y_train)
preds = model.predict(X_test)

# Per-class precision and recall (iris has three classes, so each is an array of three values)
precision, recall, fscore, support = score(y_test, preds)
print('precision:', precision)
print('recall:', recall)
```
