ML0011 Precision and Recall

What are Precision and Recall?

Answer

Precision and recall are two fundamental metrics used to evaluate the performance of classification models, especially when dealing with imbalanced datasets or when the cost of different types of errors varies.

Precision
Precision (also known as positive predictive value) is the ratio of correctly predicted positive observations to the total predicted positives. In other words, it tells you, “When the model predicts a positive, how often is it right?” Mathematically, it’s defined as:

\[
\text{Precision} = \frac{\text{True Positives (TP)}}{\text{True Positives (TP)} + \text{False Positives (FP)}}
\]

For example, if a spam detector labels 100 emails as spam and 99 of them are actually spam, its precision is 99%.
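The spam example above can be sketched in a few lines of plain Python (a minimal illustration, not from the original article; the function name `precision` is our own):

```python
def precision(tp: int, fp: int) -> float:
    """Precision = TP / (TP + FP): of everything predicted positive, the fraction that truly is."""
    return tp / (tp + fp)

# Spam example from the text: 100 emails flagged as spam, 99 of them actually spam.
print(precision(tp=99, fp=1))  # 0.99
```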

Recall
Recall (also known as sensitivity or true positive rate) is the ratio of correctly predicted positive observations to all observations that are actually positive. It answers the question, “Out of all the actual positives, how many did the model capture?” Mathematically, it’s defined as:

\[
\text{Recall} = \frac{\text{True Positives (TP)}}{\text{True Positives (TP)} + \text{False Negatives (FN)}}
\]

For example, if there are 100 spam emails in total and the model correctly identifies 90 of them, its recall is 90%.
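Likewise, the recall example maps directly to the formula (again a minimal sketch; `recall` is our own helper name):

```python
def recall(tp: int, fn: int) -> float:
    """Recall = TP / (TP + FN): of all actual positives, the fraction the model captured."""
    return tp / (tp + fn)

# Spam example from the text: 100 spam emails exist, 90 correctly identified.
print(recall(tp=90, fn=10))  # 0.9
```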

These four terms come from the confusion matrix:

True Positives (TP): The model correctly predicts the positive class.
False Positives (FP): The model incorrectly predicts the positive class (it predicted positive, but it was actually negative).
True Negatives (TN): The model correctly predicts the negative class.
False Negatives (FN): The model incorrectly predicts the negative class (it predicted negative, but it was actually positive).
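Putting the pieces together, the four counts can be tallied directly from paired true and predicted labels, and both metrics fall out of them. This is a self-contained sketch in plain Python with made-up labels (`confusion_counts` and the example data are ours, not from the article):

```python
def confusion_counts(y_true, y_pred, positive=1):
    """Tally TP, FP, TN, FN for one positive class from paired label lists."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp, fp, tn, fn

# Hypothetical labels: 1 = spam, 0 = not spam.
y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tp, fp, tn, fn = confusion_counts(y_true, y_pred)
precision = tp / (tp + fp)  # 3 / (3 + 1) = 0.75
recall = tp / (tp + fn)     # 3 / (3 + 1) = 0.75
print(tp, fp, tn, fn, precision, recall)
```

In practice you would typically call a library routine such as scikit-learn's `precision_score` and `recall_score` rather than hand-rolling the tally, but the arithmetic is exactly this.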

