False Negative

In machine learning and statistical analysis, a False Negative (FN) is an outcome where the model incorrectly predicts the negative class for an instance that is actually positive.

Understanding False Negatives is crucial, especially in applications where missing true positives can lead to severe consequences, such as in medical diagnoses or fraud detection.


Definition of False Negative

A False Negative occurs when the model predicts a negative outcome (e.g., “No,” “Negative,” or “0”) for a case that is actually positive. For instance, in cancer detection, if a model predicts that a patient does not have cancer but the patient actually does, it is considered a False Negative.


Position of False Negative in a Confusion Matrix

For binary classification, the confusion matrix is a 2×2 table that summarizes a model's predictions against the actual labels. False Negatives occupy the cell where the actual class is positive but the predicted class is negative:

                   Predicted Positive     Predicted Negative
Actual Positive    True Positive (TP)     False Negative (FN)
Actual Negative    False Positive (FP)    True Negative (TN)
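The four cells of the matrix can be counted directly from paired actual/predicted labels. A minimal sketch in Python, using made-up binary labels (1 = positive, 0 = negative):

```python
def confusion_counts(actual, predicted):
    """Count TP, FN, FP, TN for binary labels (1 = positive, 0 = negative)."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    return tp, fn, fp, tn

# Hypothetical labels for eight instances
actual    = [1, 1, 1, 0, 0, 1, 0, 0]
predicted = [1, 0, 1, 0, 1, 1, 0, 0]
tp, fn, fp, tn = confusion_counts(actual, predicted)
print(tp, fn, fp, tn)  # 3 1 1 3
```

The second instance is actually positive but predicted negative, so it is the single False Negative in this toy data.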

Detailed Examples with Steps to Calculate False Negative

Below are ten real-world examples that illustrate the concept of False Negatives. Each example provides a scenario, data details, and the calculation of False Negatives step by step.


Example 1: Medical Diagnosis

Scenario: A model predicts whether a patient has a disease.

  • Total patients: 200
  • Actual cases with the disease: 50
  • Predicted as having the disease: 45
  • Missed cases (False Negatives): 5

Steps:

  1. Identify the actual positive cases (patients with the disease).
  2. Count how many of these were incorrectly predicted as negative.
  3. Subtract the correctly identified cases from the actual positives: FN = 50 − 45 = 5 (this assumes all 45 positive predictions are correct).

Example 2: Spam Email Detection

Scenario: A model predicts whether an email is spam.

  • Total emails: 100
  • Actual spam emails: 30
  • Predicted as spam: 25
  • Missed spam emails: 5

Steps:

  1. Count the number of emails that were actually spam but were not predicted as spam.
  2. Here, FN = 5.

Example 3: Fraud Detection

Scenario: A model predicts whether a transaction is fraudulent.

  • Total transactions: 1,000
  • Actual fraudulent transactions: 100
  • Predicted as fraudulent: 90
  • Missed fraudulent transactions: 10

Steps:

  1. Identify the actual fraudulent transactions.
  2. Count how many of these were not predicted as fraudulent.
  3. Here, FN = 10.

Example 4: Cancer Detection

Scenario: A model predicts whether a patient has cancer.

  • Total patients: 500
  • Actual cases with cancer: 80
  • Predicted as having cancer: 75
  • Missed cancer cases: 5

Steps:

  1. Identify the patients who actually have cancer.
  2. Count how many were incorrectly predicted as not having cancer.
  3. Here, FN = 5.

Example 5: Loan Default Prediction

Scenario: A model predicts whether a customer will default on a loan.

  • Total customers: 800
  • Actual defaulters: 120
  • Predicted defaulters: 110
  • Missed defaulters: 10

Steps:

  1. Identify the actual defaulters.
  2. Count how many were not predicted as defaulters.
  3. Here, FN = 10.

Example 6: Product Defect Detection

Scenario: A model predicts whether a product is defective.

  • Total products: 500
  • Actual defective products: 50
  • Predicted defective products: 48
  • Missed defective products: 2

Steps:

  1. Identify the actual defective products.
  2. Count how many were not predicted as defective.
  3. Here, FN = 2.

Example 7: Sentiment Analysis

Scenario: A model predicts whether a product review is positive.

  • Total reviews: 1,000
  • Actual positive reviews: 600
  • Predicted positive reviews: 580
  • Missed positive reviews: 20

Steps:

  1. Identify the actual positive reviews.
  2. Count how many were not predicted as positive.
  3. Here, FN = 20.

Example 8: Disease Screening

Scenario: A model predicts whether a person has a disease based on screening tests.

  • Total participants: 300
  • Actual cases with the disease: 50
  • Predicted as having the disease: 45
  • Missed disease cases: 5

Steps:

  1. Count the participants who actually have the disease.
  2. Identify how many of these were not predicted as having the disease.
  3. Here, FN = 5.

Example 9: Face Recognition

Scenario: A model predicts whether a face belongs to a specific person.

  • Total faces: 400
  • Actual matches: 100
  • Predicted matches: 95
  • Missed matches: 5

Steps:

  1. Identify the actual matches.
  2. Count how many were not predicted as matches.
  3. Here, FN = 5.

Example 10: Object Detection

Scenario: A model predicts whether an object in an image is a cat.

  • Total objects: 1,000
  • Actual cats: 150
  • Predicted cats: 140
  • Missed cats: 10

Steps:

  1. Count the objects that are actually cats.
  2. Identify how many were not predicted as cats.
  3. Here, FN = 10.

Conclusion

False Negatives are a critical component of classification-model evaluation, especially in high-stakes applications such as medical diagnosis and fraud detection. Understanding and minimizing False Negatives improves recall, the fraction of actual positives the model catches, and ensures fewer missed positive cases.
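The link between False Negatives and recall can be made concrete. A minimal sketch using the counts from Example 1 (45 true positives, 5 False Negatives):

```python
def recall(tp, fn):
    """Recall = TP / (TP + FN): the fraction of actual positives the model catches."""
    return tp / (tp + fn)

# Example 1's counts: 45 detected cases, 5 missed cases
print(recall(45, 5))  # 0.9
```

Every False Negative removed (at constant TP + FN) raises recall, which is why FN is the quantity to drive down in applications where missing a positive is costly.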