True Negative
In machine learning and statistics, True Negative (TN) refers to an outcome where the model correctly predicts the negative class for an instance that is actually negative. True Negatives are crucial in evaluating a model’s performance, especially when correctly ruling out negative cases and avoiding false alarms is essential.
Definition of True Negative
A True Negative occurs when the model predicts a negative outcome (e.g., “No,” “Negative,” or “0”) for a case that is truly negative. For example, in a fraud detection system, if a transaction is not fraudulent and the model correctly predicts it as non-fraudulent, this is a True Negative.
Position of True Negative in a Confusion Matrix
The confusion matrix is a 2×2 table used to assess the performance of classification models. True Negatives are positioned as follows:
|                 | Predicted Positive  | Predicted Negative  |
|-----------------|---------------------|---------------------|
| Actual Positive | True Positive (TP)  | False Negative (FN) |
| Actual Negative | False Positive (FP) | True Negative (TN)  |
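To make this layout concrete, the short Python sketch below counts True Negatives by hand on a toy set of labels and cross-checks the result with scikit-learn (assuming that library is available; the labels themselves are made up for illustration):

```python
# A minimal sketch with made-up labels: counting True Negatives by hand
# and cross-checking with scikit-learn (assumed installed).
from sklearn.metrics import confusion_matrix

# 1 = positive class, 0 = negative class (toy data for illustration)
y_true = [0, 0, 1, 0, 1, 0, 1, 0, 0, 1]
y_pred = [0, 1, 1, 0, 0, 0, 1, 0, 0, 1]

# Manual count: actually negative AND predicted negative
tn_manual = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

# Note: scikit-learn orders rows/columns by sorted label (0 before 1), so
# TN sits in the top-left cell rather than the bottom-right as in the table
# above; for binary labels, ravel() yields tn, fp, fn, tp.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

print(tn_manual, tn)  # both report 5 True Negatives for this toy data
```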
Detailed Examples with Steps to Calculate True Negative
Below are ten real-world examples that explain the concept of True Negatives. Each example includes a scenario, details, and the calculation of True Negatives step by step.
Example 1: Medical Diagnosis
Scenario: A model predicts whether a patient has a disease.
- Total patients: 200
- Actual patients without the disease: 150
- Predicted as not having the disease: 140
- Correct predictions (True Negatives): 140
Steps:
- Identify the actual negative cases (patients without the disease).
- Count how many of these were correctly predicted as negative.
- Here, TN = 140.
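The same bookkeeping can be written as a short calculation. The sketch below simply restates the counts from this example; the 10 disease-free patients who were not predicted as negative are False Positives.

```python
# Example 1 restated as arithmetic (counts taken from the scenario above)
total_patients = 200
actual_negative = 150             # patients without the disease
predicted_negative_correct = 140  # of those, correctly predicted as negative

tn = predicted_negative_correct
fp = actual_negative - tn         # disease-free patients incorrectly flagged

print(f"TN = {tn}, FP = {fp}")    # TN = 140, FP = 10
```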
Example 2: Spam Email Detection
Scenario: A model predicts whether an email is spam.
- Total emails: 100
- Actual non-spam emails: 70
- Predicted as non-spam: 68
- Correct predictions (True Negatives): 68
Steps:
- Count the number of emails that are actually non-spam.
- Count how many of these were correctly predicted as non-spam.
- Here, TN = 68.
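For label data stored in a table, the same count can be read off a cross-tabulation. Below is a minimal sketch using pandas (assumed installed) with made-up email labels:

```python
# A sketch using pandas (assumed installed) to tabulate spam predictions.
# The DataFrame here is synthetic; in practice it would hold real labels.
import pandas as pd

df = pd.DataFrame({
    "actual":    ["spam", "not_spam", "not_spam", "spam", "not_spam"],
    "predicted": ["spam", "not_spam", "spam",     "spam", "not_spam"],
})

# Cross-tabulate actual vs. predicted labels; the ("not_spam", "not_spam")
# cell is the True Negative count when "spam" is the positive class.
table = pd.crosstab(df["actual"], df["predicted"])
tn = table.loc["not_spam", "not_spam"]

print(table)
print("TN =", tn)  # 2 in this toy example
```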
Example 3: Fraud Detection
Scenario: A model predicts whether a transaction is fraudulent.
- Total transactions: 1,000
- Actual non-fraudulent transactions: 900
- Predicted as non-fraudulent: 890
- Correct predictions (True Negatives): 890
Steps:
- Identify the actual non-fraudulent transactions.
- Count how many were correctly predicted as non-fraudulent.
- Here, TN = 890.
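Because the overwhelming majority of transactions are legitimate, the raw TN count is often paired with the true negative rate (specificity), TN / (TN + FP). A quick sketch using the counts above:

```python
# Specificity (true negative rate) for the fraud example above.
actual_non_fraud = 900      # all actually legitimate transactions
tn = 890                    # legitimate transactions predicted as legitimate
fp = actual_non_fraud - tn  # legitimate transactions flagged as fraud

specificity = tn / (tn + fp)
print(f"Specificity = {specificity:.3f}")  # 890 / 900 ≈ 0.989
```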
Example 4: Cancer Detection
Scenario: A model predicts whether a patient has cancer.
- Total patients: 500
- Actual patients without cancer: 420
- Predicted as not having cancer: 415
- Correct predictions (True Negatives): 415
Steps:
- Identify the patients who do not have cancer.
- Count how many were correctly predicted as not having cancer.
- Here, TN = 415.
Example 5: Loan Default Prediction
Scenario: A model predicts whether a customer will default on a loan.
- Total customers: 800
- Actual non-defaulters: 650
- Predicted as non-defaulters: 640
- Correct predictions (True Negatives): 640
Steps:
- Identify the actual non-defaulters.
- Count how many were correctly predicted as non-defaulters.
- Here, TN = 640.
Example 6: Product Defect Detection
Scenario: A model predicts whether a product is defective.
- Total products: 500
- Actual non-defective products: 450
- Predicted as non-defective: 445
- Correct predictions (True Negatives): 445
Steps:
- Identify the actual non-defective products.
- Count how many were correctly predicted as non-defective.
- Here, TN = 445.
Example 7: Sentiment Analysis
Scenario: A model predicts whether a product review is positive, so reviews that are not positive (negative reviews) form the negative class.
- Total reviews: 1,000
- Actual negative reviews: 300
- Predicted as negative: 290
- Correct predictions (True Negatives): 290
Steps:
- Identify the reviews that are actually negative (i.e., not positive).
- Count how many of these were correctly predicted as negative.
- Here, TN = 290.
Example 8: Disease Screening
Scenario: A model predicts whether a person has a disease based on screening tests.
- Total participants: 300
- Actual participants without the disease: 250
- Predicted as not having the disease: 245
- Correct predictions (True Negatives): 245
Steps:
- Count the participants who actually do not have the disease.
- Count how many were correctly predicted as not having the disease.
- Here, TN = 245.
Example 9: Face Recognition
Scenario: A model predicts whether a face belongs to a specific person.
- Total faces: 400
- Actual non-matching faces: 300
- Predicted as non-matching: 295
- Correct predictions (True Negatives): 295
Steps:
- Identify the actual non-matching faces.
- Count how many were correctly predicted as non-matching.
- Here, TN = 295.
Example 10: Object Detection
Scenario: A model predicts whether an object in an image is a dog.
- Total objects: 1,000
- Actual non-dog objects: 800
- Predicted as non-dog: 790
- Correct predictions (True Negatives): 790
Steps:
- Count the objects that are not dogs.
- Count how many of these were correctly predicted as non-dog objects.
- Here, TN = 790.
Conclusion
True Negatives are an essential component of classification model evaluation. They indicate how effectively a model avoids false alarms by correctly identifying negative cases. Keeping the True Negative count high means fewer false positives, which improves specificity (the true negative rate), precision, and overall accuracy in real-world applications.
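As a quick reference, the sketch below shows how the True Negative count enters these metrics; the counts are illustrative rather than taken from any example above:

```python
# How TN feeds into common metrics (illustrative counts, not from a real model)
tp, fp, fn, tn = 40, 10, 5, 45

accuracy    = (tp + tn) / (tp + tn + fp + fn)   # all correct predictions
specificity = tn / (tn + fp)                    # true negative rate
precision   = tp / (tp + fp)                    # not a function of TN directly,
                                                # but rises as FP turn into TN

print(f"accuracy={accuracy:.2f}, specificity={specificity:.2f}, precision={precision:.2f}")
```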