Question 1
Which of the following metrics is calculated as the ratio of true positives to the sum of true positives and false positives?
Question 2
In the context of model evaluation, what does a 'True Negative' represent in a confusion matrix?
Question 3
What is the primary purpose of a 'confusion matrix' in evaluating classification models?
Question 4
Which of the following is a common scenario where 'Precision' is a more important metric than 'Recall'?
Question 5
When evaluating a model, what does a high 'False Positive Rate' (FPR) indicate?
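The metrics asked about in Questions 1, 4, and 5 can all be derived from the four confusion-matrix counts. A minimal sketch, using hypothetical counts chosen only for illustration:

```python
def precision(tp, fp):
    # Question 1: precision = TP / (TP + FP)
    return tp / (tp + fp)

def recall(tp, fn):
    # Recall (sensitivity) = TP / (TP + FN)
    return tp / (tp + fn)

def false_positive_rate(fp, tn):
    # Question 5: FPR = FP / (FP + TN); a high FPR means many
    # actual negatives are being misclassified as positive.
    return fp / (fp + tn)

# Hypothetical confusion-matrix counts for illustration only.
tp, fp, fn, tn = 40, 10, 5, 45

print(precision(tp, fp))            # 40 / 50  = 0.8
print(recall(tp, fn))               # 40 / 45  ≈ 0.889
print(false_positive_rate(fp, tn))  # 10 / 55  ≈ 0.182
```

Note how precision and FPR use disjoint denominators: precision is computed over everything the model *predicted* positive, while FPR is computed over everything that is *actually* negative — which is why a spam filter (where false positives are costly) optimizes for precision rather than recall.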