Question 1
Which of the following best describes saliency maps?
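As background for this question: a saliency map scores each input feature by how strongly the model's output responds to small changes in that feature, typically via the gradient magnitude. A minimal self-contained sketch, using a hypothetical toy linear `score` function and finite differences in place of autodiff:

```python
import numpy as np

def score(x):
    # Toy stand-in for a model: a fixed linear score, so each
    # feature's saliency should equal the magnitude of its weight.
    w = np.array([0.5, -2.0, 0.1])
    return float(w @ x)

def saliency(f, x, eps=1e-5):
    # Central finite-difference gradient magnitude per feature:
    # how much the score moves when each feature is nudged.
    grad = np.zeros_like(x)
    for i in range(len(x)):
        up, down = x.copy(), x.copy()
        up[i] += eps
        down[i] -= eps
        grad[i] = (f(up) - f(down)) / (2 * eps)
    return np.abs(grad)

x = np.array([1.0, 1.0, 1.0])
print(saliency(score, x))  # -> [0.5, 2.0, 0.1]: feature 1 dominates
```

For the linear toy model the finite-difference saliency recovers the weight magnitudes exactly; with a real network the same idea is computed via backpropagated gradients with respect to the input.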
Question 2
What is the main advantage of using LIME for model interpretability?
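As background for this question: LIME's core idea is to perturb the instance being explained, query the black-box model on the perturbations, and fit a proximity-weighted linear surrogate whose coefficients serve as local feature importances. A minimal numpy-only sketch of that loop (the `black_box` function, kernel width, and sample count are illustrative assumptions, not the `lime` library's API):

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Toy nonlinear model; the explainer treats it as opaque.
    return np.sin(X[:, 0]) + X[:, 1] ** 2

def lime_like(f, x0, n_samples=500, width=0.5):
    # 1. Perturb the instance with Gaussian noise.
    X = x0 + rng.normal(scale=width, size=(n_samples, len(x0)))
    y = f(X)
    # 2. Weight each sample by proximity to x0 (RBF kernel).
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * width ** 2))
    # 3. Fit a weighted linear surrogate; its coefficients are
    #    the local feature importances.
    Xc = np.column_stack([np.ones(n_samples), X - x0])
    Xw = Xc * np.sqrt(w)[:, None]
    yw = y * np.sqrt(w)
    coef, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    return coef[1:]  # drop the intercept

x0 = np.array([0.0, 1.0])
print(lime_like(black_box, x0))  # roughly the local gradient [1, 2]
```

At `x0 = (0, 1)` the true local gradient is `[cos(0), 2*1] = [1, 2]`, and the surrogate's coefficients land close to it; none of this required any access to the model's internals, which is the model-agnostic advantage the question asks about.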
Question 3
In the context of SHAP, what does the term 'Shapley value' refer to?
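As background for this question: a Shapley value is a player's average marginal contribution over all orders in which a coalition could assemble; SHAP applies this with features as players and model output as the payoff. A minimal exact computation on a hypothetical three-player toy game (brute-force over permutations, so only feasible for small n):

```python
import math
from itertools import permutations

def shapley_values(players, v):
    # Exact Shapley values: average each player's marginal
    # contribution over every arrival order of the players.
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            phi[p] += v(frozenset(coalition)) - before
    n_fact = math.factorial(len(players))
    return {p: total / n_fact for p, total in phi.items()}

# Toy game: the coalition is worth 10 only if both 'a' and 'b' join.
v = lambda s: 10.0 if {'a', 'b'} <= s else 0.0
print(shapley_values(['a', 'b', 'c'], v))  # {'a': 5.0, 'b': 5.0, 'c': 0.0}
```

The symmetry of 'a' and 'b' (each gets 5.0) and the zero for the non-contributing 'c' illustrate the fairness axioms that make Shapley values attractive for attribution; practical SHAP implementations approximate this average rather than enumerating all n! orderings.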
Question 4
Why is it important to use model-agnostic techniques like SHAP and LIME?
Question 5
What is the primary goal of using attribution methods in model interpretability?