Understanding AI Decisions: LIME vs SHAP

Wilson · 4 min read · Nov 12, 2024


As artificial intelligence and machine learning models become more prevalent in our lives, there is a growing need to understand how these “black box” systems make decisions. Two popular techniques for explaining AI models are LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations). Let’s explore how these methods work and compare their approaches.

What is LIME?

LIME aims to explain individual predictions made by complex AI models in an interpretable way. The key idea behind LIME is to approximate the behavior of the complex model locally around a specific prediction using a simpler, interpretable model.

Here’s how LIME works:

  1. It takes the input whose prediction you want to explain and creates many variations of it by slightly tweaking its features.
  2. It runs these variations through the original complex model to get predictions.
  3. It then trains a simple linear model on this set of variations to approximate the complex model’s behavior in that local area.
  4. The coefficients of this linear model show which features were most important for that specific prediction.

For example, if LIME is explaining an image classification model’s decision that a photo contains a dog, it might highlight the areas of the image that most contributed to that classification — like the shape of the ears or snout.

Image: MinnaLearn [15]
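To make this concrete, here is a minimal sketch using the lime package [1] on tabular data. The RandomForest classifier and the Iris dataset are illustrative assumptions, not part of LIME itself:

```python
# A minimal LIME sketch on tabular data; the classifier and dataset are illustrative.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train any "black box" classifier.
data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# The explainer perturbs an instance and fits a local linear surrogate
# to the black-box predictions around it.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Explain one prediction: which features mattered most for this instance?
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(exp.as_list())  # [(feature condition, local linear weight), ...]
```

The weights printed at the end are the coefficients of the local linear surrogate described in step 4 above: positive weights pushed the prediction toward the class, negative weights pushed it away.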

What is SHAP?

SHAP takes a different approach, based on Shapley values from cooperative game theory. It aims to fairly distribute a model’s prediction among the input features, so that each feature receives credit in proportion to its contribution.

The key ideas behind SHAP are:

  1. It considers all possible combinations of features being present or absent.
  2. For each feature, it calculates the weighted average change in prediction when that feature is added to different combinations of the other features (formalized in the formula below).
  3. This average impact becomes the SHAP value for that feature.
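
Formally, this weighted average is the Shapley value. Writing F for the full feature set, S for a subset that excludes feature i, and f_S(x_S) for the model’s expected output when only the features in S are known, the attribution for feature i is:

```latex
\phi_i = \sum_{S \subseteq F \setminus \{i\}}
         \frac{|S|!\,(|F| - |S| - 1)!}{|F|!}
         \left[ f_{S \cup \{i\}}(x_{S \cup \{i\}}) - f_S(x_S) \right]
```

The factorial weights count how many orderings of the features place S before feature i, which is what makes the attribution “fair” in the game-theoretic sense.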

SHAP provides both local explanations (for individual predictions) and global explanations (feature importance across the whole dataset).

Image: MinnaLearn [16]
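Here is a hedged sketch of the same idea with the shap package [2], again with an illustrative model and dataset. A single explainer call produces per-feature SHAP values that can be plotted for one prediction (local) or aggregated over many rows (global):

```python
# A minimal SHAP sketch; the regressor and dataset here are illustrative choices.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train any "black box" model (a tree ensemble keeps SHAP computation fast).
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

# shap.Explainer picks a suitable algorithm (a tree explainer for this model).
explainer = shap.Explainer(model, X)
shap_values = explainer(X.iloc[:100])  # SHAP values for a sample of rows

# Local explanation: how each feature pushed one prediction up or down.
shap.plots.waterfall(shap_values[0])

# Global explanation: feature impact aggregated across the sample.
shap.plots.beeswarm(shap_values)
```

The waterfall plot is the local view for a single prediction, while the beeswarm plot aggregates the same SHAP values across many rows to show global feature importance.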

Comparing LIME and SHAP

Both LIME and SHAP help explain AI decisions, but they have some key differences:

  • Approach: LIME fits a simple, interpretable surrogate model around a single prediction, while SHAP computes Shapley values based on game theory.
  • Scope: LIME is primarily a local method (one prediction at a time), while SHAP provides both local and global explanations.
  • Speed vs. rigor: LIME is typically quicker but approximate, while SHAP offers stronger mathematical guarantees at a higher computational cost.

Which Should You Use?

The choice between LIME and SHAP often depends on your specific needs:

  • If you need quick, approximate explanations for individual predictions, LIME might be preferable.
  • If you need mathematically rigorous explanations that can be aggregated across a dataset, SHAP may be a better choice.
  • For image or text data, LIME’s visualizations can be particularly intuitive.
  • For tabular data, SHAP’s ability to provide global feature importance can be very useful.

In many cases, using both methods can provide complementary insights into your AI model’s decision-making process.

Conclusion

As AI systems become more complex and ubiquitous, tools like LIME and SHAP play a crucial role in building trust and understanding. By making AI decisions more transparent and interpretable, these techniques help bridge the gap between powerful machine learning models and the humans who need to understand and act on their outputs.

Whether you’re a data scientist trying to debug a model, a business user making decisions based on AI predictions, or a regulator ensuring fair and ethical use of AI, techniques like LIME and SHAP are invaluable tools for peering into the “black box” of modern machine learning.

References:

[1] Lime: Explaining the predictions of any machine learning classifier https://github.com/marcotcr/lime

[2] shap/shap: A game theoretic approach to explain the output of any … https://github.com/shap/shap/milestone/2?closed=1

[3] How to Use SHAP Values to Optimize and Debug ML Models https://neptune.ai/blog/shap-values

[4] Build a LIME explainer dashboard with the fewest lines of code https://towardsdatascience.com/build-a-lime-explainer-dashboard-with-the-fewest-lines-of-code-bfe12e4592d4?gi=7829097b203d

[5] helenaEH/SHAP_tutorial: Tutorial on how to use the SHAP library to … https://github.com/helenaEH/SHAP_tutorial

[6] Exploring Explainable AI with LIME Technology — Steadforce https://www.steadforce.com/blog/explainable-ai-with-lime

[7] A Non-Technical Guide to Interpreting SHAP Analyses — Aidan Cooper https://www.aidancooper.co.uk/a-non-technical-guide-to-interpreting-shap-analyses/

[8] LIME user manual — LIME documentation — Read the Docs https://lime.readthedocs.io/en/latest/usermanual.html

[9] An introduction to explainable AI with Shapley values https://shap.readthedocs.io/en/latest/example_notebooks/overviews/An introduction to explainable AI with Shapley values.html

[10] Understanding model predictions with LIME | by Lars Hulstaert https://towardsdatascience.com/understanding-model-predictions-with-lime-a582fdff3a3b?gi=885e61cf51b0

[11] Using SHAP Values to Explain How Your Machine Learning Model … https://towardsdatascience.com/using-shap-values-to-explain-how-your-machine-learning-model-works-732b3f40e137?gi=e61fe47b8de7

[12] Interpreting Classification Model with LIME — Algotech https://algotech.netlify.app/blog/interpreting-classification-model-with-lime/

[13] How to interpret and explain your machine learning models using … https://m.mage.ai/how-to-interpret-and-explain-your-machine-learning-models-using-shap-values-471c2635b78e?gi=1c1234a0fee5

[14] TrustyAI SHAP: Overview and Examples — KIE Community https://blog.kie.org/2021/11/trustyai-shap-overview-and-examples.html

[15] Advanced Trustworthy AI — Applied explainable AI https://courses.minnalearn.com/en/courses/advanced-trustworthy-ai/preview/dissecting-the-internal-logic-of-machine-learning/applied-explainable-ai/

[16] Trustworthy AI — Types of explainable AI https://courses.minnalearn.com/en/courses/trustworthy-ai/preview/explainability/types-of-explainable-ai/


Written by Wilson

I am a friendly youth from Formosa (Taiwan) with passion and enthusiasm. I seek to integrate technology and finance to bring fresh ideas into people’s lives.