Machine learning is increasingly used in sensitive applications, sometimes
replacing humans in critical decision-making processes. As such,
interpretability of these algorithms is a pressing need. One popular algorithm
to provide interpretability is LIME (Local Interpretable Model-agnostic
Explanations). In this paper, we provide the first theoretical analysis of LIME.
We derive closed-form expressions for the coefficients of the interpretable
model when the function to explain is linear. The good news is that these
coefficients are proportional to the gradient of the function to explain: LIME
indeed discovers meaningful features. However, our analysis also reveals that
poor choices of parameters can lead LIME to miss important features.
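
The following is a minimal illustrative sketch, not the paper's exact formalization: a Tabular-LIME-like procedure (Gaussian sampling around the instance, an exponential proximity kernel, and a weighted ridge surrogate) applied to a deliberately linear black-box function. The sampling scheme, kernel, bandwidth value, and regularization strength are all assumptions made for illustration; under them, the surrogate's coefficients come out roughly proportional to the gradient of the function to explain, matching the abstract's claim.

```python
# Illustrative LIME-style explanation of a linear black-box model f(x) = w . x.
# All parameter choices (n_samples, bandwidth, alpha) are assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

d = 5
w = rng.normal(size=d)          # gradient of the linear function to explain
f = lambda X: X @ w             # black-box model, deliberately linear here

xi = rng.normal(size=d)         # instance to explain
n_samples = 5000
bandwidth = 1.0                 # kernel width: the kind of parameter whose poor
                                # choice the analysis shows can hide features

# Sample perturbations around xi
X = xi + rng.normal(scale=1.0, size=(n_samples, d))

# Weight samples by proximity to xi with an exponential (RBF-style) kernel
dists = np.linalg.norm(X - xi, axis=1)
weights = np.exp(-dists**2 / (2 * bandwidth**2))

# Fit the weighted interpretable (linear) surrogate model
surrogate = Ridge(alpha=1.0)
surrogate.fit(X, f(X), sample_weight=weights)

# For a linear f, the surrogate coefficients should be close to proportional
# to the gradient w; shrinking the bandwidth too far degrades this.
print("gradient w:            ", np.round(w, 3))
print("surrogate coefficients:", np.round(surrogate.coef_, 3))
```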