Besides accuracy, recent studies on machine learning models have also been
addressing the question of how the obtained results can be interpreted. Indeed,
while complex machine learning models achieve very high accuracy even in
challenging applications, they are difficult to interpret. Aiming to provide
some interpretability for such models, one of the best-known methods, called
SHAP, borrows the Shapley value concept from game theory in order to locally
explain the predicted outcome for an instance of interest. As the calculation
of SHAP values requires evaluations over all possible coalitions of attributes,
its computational cost can be very high.
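For reference, in its standard form the Shapley value of an attribute $i$ is
the weighted average of its marginal contributions over all coalitions $S$ of
the remaining attributes,
\[
\phi_i = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!} \bigl[ v(S \cup \{i\}) - v(S) \bigr],
\]
where $N$ is the set of attributes and $v(S)$ denotes the model output when
only the attributes in $S$ are present. The sum runs over $2^{|N|-1}$
coalitions, which is the source of the exponential cost.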
Therefore, a SHAP-based method called Kernel SHAP adopts an efficient strategy
that approximates such values with less computational effort.
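In the original formulation by Lundberg and Lee, Kernel SHAP recovers the
Shapley values as the solution of a weighted least-squares problem in which
each sampled coalition $S \subseteq N$ is weighted by the Shapley kernel
\[
\pi(S) = \frac{|N|-1}{\binom{|N|}{|S|}\,|S|\,(|N|-|S|)},
\]
with the empty and full coalitions enforced as constraints, so that only a
sample of coalitions, rather than all of them, needs to be evaluated.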
In this paper, we also address local interpretability in machine learning
based on Shapley values. First, we provide a straightforward formulation of a
SHAP-based method for local interpretability by using the Choquet integral,
which leads to both Shapley values and Shapley interaction indices.
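As a brief reminder of this tool, the discrete Choquet integral of a
non-negative vector $x \in \mathbb{R}_{+}^{n}$ with respect to a capacity
$\mu$ can be written as
\[
C_\mu(x) = \sum_{i=1}^{n} \bigl( x_{(i)} - x_{(i-1)} \bigr)\, \mu(A_{(i)}),
\]
where $(\cdot)$ is a permutation ordering the entries so that
$x_{(1)} \leq \dots \leq x_{(n)}$, with $x_{(0)} = 0$ and
$A_{(i)} = \{(i), \dots, (n)\}$; both the Shapley values and the interaction
indices are linear transformations of the capacity $\mu$.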
Moreover, we adopt the concept of k-additive games from game theory, which
helps to reduce the computational effort when estimating the SHAP values.
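For context, a game $v$ is k-additive when its Möbius transform
\[
m(S) = \sum_{T \subseteq S} (-1)^{|S|-|T|}\, v(T)
\]
vanishes for every coalition $S$ with $|S| > k$. Taking $v(\emptyset) = 0$, at
most $\sum_{j=1}^{k} \binom{|N|}{j}$ Möbius coefficients can then be nonzero,
rather than $2^{|N|}-1$ in the general case, which is what reduces the number
of coalition evaluations needed in the estimation.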
The obtained results attest that our proposal requires fewer computations on
coalitions of attributes to approximate the SHAP values.