Explain to whom? Putting the User in the Center of Explainable AI

Abstract

The ability to explain actions and decisions is often regarded as a basic ingredient of cognitive systems. Yet when researchers propose methods for making AI systems understandable, users are usually not involved or even mentioned. However, the purpose of such explanations is to make people willing to accept a machine's decisions, or better able to interact with it. I therefore argue that the evaluation of explanations must involve some form of user testing.