Abstract
Explainable AI was born as a pathway to allow humans to explore and understand the inner workings of complex systems. However, establishing what constitutes an explanation and objectively evaluating explainability are not trivial tasks. In this paper, we present a new model-agnostic metric to measure the Degree of Explainability of (correct) information in an objective way, exploiting a specific theoretical model from Ordinary Language Philosophy, Achinstein's Theory of Explanations, implemented with an algorithm relying on deep language models for knowledge graph extraction and information retrieval. To verify whether this metric actually behaves as explainability is expected to, we devised an experiment on two realistic Explainable AI-based systems for healthcare and finance, using well-known AI technologies including Artificial Neural Networks and TreeSHAP. The results we obtained suggest that our proposed metric for measuring the Degree of Explainability is robust across several scenarios.