
    To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods

    The main objective of eXplainable Artificial Intelligence (XAI) is to provide effective explanations for black-box classifiers. The existing literature lists many desirable properties for explanations to be useful, but there is no consensus on how to evaluate explanations quantitatively in practice. Moreover, explanations are typically used only to inspect black-box models, and the proactive use of explanations as decision support is generally overlooked. Among the many approaches to XAI, a widely adopted paradigm is Local Linear Explanations, with LIME and SHAP emerging as state-of-the-art methods. We show that these methods are plagued by several defects, including unstable explanations, divergence of actual implementations from the promised theoretical properties, and explanations for the wrong label. This highlights the need for standard, unbiased evaluation procedures for Local Linear Explanations in the XAI field. In this paper we address the problem of identifying a clear and unambiguous set of metrics for the evaluation of Local Linear Explanations. This set includes both existing metrics and novel metrics defined specifically for this class of explanations. All metrics are implemented in an open Python framework named LEAF. The purpose of LEAF is to provide a reference for end users to evaluate explanations in a standardised and unbiased way, and to guide researchers towards developing improved explanation techniques.
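    The instability defect the abstract mentions is easy to observe directly. The following is a minimal sketch of the kind of stability check LEAF formalises; it is not LEAF's API but an illustrative reimplementation using the lime and scikit-learn packages, with a mean pairwise cosine-similarity metric chosen here purely as an example.

    # Illustrative sketch only: the stability metric below is an assumption
    # for demonstration, not LEAF's actual API or metric definition.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    data = load_breast_cancer()
    model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

    explainer = LimeTabularExplainer(
        data.data,
        feature_names=list(data.feature_names),
        class_names=list(data.target_names),
        mode="classification")

    def explanation_weights(instance, n_repeats=5):
        """Re-explain the same instance several times; LIME's random
        sampling makes the returned feature weights stochastic."""
        n_features = data.data.shape[1]
        vectors = []
        for _ in range(n_repeats):
            exp = explainer.explain_instance(
                instance, model.predict_proba, num_features=n_features)
            weights = dict(exp.as_map()[1])  # {feature index: weight}
            vectors.append([weights.get(i, 0.0) for i in range(n_features)])
        return np.array(vectors)

    V = explanation_weights(data.data[0])
    # Stability as mean pairwise cosine similarity of repeated explanations:
    # 1.0 means perfectly reproducible; lower values expose the instability
    # reported in the paper.
    norm = V / np.linalg.norm(V, axis=1, keepdims=True)
    sim = norm @ norm.T
    stability = sim[np.triu_indices_from(sim, k=1)].mean()
    print(f"explanation stability: {stability:.3f}")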

    starMC: an automata based CTL* model checker

    Model checking of temporal-logic formulae is a widely used technique for the verification of systems. CTL* is a temporal logic that allows branching behaviours (as in CTL) and linear behaviours (as in LTL) to be freely intermixed, overcoming the limitations of both LTL (which cannot express “possibility”) and CTL (which cannot fully express fairness). Nevertheless, CTL* model checkers are uncommon. This paper presents (1) the algorithms for a fully symbolic automata-based approach to CTL* model checking, and (2) their implementation in the open-source tool starMC, a CTL* model checker for systems specified as Petri nets. Testing has been conducted on thousands of formulae over almost a hundred models. The experiments show that the fully symbolic automata-based approach of starMC can compute the set of states satisfying a CTL* formula for very large models: non-trivial formulae over state spaces larger than 10^480 states are evaluated in less than a minute.
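    To make the expressiveness comparison concrete, the following standard textbook formulas (illustrative examples, not drawn from the paper) show what each logic can and cannot say:

    \begin{align*}
      &\mathbf{EF}\,p
        && \text{``$p$ is reachable'': CTL, not expressible in LTL} \\
      &\mathbf{A}(\mathbf{GF}\,\mathit{req} \rightarrow \mathbf{GF}\,\mathit{grant})
        && \text{fairness: LTL (hence CTL*), not expressible in CTL} \\
      &\mathbf{EGF}\,p
        && \text{``$p$ holds infinitely often on some path'': CTL* only}
    \end{align*}

    CTL* accommodates all three because path quantifiers ($\mathbf{E}$, $\mathbf{A}$) and temporal operators ($\mathbf{G}$, $\mathbf{F}$) can be nested freely, whereas CTL forces every temporal operator to be paired with a quantifier and LTL admits no explicit quantifiers at all.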