17 research outputs found

    The Bayesian Case Model: A Generative Approach for Case-Based Reasoning and Prototype Classification

    We present the Bayesian Case Model (BCM), a general framework for Bayesian case-based reasoning (CBR) and prototype classification and clustering. BCM brings the intuitive power of CBR to a Bayesian generative framework. BCM learns prototypes, the "quintessential" observations that best represent clusters in a dataset, by performing joint inference on cluster labels, prototypes and important features. Simultaneously, BCM pursues sparsity by learning subspaces, the sets of features that play important roles in the characterization of the prototypes. The prototype and subspace representation provides quantitative benefits in interpretability while preserving classification accuracy. Human subject experiments verify statistically significant improvements to participants' understanding when using explanations produced by BCM, compared to those given by prior art. Published in Neural Information Processing Systems (NIPS) 2014.
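    The prototype-and-subspace representation can be illustrated with a small sketch. The code below is only an illustration of that output representation, not the paper's generative model or its inference procedure: the function name prototypes_and_subspaces and the per-feature agreement heuristic are assumptions introduced here, and it works from pre-computed cluster labels rather than inferring them jointly as BCM does.

    # Illustrative sketch only: mimics BCM's *output* (one prototype and one
    # feature subspace per cluster) with a simple agreement heuristic on given
    # cluster labels, not the paper's joint Bayesian inference.
    import numpy as np

    def prototypes_and_subspaces(X, labels, subspace_size=2):
        """For each cluster, pick the member that best matches the cluster's
        per-feature modal values (the prototype) and the features on which
        the cluster agrees most strongly (the subspace)."""
        results = {}
        for c in np.unique(labels):
            Xc = X[labels == c]
            modes, agreement = [], []
            for j in range(Xc.shape[1]):
                vals, counts = np.unique(Xc[:, j], return_counts=True)
                modes.append(vals[counts.argmax()])
                agreement.append(counts.max() / len(Xc))
            modes, agreement = np.array(modes), np.array(agreement)
            subspace = np.argsort(agreement)[::-1][:subspace_size]  # most characteristic features
            proto = (Xc[:, subspace] == modes[subspace]).sum(axis=1).argmax()
            results[c] = {"prototype": Xc[proto], "subspace": subspace}
        return results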

    Actionable feature discovery in counterfactuals using feature relevance explainers.

    Counterfactual explanations focus on 'actionable knowledge' to help end-users understand how a Machine Learning model outcome could be changed to a more desirable outcome. For this purpose, a counterfactual explainer needs to be able to reason with similarity knowledge in order to discover input dependencies that relate to outcome changes. Identifying the minimum subset of feature changes needed to action a change in the decision is an interesting challenge for counterfactual explainers. In this paper we show how feature relevance-based explainers (i.e. LIME, SHAP) can inform a counterfactual explainer to identify the minimum subset of 'actionable features'. We demonstrate our DisCERN (Discovering Counterfactual Explanations using Relevance Features from Neighbourhoods) algorithm on three datasets and compare it against the widely used counterfactual approach DiCE. Our preliminary results show DisCERN to be a viable strategy for minimising the actionable changes.
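    A minimal sketch of the general idea (not the authors' DisCERN implementation) follows: feature values are copied from a nearest neighbour of the desired class onto the query, in decreasing order of a relevance explainer's weights, until the prediction flips. The function name relevance_guided_counterfactual and its arguments are assumptions introduced for illustration; the relevance scores would in practice come from LIME or SHAP.

    # Hedged sketch, not the published algorithm: adopt feature values from a
    # nearest unlike neighbour in order of feature relevance until the model's
    # prediction changes to the desired class.
    import numpy as np

    def relevance_guided_counterfactual(model, query, nun, relevance):
        """model: classifier exposing .predict; query: instance to explain (1-D array);
        nun: nearest unlike neighbour, already classified as the desired outcome;
        relevance: per-feature importance scores, e.g. from LIME or SHAP."""
        target = model.predict(nun.reshape(1, -1))[0]
        candidate, changed = query.copy(), []
        for j in np.argsort(-np.abs(relevance)):      # most relevant feature first
            candidate[j] = nun[j]                     # adopt the neighbour's value
            changed.append(j)
            if model.predict(candidate.reshape(1, -1))[0] == target:
                return candidate, changed             # minimal actionable change found
        return candidate, changed                     # worst case: full neighbour copied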

    Play MNIST For Me! User Studies on the Effects of Post-Hoc, Example-Based Explanations & Error Rates on Debugging a Deep Learning, Black-Box Classifier

    This paper reports two experiments (N=349) on the impact of post-hoc explanations by example and error rates on people's perceptions of a black-box classifier. Both experiments show that when people are given case-based explanations from an implemented ANN-CBR twin system, they perceive misclassifications to be more correct. They also show that as error rates increase above 4%, people trust the classifier less and view it as being less correct, less reasonable and less trustworthy. The implications of these results for XAI are discussed.

    Symbolic Explanation of Similarities in Case-based Reasoning

    CBR systems solve problems by assessing their similarity with already solved problems (cases). Explanation of a CBR system prediction usually consists of showing the user the set of cases that are most similar to the current problem. Examining those retrieved cases, the user can then assess whether the prediction is sensible. Using the notion of symbolic similarity, our proposal is to show the user a symbolic description that makes explicit what the new problem has in common with the retrieved cases. Specifically, we use the notion of anti-unification (least general generalization) to build symbolic similarity descriptions. We present an explanation scheme using anti-unification for CBR systems applied to classification tasks. This scheme focuses on symbolically describing what is shared between the current problem and the retrieved cases that belong to different classes. Examining these descriptions of symbolic similarities, the user can assess which aspects determine that a problem is classified one way or another. The paper exemplifies this proposal with an implemented application of the symbolic similarity scheme to the domain of predicting the carcinogenic activity of chemical compounds.
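    For flat attribute-value cases, the anti-unification step can be sketched as below; the paper's scheme handles richer structured descriptions, so this is an illustration only. The wildcard symbol ANY and the function anti_unify are assumptions introduced here.

    # Minimal sketch for flat attribute-value cases: the least general
    # generalisation keeps every attribute on which all cases agree and
    # generalises the rest to a "don't care" wildcard.
    ANY = "?"   # hypothetical symbol standing for the most general value

    def anti_unify(cases):
        """Symbolic description of what a list of dict-encoded cases share."""
        shared = {}
        for attr in set.intersection(*(set(c) for c in cases)):
            values = {c[attr] for c in cases}
            shared[attr] = values.pop() if len(values) == 1 else ANY
        return shared

    # What a query compound shares with two retrieved cases of the same class:
    query  = {"aromatic_ring": True, "halogen": True, "weight": "high"}
    case_a = {"aromatic_ring": True, "halogen": True, "weight": "low"}
    case_b = {"aromatic_ring": True, "halogen": True, "weight": "high"}
    print(anti_unify([query, case_a, case_b]))
    # -> {'aromatic_ring': True, 'halogen': True, 'weight': '?'} (key order may vary)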

    KLEOR: A Knowledge Lite Approach to Explanation Oriented Retrieval

    In this paper, we describe precedent-based explanations for case-based classification systems. Previous work has shown that explanation cases that are more marginal than the query case, in the sense of lying between the query case and the decision boundary, are more convincing explanations. We show how to retrieve such explanation cases in a way that requires lower knowledge engineering overheads than previous approaches. We evaluate our approaches empirically, finding that the explanations our systems retrieve are often more convincing than those found by the previous approach. The paper ends with a thorough discussion of a range of factors that affect precedent-based explanations, many of which warrant further research.
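    One knowledge-light way to realise such retrieval can be sketched as follows; this is a simplified illustration under an assumed Euclidean similarity, not necessarily the paper's exact retrieval measure, and the function name explanation_case is introduced here.

    # Hedged sketch of one knowledge-light variant: find the query's nearest
    # unlike neighbour (NUN, just across the decision boundary), then explain
    # with the same-class case closest to that NUN, i.e. a case more marginal
    # than the query itself.
    import numpy as np

    def explanation_case(X, y, query, query_label):
        dists = np.linalg.norm(X - query, axis=1)
        unlike = np.where(y != query_label)[0]
        nun = unlike[dists[unlike].argmin()]            # nearest unlike neighbour
        like = np.where(y == query_label)[0]
        to_nun = np.linalg.norm(X[like] - X[nun], axis=1)
        return like[to_nun.argmin()]                    # index of the explanation case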

    Towards Personalized Explanations for AI Systems: Designing a Role Model for Explainable AI in Auditing

    Due to a continuously growing repertoire of available methods and applications, Artificial Intelligence (AI) is becoming an innovation driver for most industries. In the auditing domain, initial approaches to AI have already been discussed in scientific discourse, but practical application is still lagging behind. Because of the highly regulated environment, the explainability of AI is of particular relevance. Using semi-structured expert interviews, we identified stakeholder-specific requirements regarding explainable AI (XAI) in auditing. To address the needs of all involved stakeholders, a theoretical role model for AI systems has been designed based on a systematic literature review. The role model has been instantiated and evaluated in the domain of financial statement auditing using focus groups of domain experts. The resulting model offers a foundation for the development of AI systems with personalized explanations and an optimized usage of existing XAI methods.