    An explainable multi-attribute decision model based on argumentation

    We present a multi-attribute decision model and a method for explaining the decisions it recommends based on an argumentative reformulation of the model. Specifically, (i) we define a notion of best (i.e., minimally redundant) decisions amounting to achieving as many goals as possible and exhibiting as few redundant attributes as possible, and (ii) we generate explanations for why a decision is best or better than or as good as another, using a mapping between the given decision model and an argumentation framework, such that best decisions correspond to admissible sets of arguments. Concretely, natural language explanations are generated automatically from dispute trees sanctioning the admissibility of arguments. Throughout, we illustrate the power of our approach within a legal reasoning setting, where best decisions amount to past cases that are most similar to a given new, open case. Finally, we conduct an empirical evaluation of our method with legal practitioners, confirming that our method is effective for the choice of most similar past cases and helpful for understanding automatically generated recommendations.
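
    To make the argumentative machinery concrete, the sketch below checks admissibility in an abstract argumentation framework, the notion that best decisions are mapped onto above. The example arguments, attacks, and the brute-force enumeration are illustrative assumptions, not the authors' actual decision-to-argument mapping.

```python
# A minimal sketch of admissibility in an abstract argumentation framework:
# a set of arguments is admissible if it is conflict-free and defends each
# of its members against every attacker. Example data is hypothetical.
from itertools import combinations

def is_conflict_free(S, attacks):
    """True if no argument in S attacks another argument in S."""
    return not any((a, b) in attacks for a in S for b in S)

def defends(S, a, attacks):
    """True if every attacker of `a` is counter-attacked by some member of S."""
    return all(any((c, attacker) in attacks for c in S)
               for (attacker, target) in attacks if target == a)

def admissible_sets(arguments, attacks):
    """Enumerate all admissible sets (conflict-free and self-defending)."""
    for r in range(len(arguments) + 1):
        for S in combinations(sorted(arguments), r):
            if is_conflict_free(S, attacks) and all(defends(S, a, attacks) for a in S):
                yield set(S)

# Hypothetical example: argument a is attacked by b, which is attacked by c.
arguments = {"a", "b", "c"}
attacks = {("b", "a"), ("c", "b")}
for S in admissible_sets(arguments, attacks):
    print(sorted(S))   # [], ['c'], ['a', 'c'] are admissible; ['a'] alone is not
```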

    Explainable and Ethical AI: A Perspective on Argumentation and Logic Programming

    In this paper we sketch a vision of explainability for intelligent systems as a logic-based approach that can be injected into, and exploited by, the system's actors once integrated with sub-symbolic techniques. In particular, we show how argumentation could be combined with different extensions of logic programming – namely abduction, inductive logic programming, and probabilistic logic programming – to address the issues of explainable AI as well as some ethical concerns about AI.

    Identifying Reasons for Bias: An Argumentation-Based Approach

    As algorithmic decision-making systems become more prevalent in society, ensuring the fairness of these systems is becoming increasingly important. Whilst there has been substantial research in building fair algorithmic decision-making systems, the majority of these methods require access to the training data, including personal characteristics, and are not transparent regarding which individuals are classified unfairly. In this paper, we propose a novel model-agnostic argumentation-based method to determine why an individual is classified differently in comparison to similar individuals. Our method uses a quantitative argumentation framework to represent attribute-value pairs of an individual and of those similar to them, and uses a well-known semantics to identify the attribute-value pairs in the individual contributing most to their different classification. We evaluate our method on two datasets commonly used in the fairness literature and illustrate its effectiveness in the identification of bias.
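
    As a rough illustration of how a quantitative argumentation framework can rank attribute-value pairs by their contribution to a different classification, the sketch below scores a "classified differently" claim with a simple weighted-sum semantics and measures each argument's contribution by removing it. The attributes, weights, and semantics are placeholders, not the paper's actual method or data.

```python
# Toy ranking of attribute-value arguments by contribution to a different
# classification. The weighted-sum semantics and all numbers are illustrative
# assumptions, not the semantics or datasets used in the paper.

def claim_strength(arguments):
    """Strength of the 'classified differently' claim: a base score plus the
    signed weights of the attribute-value arguments, clamped to [0, 1]."""
    base = 0.5
    return max(0.0, min(1.0, base + sum(arguments.values())))

def contributions(arguments):
    """Contribution of each attribute-value pair: the drop in claim strength
    when that argument is removed from the framework."""
    full = claim_strength(arguments)
    return {
        attr: full - claim_strength({a: w for a, w in arguments.items() if a != attr})
        for attr in arguments
    }

# Hypothetical attribute-value arguments for one individual: positive weights
# support the different classification, negative weights attack it.
individual = {"age=25": 0.05, "postcode=XYZ": 0.25, "income=low": 0.10, "tenure=5y": -0.05}

for attr, c in sorted(contributions(individual).items(), key=lambda kv: -kv[1]):
    print(f"{attr:15s} contribution {c:+.2f}")   # postcode=XYZ ranks highest here
```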

    Context-aware feature attribution through argumentation

    Feature attribution is a fundamental task in both machine learning and data analysis, which involves determining the contribution of individual features or variables to a model's output. This process helps identify the most important features for predicting an outcome. The history of feature attribution methods can be traced back to Generalized Additive Models (GAMs), which extend linear regression models by incorporating non-linear relationships between dependent and independent variables. In recent years, gradient-based methods and surrogate models have been applied to unravel complex Artificial Intelligence (AI) systems, but these methods have limitations. GAMs tend to achieve lower accuracy, gradient-based methods can be difficult to interpret, and surrogate models often suffer from stability and fidelity issues. Furthermore, most existing methods do not consider users' contexts, which can significantly influence their preferences. To address these limitations and advance the current state of the art, we define a novel feature attribution framework called Context-Aware Feature Attribution Through Argumentation (CA-FATA). Our framework harnesses the power of argumentation by treating each feature as an argument that can either support, attack or neutralize a prediction. Additionally, CA-FATA formulates feature attribution as an argumentation procedure, and each computation has explicit semantics, which makes it inherently interpretable. CA-FATA also easily integrates side information, such as users' contexts, resulting in more accurate predictions.
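
    The core idea of treating each feature as an argument with a polarity towards a prediction can be sketched as follows; the polarity assignment, weights, and aggregation rule are assumptions for illustration and do not reproduce CA-FATA's actual semantics.

```python
# Minimal sketch of feature-as-argument attribution: each feature argument
# supports (+1), attacks (-1) or is neutral (0) towards a prediction, and
# attributions are the signed, weighted votes. Polarities, weights and the
# aggregation rule are hypothetical, not the framework's actual definition.
from dataclasses import dataclass

@dataclass
class FeatureArgument:
    name: str
    polarity: int    # +1 support, -1 attack, 0 neutral towards the prediction
    weight: float    # importance, which side information such as context may adjust

def attribute(arguments):
    """Return each feature's signed contribution and the aggregate score."""
    contribs = {a.name: a.polarity * a.weight for a in arguments}
    return contribs, sum(contribs.values())

# Hypothetical recommendation example: the user's context (say, a short
# weekend evening) has raised the weight of the duration argument.
args = [
    FeatureArgument("genre=comedy", polarity=+1, weight=0.6),
    FeatureArgument("duration=190min", polarity=-1, weight=0.3),
    FeatureArgument("language=en", polarity=0, weight=0.2),
]
contribs, score = attribute(args)
print(contribs)                                        # per-feature attributions
print("recommend" if score > 0 else "do not recommend")
```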

    CBR driven interactive explainable AI.

    Explainable AI (XAI) can greatly enhance user trust and satisfaction in AI-assisted decision-making processes. Numerous explanation techniques (explainers) exist in the literature, and recent findings suggest that addressing multiple user needs requires employing a combination of these explainers. We refer to such combinations as explanation strategies. This paper introduces iSee - Intelligent Sharing of Explanation Experience, an interactive platform that facilitates the reuse of explanation strategies and promotes best practices in XAI by employing the Case-based Reasoning (CBR) paradigm. iSee uses an ontology-guided approach to effectively capture explanation requirements, while a behaviour-tree-driven conversational chatbot captures user experiences of interacting with the explanations and provides feedback. In a case study, we illustrate the iSee CBR system capabilities by formalising a real-world radiograph fracture detection system and demonstrating how the interactive tools facilitate the CBR processes.
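
    As an indication of the CBR reuse step underlying such a platform, the sketch below retrieves the most similar past explanation experience for a new set of explanation requirements. The requirement descriptors, the Jaccard similarity, and the toy case base are assumptions for illustration, not iSee's actual ontology or retrieval method.

```python
# Minimal sketch of case retrieval for reusing explanation strategies: each
# past case pairs a set of explanation requirements with the strategy
# (combination of explainers) that worked for it; a new query retrieves the
# most similar case by Jaccard similarity. All names here are hypothetical.

def jaccard(a, b):
    """Similarity of two requirement sets: |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

case_base = [
    ({"image_data", "clinician_user", "local_explanation"}, ["saliency map", "counterfactual"]),
    ({"tabular_data", "end_user", "local_explanation"}, ["SHAP", "nearest-neighbour example"]),
    ({"tabular_data", "regulator", "global_explanation"}, ["feature importance summary"]),
]

def retrieve(query):
    """Return the past case whose requirements best match the query."""
    return max(case_base, key=lambda case: jaccard(query, case[0]))

query = {"image_data", "clinician_user", "global_explanation"}
requirements, strategy = retrieve(query)
print("reuse strategy:", strategy)   # the imaging/clinician case is reused here
```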