
    Explanation for case-based reasoning via abstract argumentation

    Case-based reasoning (CBR) is extensively used in AI in support of several applications, to assess a new situation (or case) by recollecting past situations (or cases) and employing the ones most similar to the new situation to give the assessment. In this paper we study properties of a recently proposed method for CBR, based on instantiated Abstract Argumentation and referred to as AA-CBR, for problems where cases are represented by abstract factors and (positive or negative) outcomes, and an outcome for a new case, represented by abstract factors, needs to be established. In addition, we study properties of explanations in AA-CBR and define a new notion of lean explanations that utilize solely relevant cases. Both forms of explanations can be seen as dialogical processes between a proponent and an opponent, with the burden of proof falling on the proponent.
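
    As a concrete illustration of the style of reasoning described above, the sketch below predicts an outcome for a new factor set from a case base in roughly the AA-CBR manner: past cases attack strictly less specific past cases with a different outcome, the new case excludes past cases containing factors it lacks, and the prediction depends on whether the default case ends up accepted under grounded semantics. This is a minimal sketch assuming binary "+"/"-" outcomes; the function names and the toy case base are illustrative, not the authors' implementation.

def grounded_extension(args, attacks):
    """Grounded extension of a finite argumentation framework.
    `attacks` is a set of (attacker, target) pairs."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in args:
            if a in accepted or a in defeated:
                continue
            attackers = {b for (b, t) in attacks if t == a}
            if attackers <= defeated:       # every attacker already defeated
                accepted.add(a); changed = True
            elif attackers & accepted:      # attacked by an accepted argument
                defeated.add(a); changed = True
    return accepted

def aa_cbr_predict(casebase, new_factors, default_outcome="-"):
    """Casebase entries are (frozenset of factors, outcome) pairs with
    outcomes "+" or "-"; the default case has no factors."""
    new_factors = frozenset(new_factors)
    default = (frozenset(), default_outcome)
    cases = [default] + [c for c in casebase if c != default]
    newcase = ("new", new_factors)
    attacks = set()
    for (fx, ox) in cases:
        for (fy, oy) in cases:
            # a more specific case with a different outcome attacks a more
            # general one, provided no intermediate case makes the same point
            if ox != oy and fy < fx and not any(
                oz == ox and fy < fz < fx for (fz, oz) in cases
            ):
                attacks.add(((fx, ox), (fy, oy)))
    for (fx, ox) in cases:
        if not fx <= new_factors:           # irrelevant to the new situation
            attacks.add((newcase, (fx, ox)))
    accepted = grounded_extension(cases + [newcase], attacks)
    other = "+" if default_outcome == "-" else "-"
    return default_outcome if default in accepted else other

# toy usage: factors are strings, outcomes "+"/"-"
cb = [(frozenset(), "-"), (frozenset({"a"}), "+"), (frozenset({"a", "b"}), "-")]
print(aa_cbr_predict(cb, {"a"}))        # "+": the {"a"} case defeats the default
print(aa_cbr_predict(cb, {"a", "b"}))   # "-": the more specific case reinstates it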

    "How Effective Is Japanese Foreign Aid? Econometric Results from a Bounded Rationality Model for Indonesia"

    How does Japanese aid influence the allocation of government expenditures and the raising of government revenues? Using a non-linear model with an asymmetric loss function, the case of Japanese aid to Indonesia is examined at the macroeconomic level. It turns out that Japanese aid led to proportionately more development expenditures than other aid. It might also have been positively related to an increased effort by the Indonesian government to raise taxes. Economic explanations based on a bounded rationality model are advanced. Econometric and institutional explanations are also offered. The three sets of explanations can be seen as overlapping and complementary.
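
    The "asymmetric loss function" mentioned above can be illustrated with a standard example: the LINEX loss, under which deviations in one direction from a target cost more than equal-sized deviations in the other. The sketch below is purely illustrative; it is not the paper's own loss specification, and the parameter names are assumptions.

import numpy as np

def linex_loss(error, a=1.0, b=1.0):
    """LINEX asymmetric loss: b * (exp(a*e) - a*e - 1). With a > 0, positive
    errors are penalised more heavily than negative errors of the same size."""
    error = np.asarray(error, dtype=float)
    return b * (np.exp(a * error) - a * error - 1.0)

# overshooting a target by 1 costs about 0.72, undershooting by 1 only about 0.37
print(linex_loss(1.0), linex_loss(-1.0))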

    KLEOR: A Knowledge Lite Approach to Explanation Oriented Retrieval

    In this paper, we describe precedent-based explanations for case-based classification systems. Previous work has shown that explanation cases that are more marginal than the query case, in the sense of lying between the query case and the decision boundary, are more convincing explanations. We show how to retrieve such explanation cases in a way that requires lower knowledge-engineering overheads than previously required. We evaluate our approaches empirically, finding that the explanations that our systems retrieve are often more convincing than those found by the previous approach. The paper ends with a thorough discussion of a range of factors that affect precedent-based explanations, many of which warrant further research.
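
    The retrieval idea described above can be sketched in a few lines: find the query's nearest unlike neighbour (NUN), then return the case of the query's own class that lies closest to that NUN, i.e. the most marginal supporting precedent. This is a knowledge-light sketch loosely in the spirit of KLEOR's similarity-based measures, assuming numeric features and Euclidean distance; the paper's actual measures and evaluation differ in detail, and the names below are illustrative.

import numpy as np

def explanation_case(query, cases, labels, query_label):
    """Return a precedent of the query's class lying towards the decision
    boundary: the same-class case closest to the nearest unlike neighbour."""
    cases = np.asarray(cases, dtype=float)
    labels = np.asarray(labels)
    dist_to_query = np.linalg.norm(cases - query, axis=1)

    unlike = labels != query_label
    nun = cases[unlike][np.argmin(dist_to_query[unlike])]   # nearest unlike neighbour

    like = labels == query_label
    dist_to_nun = np.linalg.norm(cases[like] - nun, axis=1)
    return cases[like][np.argmin(dist_to_nun)]

# toy usage: one feature, classes split around 0.5
X = [[0.1], [0.2], [0.4], [0.6], [0.9]]
y = ["low", "low", "low", "high", "high"]
print(explanation_case(np.array([0.15]), X, y, "low"))   # -> [0.4], the marginal precedent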

    Explaining Data-Driven Decisions made by AI Systems: The Counterfactual Approach

    We examine counterfactual explanations for explaining the decisions made by model-based AI systems. The counterfactual approach we consider defines an explanation as a set of the system's data inputs that causally drives the decision (i.e., changing the inputs in the set changes the decision) and is irreducible (i.e., changing any subset of the inputs does not change the decision). We (1) demonstrate how this framework may be used to provide explanations for decisions made by general, data-driven AI systems that may incorporate features with arbitrary data types and multiple predictive models, and (2) propose a heuristic procedure to find the most useful explanations depending on the context. We then contrast counterfactual explanations with methods that explain model predictions by weighting features according to their importance (e.g., SHAP, LIME) and present two fundamental reasons why we should carefully consider whether importance-weight explanations are well-suited to explain system decisions. Specifically, we show that (i) features that have a large importance weight for a model prediction may not affect the corresponding decision, and (ii) importance weights are insufficient to communicate whether and how features influence decisions. We demonstrate this with several concise examples and three detailed case studies that compare the counterfactual approach with SHAP to illustrate various conditions under which counterfactual explanations explain data-driven decisions better than importance weights.
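
    The two defining properties above (the set flips the decision; no proper subset does) can be made concrete with a brute-force sketch: enumerate candidate input sets smallest first and return the first one whose substitution changes the decision, which is then irreducible by construction. The paper proposes a heuristic procedure rather than this exponential enumeration, and the idea of substituting "reference" values for the selected inputs, along with all names below, is an assumption of this sketch.

from itertools import combinations

def counterfactual_explanation(decide, x, reference, features):
    """Find an irreducible set of inputs whose change flips the decision.
    `decide` is any black-box function from a feature dict to a decision;
    substituted values come from `reference`. Enumerating subsets smallest
    first guarantees that the returned set is irreducible."""
    original = decide(x)
    for size in range(1, len(features) + 1):
        for subset in combinations(features, size):
            candidate = dict(x)
            candidate.update({f: reference[f] for f in subset})
            if decide(candidate) != original:
                return set(subset)
    return None   # no counterfactual found within the given features

# toy usage: a loan rule that rejects when income is low and debt is high
decide = lambda d: "reject" if d["income"] < 30 and d["debt"] > 10 else "accept"
x = {"income": 20, "debt": 15, "age": 40}
reference = {"income": 50, "debt": 0, "age": 40}
print(counterfactual_explanation(decide, x, reference, list(x)))   # {'income'}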

    Convex Equipartitions via Equivariant Obstruction Theory

    We describe a regular cell complex model for the configuration space F(\R^d, n). Based on this, we use Equivariant Obstruction Theory to prove the prime power case of the conjecture by Nandakumar and Ramana Rao that every polygon can be partitioned into n convex parts of equal area and perimeter. (Comment: revised and improved version with extra explanations, 20 pages, 7 figures, to appear in Israel J. Math.)
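
    For reference, the statement of Nandakumar and Ramana Rao invoked above can be written as follows (a paraphrase of the conjecture for a convex polygon in the plane, not quoted from the paper):

% For every convex polygon P in the plane and every integer n >= 1 there is a
% partition of P into n convex pieces P_1, ..., P_n with pairwise disjoint
% interiors such that
\[
  \mathrm{area}(P_1) = \dots = \mathrm{area}(P_n)
  \quad\text{and}\quad
  \mathrm{per}(P_1) = \dots = \mathrm{per}(P_n).
\]
% The paper proves this whenever n is a prime power, via a regular cell complex
% model for the configuration space F(\R^d, n) and equivariant obstruction theory.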

    EU Legitimacy and Social Affiliation: A case study of engineers in Europe

    Analyses of European governance usually put the member states in the foreground, placing the citizens in the background. This article brings explanations of EU legitimacy down to the level of individuals. A method is suggested that combines explanations based on individual interests with a sociological approach to identity. The paper investigates how work organisations become levers for a European outlook that may release legitimising from its national context. The individual-level analysis is carried out for one particular occupational group (engineers) and the research questions are elucidated by case studies.
    Keywords: social capital; social identity; civil society; open methods of coordination

    Enhancing cluster analysis with explainable AI and multidimensional cluster prototypes

    Explainable Artificial Intelligence (XAI) aims to introduce transparency and intelligibility into the decision-making process of AI systems. Most often, its application concentrates on supervised machine learning problems such as classification and regression. Nevertheless, in the case of unsupervised algorithms like clustering, XAI can also bring satisfactory results. In most cases, such an application is based on transforming the unsupervised clustering task into a supervised one and providing generalised global explanations or local explanations based on cluster centroids. However, in many cases, the global explanations are too coarse, while the centroid-based local explanations lose information about cluster shape and distribution. In this paper, we present a novel approach called ClAMP (Cluster Analysis with Multidimensional Prototypes) that aids experts in cluster analysis with human-readable rule-based explanations. The developed state-of-the-art explanation mechanism is based on cluster prototypes represented by multidimensional bounding boxes. This allows arbitrarily shaped clusters to be represented and combines the strengths of local explanations with the generality of global ones. We demonstrate and evaluate the use of our approach in a real-life industrial case study from the domain of steel manufacturing as well as on benchmark datasets. The explanations generated with ClAMP were more precise than either centroid-based or global ones.
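
    The bounding-box prototype idea described above can be sketched simply: for each cluster, take the per-feature minimum and maximum over its members and read the resulting box off as a conjunctive rule. This is a simplified sketch of the prototype representation only; ClAMP's actual explanation mechanism is more elaborate, and the clustering algorithm, feature names and synthetic data below are illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans

def bounding_box_rules(X, labels, feature_names):
    """Describe each cluster with a multidimensional bounding box (per-feature
    min/max over its members), rendered as a human-readable conjunctive rule."""
    rules = {}
    for c in np.unique(labels):
        members = X[labels == c]
        lo, hi = members.min(axis=0), members.max(axis=0)
        rules[c] = " AND ".join(
            f"{lo[j]:.2f} <= {feature_names[j]} <= {hi[j]:.2f}"
            for j in range(X.shape[1])
        )
    return rules

# usage sketch on synthetic two-feature data
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc, 0.1, size=(50, 2)) for loc in (0.0, 0.5, 1.0)])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for c, rule in bounding_box_rules(X, labels, ["temperature", "pressure"]).items():
    print(f"cluster {c}: IF {rule}")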