80 research outputs found

    Contrastive Explanations for Argumentation-Based Conclusions

    In this paper we discuss contrastive explanations for formal argumentation: the question of why a certain argument (the fact) can be accepted, whilst another argument (the foil) cannot be accepted under various extension-based semantics. Recent work on explanations for argumentation-based conclusions has mostly focused on providing minimal explanations for the (non-)acceptance of arguments. What is still lacking, however, is a proper argumentation-based interpretation of contrastive explanations. We show under which conditions contrastive explanations in abstract and structured argumentation are meaningful, and how argumentation allows us to make implicit foils explicit.
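
    The abstract concerns fact/foil acceptance under extension-based semantics. As a rough illustration only (standard Dung-style definitions, not the paper's contrastive machinery; all names below are made up), the sketch computes the grounded extension of a tiny abstract argumentation framework and checks that the fact is accepted while the foil is not:

```python
# Minimal sketch: acceptance of a "fact" versus a "foil" in a Dung-style
# abstract argumentation framework under grounded semantics, computed as the
# least fixpoint of the characteristic function. Illustrative only.

def grounded_extension(arguments, attacks):
    """Least fixpoint of F(S) = {a | every attacker of a is attacked by S}."""
    attackers = {a: {b for (b, c) in attacks if c == a} for a in arguments}
    extension = set()
    while True:
        defended = {
            a for a in arguments
            if all(any((d, b) in attacks for d in extension) for b in attackers[a])
        }
        if defended == extension:
            return extension
        extension = defended

# Toy framework: c attacks b, and b attacks a.
arguments = {"a", "b", "c"}
attacks = {("b", "a"), ("c", "b")}

extension = grounded_extension(arguments, attacks)
fact, foil = "a", "b"
print(f"grounded extension: {extension}")            # e.g. {'a', 'c'}
print(f"fact {fact} accepted: {fact in extension}")  # True: a is defended by c
print(f"foil {foil} accepted: {foil in extension}")  # False: b is attacked by c
```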

    Epistemic Effects of Scientific Interaction: Approaching the Question with an Argumentative Agent-Based Model

    The question of whether increased interaction among scientists is beneficial or harmful for their efficiency in acquiring knowledge has in recent years been tackled by means of agent-based models (ABMs) (e.g. Zollman 2007, 2010; Grim 2009; Grim et al. 2013). Nevertheless, the relevance of some of these results for actual scientific practice has been questioned in view of the specific parameter choices used in the simulations (Rosenstock et al. 2016). In this paper we present a novel ABM that aims to tackle the same question while representing scientific interaction in terms of argumentative exchange. In this way we examine the robustness of previously obtained results under different modeling choices.
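
    The sketch below is a purely illustrative toy, not the model from the paper: agents hold partial sets of arguments, receive single arguments from randomly chosen partners each round, and we track how large their views grow as the amount of interaction increases. The paper's ABM additionally evaluates arguments rather than merely collecting them; this toy only tracks diffusion, and every name and parameter in it is invented:

```python
# Purely illustrative toy, not the paper's model: agents start with partial
# argument sets, receive one argument per partner per round, and we measure
# the average view size as the number of exchange partners increases.
import random

ALL_ARGUMENTS = [f"arg{i}" for i in range(20)]  # hypothetical shared argument pool

def simulate(n_agents=10, n_partners=2, rounds=30, seed=0):
    rng = random.Random(seed)
    # each agent starts with a small random subset of the argument pool
    views = [set(rng.sample(ALL_ARGUMENTS, 3)) for _ in range(n_agents)]
    for _ in range(rounds):
        for i in range(n_agents):
            for j in rng.sample(range(n_agents), n_partners):
                views[i].add(rng.choice(sorted(views[j])))  # receive one argument
    return sum(len(v) for v in views) / n_agents  # average number of known arguments

for k in (1, 3, 5):
    print(f"{k} partner(s) per round: {simulate(n_partners=k):.1f} arguments known on average")
```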

    Modeling Contrastiveness in Argumentation.

    Modeling contrastive explanations for use in artificial intelligence (AI) applications is an important research branch within the field of explainable AI (XAI). However, most existing contrastive XAI approaches are not based on the findings from the social-science literature on contrastiveness in human reasoning and human explanations. In this work we collect the various types of contrastiveness proposed in the literature and model them with formal argumentation. The result is a variety of argumentation-based methods for contrastive explanations, grounded in the available literature and applicable in a wide variety of AI applications.

    Human-centred explanation of rule-based decision-making systems in the legal domain

    We propose a human-centred explanation method for rule-based automated decision-making systems in the legal domain. Firstly, we establish a conceptual framework for developing explanation methods, representing their key internal components (content, communication and adaptation) and external dependencies (decision-making system, human recipient and domain). Secondly, we propose an explanation method that uses a graph database to enable question-driven explanations and multimedia display. This way, we can tailor the explanation to the user. Finally, we show how our conceptual framework is applicable to a real-world scenario at the Dutch Tax and Customs Administration and implement our explanation method for this scenario. Comment: This is the full version of a demo at the 36th International Conference on Legal Knowledge and Information Systems (JURIX'23).
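
    As a simplified, in-memory stand-in for the graph-database idea (the node labels, rule names and explanation texts below are invented for illustration and are not taken from the paper or from any real tax system), decision nodes carry explanation content keyed by question type, and answering a question means following the applied-rule links and collecting the matching content:

```python
# Hypothetical sketch of question-driven explanation over a small explanation
# graph. An actual implementation would query a graph database; here a plain
# dictionary stands in for it, and all content is invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    content: dict = field(default_factory=dict)  # question type -> explanation text
    edges: list = field(default_factory=list)    # links to the rules that were applied

graph = {
    "decision:rejected": Node(
        label="Request rejected",
        content={
            "what": "The request was rejected.",
            "why": "Rule R12 applies: the declared income exceeds the threshold.",
        },
        edges=["rule:R12"],
    ),
    "rule:R12": Node(
        label="Income threshold rule",
        content={"why": "R12: an income above the threshold excludes eligibility."},
    ),
}

def explain(node_id, question):
    """Collect the content stored for this question type, following rule links."""
    node = graph[node_id]
    parts = [node.content.get(question, "")]
    parts += [graph[e].content.get(question, "") for e in node.edges]
    return " ".join(p for p in parts if p)

print(explain("decision:rejected", "why"))   # explanation tailored to a "why" question
print(explain("decision:rejected", "what"))  # shorter answer to a "what" question
```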

    A Basic Framework for Explanations in Argumentation

    We discuss explanations for formal (abstract and structured) argumentation: the question of whether and why a certain argument or claim can be accepted (or not) under various extension-based semantics. We introduce a flexible framework, which can act as the basis for many different types of explanations. For example, we can have simple or comprehensive explanations in terms of arguments for or against a claim, arguments that (indirectly) defend a claim, the evidence (knowledge base) that supports or is incompatible with a claim, and so on. We show how different types of explanations can be captured in our basic framework, discuss a real-life application and formally compare our framework to existing work.
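
    To make the ingredients of such explanations concrete, here is a loose illustration (not the framework's actual definitions; the attack relation below is invented): for a given argument, collect the arguments attacking it and the arguments defending it, i.e. the attackers of its attackers:

```python
# Loose illustration, not the paper's definitions: direct attackers of an
# argument and its (one-step) defenders, i.e. the attackers of its attackers.

def attackers_of(argument, attacks):
    return {b for (b, c) in attacks if c == argument}

def defenders_of(argument, attacks):
    return {d for b in attackers_of(argument, attacks)
              for d in attackers_of(b, attacks)}

# Toy attack relation: b and d attack a; c attacks b; e attacks d.
attacks = {("b", "a"), ("c", "b"), ("d", "a"), ("e", "d")}

print(attackers_of("a", attacks))  # {'b', 'd'}: arguments against a
print(defenders_of("a", attacks))  # {'c', 'e'}: arguments that defend a
```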

    Accessible Algorithms for Applied Argumentation

    Computational argumentation is a promising research area, yet there is a gap between theoretical contributions and practical applications. Bridging this gap could raise interest in the topic even more. We argue that one part of the bridge could be an open-source package of implementations of argumentation algorithms, visualised in a web interface. Therefore we present a new release of PyArg, providing various new argumentation-based functionalities – including multiple generators, a learning environment, implementations of theoretical papers and a showcase of a practical application – in a new interface with improved accessibility.