
    Introduction to structured argumentation

    In abstract argumentation, each argument is regarded as atomic: there is no internal structure to an argument, and there is no specification of what an argument or an attack is; they are assumed to be given. This abstract perspective provides many advantages for studying the nature of argumentation, but it does not cover all our needs for understanding argumentation or for building tools for supporting or undertaking argumentation. If we want a more detailed formalisation of arguments than is available with abstract argumentation, we can turn to structured argumentation, which is the topic of this special issue of Argument and Computation. In structured argumentation, we assume a formal language for representing knowledge and for specifying how arguments and counterarguments can be constructed from that knowledge. An argument is then said to be structured in the sense that normally the premises and claim of the argument are made explicit, and the relationship between the premises and claim is formally defined (for instance, using logical entailment). In this introduction, we provide a brief overview of the approaches covered in this special issue on structured argumentation.
    Philippe Besnard (Université Paul Sabatier, France); Alejandro Javier García (Universidad Nacional del Sur and CONICET, Argentina); Anthony Hunter (University College London, United Kingdom); Sanjay Modgil (King's College London, United Kingdom); Henry Prakken (Utrecht University and University of Groningen, Netherlands); Guillermo Ricardo Simari (Universidad Nacional del Sur and CONICET, Argentina); Francesca Toni (Imperial College London, United Kingdom)
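The idea of a structured argument, with explicit premises and a claim, and attacks defined over that structure, can be illustrated with a minimal sketch. The `Argument` type, the leading-`-` negation convention, and the `undermines` check are our own illustrative choices, not a formalism from the issue:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Argument:
    premises: frozenset  # formulae the argument is built from
    claim: str           # conclusion; assumed to follow from the premises

def undermines(a, b):
    """a attacks b by contradicting one of b's premises
    (negation written with a leading '-')."""
    return ('-' + a.claim in b.premises) or \
           (a.claim.startswith('-') and a.claim[1:] in b.premises)

a = Argument(frozenset({'p', 'p->q'}), 'q')
b = Argument(frozenset({'r'}), '-p')
print(undermines(b, a))  # True: b's claim -p contradicts a's premise p
```

Because premises and claims are explicit, attack relations such as undermining can be computed from the structure instead of being given as a primitive, which is the contrast with abstract argumentation drawn above.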

    In memoriam Douglas N. Walton: the influence of Doug Walton on AI and law

    Doug Walton, who died in January 2020, was a prolific author whose work in informal logic and argumentation had a profound influence on Artificial Intelligence, including Artificial Intelligence and Law. He was also very interested in interdisciplinary work, and a frequent and generous collaborator. In this paper seven leading researchers in AI and Law, all past programme chairs of the International Conference on AI and Law who have worked with him, describe his influence on their work

    PyArg for solving and explaining argumentation in Python

    We introduce PyArg, a Python-based solver and explainer for both abstract argumentation and ASPIC+. A large variety of extension-based semantics allows for flexible evaluation, and several explanation functions are available

    Stability and Relevance in Incomplete Argumentation Frameworks

    We explore the computational complexity of stability and relevance in incomplete argumentation frameworks (IAFs), abstract argumentation frameworks that encode qualitative uncertainty by distinguishing between certain and uncertain arguments and attacks. IAFs can be specified by, e.g., making uncertain arguments or attacks certain; the justification status of arguments in an IAF is determined on the basis of the certain arguments and attacks. An argument is stable if its justification status is the same in all specifications of the IAF. For arguments that are not stable in an IAF, the relevance problem is of interest: which uncertain arguments or attacks should be investigated for the argument to become stable? We redefine stability and define relevance for IAFs and study their complexity
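A brute-force reading of this stability check can be sketched as follows, assuming grounded semantics for the justification status. All names and the encoding of completions are illustrative, and the exponential enumeration mirrors the hardness the paper studies rather than an efficient algorithm:

```python
from itertools import chain, combinations

def grounded(args, attacks):
    # Grounded extension: least fixed point of the characteristic
    # function, keeping arguments whose every attacker is counter-attacked.
    ext = set()
    while True:
        new = {a for a in args
               if all(any((c, b) in attacks for c in ext)
                      for b in args if (b, a) in attacks)}
        if new == ext:
            return ext
        ext = new

def subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def is_stable(focus, certain_args, uncertain_args, certain_atts, uncertain_atts):
    # Enumerate every completion: all certain arguments/attacks plus any
    # subset of the uncertain ones (an attack applies only if both of its
    # endpoints are present), then check whether the focus argument's
    # status is the same in all of them.
    statuses = set()
    for extra_args in subsets(uncertain_args):
        present = certain_args | set(extra_args)
        for extra_atts in subsets(uncertain_atts):
            atts = {(a, b) for (a, b) in certain_atts | set(extra_atts)
                    if a in present and b in present}
            statuses.add(focus in grounded(present, atts))
            if len(statuses) > 1:  # status differs across completions
                return False
    return True

# 'a' is certain, but an uncertain argument 'b' may attack it, so a's
# grounded status varies across completions: 'a' is not stable.
print(is_stable('a', {'a'}, {'b'}, {('b', 'a')}, set()))  # False
```

The relevance question then asks which uncertain elements, if made certain, would collapse the set of observed statuses to one.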

    Thirty years of Artificial Intelligence and Law: the second decade

    The first issue of Artificial Intelligence and Law journal was published in 1992. This paper provides commentaries on nine significant papers drawn from the Journal’s second decade. Four of the papers relate to reasoning with legal cases, introducing contextual considerations, predicting outcomes on the basis of natural language descriptions of the cases, comparing different ways of representing cases, and formalising precedential reasoning. One introduces a method of analysing arguments that was to become very widely used in AI and Law, namely argumentation schemes. Two relate to ontologies for the representation of legal concepts and two take advantage of the increasing availability of legal corpora in this decade, to automate document summarisation and for the mining of arguments

    Justification in Case-Based Reasoning

    The explanation and justification of decisions is an important subject in contemporary data-driven automated methods. Case-based argumentation has been proposed as the formal background for the explanation of data-driven automated decision making. In particular, a method was developed in recent work based on the theory of precedential constraint which reasons from a case base, given by the training data of the machine learning system, to produce a justification for the outcome of a focus case. An important role is played in this method by the notions of citability and compensation, and in the present work we develop these in more detail. Special attention is paid to the notion of compensation; we formally specify the notion and identify several of its desirable properties. These considerations reveal a refined formal perspective on the explanation method as an extension of the theory of precedential constraint with a formal notion of justification
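The core of the theory of precedential constraint, that a decided case a fortiori forces its outcome on any fact situation at least as strong for the winning side, can be sketched with boolean factors. The encoding below is our own illustration, not the paper's definitions of citability or compensation:

```python
def forces(precedent, fact_situation):
    """Horty-style a fortiori forcing with boolean factors (a sketch).
    A precedent is (pro_factors, con_factors, outcome); a fact situation
    is (pro_factors, con_factors). The precedent forces its outcome when
    the new case has all the winning side's factors and no more than the
    losing side's factors."""
    pro, con, outcome = precedent
    f_pro, f_con = fact_situation
    if outcome == 'pro':
        return pro <= f_pro and f_con <= con
    return con <= f_con and f_pro <= pro

precedent = ({'p1', 'p2'}, {'d1'}, 'pro')
print(forces(precedent, ({'p1', 'p2', 'p3'}, set())))  # True
print(forces(precedent, ({'p1'}, {'d1', 'd2'})))       # False
```

In the explanation method described above, the case base is the training data of the machine learning system, and a forcing precedent supplies a justification for the outcome of the focus case.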

    Arguing about the existence of conflicts

    In this paper we formalise a meta-argumentation framework as an ASPIC+ extension which enables reasoning about conflicts between formulae of the argumentation language. The result is a standard abstract argumentation framework that can be evaluated via grounded semantics

    Justifications derived from inconsistent case bases using authoritativeness

    Post hoc analyses are used to provide interpretable explanations for machine learning predictions made by an opaque model. We modify a top-level model (AF-CBA) that uses case-based argumentation as such a post hoc analysis. AF-CBA justifies model predictions on the basis of an argument graph constructed using precedents from a case base. The effectiveness of this approach is limited when faced with an inconsistent case base, which is frequently encountered in practice. Reducing an inconsistent case base to a consistent subset is possible but undesirable. By altering the approach’s definition of best precedent to include an additional criterion based on an expression of authoritativeness, we allow AF-CBA to handle inconsistent case bases. We experiment with four different expressions of authoritativeness using three different data sets in order to evaluate their effect on the explanations generated, in terms of the average number of precedents and the number of inconsistent a fortiori forcing relations
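What an inconsistent case base means here can be illustrated with a small sketch: two decided cases such that one a fortiori forces the opposite outcome of the other. This uses boolean factors and a hypothetical minimal rendering of forcing; AF-CBA's actual definitions, including the authoritativeness criterion, are richer:

```python
def forces(precedent, fact_situation):
    # A fortiori forcing with boolean factors (illustrative sketch):
    # the new case has all the winning side's factors and no more than
    # the losing side's factors.
    pro, con, outcome = precedent
    f_pro, f_con = fact_situation
    if outcome == 'pro':
        return pro <= f_pro and f_con <= con
    return con <= f_con and f_pro <= pro

def inconsistent(case_base):
    """A case base is inconsistent if some decided case a fortiori
    forces the opposite outcome of another decided case."""
    return any(c1 is not c2 and c1[2] != c2[2] and forces(c1, (c2[0], c2[1]))
               for c1 in case_base for c2 in case_base)

# The second case is at least as strong for 'pro' as the first,
# yet was decided 'con': the two forcing relations conflict.
cases = [({'p'}, set(), 'pro'), ({'p', 'q'}, set(), 'con')]
print(inconsistent(cases))  # True
```

Pruning such conflicting pairs away would yield a consistent subset, but, as noted above, discarding decided cases is undesirable, which motivates ranking competing precedents by authoritativeness instead.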