
    Arguments as Belief Structures: Towards a Toulmin Layout of Doxastic Dynamics?

    Argumentation is a dialogical attempt to bring about a desired change in the beliefs of another agent – that is, to trigger a specific belief revision process in the mind of that agent. However, formal models of belief revision have so far largely neglected any systematic comparison with argumentation theories, to the point that even the simplest argumentation structures cannot be captured within such models. In this essay, we endeavour to bring together argumentation and belief revision in the same formal framework, and to highlight the important role played by Toulmin’s layout of argument in fostering such integration.

    An informant-based approach to argument strength in Defeasible Logic Programming

    This work formalizes an informant-based structured argumentation approach in a multi-agent setting, where the knowledge base of an agent may include information provided by other agents, and each piece of knowledge comes attached with its informant. In that way, arguments are associated with the set of informants corresponding to the information they are built upon. Our approach proposes an informant-based notion of argument strength, where the strength of an argument is determined by the credibility of its informant agents. Moreover, we consider that the strength of an argument is not absolute, but relative to the resolution of the conflicts the argument is involved in. In other words, the strength of an argument may vary from one context to another, as it will be determined by comparison to its attacking arguments (respectively, the arguments it attacks). Finally, we equip agents with the means to express reasons for or against the consideration of any piece of information provided by a given informant agent. Consequently, we allow agents to argue about the arguments’ strength through the construction of arguments that challenge (respectively, defeat) or are in favour of their informant agents.
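    The core idea above – argument strength derived from informant credibility, with attacks resolved by pairwise comparison – can be sketched roughly as follows. This is an illustrative toy only; the credibility scores, the weakest-link aggregation, and the names `strength`/`resolve` are assumptions for the sketch, not the paper's definitions.

```python
# Hypothetical sketch of informant-based argument strength.
# Each argument carries the set of informants it is built upon.
credibility = {"alice": 0.9, "bob": 0.4, "carol": 0.7}

def strength(informants):
    # Weakest-link assumption: an argument is only as strong as its
    # least credible informant.
    return min(credibility[i] for i in informants)

def resolve(attacker, attacked):
    """Return the argument surviving a pairwise conflict: the attack
    succeeds only if the attacker is at least as strong."""
    if strength(attacker["informants"]) >= strength(attacked["informants"]):
        return attacker
    return attacked

arg_a = {"claim": "p", "informants": {"alice", "carol"}}
arg_b = {"claim": "~p", "informants": {"bob"}}
print(resolve(arg_a, arg_b)["claim"])  # prints "p"
```

    Note that strength here is contextual in the sense the abstract describes: it only matters relative to the arguments an argument is compared against in a given conflict.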

    Argumentation and data-oriented belief revision: On the two-sided nature of epistemic change

    This paper aims to bring together two separate threads in the formal study of epistemic change: belief revision and argumentation theories. Belief revision describes the way in which an agent is supposed to change its own mind, while argumentation deals with persuasive strategies employed to change the mind of other agents. Belief change and argumentation are two sides (cognitive and social) of the same epistemic coin. Argumentation theories are therefore incomplete if they cannot be grounded in belief revision models - and vice versa. Nonetheless, the formal treatment of belief revision has so far largely neglected any systematic comparison with argumentation theories. This lack of integration poses severe limitations on our understanding of epistemic change, and more comprehensive models should instead be devised. After a short critical review of the literature (cf. 1), we outline an alternative model of belief revision whose main claim is the distinction between data and beliefs (cf. 2), and we discuss in detail its expressivity with respect to argumentation (cf. 3); finally, we summarize our conclusions and future work on the interface between belief revision and argumentation (cf. 4).

    Argumentation models and their use in corpus annotation: practice, prospects, and challenges

    The study of argumentation is transversal to several research domains, from philosophy to linguistics, from law to computer science and artificial intelligence. In discourse analysis, several distinct models have been proposed to harness argumentation, each with a different focus or aim. To analyze the use of argumentation in natural language, several corpus annotation efforts have been carried out, with a more or less explicit grounding in one of these theoretical argumentation models. In fact, given the recent growing interest in argument mining applications, argument-annotated corpora are crucial to train machine learning models in a supervised way. However, the proliferation of such corpora has led to a wide disparity in the granularity of the argument annotations employed. In this paper, we review the most relevant theoretical argumentation models, after which we survey argument annotation projects closely following those theoretical models. We also highlight the main simplifications that are often introduced in practice. Furthermore, we take a look at other annotation efforts that are not so theoretically grounded but instead follow a shallower approach. It turns out that most argument annotation projects make their own assumptions and simplifications, both in terms of the textual genre they focus on and in terms of adapting the adopted theoretical argumentation model to their own agenda. Issues of compatibility among argument-annotated corpora are discussed by looking at the problem from a syntactic, semantic, and practical perspective. Finally, we discuss current and prospective applications of models that take advantage of argument-annotated corpora.

    A logic of defeasible argumentation: Constructing arguments in justification logic

    In the 1980s, Pollock’s work on default reasons started the quest in the AI community for a formal system of defeasible argumentation. The main goal of this paper is to provide a logic of structured defeasible arguments using the language of justification logic. In this logic, we introduce defeasible justification assertions of the type t:F that read as “t is a defeasible reason that justifies F”. Such formulas are then interpreted as arguments and their acceptance semantics is given in analogy to Dung’s abstract argumentation framework semantics. We show that a large subclass of Dung’s frameworks that we call “warranted” frameworks is a special case of our logic in the sense that (1) Dung’s frameworks can be obtained from justification logic-based theories by focusing on a single aspect of attacks among justification logic arguments and (2) Dung’s warranted frameworks always have multiple justification logic instantiations called “realizations”. We first define a new justification logic that relies on operational semantics for default logic. One of the key features that is absent in standard justification logics is the possibility to weigh different epistemic reasons or pieces of evidence that might conflict with one another. To amend this, we develop a semantics for “defeaters”: conflicting reasons forming a basis to doubt the original conclusion or to believe an opposite statement. This enables us to formalize non-monotonic justifications that prompt extension revision already for normal default theories. Then we present our logic as a system for abstract argumentation with structured arguments. The format of conflicting reasons overlaps with the idea of attacks between arguments to the extent that it is possible to define all the standard notions of argumentation framework extensions. Using the definitions of extensions, we establish formal correspondence between Dung’s original argumentation semantics and our operational semantics for default theories. One of the results shows that the notorious attack cycles from abstract argumentation cannot always be realized as justification logic default theories.
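    The "standard notions of argumentation framework extensions" invoked above are the usual Dung semantics. As a rough illustration of one of them (not the paper's construction), the grounded extension of an abstract framework can be computed by iterating the characteristic function to a fixed point:

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation
    framework (arguments, attacks) by iterating Dung's characteristic
    function until a fixed point is reached."""
    attackers = {a: {b for (b, c) in attacks if c == a} for a in arguments}
    extension = set()
    while True:
        # An argument is acceptable w.r.t. the current extension if
        # every one of its attackers is attacked by the extension.
        acceptable = {
            a for a in arguments
            if all(any((d, b) in attacks for d in extension)
                   for b in attackers[a])
        }
        if acceptable == extension:
            return extension
        extension = acceptable

# Example chain: a attacks b, b attacks c.
# a is unattacked, so a is in; a defends c against b, so c is in.
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```

    The attack cycles mentioned in the result (e.g. a attacks b and b attacks a) are precisely the frameworks where this fixed point stays empty, which is one way to see why realizing them as default theories is delicate.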

    Anchored narratives and dialectical argumentation

    Trying criminal cases is hard. The problem faced by a judge in court can be phrased in a deceptively simple way, though, as follows: in order to come to a verdict, a judge has to apply the rules of law to the facts of the case. In a naïve and often criticized model of legal decision making (reminiscent of the bouche de la loi view of judges), the verdict is determined by applying the rules of law that match the case facts. This naïve model of legal decision making can be referred to as the subsumption model. A problem with the subsumption model is that neither the rules of law nor the case facts are available to the legal decision maker in a sufficiently well-structured form to make the processes of matching and applying a trivial matter. First, there is the problem of determining what the rules of law and the case facts are. Neither the rules nor the facts are presented to the judge in a precise and unambiguous way. A judge has to interpret the available information about the rules of law and the case facts. Second, even if the rules of law and the case facts were determined, the processes of matching and applying can be problematic. It can for instance be undetermined whether some case fact falls under a particular rule's condition. Additional classificatory rules are then needed. In general, applying the rules of law can lead to conflicting verdicts about the case at hand, or to no verdict at all. In the latter situation, it is at the judge's discretion to fill the gap; in the former, he has to resolve the conflict.

    A PRISMA-driven systematic mapping study on system assurance weakeners

    Context: An assurance case is a structured hierarchy of claims aiming at demonstrating that a given mission-critical system supports specific requirements (e.g., safety, security, privacy). The presence of assurance weakeners (i.e., assurance deficits, logical fallacies) in assurance cases reflects insufficient evidence, knowledge, or gaps in reasoning. These weakeners can undermine confidence in assurance arguments, potentially hindering the verification of mission-critical system capabilities. Objectives: As a stepping stone for future research on assurance weakeners, we aim to initiate the first comprehensive systematic mapping study on this subject. Methods: We followed the well-established PRISMA 2020 and SEGRESS guidelines to conduct our systematic mapping study. We searched for primary studies in five digital libraries and focused on the 2012-2023 publication year range. Our selection criteria focused on studies addressing assurance weakeners at the modeling level, resulting in the inclusion of 39 primary studies in our systematic review. Results: Our systematic mapping study reports a taxonomy (map) that provides a uniform categorization of assurance weakeners and approaches proposed to manage them at the modeling level. Conclusion: Our study findings suggest that the SACM (Structured Assurance Case Metamodel) -- a standard specified by the OMG (Object Management Group) -- may be the best specification to capture structured arguments and reason about their potential assurance weakeners.
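    To make the notion of an assurance weakener concrete: one simple family of weakeners is the "undeveloped" claim, a claim supported by neither evidence nor sub-claims. The toy claim tree below is an illustrative stand-in only; it is not the SACM metamodel, and all class and function names are assumptions for the sketch.

```python
# Illustrative sketch: flag undeveloped claims in a toy assurance-case
# tree as potential assurance weakeners (assurance deficits).
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    evidence: list = field(default_factory=list)   # e.g. test reports
    subclaims: list = field(default_factory=list)

def undeveloped(claim):
    """Collect claims supported by neither evidence nor sub-claims."""
    found = []
    if not claim.evidence and not claim.subclaims:
        found.append(claim.text)
    for sub in claim.subclaims:
        found.extend(undeveloped(sub))
    return found

top = Claim("System is acceptably safe", subclaims=[
    Claim("Hazard H1 is mitigated", evidence=["fault-tree analysis"]),
    Claim("Hazard H2 is mitigated"),   # no support: a weakener
])
print(undeveloped(top))  # → ['Hazard H2 is mitigated']
```

    Real approaches surveyed in the study work at the modeling level on far richer structures (strategies, contexts, dialectical counter-arguments), but the detection-over-a-claim-hierarchy shape is the same.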

    Attack and support alternatives for rule-based argumentation systems

    This line of research aims to improve Rule-Based Argumentation Systems (Sistemas Argumentativos Basados en Reglas, SABR) by incorporating elements present in classical argumentation formalisms. A criticism commonly levelled at SABR is that certain patterns of argumentative reasoning studied in other areas, which constitute important contributions to argumentation, are not considered in their formal definition. This research aims to incorporate those contributions into SABR, thereby improving both the systems themselves and their respective implementations. Ultimately, these improvements would represent a significant advance for argumentation systems within Artificial Intelligence and Computer Science.

    In memoriam Douglas N. Walton: the influence of Doug Walton on AI and law

    Doug Walton, who died in January 2020, was a prolific author whose work in informal logic and argumentation had a profound influence on Artificial Intelligence, including Artificial Intelligence and Law. He was also very interested in interdisciplinary work, and a frequent and generous collaborator. In this paper seven leading researchers in AI and Law, all past programme chairs of the International Conference on AI and Law who have worked with him, describe his influence on their work.