
    A Comparative Study of Ranking-based Semantics for Abstract Argumentation

    Argumentation is a process of evaluating and comparing a set of arguments. One way to compare them is to use a ranking-based semantics, which rank-orders arguments from the most to the least acceptable. Recently, a number of such semantics have been proposed independently, often together with some desirable properties. However, no comparative study has yet taken a broader perspective. This is what we propose in this work. We provide a general comparison of all these semantics with respect to the proposed properties, which allows us to highlight the behavioural differences between the existing semantics.
    Comment: Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI-2016), Feb 2016, Phoenix, United States
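To make the idea of rank-ordering concrete, here is a minimal sketch of one well-known ranking-based semantics, the h-categoriser (the abstract does not name the semantics it compares; this particular choice is an illustration only): an argument's strength is 1 / (1 + the sum of its attackers' strengths), computed by fixed-point iteration, and arguments are then ranked by decreasing strength.

```python
def h_categoriser(arguments, attacks, iterations=100):
    """Strength of each argument under the h-categoriser semantics.

    attacks: set of (attacker, target) pairs.
    """
    strength = {a: 1.0 for a in arguments}
    for _ in range(iterations):  # fixed-point iteration
        strength = {
            a: 1.0 / (1.0 + sum(strength[b] for (b, t) in attacks if t == a))
            for a in arguments
        }
    return strength

# a attacks b, b attacks c: the unattacked argument a ranks first, and
# c (attacked only by the weakened b) ranks above b (attacked by a).
strengths = h_categoriser({"a", "b", "c"}, {("a", "b"), ("b", "c")})
ranking = sorted(strengths, key=strengths.get, reverse=True)  # ['a', 'c', 'b']
```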

    Some Supplementaries to The Counting Semantics for Abstract Argumentation

    Dung's abstract argumentation framework consists of a set of interacting arguments and a series of semantics for evaluating them. Those semantics partition the powerset of the set of arguments into two classes: extensions and non-extensions. In order to reason with a specific semantics, one needs to take a credulous or skeptical approach, i.e. an argument is eventually accepted if it is accepted in one or all extensions, respectively. In our previous work \cite{ref-pu2015counting}, we proposed a novel semantics, called \emph{counting semantics}, which allows for a more fine-grained assessment of arguments by counting the number of their respective attackers and defenders, based on argument graphs and argument games. In this paper, we continue our previous work by presenting some supplementary results on how to choose the damping factor for the counting semantics, and on its relationships with some existing approaches, such as Dung's classical semantics and generic gradual valuations. Lastly, an axiomatic perspective on the ranking semantics induced by our counting semantics is presented.
    Comment: 8 pages, 3 figures, ICTAI 201
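The counting idea can be sketched roughly as follows. This is a simplified reading, not the paper's exact definition: the normalisation of the attack matrix by maximum in-degree and the exact use of the damping factor d are assumptions made here to keep the series convergent. Attackers at odd distance subtract from an argument's value, defenders at even distance add to it, and each step k is damped by d**k.

```python
def counting_values(arguments, attacks, d=0.9, steps=50):
    """Approximate counting-style values v = sum_k (-d)^k (A^k e)."""
    idx = {a: i for i, a in enumerate(sorted(arguments))}
    n = len(idx)
    # A[i][j] = 1 if argument j attacks argument i
    A = [[0.0] * n for _ in range(n)]
    for atk, tgt in attacks:
        A[idx[tgt]][idx[atk]] = 1.0
    # normalise (an assumption here) so the damped series converges
    norm = max(1.0, max(sum(row) for row in A))
    A = [[x / norm for x in row] for row in A]

    v = [1.0] * n     # running total of the series
    term = [1.0] * n  # current term A^k e
    for k in range(1, steps):
        term = [sum(A[i][j] * term[j] for j in range(n)) for i in range(n)]
        for i in range(n):
            v[i] += (-d) ** k * term[i]
    return {a: v[i] for a, i in idx.items()}
```

With a single attack from a to b, the unattacked a keeps value 1 while b is reduced by the damped weight of its attacker.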

    Inferring Attack Relations for Gradual Semantics


    Comparing and Extending the Use of Defeasible Argumentation with Quantitative Data in Real-World Contexts

    Dealing with uncertain, contradicting, and ambiguous information is still a central issue in Artificial Intelligence (AI). As a result, many formalisms have been proposed or adapted to account for non-monotonicity. A non-monotonic formalism is one that allows the retraction of previous conclusions or claims, drawn from premises, in light of new evidence, offering desirable flexibility when dealing with uncertainty. Among the possible options, knowledge-based, non-monotonic reasoning approaches have seen increasing use in practice. Nonetheless, only a limited number of works have compared them. This research article focuses on evaluating the inferential capacity of defeasible argumentation, a formalism particularly envisioned for modelling non-monotonic reasoning. In addition, fuzzy reasoning and expert systems, extended to handle non-monotonicity of reasoning, are employed as baselines, due to their wide and accepted use within the AI community. Computational trust was selected as the domain of application of these models. Trust is an ill-defined construct; hence, reasoning applied to the inference of trust can be seen as non-monotonic. Inference models were designed to assign trust scalars to editors of the Wikipedia project. Scalars assigned to recognised trustworthy editors provided the basis for the analysis of the models' inferential capacity according to evaluation metrics from the domain of computational trust. In particular, argument-based models demonstrated more robustness than those built upon the baselines, regardless of the knowledge bases or datasets employed. This study contributes to the body of knowledge through the exploitation of defeasible argumentation and its comparison to similar approaches. It provides publicly available implementations of the designed inference models, which might be a useful aid to scholars interested in performing non-monotonic reasoning activities. It adds to previous works, empirically strengthening the generalisability of defeasible argumentation as a compelling approach to reason with quantitative data and uncertain knowledge.

    Dealing with similarity in argumentation

    Argumentative reasoning is based on justifying a plausible conclusion with arguments in its favour. Argumentation is a promising model for reasoning with uncertain or inconsistent knowledge or, more generally, with common sense. This model is based on the construction of arguments and counter-arguments, the comparison of these arguments, and finally the evaluation of the strength of each of them. In this thesis, we tackle the notion of similarity between arguments. We study two aspects: how to measure it and how to take it into account in the evaluation of strengths. Regarding the first aspect, we focus on logical arguments, more precisely on arguments built from propositional knowledge bases. We start by proposing a set of axioms that a similarity measure between logical arguments must satisfy. We then propose different measures and study their properties. The second part of the thesis defines the theoretical foundations describing the principles and processes involved in designing an evaluation method for arguments that takes similarity into account. Such a method computes the strength of an argument based on the strengths of its attackers, the similarities between them, and an initial weight of the argument. Formally, an evaluation method is defined by three functions, one of which, called the adjustment function, readjusts the strengths of the attackers according to their similarity. We propose properties that the three functions must satisfy, then define a large family of methods and study their properties. Finally, we define different adjustment functions, showing that different strategies can be followed to avoid the redundancy that may exist between the attackers of an argument.
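As a rough illustration of the role of an adjustment function, here is one hypothetical instance, not one of the thesis's actual definitions: each attacker's strength is discounted by its highest similarity to a stronger attacker already counted, so near-duplicate attackers are not double-counted, and the argument's strength is then computed from its initial weight and the adjusted total.

```python
def evaluate(weight, attackers, similarity):
    """weight: initial weight of the argument in [0, 1];
    attackers: {name: strength}; similarity: {(a, b): sim in [0, 1]}.
    """
    ranked = sorted(attackers, key=attackers.get, reverse=True)
    adjusted = {}
    for i, a in enumerate(ranked):
        # discount by the highest similarity to a stronger attacker
        sim = max((similarity.get((a, b), similarity.get((b, a), 0.0))
                   for b in ranked[:i]), default=0.0)
        adjusted[a] = attackers[a] * (1.0 - sim)
    total = sum(adjusted.values())
    # h-categoriser-style aggregation, chosen here for illustration
    return weight / (1.0 + total)

# Two identical attackers (similarity 1) count only once:
s = evaluate(1.0, {"b": 0.5, "c": 0.5}, {("b", "c"): 1.0})  # 1 / 1.5
```

Without the similarity information, both attackers would count in full and the strength would drop to 1 / 2 instead of 1 / 1.5.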

    Evaluating the Impact of Defeasible Argumentation as a Modelling Technique for Reasoning under Uncertainty

    Limited work exists on comparing distinct knowledge-based approaches in Artificial Intelligence (AI) for non-monotonic reasoning, and in particular on examining their inferential and explanatory capacity. Non-monotonicity, or defeasibility, allows the retraction of a conclusion in the light of new information. This pattern is similar to human reasoning, which draws conclusions in the absence of complete information but allows them to be corrected once new pieces of evidence arise. Thus, this thesis focuses on a comparison of three AI approaches for implementing non-monotonic models of inference, namely expert systems, fuzzy reasoning, and defeasible argumentation. Three applications from the fields of decision-making in healthcare and knowledge representation and reasoning were selected from real-world contexts for evaluation: human mental workload modelling, computational trust modelling, and mortality occurrence modelling with biomarkers. The link between these applications comes from their presumptively non-monotonic nature: they present incomplete, ambiguous, and retractable pieces of evidence. Hence, reasoning applied to them is likely well suited to being modelled by non-monotonic reasoning systems. An experiment was performed using six deductive knowledge bases produced with the aid of domain experts. These were coded into models built upon the selected reasoning approaches and subsequently elicited with real-world data. The numerical inferences produced by these models were analysed according to common evaluation metrics for each field of application. For the examination of explanatory capacity, properties such as understandability, extensibility, and post-hoc interpretability were described in detail and qualitatively compared. Findings suggest that the variance of the inferences produced by expert systems and fuzzy reasoning models was higher, highlighting poor stability. In contrast, the variance of argument-based models was lower, showing superior stability of their inferences across different system configurations. In addition, when compared in a context with large amounts of conflicting information, defeasible argumentation exhibited a stronger potential for conflict resolution while producing robust inferences. An in-depth discussion of explanatory capacity showed how defeasible argumentation can lead to the construction of non-monotonic models with appealing properties of explainability compared to those built with expert systems and fuzzy reasoning. The originality of this research lies in the quantification of the impact of defeasible argumentation. It illustrates the construction of an extensive number of non-monotonic reasoning models through a modular design, and it exemplifies how these models can be exploited for performing non-monotonic reasoning and producing quantitative inferences in real-world applications. It contributes to the field of non-monotonic reasoning by situating defeasible argumentation among similar approaches through a novel empirical comparison.

    Graduality in Probabilistic Argumentation Frameworks

    Gradual semantics are methods that evaluate the overall strengths of individual arguments in argumentation graphs. In this paper, we investigate gradual semantics for extended frameworks in which probabilities quantify the uncertainty about which arguments and attacks belong to the graph. We define the likelihoods of an argument's possible strengths when facing uncertainty about the topology of the argumentation framework. We also define an approach to compare the strengths of arguments in this probabilistic setting. Finally, we propose a method to calculate the overall strength of each argument in the framework, and we evaluate this method against a set of principles.
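The subgraph-based reading of this setting can be sketched as follows. This is an illustrative, constellation-style computation under assumed independent argument-inclusion probabilities; the paper's own method and semantics are not reproduced here. Each induced subgraph is scored with a gradual semantics (the h-categoriser, as an example), and an argument's overall strength is the probability-weighted expectation, with an absent argument contributing strength 0.

```python
from itertools import combinations

def h_cat(args, attacks, iters=100):
    """h-categoriser strengths on one concrete (sub)graph."""
    s = {a: 1.0 for a in args}
    for _ in range(iters):
        s = {a: 1.0 / (1.0 + sum(s[b] for b, t in attacks if t == a))
             for a in args}
    return s

def expected_strengths(arg_probs, attacks):
    """arg_probs: {argument: independent inclusion probability}."""
    args = list(arg_probs)
    exp = {a: 0.0 for a in args}
    for r in range(len(args) + 1):
        for subset in combinations(args, r):
            # probability of exactly this induced subgraph
            p = 1.0
            for a in args:
                p *= arg_probs[a] if a in subset else 1.0 - arg_probs[a]
            sub_attacks = {(b, t) for b, t in attacks
                           if b in subset and t in subset}
            s = h_cat(set(subset), sub_attacks)
            for a in subset:
                exp[a] += p * s[a]
    return exp
```

For instance, if a is certain and b (attacked by a) is present with probability 0.5, b's expected strength is 0.5 × 0.5 = 0.25.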

    Are ranking semantics sensitive to the notion of core?

    In this paper, we study the impact of two notions of core on the output of ranking semantics in logical argumentation frameworks. We consider the existential rules fragment, a language widely used in Semantic Web and Ontology-Based Data Access applications. Using burden semantics as an example, we show how some ranking semantics yield different outputs on an argumentation graph and its cores. We extend existing results in the literature on core equivalences for logical argumentation frameworks and propose the first formal characterisation of core-induced modification for a class of ranking semantics satisfying given postulates.