
    A Comparative Study of Ranking-based Semantics for Abstract Argumentation

    Argumentation is a process of evaluating and comparing a set of arguments. One way to compare arguments is to use a ranking-based semantics, which rank-orders them from the most to the least acceptable. Recently, a number of such semantics have been proposed independently, each often associated with desirable properties; however, no comparative study has taken a broader perspective. That is what we propose in this work: a general comparison of all these semantics with respect to the proposed properties, which highlights the differences in behavior between the existing semantics.
    Comment: Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI-2016), Feb 2016, Phoenix, United States
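
    Below is a minimal, illustrative sketch of one well-known ranking-based semantics (the h-categoriser, computed by fixed-point iteration) applied to a toy abstract argumentation framework; the framework and all names are invented for illustration and are not taken from the paper.

    # Sketch of a ranking-based semantics: the h-categoriser, computed by
    # fixed-point iteration over a toy (invented) argumentation framework.

    def h_categoriser(arguments, attacks, iterations=100):
        """Score each argument; strong attackers lower the score of their targets."""
        attackers = {a: [b for (b, c) in attacks if c == a] for a in arguments}
        score = {a: 1.0 for a in arguments}
        for _ in range(iterations):
            score = {a: 1.0 / (1.0 + sum(score[b] for b in attackers[a]))
                     for a in arguments}
        return score

    # Toy framework: a attacks b, and b attacks c.
    args = {"a", "b", "c"}
    atts = {("a", "b"), ("b", "c")}
    ranking = sorted(h_categoriser(args, atts).items(), key=lambda kv: -kv[1])
    print(ranking)  # arguments ordered from the most to the least acceptable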

    Coalitional games for abstract argumentation

    In this work we address the uncertainty faced by a user participating in a multiagent debate. We propose a way to compute the relative relevance of arguments for such a user by merging the classical argumentation framework proposed in [5] into a game-theoretic coalitional setting, where the worth of a collection of arguments (opinions) can be seen as the combination of information concerning the defeat relation and the user's preferences over arguments. Via a property-driven approach, we show that the Shapley value [15] for coalitional games defined over an argumentation framework can be applied to summarize all the information about the worth of opinions into an attribution of relevance for the single arguments. We also prove that, for a large family of (coalitional) argumentation frameworks, the Shapley value can be easily computed.
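
    As an illustration of the kind of computation involved, the sketch below computes the exact Shapley value of a small coalitional game over arguments; the characteristic function v() is an invented toy (it rewards preferred arguments that are undefeated inside the coalition) and is not the worth function defined in the paper.

    # Shapley value over a toy coalitional game on arguments. The worth
    # function v() is invented for illustration only.
    from itertools import permutations

    arguments = ["a", "b", "c"]
    attacks = {("b", "a"), ("c", "b")}           # (attacker, attacked)
    preference = {"a": 3.0, "b": 1.0, "c": 2.0}  # the user's preferences

    def v(coalition):
        """Toy worth: total preference of members not attacked from within."""
        s = set(coalition)
        return sum(preference[x] for x in s
                   if not any((y, x) in attacks for y in s))

    def shapley(players, worth):
        """Exact Shapley value: average marginal contribution over all
        orderings (feasible for small sets of arguments)."""
        value = {p: 0.0 for p in players}
        orders = list(permutations(players))
        for order in orders:
            seen = []
            for p in order:
                value[p] += worth(seen + [p]) - worth(seen)
                seen.append(p)
        return {p: value[p] / len(orders) for p in players}

    print(shapley(arguments, v))  # relevance attributed to each single argument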

    Argumentation accelerated reinforcement learning

    Reinforcement Learning (RL) is a popular statistical Artificial Intelligence (AI) technique for building autonomous agents, but it suffers from the curse of dimensionality: the computational requirement for obtaining optimal policies grows exponentially with the size of the state space. Integrating heuristics into RL has proven to be an effective way to combat this curse, but deriving high-quality heuristics from people's (typically conflicting) domain knowledge is challenging and has received little research attention. Argumentation theory is a logic-based AI technique well known for its conflict-resolution capability and intuitive appeal. In this thesis, we investigate the integration of argumentation frameworks into RL algorithms in order to improve their convergence speed. In particular, we propose a variant of the Value-based Argumentation Framework (VAF) to represent domain knowledge and to derive heuristics from it. We prove that the heuristics derived from this framework can effectively instruct individual learning agents as well as multiple cooperative learning agents. In addition, we propose the Argumentation Accelerated RL (AARL) framework to integrate these heuristics into different RL algorithms via Potential-Based Reward Shaping (PBRS) techniques: we use classical PBRS techniques for AARL based on flat RL (e.g. SARSA(λ)), and propose a novel PBRS technique for MAXQ-0, a hierarchical RL (HRL) algorithm, to implement AARL based on HRL. We empirically test two AARL implementations, SARSA(λ)-based AARL and MAXQ-based AARL, in multiple application domains, including single-agent and multi-agent learning problems. Empirical results indicate that AARL can improve the convergence speed of RL and can be easily used by people with little background in argumentation and RL.
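
    The sketch below shows the core of potential-based reward shaping inside a tabular SARSA update; the potential function phi() merely stands in for the argumentation-derived heuristics described in the thesis, and the interface and names are assumptions made for illustration.

    # Potential-based reward shaping (PBRS) inside a tabular SARSA update.
    # phi() is a placeholder for argumentation-derived heuristic values.
    import random
    from collections import defaultdict

    GAMMA, ALPHA, EPSILON = 0.95, 0.1, 0.1
    Q = defaultdict(float)  # Q[(state, action)]

    def phi(state):
        """State potential; in AARL this would come from the VAF-derived
        heuristics rather than this constant placeholder."""
        return 0.0

    def epsilon_greedy(state, actions):
        if random.random() < EPSILON:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])

    def sarsa_step(s, a, r, s_next, a_next):
        # PBRS adds gamma * phi(s') - phi(s) to the reward; this form of
        # shaping preserves the optimal policy while guiding exploration.
        shaped_r = r + GAMMA * phi(s_next) - phi(s)
        td_target = shaped_r + GAMMA * Q[(s_next, a_next)]
        Q[(s, a)] += ALPHA * (td_target - Q[(s, a)])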

    Labeled bipolar argumentation frameworks

    An essential part of argumentation-based reasoning is to identify arguments in favor of and against a statement or query, select the acceptable ones, and then determine whether or not the original statement should be accepted. We present an abstract framework that considers two independent forms of argument interaction, support and conflict, and is able to represent distinctive information associated with these arguments. This information enables additional capabilities, such as: (i) a more in-depth analysis of the relations between the arguments; (ii) a representation of the user's posture to help focus the argumentative process, optimizing the values of attributes associated with certain arguments; and (iii) an enhancement of the semantics that takes advantage of the richer information available about argument acceptability. The classical semantic definitions are thus enhanced, and a set of postulates they satisfy is analyzed. Finally, a polynomial-time algorithm to perform the labeling process, taking the argument interactions into account, is introduced.
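
    As a simplified illustration of the labeling idea, the sketch below computes a standard grounded IN/OUT/UNDEC labelling over the attack relation only; the paper's framework additionally propagates support relations and attribute labels, which are omitted here.

    # Grounded IN/OUT/UNDEC labelling over attacks only (supports and
    # attribute labels from the paper are omitted). Runs in polynomial time.

    def grounded_labelling(arguments, attacks):
        attackers = {a: {b for (b, c) in attacks if c == a} for a in arguments}
        label = {a: "UNDEC" for a in arguments}
        changed = True
        while changed:
            changed = False
            for a in arguments:
                if label[a] != "UNDEC":
                    continue
                if all(label[b] == "OUT" for b in attackers[a]):
                    label[a] = "IN"      # every attacker is defeated
                    changed = True
                elif any(label[b] == "IN" for b in attackers[a]):
                    label[a] = "OUT"     # defeated by an accepted argument
                    changed = True
        return label

    print(grounded_labelling({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
    # {'a': 'IN', 'b': 'OUT', 'c': 'IN'}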

    Evaluating the Impact of Defeasible Argumentation as a Modelling Technique for Reasoning under Uncertainty

    Limited work exists on comparing distinct knowledge-based approaches in Artificial Intelligence (AI) for non-monotonic reasoning, and in particular on examining their inferential and explanatory capacity. Non-monotonicity, or defeasibility, allows a conclusion to be retracted in the light of new information. This mirrors human reasoning, which draws conclusions in the absence of complete information but allows them to be corrected once new pieces of evidence arise. This thesis therefore focuses on a comparison of three AI approaches for implementing non-monotonic reasoning models of inference: expert systems, fuzzy reasoning, and defeasible argumentation. Three applications from the fields of decision-making in healthcare and knowledge representation and reasoning were selected from real-world contexts for evaluation: human mental workload modelling, computational trust modelling, and mortality-occurrence modelling with biomarkers. The link between these applications is their presumptively non-monotonic nature: they present incomplete, ambiguous and retractable pieces of evidence, so reasoning about them is likely to be well suited to non-monotonic reasoning systems. An experiment was performed by exploiting six deductive knowledge bases produced with the aid of domain experts. These were coded into models built upon the selected reasoning approaches and were subsequently elicited with real-world data. The numerical inferences produced by these models were analysed according to common evaluation metrics for each field of application. For the examination of explanatory capacity, properties such as understandability, extensibility, and post-hoc interpretability were described in detail and qualitatively compared. Findings suggest that the variance of the inferences produced by the expert systems and fuzzy reasoning models was higher, indicating poor stability. In contrast, the variance of the argument-based models was lower, showing superior stability of their inferences across different system configurations. In addition, when compared in a context with large amounts of conflicting information, defeasible argumentation exhibited a stronger potential for conflict resolution while producing robust inferences. An in-depth discussion of explanatory capacity shows how defeasible argumentation can lead to the construction of non-monotonic models with appealing explainability properties, compared to those built with expert systems and fuzzy reasoning. The originality of this research lies in the quantification of the impact of defeasible argumentation. It illustrates the construction of an extensive number of non-monotonic reasoning models through a modular design, and it exemplifies how these models can be exploited for performing non-monotonic reasoning and producing quantitative inferences in real-world applications. It contributes to the field of non-monotonic reasoning by situating defeasible argumentation among similar approaches through a novel empirical comparison.
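
    The snippet below is a minimal illustration of the non-monotonic pattern described above (a conclusion is withdrawn once new evidence arrives); it is not one of the models built in the thesis, and the workload example is invented.

    # Non-monotonic (defeasible) inference in miniature: the conclusion
    # "high workload" is retracted when defeating evidence appears.

    def high_workload(evidence):
        if "long_task_time" not in evidence:
            return False
        if "operator_is_expert" in evidence:  # new evidence defeats the rule
            return False
        return True

    print(high_workload({"long_task_time"}))                        # True
    print(high_workload({"long_task_time", "operator_is_expert"}))  # False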

    An Investigation of Argumentation Theory for the Prediction of Survival in Elderly Using Biomarkers

    Research on the discovery, classification and validation of biological markers, or biomarkers, has grown extensively in recent decades. Newfound and correctly validated biomarkers have great potential as prognostic and diagnostic indicators, but they present a complex relationship with pertinent endpoints such as survival or other disease manifestations. This research proposes computational argumentation theory as a starting point for addressing this problem in cases where a large amount of data is unavailable. A knowledge base containing 51 different biomarkers and their association with mortality risks in the elderly was provided by a clinician. It was used to construct several argument-based models capable of inferring survival or non-survival. The prediction accuracy and sensitivity of these models were investigated, showing that they are in line with inductive classification using decision trees with limited data.
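
    For illustration, the sketch below turns a few biomarker readings into arguments for or against survival and resolves them with a simple defeat check; the biomarkers, thresholds, and attack structure are invented and do not reflect the clinician's 51-biomarker knowledge base.

    # Toy argument-based prediction of survival from biomarker readings.
    # All thresholds and relations are invented for illustration.

    def build_arguments(readings):
        args = {}  # name -> (stance, argument it attacks or None)
        if readings.get("albumin", 5.0) < 3.5:
            args["low_albumin"] = ("against_survival", None)
        if readings.get("crp", 0.0) > 10.0:
            args["high_crp"] = ("against_survival", None)
        if readings.get("grip_strength", 0.0) > 26.0:
            args["good_grip"] = ("for_survival", "low_albumin")  # defeater
        return args

    def predict_survival(readings):
        args = build_arguments(readings)
        attacked = {target for (_, target) in args.values() if target}
        undefeated_against = [a for a, (stance, _) in args.items()
                              if stance == "against_survival" and a not in attacked]
        return "non-survival" if undefeated_against else "survival"

    print(predict_survival({"albumin": 3.0, "crp": 4.0, "grip_strength": 30.0}))
    # 'survival': the low-albumin argument is defeated by the grip-strength one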