20 research outputs found

    Arguing Using Opponent Models

    Get PDF
    Peer reviewed Postprint

    Analysis of Dialogical Argumentation via Finite State Machines

    Get PDF
    Dialogical argumentation is an important cognitive activity by which agents exchange arguments and counterarguments as part of some process such as discussion, debate, persuasion, or negotiation. Whilst numerous formal systems have been proposed, there is a lack of frameworks for implementing and evaluating these proposals. First-order executable logic has been proposed as a general framework for specifying and analysing dialogical argumentation. In this paper, we investigate how we can implement systems for dialogical argumentation using propositional executable logic. Our approach is to present and evaluate an algorithm that generates a finite state machine reflecting a propositional executable logic specification for dialogical argumentation, together with an initial state. We also consider how the finite state machines can be analysed, with the minimax strategy used as an illustration of the kinds of empirical analysis that can be undertaken.
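    To illustrate the kind of algorithm the abstract describes, the sketch below generates a finite state machine from a toy executable-logic-style specification. It is a minimal sketch under simplified assumptions: the rules are encoded as Python functions from a state to its successor states, and the two agent rules and proposition names are hypothetical, not taken from the paper.

```python
# Minimal sketch (not the paper's implementation): generate an FSM from a
# propositional executable-logic-style specification. A state is a
# frozenset of propositions; each rule maps a state to successor states,
# and all reachable states are explored breadth-first.
from collections import deque

def generate_fsm(initial_state, rules):
    """rules: list of functions state -> iterable of successor states."""
    states, transitions = {initial_state}, []
    queue = deque([initial_state])
    while queue:
        s = queue.popleft()
        for rule in rules:
            for t in rule(s):
                transitions.append((s, t))
                if t not in states:
                    states.add(t)
                    queue.append(t)
    return states, transitions

# Hypothetical two-agent example: each agent may add one counterargument.
def agent1(s):
    if "a" in s and "c1" not in s:
        yield s | {"c1"}

def agent2(s):
    if "c1" in s and "c2" not in s:
        yield s | {"c2"}

states, transitions = generate_fsm(frozenset({"a"}), [agent1, agent2])
print(len(states), "states,", len(transitions), "transitions")
```

    The reachable state graph produced this way could then be analysed with a strategy such as minimax, as the abstract mentions.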

    Two dimensional uncertainty in persuadee modelling in argumentation

    Get PDF
    When attempting to persuade an agent to believe (or disbelieve) an argument, it can be advantageous for the persuader to have a model of the persuadee. Models have been proposed for taking account of which arguments the persuadee believes, and these can be used in a strategy for persuasion. However, there can be uncertainty as to the accuracy of such models. To address this issue, this paper introduces a two-dimensional model that accounts for the uncertainty of belief by a persuadee and for the confidence in that uncertainty evaluation. This gives a better modelling for using lotteries: the outcomes involve statements about what the user believes or disbelieves, and the confidence value is the degree to which the user does indeed hold those outcomes (a more refined and more natural modelling than that found in [19]). This framework is also extended with a modelling of the risk of disengagement by the persuadee.
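    As a rough illustration of the two-dimensional idea, the sketch below pairs each argument with a belief estimate and a separate confidence in that estimate; the blending rule and the example argument are assumptions for illustration, not the paper's formal model.

```python
# Minimal sketch of a two-dimensional persuadee model (illustrative only):
# each argument is mapped to a belief value in [0, 1] and a confidence
# value in [0, 1] expressing how reliable that belief estimate is.
from dataclasses import dataclass

@dataclass
class PersuadeeModel:
    belief: dict      # argument -> estimated degree of belief
    confidence: dict  # argument -> confidence in that estimate

    def expected_belief(self, argument, prior=0.5):
        """Blend the estimate with a neutral prior, weighted by
        confidence: low confidence pulls the estimate back to the prior."""
        c = self.confidence[argument]
        return c * self.belief[argument] + (1 - c) * prior

model = PersuadeeModel(belief={"quit_smoking": 0.8},
                       confidence={"quit_smoking": 0.6})
print(model.expected_belief("quit_smoking"))  # 0.68
```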

    Towards a framework for computational persuasion with applications in behaviour change

    Get PDF
    Persuasion is an activity that involves one party trying to induce another party to believe something or to do something. It is an important and multifaceted human facility. Obviously, sales and marketing are heavily dependent on persuasion. But many other activities involve persuasion, such as a doctor persuading a patient to drink less alcohol, a road safety expert persuading drivers not to text while driving, or an online safety expert persuading users of social media sites not to reveal too much personal information online. As computing becomes involved in every sphere of life, so too is persuasion a target for applying computer-based solutions. An automated persuasion system (APS) is a system that can engage in a dialogue with a user (the persuadee) in order to persuade the persuadee to do (or not do) some action or to believe (or not believe) something. To do this, an APS aims to use convincing arguments in order to persuade the persuadee. Computational persuasion is the study of formal models of dialogue involving arguments and counterarguments, of user models, and of strategies for APSs. A promising application area for computational persuasion is behaviour change. Within healthcare organizations, government agencies, and non-governmental agencies, there is much interest in changing the behaviour of particular groups of people away from actions that are harmful to themselves and/or to others around them.
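    A minimal sketch of the APS dialogue loop described above, under assumed interfaces: score, update, and respond are hypothetical stand-ins for the user model, model revision, and the persuadee's replies; none of them come from the paper.

```python
# Hypothetical APS skeleton: pick the candidate argument the user model
# scores highest, present it, and update the model from the reply.
def run_dialogue(arguments, score, update, respond, max_turns=3):
    """score(arg) -> float; update(arg, reply) revises the user model;
    respond(arg) -> 'accept' | 'reject' is the persuadee's reply."""
    remaining = list(arguments)
    for _ in range(max_turns):
        if not remaining:
            break
        best = max(remaining, key=score)
        remaining.remove(best)
        reply = respond(best)
        if reply == "accept":
            return best          # persuasion goal reached
        update(best, reply)      # refine the persuadee model
    return None

# Toy usage with a dictionary standing in for the persuadee model.
beliefs = {"health": 0.4, "cost": 0.7}
result = run_dialogue(
    arguments=["health", "cost"],
    score=lambda a: beliefs[a],
    update=lambda a, r: beliefs.__setitem__(a, beliefs[a] * 0.5),
    respond=lambda a: "accept" if beliefs[a] > 0.6 else "reject",
)
print(result)  # 'cost'
```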

    Case-Based strategies for argumentation dialogues in agent societies

    Full text link
    In multi-agent systems, agents perform complex tasks that require different levels of intelligence and give rise to interactions among them. From these interactions, conflicts of opinion can arise, especially when these systems become open, with heterogeneous agents dynamically entering or leaving the system. Therefore, agents willing to participate in this type of system will be required to include extra capabilities to explicitly represent and generate agreements on top of the simpler ability to interact. Furthermore, agents in multi-agent systems can form societies, which impose social dependencies on them. These dependencies have a decisive influence on the way agents interact and reach agreements. Argumentation provides a natural means of dealing with conflicts of interest and opinion. Agents can reach agreements by engaging in argumentation dialogues with their opponents in a discussion. In addition, agents can take advantage of previous argumentation experiences to follow dialogue strategies and persuade other agents to accept their opinions. Our insight is that case-based reasoning can be very useful to manage argumentation in open multi-agent systems and devise dialogue strategies based on previous argumentation experiences. To demonstrate the foundations of this suggestion, this paper presents the work that we have done to develop case-based dialogue strategies in agent societies. Thus, we propose a case-based argumentation framework for agent societies and define heuristic dialogue strategies based on it. The framework has been implemented and evaluated in a real customer support application.

    This work is supported by the Spanish Government grants [CONSOLIDER-INGENIO 2010 CSD2007-00022 and TIN2012-36586-C03-01] and by the GVA project [PROMETEO 2008/051].

    Heras Barberá, S. M., Jordan Prunera, J. M., Botti, V., & Julian Inglada, V. J. (2013). Case-based strategies for argumentation dialogues in agent societies. Information Sciences, 223, 1-30. doi:10.1016/j.ins.2012.10.007
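    The sketch below illustrates the general case-based reasoning idea under simplified assumptions: past dialogues are stored as (features, move, outcome) cases, and the move from the most similar winning case is reused. The similarity measure and the case encoding are illustrative, not the paper's framework.

```python
# Minimal sketch of case-based strategy selection (illustrative, not the
# paper's system): reuse the move that worked in the most similar
# previously seen dialogue situation.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def retrieve_move(case_base, situation):
    """case_base: list of (features, move, outcome); keep winning cases
    and return the move of the most similar one."""
    wins = [c for c in case_base if c[2] == "won"]
    if not wins:
        return None
    best = max(wins, key=lambda c: jaccard(c[0], situation))
    return best[1]

cases = [({"price_dispute", "expert_opponent"}, "cite_precedent", "won"),
         ({"delivery_delay"}, "concede_minor_point", "lost")]
print(retrieve_move(cases, {"price_dispute", "novice_opponent"}))
```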

    Syntactic Reasoning with Conditional Probabilities in Deductive Argumentation

    Get PDF
    Evidence from studies, such as in science or medicine, often corresponds to conditional probability statements. Furthermore, evidence can conflict, in particular when coming from multiple studies. Whilst it is natural to make sense of such evidence using arguments, there is a lack of a systematic formalism for representing and reasoning with conditional probability statements in computational argumentation. We address this shortcoming by providing a formalization of conditional probabilistic argumentation based on probabilistic conditional logic. We provide a semantics and a collection of comprehensible inference rules that give different insights into evidence. We show how arguments constructed from proofs and attacks between them can be analyzed as arguments graphs using dialectical semantics and via the epistemic approach to probabilistic argumentation. Our approach allows for a transparent and systematic way of handling uncertainty that often arises in evidence
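    As a small illustration of reasoning with conditional probability statements, the sketch below applies one sound inference step, the chain rule; the encoding and the example figures are assumptions for illustration, not the paper's inference rules.

```python
# Minimal sketch (assumed encoding, not the paper's formalism): one sound
# inference rule over conditional probability statements, the chain rule:
#   P(A) = q and P(B | A) = p  entail  P(A and B) = p * q,
# and hence the lower bound P(B) >= p * q.
def chain_rule(p_antecedent, p_conditional):
    """Return P(antecedent AND consequent) from P(A) and P(B | A)."""
    return p_antecedent * p_conditional

# Evidence from a hypothetical study: P(recovery | treatment) = 0.7,
# with the treatment given at rate P(treatment) = 0.5.
p_joint = chain_rule(0.5, 0.7)
print(f"P(treatment and recovery) = {p_joint}")  # 0.35
print(f"P(recovery) >= {p_joint}")               # lower bound
```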

    Mixing Dyadic and Deliberative Opinion Dynamics in an Agent-Based Model of Group Decision-Making

    Get PDF
    In this article, we propose an agent-based model of opinion diffusion and voting in which influence among individuals and deliberation in a group are mixed. The model is inspired by social modelling: it describes an iterative process of collective decision-making that alternates inter-individual influence steps with collective deliberation steps, and studies the evolution of opinions and decisions in a group. It also aims to provide a comprehensive model of collective decision-making as a combination of two paradigms, argumentation theory and ABM-influence models, which are not straightforward to combine since a formal link between them is required. In our model, we find that deliberation, through the exchange of arguments, reduces the variance of opinions and the proportion of extremists in a population, as long as not too much deliberation takes place in the decision process. Additionally, if we define the correct collective decisions in the system in terms of the arguments that should be accepted, allowing for more deliberation favors convergence towards the correct decisions.
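    The sketch below gives a toy version of such a mixed process under assumed dynamics: random pairwise influence steps interleaved with occasional deliberation steps that pull opinions toward a collectively accepted value. All parameters and the update rules are illustrative, not the paper's model.

```python
# Toy mixed opinion dynamics: agents hold opinions in [-1, 1]; dyadic
# steps average random pairs, and a deliberation step pulls everyone
# toward a collectively accepted value.
import random, statistics

def dyadic_step(opinions, mu=0.3):
    i, j = random.sample(range(len(opinions)), 2)
    diff = opinions[j] - opinions[i]
    opinions[i] += mu * diff        # pairwise mutual influence
    opinions[j] -= mu * diff

def deliberation_step(opinions, accepted=0.0, weight=0.2):
    for i in range(len(opinions)):  # shift toward the deliberated outcome
        opinions[i] += weight * (accepted - opinions[i])

random.seed(0)
opinions = [random.uniform(-1, 1) for _ in range(50)]
for step in range(200):
    dyadic_step(opinions)
    if step % 20 == 0:              # occasional collective deliberation
        deliberation_step(opinions)
print("final variance:", round(statistics.variance(opinions), 4))
```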

    Dynamic epistemic logics for abstract argumentation

    Get PDF
    This paper introduces a multi-agent dynamic epistemic logic for abstract argumentation. Its main motivation is to build a general framework for modelling the dynamics of a debate, which entails reasoning about goals, beliefs, and policies of communication and information update by the participants. After locating our proposal and introducing the relevant tools from abstract argumentation, we proceed to build a three-tiered logical approach. At the first level, we use the language of propositional logic to encode states of a multi-agent debate. This language allows us to specify which arguments any agent is aware of, as well as their subjective justification status. We then extend our language and semantics to that of epistemic logic, in order to model individuals' beliefs about the state of the debate, including uncertainty about the information available to others. As a third step, we introduce a framework of dynamic epistemic logic and its semantics, essentially based on so-called event models with factual change. We provide completeness results for a number of systems and show how existing formalisms for argumentation dynamics and unquantified uncertainty can be reduced to their semantics. The resulting framework allows reasoning about subtle epistemic and argumentative updates, such as the effects of different levels of trust in a source, and more generally about the epistemic dimensions of strategic communication.
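    To make the first tier concrete, the sketch below encodes a debate state as per-agent awareness sets and computes each agent's subjective acceptance on the visible part of the attack graph, using a simplified grounded semantics. The agents, arguments, and attack graph are hypothetical, and the paper's propositional encoding is only approximated here.

```python
# Minimal sketch of a debate state: each agent is aware of a subset of
# arguments, and its subjective status is computed on the restriction of
# the attack graph to what it sees (grounded semantics, simplified).
def grounded(args, attacks):
    """Iteratively accept unattacked arguments, remove what they defeat."""
    accepted, remaining = set(), set(args)
    atk = {(x, y) for (x, y) in attacks
           if x in remaining and y in remaining}
    changed = True
    while changed:
        unattacked = {a for a in remaining
                      if not any(y == a for (_, y) in atk)}
        newly = unattacked - accepted
        changed = bool(newly)
        if newly:
            accepted |= newly
            defeated = {y for (x, y) in atk if x in accepted}
            remaining -= defeated
            atk = {(x, y) for (x, y) in atk
                   if x in remaining and y in remaining}
    return accepted

attacks = [("b", "a"), ("c", "b")]
awareness = {"ann": {"a", "b", "c"}, "bob": {"a", "b"}}  # hypothetical
for agent, seen in awareness.items():
    sub = [(x, y) for (x, y) in attacks if x in seen and y in seen]
    print(agent, "accepts", grounded(seen, sub))
```

    Here ann, who sees the full graph, accepts a and c, while bob, unaware of c, accepts b instead, which illustrates how awareness determines an agent's subjective justification status.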