Pareto Optimality and Strategy Proofness in Group Argument Evaluation (Extended Version)
An inconsistent knowledge base can be abstracted as a set of arguments and a
defeat relation among them. There can be more than one consistent way to
evaluate such an argumentation graph. Collective argument evaluation is the
problem of aggregating the opinions of multiple agents on how a given set of
arguments should be evaluated. It is crucial to ensure not only that the
outcome is logically consistent, but also that it satisfies measures of social
optimality and immunity to strategic manipulation. This matters because agents have
their individual preferences about what the outcome ought to be. In the current
paper, we analyze three previously introduced argument-based aggregation
operators with respect to Pareto optimality and strategy proofness under
different general classes of agent preferences. We highlight fundamental
trade-offs between strategic manipulability and social optimality on one hand,
and classical logical criteria on the other. Our results motivate further
investigation into the relationship between social choice and argumentation
theory. The results are also relevant for choosing an appropriate aggregation
operator given the criteria that are considered more important, as well as the
nature of agents' preferences.
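The abstract above takes for granted that an argumentation graph can have more than one consistent evaluation. As a minimal sketch (not taken from the paper; the argument names and attack relation are invented for illustration), the following computes the complete extensions of a Dung-style framework given a set of arguments and a defeat relation, showing a graph with three equally consistent evaluations:

```python
# A minimal sketch (not from the paper) of a Dung-style argumentation
# graph with a "defeat" relation, illustrating that one graph can admit
# more than one consistent evaluation (here: multiple complete extensions).

from itertools import chain, combinations

def defeats(attack, S, a):
    """True if some argument in the set S defeats argument a."""
    return any((b, a) in attack for b in S)

def acceptable(attack, args, S, a):
    """a is acceptable w.r.t. S if S defeats every defeater of a."""
    return all(defeats(attack, S, b) for b in args if (b, a) in attack)

def is_complete(attack, args, S):
    """S is complete: conflict-free and equal to the set of arguments acceptable w.r.t. S."""
    conflict_free = not any((a, b) in attack for a in S for b in S)
    fixed_point = S == {a for a in args if acceptable(attack, args, S, a)}
    return conflict_free and fixed_point

def complete_extensions(args, attack):
    """Enumerate all complete extensions by brute force (fine for tiny graphs)."""
    subsets = chain.from_iterable(combinations(args, r) for r in range(len(args) + 1))
    return [set(S) for S in subsets if is_complete(attack, args, set(S))]

# Two arguments defeating each other yield three complete extensions:
# the empty (grounded) one, {"a"}, and {"b"}.
args = {"a", "b"}
attack = {("a", "b"), ("b", "a")}
exts = complete_extensions(args, attack)
```

With several admissible evaluations available, aggregating the opinions of multiple agents over them is exactly the collective-evaluation problem the paper studies.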
Case-Based Argumentation Framework. Strategies
In agent societies, agents perform complex tasks that require different levels of intelligence and give rise to interactions among them. From these interactions, conflicts of opinion can arise, especially when multi-agent systems (MAS) become adaptive and open, with heterogeneous agents dynamically entering or leaving the system. Therefore, software agents willing to participate in this type of system will be required to include extra capabilities to explicitly represent and generate agreements on top of the simpler ability to interact. In addition, agents can take advantage of previous argumentation experiences to follow dialogue strategies and more easily persuade other agents to accept their opinions. Our insight is that case-based reasoning (CBR) can be very useful for managing argumentation in open MAS and for devising argumentation strategies based on previous argumentation experiences. To demonstrate the foundations of this suggestion, this report presents the work that we have done to develop case-based argumentation strategies in agent societies. Thus, we propose a case-based argumentation framework for agent societies and define heuristic dialogue strategies based on it. The framework has been implemented and evaluated in a real customer support application.
Heras Barberá, SM.; Botti Navarro, VJ.; Julian Inglada, VJ. (2011). Case-Based Argumentation Framework. Strategies. http://hdl.handle.net/10251/1109
Towards a framework for computational persuasion with applications in behaviour change
Persuasion is an activity that involves one party trying to induce another party to believe something or to do something. It is an important and multifaceted human facility. Obviously, sales and marketing are heavily dependent on persuasion. But many other activities involve persuasion, such as a doctor persuading a patient to drink less alcohol, a road safety expert persuading drivers not to text while driving, or an online safety expert persuading users of social media sites not to reveal too much personal information online. As computing becomes involved in every sphere of life, so too is persuasion a target for applying computer-based solutions. An automated persuasion system (APS) is a system that can engage in a dialogue with a user (the persuadee) in order to persuade the persuadee to do (or not do) some action or to believe (or not believe) something. To do this, an APS aims to use convincing arguments in order to persuade the persuadee. Computational persuasion is the study of formal models of dialogues involving arguments and counterarguments, of user models, and of strategies for APSs. A promising application area for computational persuasion is behaviour change. Within healthcare organizations, government agencies, and non-governmental agencies, there is much interest in changing the behaviour of particular groups of people away from actions that are harmful to themselves and/or to others around them.
Case-Based strategies for argumentation dialogues in agent societies
[EN] In multi-agent systems, agents perform complex tasks that require different levels of intelligence and give rise to interactions among them. From these interactions, conflicts of opinion can arise, especially when these systems become open, with heterogeneous agents dynamically entering or leaving the system. Therefore, agents willing to participate in this type of system will be required to include extra capabilities to explicitly represent and generate agreements on top of the simpler ability to interact. Furthermore, agents in multiagent systems can form societies, which impose social dependencies on them. These dependencies have a decisive influence in the way agents interact and reach agreements. Argumentation provides a natural means of dealing with conflicts of interest and opinion. Agents can reach agreements by engaging in argumentation dialogues with their opponents in a discussion. In addition, agents can take advantage of previous argumentation experiences to follow dialogue strategies and persuade other agents to accept their opinions. Our insight is that case-based reasoning can be very useful to manage argumentation in open multi-agent systems and devise dialogue strategies based on previous argumentation
experiences. To demonstrate the foundations of this suggestion, this paper presents
the work that we have done to develop case-based dialogue strategies in agent societies. Thus, we propose a case-based argumentation framework for agent societies and define heuristic dialogue strategies based on it. The framework has been implemented and evaluated in a real customer support application.
This work is supported by the Spanish Government Grants [CONSOLIDER-INGENIO 2010 CSD2007-00022, and TIN2012-36586-C03-01] and by the GVA project [PROMETEO 2008/051].
Heras Barberá, SM.; Jordan Prunera, JM.; Botti, V.; Julian Inglada, VJ. (2013). Case-Based strategies for argumentation dialogues in agent societies. Information Sciences. 223:1-30. doi:10.1016/j.ins.2012.10.007
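The core idea of the abstract above is reusing past argumentation experiences to choose the next move. As a minimal, hypothetical sketch (not the authors' implementation; the case features and argument labels are invented), a case base of past disputes can be queried for the argument that succeeded in the most similar previous context:

```python
# A minimal sketch (hypothetical, not the authors' implementation) of a
# case-based dialogue strategy: past argumentation experiences are stored
# as cases, and the agent reuses the argument that succeeded in the most
# similar previous context.

from dataclasses import dataclass

@dataclass
class Case:
    context: set      # features describing the past dispute
    argument: str     # argument put forward in that dispute
    accepted: bool    # whether the opponent accepted it

def similarity(ctx_a, ctx_b):
    """Jaccard similarity between two feature sets."""
    if not ctx_a and not ctx_b:
        return 1.0
    return len(ctx_a & ctx_b) / len(ctx_a | ctx_b)

def select_argument(case_base, context):
    """Retrieve the argument from the most similar successful past case."""
    successful = [c for c in case_base if c.accepted]
    if not successful:
        return None
    best = max(successful, key=lambda c: similarity(c.context, context))
    return best.argument

cases = [
    Case({"refund", "late-delivery"}, "cite-warranty-terms", True),
    Case({"refund", "damaged-item"}, "offer-replacement", True),
    Case({"complaint", "billing"}, "escalate-to-manager", False),
]
choice = select_argument(cases, {"refund", "late-delivery", "angry-customer"})
```

A full framework would add case adaptation and retention after each dialogue; retrieval over a similarity measure is only the first step of the CBR cycle the paper builds on.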
A formal account of dishonesty
This paper provides formal accounts of dishonest attitudes of agents. We introduce a propositional multi-modal logic that can represent an agent's belief and intention as well as communication between agents. Using the language, we formulate different categories of dishonesty. We first give two different definitions of lies and establish their logical properties. We then consider an incentive behind the act of lying and introduce lying with objectives. We subsequently define bullshit, withholding information and half-truths, and analyze their formal properties. We compare different categories of dishonesty in a systematic manner, and examine their connection to deception. We also propose maxims for dishonest communication that agents should ideally try to satisfy.
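As a hedged illustration of the kind of definition such a multi-modal logic supports (the paper's own operators and definitions may differ), one common formalisation of a lie combines a belief operator $B$ and an intention operator $I$:

```latex
% An illustrative formalisation (the paper's definitions may differ):
% agent a lies to agent b about \varphi when a asserts \varphi while
% believing its negation, intending b to come to believe \varphi.
\[
  \mathrm{Lie}_{a \to b}(\varphi) \;\equiv\;
  \mathit{assert}_{a \to b}(\varphi) \,\wedge\, B_a \neg\varphi
  \,\wedge\, I_a B_b \varphi
\]
% Bullshit, by contrast, drops the belief requirement: the speaker
% asserts \varphi while believing neither \varphi nor \neg\varphi.
```

Distinctions like withholding information and half-truths are then obtained by varying which belief and assertion conditions are required or negated.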