
    Pressure and Argumentation in Public Controversies: A Dialogical Perspective

    When can exerting pressure in a public controversy promote reasonable outcomes, and when is it instead a hindrance? We show how negotiation and persuasion dialogue can be intertwined. We then examine the ways in which one can exert pressure on others in a public controversy through sanctions or rewards. Finally, we discuss, from the viewpoints of persuasion and negotiation, whether and, if so, how pressure hinders the achievement of a reasonable outcome.

    Towards a framework for computational persuasion with applications in behaviour change

    Persuasion is an activity in which one party tries to induce another party to believe something or to do something. It is an important and multifaceted human facility. Sales and marketing obviously depend heavily on persuasion, but many other activities involve it as well: a doctor persuading a patient to drink less alcohol, a road safety expert persuading drivers not to text while driving, or an online safety expert persuading users of social media sites not to reveal too much personal information online. As computing becomes involved in every sphere of life, persuasion too becomes a target for computer-based solutions. An automated persuasion system (APS) is a system that can engage in a dialogue with a user (the persuadee) in order to persuade the persuadee to do (or not do) some action or to believe (or not believe) something. To do this, an APS aims to use convincing arguments. Computational persuasion is the study of formal models of dialogue involving arguments and counterarguments, of user models, and of strategies for APSs. A promising application area for computational persuasion is behaviour change: within healthcare organizations, government agencies, and non-governmental agencies, there is much interest in moving particular groups of people away from behaviour that is harmful to themselves and/or to others around them.
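    As a rough illustration of the kind of system described above, the following Python sketch shows a minimal APS dialogue loop; the class, the method names, and the example arguments are hypothetical and are not taken from the framework developed in this work.

        # Minimal sketch of an automated persuasion system (APS) dialogue loop.
        # All names here (AutomatedPersuasionSystem, select_argument, the example
        # arguments) are illustrative assumptions, not the framework of the paper.

        class AutomatedPersuasionSystem:
            def __init__(self, replies):
                # replies: maps the persuadee's last counterargument to the
                # system's best response; the None key holds the opening argument.
                self.replies = replies

            def select_argument(self, counterargument):
                return self.replies.get(counterargument, self.replies[None])

        def run_dialogue(aps, user_moves):
            """Alternate system arguments and user counterarguments."""
            transcript, counter = [], None
            for move in user_moves + [None]:  # None marks the user conceding
                transcript.append(("APS", aps.select_argument(counter)))
                if move is None:
                    break
                transcript.append(("user", move))
                counter = move
            return transcript

        if __name__ == "__main__":
            aps = AutomatedPersuasionSystem({
                None: "Cutting down on alcohol lowers your long-term health risks.",
                "I only drink socially": "Even moderate regular drinking adds up over a year.",
            })
            for speaker, utterance in run_dialogue(aps, ["I only drink socially"]):
                print(f"{speaker}: {utterance}")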

    Formal Handling of Threats and Rewards in a Negotiation Dialogue

    Argumentation plays a key role in finding a compromise during a negotiation dialogue. It may lead an agent to change its goals or preferences and force it to respond in a particular way. Two types of arguments are mainly used for that purpose: threats and rewards. For example, if an agent receives a threat, it may accept the offer even if the offer is not fully “acceptable” to it (because otherwise really important goals would be threatened). The contribution of this paper is twofold. On the one hand, a logical setting that handles these two types of arguments is provided. More precisely, logical definitions of threats and rewards are proposed together with their weighting systems. These definitions take into account that negotiation dialogues involve not only agents’ beliefs (of various strengths), but also their goals (possibly with different priorities), as well as beliefs about the goals of other agents. On the other hand, a “simple” protocol for handling such arguments in a negotiation dialogue is given. This protocol shows when such arguments can be presented, how they are handled, and how they lead agents to change their goals and behaviors.
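    As a loose illustration of how threat and reward arguments and their weights might be represented, here is a small Python sketch; the dataclass fields, the strength heuristic, and the acceptance rule are assumptions made for illustration, not the logical definitions or weighting system given in the paper.

        # Toy representation of threat and reward arguments in a negotiation
        # dialogue.  The fields and the strength heuristic are illustrative
        # assumptions, not the paper's logical definitions or weighting system.

        from dataclasses import dataclass

        @dataclass
        class Argument:
            offer: str          # the offer the argument supports
            target_goal: str    # the opponent goal that is threatened or rewarded
            goal_priority: int  # importance of that goal to the opponent (1-10)
            belief: float       # confidence that the stated consequence occurs (0-1)

        class Threat(Argument):
            """'Accept the offer, or target_goal will be violated.'"""

        class Reward(Argument):
            """'Accept the offer, and target_goal will be satisfied.'"""

        def strength(arg: Argument) -> float:
            # Crude weighting: an argument is stronger when it bears on a
            # higher-priority goal and its stated consequence is more believable.
            return arg.goal_priority * arg.belief

        def accept_offer(received: Argument, reluctance: float) -> bool:
            # An agent concedes when the threat or reward outweighs its
            # reluctance to accept the offer on its own merits.
            return strength(received) >= reluctance

        if __name__ == "__main__":
            t = Threat(offer="deliver by Friday", target_goal="keep the contract",
                       goal_priority=9, belief=0.8)
            print(strength(t), accept_offer(t, reluctance=5.0))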

    Modified bargaining protocols for automated negotiation in open multi-agent systems

    Current research in multi-agent systems (MAS) has advanced to the development of open MAS, which are characterized by heterogeneous agents, free entry and exit, and decentralized control. Conflicts of interest among agents are inevitable, and automated negotiation is one of the promising ways to resolve them. This thesis studies three modifications of alternating-offers bargaining protocols for automated negotiation in open MAS. The long-term goal of this research is to design negotiation protocols that intelligent agents can easily use to resolve their conflicts. In particular, we propose three modifications: allowing non-monotonic offers during bargaining (the non-monotonic-offers bargaining protocol), allowing strategic delay (the delay-based bargaining protocol), and allowing strategic ignorance to augment argumentation when the bargaining comprises argumentation (the ignorance-based argumentation-based negotiation protocol). Utility theory and decision-theoretic approaches are used in the theoretical analysis, with the aim of proving the benefit of these three modifications in negotiation among myopic agents under uncertainty. Empirical studies by means of computer simulation are conducted to analyze the cost and benefit of these modifications. Social agents, who use common human bargaining strategies, are the subjects of the simulation. In general, we assume that agents are boundedly rational, with various degrees of belief and trust toward their opponents. In the study of the non-monotonic-offers bargaining protocol we assume that agents have diminishing surplus; in the study of the delay-based bargaining protocol we assume increasing surplus; and in the study of the ignorance-based argumentation-based negotiation protocol we assume that agents may have different knowledge and use different ontologies and reasoning engines. Through theoretical analysis under various settings, we show the benefit of allowing these modifications in terms of agents’ expected surplus, and through simulation we show their benefit in terms of social welfare (total surplus). Several implementation issues are then discussed, and potential solutions in the form of additional policies are proposed. Finally, we suggest future work that could improve the reliability of these modifications.
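    To make the protocol setting concrete, here is a minimal Python sketch of the baseline alternating-offers loop that the proposed modifications (non-monotonic offers, strategic delay, strategic ignorance) would relax; the Agent model, the concession step, and the acceptance rule are assumptions for illustration, not the protocols defined in the thesis.

        # Minimal baseline of an alternating-offers bargaining loop.  The agent
        # model, concession step, and acceptance rule are illustrative
        # assumptions; the thesis's modifications would relax constraints in
        # exactly this kind of loop (e.g. permitting non-monotonic offers).

        class Agent:
            def __init__(self, reservation_price, is_buyer, step=5.0):
                self.reservation = reservation_price   # worst acceptable price
                self.is_buyer = is_buyer
                self.step = step                        # concession per round
                # Start well away from the reservation price.
                self.current = reservation_price + (-40 if is_buyer else 40)

            def propose(self):
                # Concede a little each round towards the reservation price.
                self.current += self.step if self.is_buyer else -self.step
                return self.current

            def accepts(self, offer):
                return offer <= self.reservation if self.is_buyer else offer >= self.reservation

        def bargain(buyer, seller, max_rounds=20):
            """Alternate offers until one side accepts or the round limit is hit."""
            proposer, responder = buyer, seller
            for _ in range(max_rounds):
                offer = proposer.propose()
                if responder.accepts(offer):
                    return offer
                proposer, responder = responder, proposer
            return None  # disagreement

        if __name__ == "__main__":
            print(bargain(Agent(100.0, is_buyer=True), Agent(80.0, is_buyer=False)))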