2,590 research outputs found
Towards Computational Persuasion via Natural Language Argumentation Dialogues
Computational persuasion aims to capture the human ability to persuade through argumentation for applications such as behaviour change in healthcare (e.g. persuading people to take more exercise or eat more healthily). In this paper, we review research in computational persuasion that incorporates domain modelling (capturing arguments and counterarguments that can appear in persuasion dialogues), user modelling (capturing the beliefs and concerns of the persuadee), and dialogue strategies (choosing the best moves for the persuader to maximize the chances that the persuadee is persuaded). We discuss the evaluation of prototype systems that elicit the user’s counterarguments by allowing the user to select them from a menu. Then we consider how this work might be enhanced by incorporating a natural language interface in the form of an argumentative chatbot.
Towards a framework for computational persuasion with applications in behaviour change
Persuasion is an activity that involves one party trying to induce another party to believe something or to do something. It is an important and multifaceted human facility. Obviously, sales and marketing is heavily dependent on persuasion. But many other activities involve persuasion, such as a doctor persuading a patient to drink less alcohol, a road safety expert persuading drivers not to text while driving, or an online safety expert persuading users of social media sites not to reveal too much personal information online. As computing becomes involved in every sphere of life, so too is persuasion a target for applying computer-based solutions. An automated persuasion system (APS) is a system that can engage in a dialogue with a user (the persuadee) in order to persuade the persuadee to do (or not do) some action or to believe (or not believe) something. To do this, an APS aims to use convincing arguments in order to persuade the persuadee. Computational persuasion is the study of formal models of dialogues involving arguments and counterarguments, of user models, and of strategies for APSs. A promising application area for computational persuasion is behaviour change. Within healthcare organizations, government agencies, and non-governmental agencies, there is much interest in changing the behaviour of particular groups of people away from actions that are harmful to themselves and/or to others around them.
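The dialogue loop of an APS described above can be sketched as follows. This is a minimal illustration only: the argument graph, the rebuttal mapping, and the function names are invented placeholders, not the framework proposed in the paper.

```python
# Minimal sketch of an automated persuasion system (APS) dialogue loop.
# The arguments and rebuttals below are invented placeholders.

# Persuader arguments mapped to the counterarguments they rebut.
rebuttals = {
    "no time to exercise": "short walks count as exercise",
    "exercise is boring": "exercising with a friend is social",
}

def aps_dialogue(goal, user_counterarguments):
    """Present a goal argument, then try to rebut each counterargument."""
    moves = [goal]
    for counter in user_counterarguments:
        reply = rebuttals.get(counter)
        # Concede when no counterargument to the user's move is known.
        moves.append(reply if reply else "concede: " + counter)
    return moves

print(aps_dialogue("exercising improves health", ["no time to exercise"]))
# ['exercising improves health', 'short walks count as exercise']
```

A real APS would select replies strategically from an argument graph rather than from a fixed lookup table; the table simply makes the persuader/persuadee turn-taking concrete.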
The Bayesian boom: good thing or bad?
A series of high-profile critiques of Bayesian models of cognition have recently sparked controversy. These critiques question the contribution of rational, normative considerations in the study of cognition. The present article takes central claims from these critiques and evaluates them in light of specific models. Closer consideration of actual examples of Bayesian treatments of different cognitive phenomena allows one to defuse these critiques by showing that they cannot be sustained across the diversity of applications of the Bayesian framework for cognitive modeling. More generally, there is nothing in the Bayesian framework that would inherently give rise to the deficits that these critiques perceive, suggesting that they have been framed at the wrong level of generality. At the same time, the examples are used to demonstrate the different ways in which consideration of rationality uniquely benefits both theory and practice in the study of cognition.
Pushing the bounds of rationality: Argumentation and extended cognition
One of the central tasks of a theory of argumentation is to supply a theory of appraisal: a set of standards and norms according to which argumentation, and the reasoning involved in it, is properly evaluated. In their most general form, these can be understood as rational norms, where the core idea of rationality is that we rightly respond to reasons by according the credence we attach to our doxastic and conversational commitments with the probative strength of the reasons we have for them. Certain kinds of rational failings count as such because they are manifestly illogical – for example, maintaining overtly contradictory commitments, violating deductive closure by refusing to accept the logical consequences of one’s present commitments, or failing to track basing relations by not updating one’s commitments in view of new, defeating information. Yet, according to the internal and empirical critiques, logic and probability theory fail to supply a fit set of norms for human reasoning and argument. In particular, theories of bounded rationality have put pressure on argumentation theory to lower the normative standards of rationality for reasoners and arguers on the grounds that we are bounded, finite, and fallible agents incapable of meeting idealized standards. This paper explores the idea that argumentation, as a set of practices, together with the procedures and technologies of argumentation theory, is able to extend cognition such that we are better able to meet these idealized logical standards, thereby extending our responsibilities to adhere to idealized rational norms.
A Computational Model of Non-Cooperation in Natural Language Dialogue
A common assumption in the study of conversation is that participants fully cooperate in order to maximise the effectiveness of the exchange and ensure communication flow. This assumption persists even in situations in which the private goals of the participants are at odds: they may act strategically pursuing their agendas, but will still adhere to a number of linguistic norms or conventions which are implicitly accepted by a community of language users.
However, in naturally occurring dialogue participants often depart from such norms, for instance, by asking inappropriate questions, by failing to provide adequate answers, or by volunteering information that is not relevant to the conversation. These are examples of what we call linguistic non-cooperation.
This thesis presents a systematic investigation of linguistic non-cooperation in dialogue. Given a specific activity, in a specific cultural context and time, the method proceeds by making explicit which linguistic behaviours are appropriate. This results in a set of rules: the global dialogue game. Non-cooperation is then measured as instances in which the actions of the participants are not in accordance with these rules. The dialogue game is formally defined in terms of discourse obligations. These are actions that participants are expected to perform at a given point in the dialogue based on the dialogue history. In this context, non-cooperation amounts to participants failing to act according to their obligations.
We propose a general definition of linguistic non-cooperation and give a specific instance for political interview dialogues. Based on the latter, we present an empirical method which involves a coding scheme for the manual annotation of interview transcripts. The degree to which each participant cooperates is automatically determined by contrasting the annotated transcripts with the rules in the dialogue game for political interviews. The approach is evaluated on a corpus of broadcast political interviews and tested for correlation with human judgement on the same corpus.
Further, we describe a model of conversational agents that incorporates the concepts and mechanisms above as part of their dialogue manager. This allows for the generation of conversations in which the agents exhibit varying degrees of cooperation by controlling how often they favour their private goals instead of discharging their discourse obligations.
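The measurement this thesis describes — contrasting what participants actually do with the obligations imposed by the dialogue game — can be sketched as follows. The function, speaker labels, and toy interview game are illustrative assumptions, not the thesis's actual coding scheme or implementation.

```python
# Sketch: scoring linguistic cooperation against a dialogue game of
# discourse obligations. Rules and names are invented for illustration.

def cooperation_score(transcript, game):
    """Fraction of discourse obligations each speaker discharges.

    transcript: list of (speaker, dialogue_act) pairs.
    game: dict mapping a dialogue act to the act it obliges
          the other speaker to perform on their next turn.
    """
    pending = {}            # speaker -> set of acts they currently owe
    met, owed = {}, {}      # per-speaker counts
    for speaker, act in transcript:
        obliged = pending.pop(speaker, set())
        if obliged:
            owed[speaker] = owed.get(speaker, 0) + 1
            if act in obliged:
                met[speaker] = met.get(speaker, 0) + 1
        if act in game:     # this act places an obligation on the other party
            other = "IR" if speaker == "IE" else "IE"
            pending.setdefault(other, set()).add(game[act])
    return {s: met.get(s, 0) / owed[s] for s in owed}

# Toy political-interview game: a question obliges an answer.
game = {"question": "answer"}
interview = [
    ("IR", "question"), ("IE", "answer"),    # obligation discharged
    ("IR", "question"), ("IE", "deflect"),   # obligation ignored
]
print(cooperation_score(interview, game))    # {'IE': 0.5}
```

The interviewee (IE) meets one of two obligations, giving a cooperation degree of 0.5; the real method derives such scores from manually annotated broadcast transcripts rather than symbolic act labels.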
Intentional dialogues in multi-agent systems based on ontologies and argumentation
Some areas of application, for example, healthcare, are known to resist the replacement of human operators by fully autonomous systems. It is typically not transparent to users how artificial intelligence systems make decisions or obtain information, making it difficult for users to trust them. To address this issue, we investigate how argumentation theory and ontology techniques can be used together with reasoning about intentions to build complex natural language dialogues to support human decision-making. Based on such an investigation, we propose MAIDS, a framework for developing multi-agent intentional dialogue systems, which can be used in different domains. Our framework is modular, so that it can be used in its entirety or through just the modules that fulfil the requirements of each system to be developed. Our work also includes the formalisation of a novel dialogue-subdialogue structure with which we can address ontological or theory-of-mind issues and later return to the main subject. As a case study, we have developed a multi-agent system using the MAIDS framework to support healthcare professionals in making decisions on hospital bed allocations. Furthermore, we evaluated this multi-agent system with domain experts using real data from a hospital. The specialists who evaluated our system strongly agree or agree that the dialogues in which they participated fulfil Cohen’s desiderata for task-oriented dialogue systems. Our agents have the ability to explain to the user how they arrived at certain conclusions. Moreover, they have semantic representations as well as representations of the mental state of the dialogue participants, allowing the formulation of coherent justifications expressed in natural language and therefore easy for human participants to understand. This indicates the potential of the framework introduced in this thesis for the practical development of explainable intelligent systems as well as systems supporting hybrid intelligence.
Strategic Argumentation Dialogues for Persuasion: Framework and Experiments Based on Modelling the Beliefs and Concerns of the Persuadee
Persuasion is an important and yet complex aspect of human intelligence. When undertaken through dialogue, the deployment of good arguments, and therefore counterarguments, clearly has a significant effect on the ability to be successful in persuasion. Two key dimensions for determining whether an argument is good in a particular dialogue are the degree to which the intended audience believes the argument and counterarguments, and the impact that the argument has on the concerns of the intended audience. In this paper, we present a framework for modelling persuadees in terms of their beliefs and concerns, and for harnessing these models in optimizing the choice of move in persuasion dialogues. Our approach is based on Monte Carlo Tree Search, which allows optimization in real time. We provide empirical results of a study with human participants showing that our automated persuasion system based on this technology is superior to a baseline system that does not take the beliefs and concerns into account in its strategy.
Comment: The Data Appendix containing the arguments, argument graphs, assignment of concerns to arguments, preferences over concerns, and assignment of beliefs to arguments, is available at http://www0.cs.ucl.ac.uk/staff/a.hunter/papers/unistudydata.zip. The code is available at https://github.com/ComputationalPersuasion/MCC
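The idea of scoring candidate moves against a model of the persuadee's beliefs and concerns can be illustrated with a simplified Monte Carlo sketch. This uses flat Monte Carlo evaluation rather than the paper's full tree search, and the persuadee model, numbers, and names are all invented for illustration — see the linked repository for the actual implementation.

```python
import random

# Simplified sketch of Monte Carlo move selection for a persuasion
# dialogue. Each candidate argument is scored by random rollouts
# against a toy persuadee model; all values below are illustrative.

# Persuadee model: belief in each argument and how strongly the
# argument addresses the persuadee's concerns (both in [0, 1]).
persuadee = {
    "health benefits": {"belief": 0.8, "concern_impact": 0.6},
    "saves money":     {"belief": 0.5, "concern_impact": 0.7},
    "doctor's orders": {"belief": 0.9, "concern_impact": 0.2},
}

def rollout(move, model, rng):
    """One simulated outcome: persuasion succeeds iff the persuadee
    both believes the argument and finds it relevant to their concerns."""
    m = model[move]
    return rng.random() < m["belief"] and rng.random() < m["concern_impact"]

def best_move(model, n=2000, seed=0):
    """Pick the argument with the highest estimated success rate."""
    rng = random.Random(seed)
    scores = {
        move: sum(rollout(move, model, rng) for _ in range(n)) / n
        for move in model
    }
    return max(scores, key=scores.get)

print(best_move(persuadee))  # 'health benefits'
```

Note how the most believed argument ("doctor's orders", belief 0.9) is not selected, because it barely touches the persuadee's concerns — the interaction of the two dimensions is exactly what the paper's strategy exploits.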