28 research outputs found
Assumption-based Argumentation Dialogues
Formal argumentation-based dialogue models have recently attracted considerable
research interest. Within this line of research, we propose a formal model for
argumentation-based dialogues between agents, using assumption-based argumentation
(ABA). Thus, the dialogues amount to conducting an argumentation process
in ABA. The model is given in terms of ABA-specific utterances, debate trees
and forests implicitly built during and drawn from dialogues, legal-move functions
(amounting to protocols) and outcome functions. Moreover, we investigate
the strategic behaviour of agents in dialogues, using strategy-move functions. We
instantiate our dialogue model in a range of dialogue types studied in the literature,
including information-seeking, inquiry, persuasion, conflict resolution, and
discovery. Finally, we prove (1) a formal connection between dialogues and well-known
argumentation semantics, and (2) soundness and completeness results for
our dialogue models and dialogue strategies used in different dialogue types.
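The legal-move functions mentioned above, which act as dialogue protocols, can be sketched as filters over candidate utterances. The following is a minimal illustration under assumed definitions; the names (`Utterance`, `legal_moves`) and the two constraints chosen are invented for the example and are not the paper's actual protocol.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Utterance:
    agent: str   # which agent speaks
    claim: str   # the sentence put forward
    target: int  # index of the utterance replied to (-1 = opening move)

def legal_moves(history, candidates):
    """A toy legal-move function: a candidate is legal if it opens an
    empty dialogue or replies to an existing utterance, and does not
    repeat a claim the same agent has already made."""
    made = {(u.agent, u.claim) for u in history}
    return [
        u for u in candidates
        if ((u.target == -1 and not history)
            or (0 <= u.target < len(history)))
        and (u.agent, u.claim) not in made
    ]
```

A strategy-move function in the paper's sense would then select one utterance from the set a legal-move function returns.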
Abstract Argumentation / Persuasion / Dynamics
The act of persuasion, a key component in rhetoric argumentation, may be
viewed as a dynamics modifier. We extend Dung's frameworks with acts of
persuasion among agents, and consider interactions among attack, persuasion and
defence that have so far been largely overlooked. We characterise basic notions
of admissibility in this framework, and show a way of enriching them through,
effectively, CTL (computation tree logic) encoding, which also permits
importation of the theoretical results known to the logic into our
argumentation frameworks. Our aim is to complement the growing interest in
coordination of static and dynamic argumentation.
Comment: Arisaka R., Satoh K. (2018) Abstract Argumentation / Persuasion / Dynamics. In: Miller T., Oren N., Sakurai Y., Noda I., Savarimuthu B., Cao Son T. (eds) PRIMA 2018: Principles and Practice of Multi-Agent Systems. PRIMA 2018. Lecture Notes in Computer Science, vol 11224. Springer, Cham.
Analysis of Dialogical Argumentation via Finite State Machines
Dialogical argumentation is an important cognitive activity by which agents
exchange arguments and counterarguments as part of some process such as
discussion, debate, persuasion and negotiation. Whilst numerous formal systems
have been proposed, there is a lack of frameworks for implementing and
evaluating these proposals. First-order executable logic has been proposed as a
general framework for specifying and analysing dialogical argumentation. In
this paper, we investigate how we can implement systems for dialogical
argumentation using propositional executable logic. Our approach is to present
and evaluate an algorithm that generates a finite state machine that reflects a
propositional executable logic specification for a dialogical argumentation
system, together with an initial state. We also consider how the finite state machines
can be analysed, with the minimax strategy being used as an illustration of the
kinds of empirical analysis that can be undertaken.
Comment: 10 pages.
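The minimax analysis of a generated finite state machine can be sketched as a game-tree evaluation over the machine's states: states where it is the proponent's turn maximise, opponent states minimise, and terminal states carry a payoff. This is an illustrative sketch, not the paper's algorithm, and the example machine and payoffs are invented.

```python
def minimax(state, transitions, payoff, proponent_turn=True):
    """transitions: dict state -> list of successor states;
    payoff: dict terminal state -> value (1 = proponent wins, -1 = loses)."""
    succs = transitions.get(state, [])
    if not succs:  # terminal state of the FSM
        return payoff[state]
    values = [minimax(s, transitions, payoff, not proponent_turn)
              for s in succs]
    return max(values) if proponent_turn else min(values)

# Tiny example machine: s0 -> {s1, s2}; s1 -> {s3, s4}; s2, s3, s4 terminal.
transitions = {"s0": ["s1", "s2"], "s1": ["s3", "s4"]}
payoff = {"s2": -1, "s3": 1, "s4": 1}
value = minimax("s0", transitions, payoff)  # -> 1: the proponent can force a win
```

A value of 1 at the initial state means the proponent has a strategy that wins whatever the opponent does, which is the kind of property such an analysis can check.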
Updating probabilistic epistemic states in persuasion dialogues
In persuasion dialogues, the ability of the persuader to model the persuadee allows the persuader to make better choices of move. The epistemic approach to probabilistic argumentation is a promising way of modelling the persuadee's belief in arguments, and proposals have been made for update methods that specify how these beliefs can be updated at each step of the dialogue. However, there is a need to better understand these proposals and, moreover, to gain insights into the space of possible update functions. In this paper, we present a general framework for update functions in which we consider existing and novel update functions.
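As a concrete illustration of what one such update function might look like (a generic example in the spirit of the framework, not a function from the paper; the mixing parameter `k` and the attack-propagation rule are assumptions), consider a rule that shifts belief in a posited argument towards 1 and discounts belief in the arguments it attacks:

```python
def update(beliefs, posited, attacks, k=0.5):
    """One possible update function for a persuadee's epistemic state.
    beliefs: dict argument -> probability of belief;
    attacks: dict argument -> set of arguments it attacks;
    k: how strongly a single dialogue move shifts belief."""
    new = dict(beliefs)
    # belief in the posited argument moves a fraction k towards 1
    new[posited] = new[posited] + k * (1.0 - new[posited])
    # belief in each argument it attacks is discounted by the same fraction
    for b in attacks.get(posited, ()):
        new[b] = new[b] * (1.0 - k)
    return new

beliefs = {"A": 0.4, "B": 0.8}
attacks = {"A": {"B"}}
new = update(beliefs, "A", attacks)  # A rises towards 1, B is discounted
```

Applying such a function after each move yields the step-by-step evolution of the persuadee's epistemic state that the framework is designed to study.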
CHR for Social Responsibility
Publicly traded corporations often operate against the public's interest, serving a very limited group of stakeholders. This is counter-intuitive, since the public as a whole owns these corporations through direct investment in the stock market, as well as indirect investment in mutual, index, and pension funds. Interestingly, the public's role in the proxy voting process, which
allows shareholders to influence their company's direction and decisions, is essentially ignored by individual investors. We
speculate that a prime reason for this lack of participation is information overload, and the disproportionate effort required for an investor to make an informed decision. In this paper we propose a CHR-based model that significantly simplifies the decision-making process, allowing users to set general guidelines that can be applied to every company they own to produce voting recommendations. The use of CHR here is particularly advantageous as it allows users to easily trace back the most relevant data that was used to formulate the decision, without the user having to go through large amounts of irrelevant information. Finally, we describe a simplified algorithm that could be used as part of this model.
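Although the paper's model is expressed in CHR (Constraint Handling Rules), the guideline idea can be illustrated outside CHR as rules that fire on a proposal's facts and report which rule produced the recommendation, mimicking the traceability the abstract highlights. All names, fields, and thresholds below are invented for the sketch.

```python
def recommend(proposal, guidelines):
    """guidelines: list of (name, predicate, vote) triples.
    Returns the first matching vote and the guideline that produced it,
    so the recommendation is traceable to a specific rule."""
    for name, predicate, vote in guidelines:
        if predicate(proposal):
            return vote, name
    return "abstain", None  # no guideline applied

# Hypothetical user guidelines applied to every owned company.
guidelines = [
    ("oppose-excessive-pay",
     lambda p: p.get("ceo_pay_ratio", 0) > 300, "against"),
    ("support-climate-reporting",
     lambda p: p.get("topic") == "climate_disclosure", "for"),
]

vote, rule = recommend({"topic": "climate_disclosure"}, guidelines)
```

CHR would express the same idea declaratively, with rule firings providing the audit trail; the Python version only approximates that behaviour.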
Computationally viable handling of beliefs in arguments for persuasion
Computational models of argument are being developed to capture aspects of how persuasion is undertaken. Recent proposals suggest that in a persuasion dialogue between some agents, it is valuable for each agent to model how arguments are believed by the other agents. Beliefs in arguments can be captured by a joint belief distribution over the arguments and updated as the dialogue progresses. This information can be used by the agent to make more intelligent choices of move in the dialogue. Whilst these proposals indicate the value of modelling the beliefs of other agents, there is a question of the computational viability of using a belief distribution over all the arguments. We address this problem in this paper by showing how probabilistic independence can be leveraged to split this joint distribution into an equivalent set of distributions of smaller size. Experiments show that updating the belief on the split distribution is more efficient than performing updates on the joint distribution.
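The efficiency gain rests on a standard fact: if the arguments partition into independent groups, the joint distribution factorises into one smaller distribution per group, and an update only touches the affected factor. The sketch below illustrates the factorisation (not the paper's implementation; the grouping is an assumed input):

```python
from itertools import product

def joint_from_factors(factors):
    """factors: one dict per independent group, mapping a tuple of
    boolean belief assignments to its probability. Multiplying the
    factors recovers the full joint distribution."""
    joint = {}
    for combo in product(*[f.items() for f in factors]):
        # concatenate the per-group assignments into one joint outcome
        outcome = tuple(x for assignment, _ in combo for x in assignment)
        prob = 1.0
        for _, p in combo:
            prob *= p
        joint[outcome] = prob
    return joint

# Two independent groups {A} and {B, C}: 2 + 4 = 6 stored probabilities
# instead of 2**3 = 8 for the full joint over {A, B, C}.
fA = {(True,): 0.6, (False,): 0.4}
fBC = {(True, True): 0.5, (True, False): 0.1,
       (False, True): 0.2, (False, False): 0.2}
joint = joint_from_factors([fA, fBC])
```

With n arguments in k equal independent groups, storage drops from 2**n entries to k * 2**(n/k), which is why updates on the split representation are cheaper.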
Towards Computational Persuasion via Natural Language Argumentation Dialogues
Computational persuasion aims to capture the human ability to persuade through argumentation for applications such as behaviour change in healthcare (e.g. persuading people to take more exercise or eat more healthily). In this paper, we review research in computational persuasion that incorporates domain modelling (capturing arguments and counterarguments that can appear in a persuasion dialogue), user modelling (capturing the beliefs and concerns of the persuadee), and dialogue strategies (choosing the best moves for the persuader to maximise the chances that the persuadee is persuaded). We discuss evaluation of prototype systems that elicit the user's counterarguments by allowing them to be selected from a menu. Then we consider how this work might be enhanced by incorporating a natural language interface in the form of an argumentative chatbot.
Two dimensional uncertainty in persuadee modelling in argumentation
When attempting to persuade an agent to believe (or disbelieve) an argument, it can be advantageous for the persuader to have a model of the persuadee. Models have been proposed for taking account of what arguments the persuadee believes, and these can be used in a strategy for persuasion. However, there can be uncertainty as to the accuracy of such models. To address this issue, this paper introduces a two-dimensional model that accounts for the
uncertainty of belief held by a persuadee and for the confidence in that uncertainty evaluation. This gives a better modelling for using lotteries, so that the outcomes involve statements about what the user believes or disbelieves, and the confidence value is the degree to which the user does indeed hold those outcomes (a more refined and more natural modelling than that found in [19]). This framework is also extended with a modelling of the risk of disengagement by the persuadee.
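One simple way to use such a two-dimensional model (belief plus confidence in that belief estimate) is to blend the modelled belief with a neutral prior in proportion to the confidence, and pick moves by the blended score. This is a hypothetical sketch, not the paper's definitions; the function names and the 0.5 prior are assumptions.

```python
def expected_score(belief, confidence, prior=0.5):
    """Blend the modelled belief with a neutral prior according to the
    confidence in the model: low confidence pulls towards the prior."""
    return confidence * belief + (1.0 - confidence) * prior

def best_move(moves):
    """moves: dict move -> (belief, confidence) for the persuadee's
    acceptance of that move's argument. Pick the move with the highest
    confidence-weighted expected belief."""
    return max(moves, key=lambda m: expected_score(*moves[m]))

# A high belief estimate we are unsure of can lose to a moderate
# estimate we are confident in.
moves = {"cite_study": (0.9, 0.3), "personal_story": (0.7, 0.9)}
choice = best_move(moves)
```

The disengagement risk the abstract mentions could enter such a scheme as a further penalty term per move, but that is beyond this sketch.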