Optimization of dialectical outcomes in dialogical argumentation
When informal arguments are presented, there may be imprecision in the language
used, and so the audience may be uncertain as to the structure of the argument
graph as intended by the presenter of the arguments. For a presenter of
arguments, it is useful to know the audience’s argument graph, but the presenter
may be uncertain as to its structure. To model this structural uncertainty,
we can use probabilistic argument graphs. The set of subgraphs of
an argument graph forms a sample space. A
probability value is assigned to each subgraph such that the values sum to 1, thereby
reflecting the uncertainty over which is the actual subgraph. We can then determine
the probability that a particular set of arguments is included or excluded from an
extension according to a particular Dung semantics. We represent and reason with
extensions from a graph and from its subgraphs, using a logic of dialectical outcomes
that we present. We harness this to define the notion of an argumentation
lottery, which can be used by the audience to determine the expected utility of a
debate, and can be used by the presenter to decide which arguments to present by
choosing those that maximize expected utility. We investigate some of the options
for using argumentation lotteries, and provide a computational evaluation.
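The construction above can be sketched in a few lines of Python. The framework, the subgraph distribution, and the probability values below are illustrative assumptions, not taken from the paper; for simplicity, subgraphs are induced by subsets of arguments, and acceptance is computed under the grounded semantics.

```python
def grounded_extension(args, attacks):
    """Compute the grounded extension of an abstract argumentation
    framework by iterating the characteristic function to a fixpoint."""
    ext = set()
    while True:
        # an argument is acceptable w.r.t. ext if every one of its
        # attackers is itself attacked by some member of ext
        new = {a for a in args
               if all(any((c, b) in attacks for c in ext)
                      for b in args if (b, a) in attacks)}
        if new == ext:
            return ext
        ext = new

# A hypothetical probabilistic argument graph: each induced subgraph
# (a subset of arguments) gets a probability, summing to 1, reflecting
# uncertainty about which graph the audience actually holds.
full_attacks = {("b", "a"), ("c", "b")}
distribution = {
    frozenset({"a", "b", "c"}): 0.6,  # audience grasped the whole graph
    frozenset({"a", "b"}): 0.3,       # audience missed argument c
    frozenset({"a"}): 0.1,            # audience only caught a
}

def prob_in_extension(arg):
    """Probability that `arg` is in the grounded extension of the
    audience's (uncertain) argument graph."""
    p = 0.0
    for sub_args, prob in distribution.items():
        sub_attacks = {(x, y) for (x, y) in full_attacks
                       if x in sub_args and y in sub_args}
        if arg in grounded_extension(sub_args, sub_attacks):
            p += prob
    return p

print(round(prob_in_extension("a"), 3))  # → 0.7
```

Here a is accepted in the full graph (c defends it against b) and in the singleton subgraph, but not where b attacks it undefended, so its acceptance probability is 0.6 + 0.1 = 0.7.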
Empirical Evaluation of Abstract Argumentation: Supporting the Need for Bipolar and Probabilistic Approaches
In dialogical argumentation it is often assumed that the involved parties
always correctly identify the intended statements posited by each other,
realize all of the associated relations, conform to the three acceptability
states (accepted, rejected, undecided), adjust their views when new and correct
information comes in, and that a framework handling only attack relations is
sufficient to represent their opinions. Although it is natural to make these
assumptions as a starting point for further research, removing them or even
acknowledging that such removal should happen is more challenging for some of
these concepts than for others. Probabilistic argumentation is one of the
approaches that can be harnessed for more accurate user modelling. The
epistemic approach allows us to represent how much a given argument is believed
by a given person, offering us the possibility to express more than just three
agreement states. It is equipped with a wide range of postulates, including
those that do not make any restrictions concerning how initial arguments should
be viewed, thus potentially being more adequate than Dung's semantics for handling
the beliefs of people who have not fully disclosed their opinions.
The constellation approach can be used to represent the views of
different people concerning the structure of the framework we are dealing with,
including cases in which not all relations are acknowledged or when they are
seen differently than intended. Finally, bipolar argumentation frameworks can
be used to express both positive and negative relations between arguments. In
this paper we describe the results of an experiment in which participants
judged dialogues in terms of agreement and structure. We compare our findings
with the aforementioned assumptions as well as with the constellation and
epistemic approaches to probabilistic argumentation, and with bipolar argumentation.
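The fine-grained agreement states of the epistemic approach can be illustrated with a small sketch. The check below is one well-known epistemic postulate (that a believed attacker rules out belief in its target); the belief values and attack relation are illustrative, not drawn from the experiment.

```python
def is_rational(belief, attacks):
    """Check the 'rational' postulate of the epistemic approach:
    if an attacker is believed (belief > 0.5), its target must not
    also be believed (belief must be <= 0.5)."""
    return all(not (belief[a] > 0.5 and belief[b] > 0.5)
               for (a, b) in attacks)

# Beliefs go beyond the three classical labels: e.g. 0.9 strongly
# accepted, 0.5 undecided, 0.1 strongly rejected, and anything in
# between. (Illustrative numbers.)
attacks = {("a", "b"), ("b", "c")}
belief_1 = {"a": 0.9, "b": 0.3, "c": 0.7}
belief_2 = {"a": 0.9, "b": 0.8, "c": 0.2}

print(is_rational(belief_1, attacks))  # True: no believed attack target
print(is_rational(belief_2, attacks))  # False: a and b both believed, yet a attacks b
```

Postulates of this kind constrain which belief assignments count as coherent; weaker families of postulates place no such restrictions on how the initial arguments may be viewed.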
Two dimensional uncertainty in persuadee modelling in argumentation
When attempting to persuade an agent to believe (or disbelieve) an argument, it can be advantageous for the persuader to have a model of the persuadee. Models have been proposed for taking account of what arguments the persuadee believes and these can be used in a strategy for persuasion. However, there can be uncertainty as to the accuracy of such models. To address this issue, this paper introduces a two-dimensional model that accounts for the
uncertainty of belief by a persuadee and for the confidence in that uncertainty evaluation. This gives a better modelling for using lotteries, so that the outcomes involve statements about what the user believes or disbelieves, and the confidence value is the degree to which the user does indeed hold those outcomes (a more refined and more natural modelling than that found in [19]). This framework is also extended with a modelling of the risk of disengagement by the persuadee.
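One simple way to picture the two dimensions is to discount the modelled belief by the confidence in the user model when computing a lottery's expected utility. The discounting-toward-a-neutral-prior rule below is an assumption of this sketch, not the paper's exact construction, and the numbers are illustrative.

```python
def expected_utility(outcomes, confidence, neutral=0.5):
    """Expected utility of an argumentation lottery where the modelled
    probability of each outcome is discounted by the persuader's
    confidence in the user model: with low confidence the effective
    probability shrinks toward a neutral prior (a sketch assumption)."""
    eu = 0.0
    for belief, utility in outcomes:
        effective = confidence * belief + (1 - confidence) * neutral
        eu += effective * utility
    return eu

# Hypothetical lottery: the user ends up believing the goal argument
# (utility +1) or disbelieving it (utility -1).
p_believe = 0.8
outcomes = [(p_believe, 1.0), (1 - p_believe, -1.0)]

print(expected_utility(outcomes, confidence=1.0))  # fully trusted user model
print(expected_utility(outcomes, confidence=0.2))  # mostly uncertain user model
```

With full confidence the lottery is worth 0.6; at confidence 0.2 it shrinks to 0.12, so a persuader with a poorly grounded model should expect much less from presenting the same arguments.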
Towards a framework for computational persuasion with applications in behaviour change
Persuasion is an activity that involves one party trying to induce another party to believe something or to do something. It is an important and multifaceted human facility. Obviously, sales and marketing are heavily dependent on persuasion. But many other activities involve persuasion, such as a doctor persuading a patient to drink less alcohol, a road safety expert persuading drivers not to text while driving, or an online safety expert persuading users of social media sites not to reveal too much personal information online. As computing becomes involved in every sphere of life, so too is persuasion a target for applying computer-based solutions. An automated persuasion system (APS) is a system that can engage in a dialogue with a user (the persuadee) in order to persuade the persuadee to do (or not do) some action or to believe (or not believe) something. To do this, an APS aims to use convincing arguments in order to persuade the persuadee. Computational persuasion is the study of formal models of dialogues involving arguments and counterarguments, of user models, and of strategies for APSs. A promising application area for computational persuasion is behaviour change. Within healthcare organizations, government agencies, and non-governmental agencies, there is much interest in changing the behaviour of particular groups of people away from actions that are harmful to themselves and/or to others around them.
Weighted logics for artificial intelligence: an introductory discussion
Before presenting the contents of the special issue, we propose a structured introductory overview of the landscape of weighted logics (in a general sense) that can be found in the Artificial Intelligence literature, highlighting their fundamental differences and their application areas.
Belief in attacks in epistemic probabilistic argumentation
The epistemic approach to probabilistic argumentation assigns belief to arguments. This is valuable in dialogical argumentation, where one agent can model the beliefs another agent has in the arguments, and this can be harnessed to make strategic choices of arguments to present. In this paper, we extend this epistemic approach by also representing the belief in attacks. We investigate properties of this proposal and compare it to the constellations approach, showing that neither subsumes the other.
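A toy reading of the extension described above: an attack only counts as an effective defeat when both the attacker and the attack itself are believed. This is a simplified sketch with made-up belief values; the paper's treatment and postulates are richer than this.

```python
def believed_defeaters(target, arg_belief, attack_belief):
    """Attackers of `target` that are believed as arguments (> 0.5)
    and whose attack on `target` is itself believed (> 0.5).
    A toy sketch of combining belief in arguments with belief in
    attacks, not the paper's exact definition."""
    return {a for (a, b), p_att in attack_belief.items()
            if b == target and p_att > 0.5 and arg_belief[a] > 0.5}

# Illustrative beliefs: both b and c are believed arguments, but only
# b's attack on a is believed; c's attack on a is itself doubted.
arg_belief = {"a": 0.9, "b": 0.7, "c": 0.6}
attack_belief = {("b", "a"): 0.8,   # attack is believed
                 ("c", "a"): 0.3}   # attack itself is doubted

print(believed_defeaters("a", arg_belief, attack_belief))  # {'b'}
```

Separating the two kinds of belief lets an agent accept an argument while doubting that it really attacks another, something neither the purely argument-level epistemic approach nor the constellations approach expresses in the same way.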
Argumentation as a practical foundation for decision theory
Opinions and Preferences as Socially Distributed Attitudes
The dissertation focuses on how best to represent the consensus and attitude dynamics of a group given the attitudes of its individuals. This is done in the Bayesian epistemology framework using pooling with imprecise probabilities, and in utility theory by extending Harsanyi's aggregation theorem to characterize other directed attitudes such as spite and altruism. The final part of the dissertation considers attitudes within social networks and provides explanations and simulation models for online segregation and tribalism, as well as for the spread of rumors through contagion. The dissertation hopes to contribute to foundational issues such as that of epistemic consensus, but also to newly emerging phenomena in social epistemology.
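Pooling with imprecise probabilities can be sketched minimally as follows. Linear opinion pooling averages the individuals' probabilities; for interval-valued (imprecise) beliefs, one simple option, among several discussed in the imprecise-probability literature, is to pool the lower and upper bounds separately. The agents, intervals, and weights are illustrative assumptions.

```python
def linear_pool(opinions, weights):
    """Linear opinion pooling: the group probability is a weighted
    average of the individuals' probabilities."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * p for w, p in zip(weights, opinions))

def pool_imprecise(intervals, weights):
    """A simple pooling rule for imprecise probabilities: pool the
    lower and the upper bounds separately (one of several candidate
    rules, used here only for illustration)."""
    lows = [lo for lo, hi in intervals]
    highs = [hi for lo, hi in intervals]
    return (linear_pool(lows, weights), linear_pool(highs, weights))

# Three agents with interval-valued beliefs in a proposition,
# pooled with equal weights (illustrative numbers).
intervals = [(0.2, 0.4), (0.5, 0.7), (0.3, 0.9)]
weights = [1 / 3, 1 / 3, 1 / 3]
lo, hi = pool_imprecise(intervals, weights)
print(round(lo, 3), round(hi, 3))  # → 0.333 0.667
```

The pooled interval [0.333, 0.667] keeps the group's residual imprecision visible rather than collapsing disagreement into a single point value.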