Belief Revision in Structured Probabilistic Argumentation
In real-world applications, knowledge bases consisting of all the information
at hand for a specific domain, along with the current state of affairs, are
bound to contain contradictory data coming from different sources, as well as
data with varying degrees of uncertainty attached. Likewise, an important
aspect of the effort associated with maintaining knowledge bases is deciding
what information is no longer useful; pieces of information (such as
intelligence reports) may be outdated, may come from sources that have recently
been discovered to be of low quality, or abundant evidence may be available
that contradicts them. In this paper, we propose a probabilistic structured
argumentation framework that arises from the extension of Presumptive
Defeasible Logic Programming (PreDeLP) with probabilistic models, and argue
that this formalism is capable of addressing the basic issues of handling
contradictory and uncertain data. Then, to address the last issue, we focus on
the study of non-prioritized belief revision operations over probabilistic
PreDeLP programs. We propose a set of rationality postulates -- based on
well-known ones developed for classical knowledge bases -- that characterize
how such operations should behave, and study a class of operators together
with its theoretical relationships to the proposed postulates, including a
representation theorem stating the equivalence between this class and the
class of operators characterized by the postulates.
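As a loose illustration of the non-prioritized behaviour described above (incoming information may be rejected rather than automatically accepted), here is a minimal Python sketch. The `Fact` type, the `revise` operator, and the threshold mechanics are invented for this example; they are not the PreDeLP formalism or the operator class studied in the paper.

```python
# Toy sketch of non-prioritized revision over a probabilistic knowledge base.
# All names and mechanics here are illustrative assumptions, not the paper's.

from dataclasses import dataclass


@dataclass(frozen=True)
class Fact:
    literal: str   # e.g. "flies" or its complement "~flies"
    prob: float    # degree of uncertainty attached by the fact's source


def complement(lit: str) -> str:
    """Return the complementary literal ("flies" <-> "~flies")."""
    return lit[1:] if lit.startswith("~") else "~" + lit


def revise(kb: frozenset, new: Fact, threshold: float = 0.5) -> frozenset:
    """Non-prioritized revision: the new fact is adopted only if it is
    sufficiently probable AND not dominated by contradicting beliefs;
    otherwise the knowledge base is returned unchanged."""
    if new.prob < threshold:
        return kb                          # low-quality source: reject outright
    rivals = {f for f in kb if f.literal == complement(new.literal)}
    if any(f.prob >= new.prob for f in rivals):
        return kb                          # existing evidence dominates: reject
    return (kb - rivals) | {new}           # retract rivals, accept the new fact
```

For instance, starting from a base holding `Fact("flies", 0.9)`, revising by `Fact("~flies", 0.4)` leaves the base unchanged (the input is rejected), while revising by `Fact("~flies", 0.95)` retracts the weaker belief and accepts the new one; it is this possibility of rejecting the input that distinguishes non-prioritized from prioritized revision.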
The added value of argumentation
We discuss the value of argumentation in reaching agreements, based on its capability for dealing with conflicts and uncertainty. Logic-based models of argumentation have recently emerged as a key topic within Artificial Intelligence. Key reasons for the success of these models are that they are akin to human models of reasoning and debate, and that they generalise to frameworks for modelling dialogues. They therefore have the potential to bridge between human and machine reasoning in the presence of uncertainty and conflict. We provide an overview of a number of examples that bear witness to this potential and illustrate the added value of argumentation. These examples amount to methods and techniques for argumentation to aid machine reasoning (e.g. in the form of machine learning and belief functions) on the one hand, and methods and techniques for argumentation to aid human reasoning (e.g. for various forms of decision making and deliberation, and for the Web) on the other. We also identify a number of open challenges that must be addressed if this potential is to be realised, in particular the need for benchmark libraries.