75,963 research outputs found
A Formal Framework for Concrete Reputation Systems
In a reputation-based trust-management system, agents maintain information about the past behaviour of other agents. This information is used to guide future trust-based decisions about interaction. However, while trust management is a component in security decision-making, many existing reputation-based trust-management systems provide no formal security guarantees. In this extended abstract, we describe a mathematical framework for a class of simple reputation-based systems. In these systems, decisions about interaction are taken based on policies that are exact requirements on agents’ past histories. We present a basic declarative language, based on pure-past linear temporal logic, intended for writing simple policies. While the basic language is reasonably expressive (encoding, e.g., Chinese Wall policies), we show how one can extend it with quantification and parameterized events. This allows us to encode other policies known from the literature, e.g., ‘one-out-of-k’. The problem of checking a history with respect to a policy can be solved efficiently for the basic language, and remains tractable for the quantified language when policies do not have too many variables.
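As a rough illustration of the kind of history checking this abstract describes, the sketch below evaluates a pure-past LTL formula over a finite interaction history. The operator set (Y, S, O, H), the event encoding, and the toy Chinese-Wall-style policy are illustrative assumptions, not the paper's actual language.

```python
# Hedged sketch: evaluating a pure-past LTL policy over a finite history.
# A history is a list of event sets; a policy is judged at the present step.

def holds(phi, history, i=None):
    """Check formula phi at position i of history (default: last step)."""
    if i is None:
        i = len(history) - 1
    op = phi[0]
    if op == "atom":
        return phi[1] in history[i]
    if op == "not":
        return not holds(phi[1], history, i)
    if op == "and":
        return holds(phi[1], history, i) and holds(phi[2], history, i)
    if op == "Y":          # yesterday: phi held at the previous step
        return i > 0 and holds(phi[1], history, i - 1)
    if op == "S":          # phi1 since phi2
        for j in range(i, -1, -1):
            if holds(phi[2], history, j):
                return True
            if not holds(phi[1], history, j):
                return False
        return False
    if op == "O":          # once: phi held at some past step
        return any(holds(phi[1], history, j) for j in range(i + 1))
    if op == "H":          # historically: phi held at every past step
        return all(holds(phi[1], history, j) for j in range(i + 1))
    raise ValueError(f"unknown operator: {op}")

# Toy Chinese-Wall-style policy: never interact with both firm A and firm B.
policy = ("not", ("and", ("O", ("atom", "served_A")),
                         ("O", ("atom", "served_B"))))
history = [{"served_A"}, {"paid"}, {"served_A"}]
print(holds(policy, history))   # True: only firm A appears in the history
```

Checking the basic language is a single pass over the history per subformula, which matches the efficiency claim above.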
Fuzzy argumentation for trust
In an open Multi-Agent System, the goals of agents acting on behalf of their owners often conflict with each other. Therefore, a personal agent protecting the interests of a single user cannot always rely on other agents. Consequently, such a personal agent needs to be able to reason about trusting (information or services provided by) other agents. Existing algorithms that perform such reasoning mainly focus on the immediate utility of a trusting decision, but do not provide an explanation of their actions to the user. This may hinder the acceptance of agent-based technologies in sensitive applications where users need to rely on their personal agents. Against this background, we propose a new approach to trust based on argumentation that aims to expose the rationale behind such trusting decisions. Our solution features a separation of opponent modeling and decision making. It uses possibilistic logic to model the behavior of opponents, and we propose an extension of the argumentation framework by Amgoud and Prade to use the fuzzy rules within these models for well-supported decisions.
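A toy sketch of the possibilistic-logic flavour of opponent modeling mentioned above: each rule carries a necessity degree, and a derived conclusion is only as certain as the least certain rule or fact used to derive it (min-based propagation). The atoms, rules, and weights are invented for illustration and are not taken from the paper.

```python
def derive(facts, rules):
    """facts: dict atom -> certainty in [0, 1];
    rules: list of (premises, conclusion, weight) triples."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion, weight in rules:
            if all(p in facts for p in premises):
                # a conclusion is only as certain as its weakest support
                cert = min([weight] + [facts[p] for p in premises])
                if cert > facts.get(conclusion, 0.0):
                    facts[conclusion] = cert
                    changed = True
    return facts

facts = {"delivered_on_time": 0.9}          # observed behaviour, fairly certain
rules = [(("delivered_on_time",), "reliable", 0.7),
         (("reliable",), "trustworthy", 0.8)]
print(derive(facts, rules))   # certainty of 'trustworthy' is capped at 0.7
```

Such weighted conclusions are the kind of material an argumentation framework can then weigh for and against a trusting decision.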
On computing explanations in argumentation
Copyright © 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Argumentation can be viewed as a process of generating explanations. However, existing argumentation semantics are developed for identifying acceptable arguments within a set, rather than for giving concrete justifications for them. In this work, we propose a new argumentation semantics, related admissibility, designed for giving explanations for arguments in both Abstract Argumentation and Assumption-based Argumentation. We identify different types of explanations defined in terms of the new semantics. We also give a correct computational counterpart for explanations using dispute forests.
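For orientation, the sketch below checks ordinary admissibility in an abstract argumentation framework (conflict-freeness plus defence of every member); the related admissibility proposed in the abstract refines this standard notion and is not reproduced here. The argument names and attack relation are toy data.

```python
def is_admissible(S, attacks):
    """S: set of arguments; attacks: set of (attacker, target) pairs."""
    # conflict-free: no member of S attacks another member of S
    conflict_free = not any((a, b) in attacks for a in S for b in S)
    attackers = {a for (a, b) in attacks if b in S}
    # defence: every attacker of S is itself attacked by some member of S
    defended = all(any((d, a) in attacks for d in S) for a in attackers)
    return conflict_free and defended

attacks = {("b", "a"), ("c", "b")}
print(is_admissible({"a", "c"}, attacks))   # True: c defends a against b
```

In explanation terms, the defending argument (here `c`) is exactly the kind of concrete justification an explanation-oriented semantics aims to surface.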
Decision-Making with Belief Functions: a Review
Approaches to decision-making under uncertainty in the belief function framework are reviewed. Most methods are shown to blend criteria for decision under ignorance with the maximum expected utility principle of Bayesian decision theory. A distinction is made between methods that construct a complete preference relation among acts, and those that allow incomparability of some acts due to lack of information. Methods developed in the imprecise probability framework are applicable in the Dempster-Shafer context and are also reviewed. Shafer's constructive decision theory, which substitutes the notion of goal for that of utility, is described and contrasted with other approaches. The paper ends by pointing out the need to carry out a deeper investigation of fundamental issues related to decision-making with belief functions, and to assess the descriptive, normative and prescriptive values of the different approaches.
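One family of criteria the review covers can be sketched concretely: given a Dempster-Shafer mass function and a utility function over outcomes, the lower and upper expected utilities of an act bracket its Bayesian expected utility, and incomparability between acts arises when their intervals overlap. The frame, masses, and utilities below are invented illustration values.

```python
def lower_upper_expected_utility(mass, utility):
    """mass: dict frozenset(outcomes) -> float over focal sets (sums to 1);
    utility: dict outcome -> float. Returns (lower, upper) expected utility."""
    lower = sum(m * min(utility[w] for w in focal) for focal, m in mass.items())
    upper = sum(m * max(utility[w] for w in focal) for focal, m in mass.items())
    return lower, upper

# Frame {good, bad}; partial evidence leaves mass on the whole frame (ignorance).
mass = {frozenset({"good"}): 0.5,
        frozenset({"good", "bad"}): 0.5}
utility_of_act = {"good": 10.0, "bad": -2.0}
lo, hi = lower_upper_expected_utility(mass, utility_of_act)
print(lo, hi)   # 4.0 (pessimistic) and 10.0 (optimistic) expected utility
```

Ranking acts by the lower bound gives a pessimistic (maximin-flavoured) criterion; the interval itself supports the incomplete preference relations the review distinguishes.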
Order-of-Magnitude Influence Diagrams
In this paper, we develop a qualitative theory of influence diagrams that can be used to model and solve sequential decision making tasks when only qualitative (or imprecise) information is available. Our approach is based on an order-of-magnitude approximation of both probabilities and utilities and allows for specifying partially ordered preferences via sets of utility values. We also propose a dedicated variable elimination algorithm that can be applied for solving order-of-magnitude influence diagrams.
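The order-of-magnitude idea can be sketched with kappa-style degrees of surprise: replacing a probability by the integer order of its smallness turns products into sums and sums into minima, so eliminating a chance variable becomes a min-sum computation. The variables and kappa values below are illustrative assumptions, not the paper's algorithm.

```python
def eliminate(kappa_h, kappa_o_given_h):
    """Sum out hidden variable h: kappa(o) = min_h [kappa(h) + kappa(o | h)].

    kappa_h: dict h -> int surprise degree (0 = fully expected);
    kappa_o_given_h: dict (h, o) -> int conditional surprise degree."""
    outcomes = {o for (_, o) in kappa_o_given_h}
    return {o: min(kappa_h[h] + kappa_o_given_h[(h, o)] for h in kappa_h)
            for o in sorted(outcomes)}

kappa_h = {"sunny": 0, "rainy": 1}                  # rain is mildly surprising
kappa_o_given_h = {("sunny", "dry"): 0, ("sunny", "wet"): 2,
                   ("rainy", "dry"): 1, ("rainy", "wet"): 0}
print(eliminate(kappa_h, kappa_o_given_h))   # {'dry': 0, 'wet': 1}
```

A variable elimination algorithm over such tables repeats this min-sum step node by node, which is the qualitative analogue of sum-product elimination in ordinary influence diagrams.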
- …