von Neumann-Morgenstern and Savage Theorems for Causal Decision Making
Causal thinking and decision making under uncertainty are fundamental aspects
of intelligent reasoning. Decision making under uncertainty has been well
studied when information is considered at the associative (probabilistic)
level. The classical Theorems of von Neumann-Morgenstern and Savage provide a
formal criterion for rational choice using purely associative information.
Causal inference often yields uncertainty about the exact causal structure, so
we consider what kinds of decisions are possible in those conditions. In this
work, we consider decision problems in which available actions and consequences
are causally connected. After recalling a previous causal decision making
result, which relies on a known causal model, we consider the case in which the
causal mechanism that controls some environment is unknown to a rational
decision maker. In this setting we state and prove a causal version of Savage's
Theorem, which we then use to develop a notion of causal games with its
respective causal Nash equilibrium. These results highlight the importance of
causal models in decision making and the variety of potential applications.
Comment: Submitted to Journal of Causal Inference
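The setting described above, a rational agent who is uncertain which causal mechanism governs the environment, can be sketched as follows. This is a minimal illustration with hypothetical mechanisms, beliefs, and utilities (none are from the paper): the agent holds a subjective belief over candidate causal models, each giving an interventional distribution P(outcome | do(action)), and picks the action with the highest belief-weighted expected utility.

```python
# Hypothetical candidate causal mechanisms: each maps an action to
# the interventional distribution P(outcome | do(action)).
mechanisms = {
    "M1": {"treat": {"cure": 0.8, "no_cure": 0.2},
           "wait":  {"cure": 0.3, "no_cure": 0.7}},
    "M2": {"treat": {"cure": 0.4, "no_cure": 0.6},
           "wait":  {"cure": 0.5, "no_cure": 0.5}},
}
belief = {"M1": 0.6, "M2": 0.4}        # subjective belief over mechanisms
utility = {"cure": 1.0, "no_cure": 0.0}

def causal_expected_utility(action):
    # Average the interventional expected utility over the belief
    # on which causal mechanism actually governs the environment.
    return sum(
        belief[m] * sum(p * utility[o] for o, p in mechanisms[m][action].items())
        for m in mechanisms
    )

best = max(["treat", "wait"], key=causal_expected_utility)
```

Here `best` comes out to `"treat"` (expected utility 0.64 versus 0.38 for `"wait"`); the point is only that the choice criterion averages over causal models rather than over a single associative distribution.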
An Imprecise Probability Approach for Abstract Argumentation based on Credal Sets
Some abstract argumentation approaches consider that arguments have a degree
of uncertainty, which impacts on the degree of uncertainty of the extensions
obtained from an abstract argumentation framework (AAF) under a semantics. In
these approaches, both the uncertainty of the arguments and of the extensions
are modeled by means of precise probability values. However, in many real life
situations the exact probability values are unknown and sometimes there is a
need for aggregating the probability values of different sources. In this
paper, we tackle the problem of calculating the degree of uncertainty of the
extensions considering that the probability values of the arguments are
imprecise. We use credal sets to model the uncertainty values of arguments and
from these credal sets, we calculate the lower and upper bounds of the
extensions. We study some properties of the suggested approach and illustrate
it with a scenario of decision making.
Comment: 8 pages, 2 figures, Accepted in The 15th European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty (ECSQARU 2019)
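The core computation in the abstract, deriving lower and upper bounds on the probability of an extension from imprecise argument probabilities, can be sketched as follows. This is a simplified illustration with made-up numbers, not the paper's algorithm: the credal set is represented by a finite set of extreme points, each a precise distribution over "worlds" (subsets of accepted arguments), and the bounds are the min/max of the event's probability over that set.

```python
# Worlds: which of the arguments {a, b} are accepted.
worlds = [frozenset(), frozenset({"a"}), frozenset({"b"}), frozenset({"a", "b"})]

# A credal set given by two extreme points (hypothetical values;
# each is a precise distribution summing to 1 over the worlds).
credal_set = [
    {worlds[0]: 0.1, worlds[1]: 0.3, worlds[2]: 0.2, worlds[3]: 0.4},
    {worlds[0]: 0.2, worlds[1]: 0.1, worlds[2]: 0.4, worlds[3]: 0.3},
]

def event_prob(P, event):
    # Probability of an event under one precise distribution.
    return sum(p for w, p in P.items() if event(w))

# Event: the extension {a, b} is accepted (both arguments are in).
both_in = lambda w: {"a", "b"} <= w

lower = min(event_prob(P, both_in) for P in credal_set)
upper = max(event_prob(P, both_in) for P in credal_set)
```

With these numbers the extension `{a, b}` gets the probability interval [0.3, 0.4] rather than a single precise value, which is the kind of output the approach produces.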
Stochastic Reasoning with Action Probabilistic Logic Programs
In the real world, there is a constant need to reason about the behavior of various entities. A soccer goalie could benefit from information available about past penalty kicks by the same player facing him now. National security experts could benefit from the ability to reason about behaviors of terror groups. By applying
behavioral models, an organization may get a better understanding about how best to target their efforts and achieve their goals.
In this thesis, we propose action probabilistic logic (or ap-) programs, a formalism designed for reasoning about the probability of events whose inter-dependencies are unknown. We investigate how to use ap-programs to reason in the kinds of scenarios described above. Our approach is based on probabilistic logic programming, a well known formalism for reasoning under uncertainty, which has been shown to be highly flexible since it
allows imprecise probabilities to be specified in the form of intervals that convey the inherent uncertainty in the knowledge. Furthermore, no independence assumptions are made, in contrast to many of the probabilistic reasoning formalisms that have been proposed. Up to now, all work in probabilistic logic programming has focused
on the problem of entailment, i.e., verifying whether a given formula follows from the available knowledge. In this thesis, we argue that other problems also need to be solved for this kind of reasoning. The three main problems we address are: computing most probable worlds (what is the most likely set of actions given the current state of affairs?); answering abductive queries (how can we effect changes in the environment in order to evoke certain desired actions?); and reasoning about promises (given the importance of promises and how they are fulfilled, how can we incorporate quantitative knowledge about promise fulfillment in ap-programs?).
We address different variants of these problems, propose exact and heuristic algorithms to scalably solve them, present empirical evaluations of their performance, and discuss their application in real world scenarios.
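The abstract's point that ap-programs use interval probabilities and make no independence assumptions can be illustrated with the classical Fréchet bounds. The numbers and event names below are illustrative, not from the thesis: given P(a) ∈ [la, ua] and P(b) ∈ [lb, ub] with unknown inter-dependencies, all we can conclude about the conjunction is P(a ∧ b) ∈ [max(0, la + lb − 1), min(ua, ub)].

```python
def conj_bounds(a, b):
    # Fréchet bounds on P(a and b) with NO independence assumption:
    # the conjunction can be as rare or as common as any joint
    # distribution consistent with the marginals allows.
    (la, ua), (lb, ub) = a, b
    return max(0.0, la + lb - 1.0), min(ua, ub)

# Hypothetical behavioral rules: P("attack") in [0.7, 0.9],
# P("warning") in [0.6, 0.8]; what about both occurring?
lo, hi = conj_bounds((0.7, 0.9), (0.6, 0.8))
```

Here `(lo, hi)` is `(0.3, 0.8)`: a much wider interval than the product rule would give, which is exactly the price (and honesty) of dropping independence assumptions.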
A logic for reasoning about upper probabilities
We present a propositional logic which can be used to reason about the
uncertainty of events, where the uncertainty is modeled by a set of probability
measures assigning an interval of probability to each event. We give a sound
and complete axiomatization for the logic, and show that the satisfiability
problem is NP-complete, no harder than satisfiability for propositional logic.
Comment: A preliminary version of this paper appeared in Proc. of the 17th Conference on Uncertainty in AI, 2001
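The semantic object in this abstract, a set of probability measures inducing an interval (lower and upper probability) for each event, can be sketched directly. The worlds and measures below are illustrative: the upper probability of an event is the max over the set, the lower is the min, and unlike a single measure the upper probability is only subadditive, not additive.

```python
# A set of probability measures over three worlds (hypothetical values).
measures = [
    {"w1": 0.5, "w2": 0.3, "w3": 0.2},
    {"w1": 0.2, "w2": 0.5, "w3": 0.3},
]

def upper(event):
    # Upper probability: the most any measure in the set assigns.
    return max(sum(mu[w] for w in event) for mu in measures)

def lower(event):
    # Lower probability: the least any measure in the set assigns.
    return min(sum(mu[w] for w in event) for mu in measures)

A, B = {"w1"}, {"w2"}
interval_A = (lower(A), upper(A))   # each event gets an interval, here (0.2, 0.5)
```

Note that `upper(A) + upper(B)` is 1.0 while `upper(A | B)` is only 0.8, so upper probabilities violate additivity; this non-additive behavior is what the logic's axiomatization has to capture.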