10,146 research outputs found

    Dynamic Non-Bayesian Decision Making

    The model of a non-Bayesian agent who faces a repeated game with incomplete information against Nature is an appropriate tool for modeling general agent-environment interactions. In such a model the environment state (controlled by Nature) may change arbitrarily, and the feedback/reward function is initially unknown. The agent is not Bayesian; that is, he forms a prior probability neither on Nature's state-selection strategy nor on his reward function. A policy for the agent is a function that assigns an action to every history of observations and actions. Two basic feedback structures are considered. In one of them -- the perfect monitoring case -- the agent is able to observe the previous environment state as part of his feedback, while in the other -- the imperfect monitoring case -- all that is available to the agent is the reward obtained. Both settings refer to partially observable processes, where the current environment state is unknown. Our main result concerns the competitive ratio criterion in the perfect monitoring case. We prove the existence of an efficient stochastic policy that ensures that the competitive ratio is obtained at almost all stages with arbitrarily high probability, where efficiency is measured in terms of rate of convergence. It is further shown that such an optimal policy does not exist in the imperfect monitoring case. Moreover, it is proved that in the perfect monitoring case there does not exist a deterministic policy that satisfies our long-run optimality criterion. In addition, we discuss the maxmin criterion and prove that a deterministic efficient optimal strategy does exist in the imperfect monitoring case under this criterion. Finally, we show that our approach to long-run optimality can be viewed as qualitative, which distinguishes it from previous work in this area.
    Comment: See http://www.jair.org/ for any accompanying file.
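    The setting above lends itself to a small simulation. The sketch below is not taken from the paper; the payoff table, the toy policy, and all names such as `play` and `stochastic_policy` are made up for illustration. It only shows the basic objects: Nature picks an arbitrary state sequence, a policy maps each history of observations and actions to the next action, and under perfect monitoring the realized state is revealed after each stage, so a per-stage competitive ratio (obtained reward over the best reward available in hindsight) can be tracked.

```python
import random
from typing import Callable, Dict, List, Tuple

History = List[Tuple[str, str, float]]   # (action, observed state, reward) per stage
Policy = Callable[[History], str]        # a policy maps a history to the next action

# Hypothetical payoff table: reward(action, state). Purely illustrative numbers.
REWARD: Dict[Tuple[str, str], float] = {
    ("a", "x"): 1.0, ("a", "y"): 0.2,
    ("b", "x"): 0.3, ("b", "y"): 1.0,
}
ACTIONS = ["a", "b"]

def play(policy: Policy, nature_states: List[str]) -> List[float]:
    """Run the repeated game under perfect monitoring and return, for each stage,
    the obtained reward divided by the best reward available in hindsight at that
    stage (a per-stage competitive ratio)."""
    history: History = []
    ratios: List[float] = []
    for state in nature_states:                      # Nature's sequence is arbitrary
        action = policy(history)
        r = REWARD[(action, state)]
        best = max(REWARD[(a, state)] for a in ACTIONS)
        ratios.append(r / best if best > 0 else 1.0)
        history.append((action, state, r))           # perfect monitoring: state revealed
    return ratios

def stochastic_policy(history: History) -> str:
    """Toy stochastic policy: usually play the action that scored best on the states
    seen so far, but keep randomizing (the paper shows deterministic policies cannot
    meet the long-run optimality criterion)."""
    if not history or random.random() < 0.1:
        return random.choice(ACTIONS)
    seen = [s for (_, s, _) in history]
    scores = {a: sum(REWARD[(a, s)] for s in seen) for a in ACTIONS}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    states = [random.choice(["x", "y"]) for _ in range(200)]
    ratios = play(stochastic_policy, states)
    print("share of stages with competitive ratio >= 0.9:",
          sum(r >= 0.9 for r in ratios) / len(ratios))
```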

    von Neumann-Morgenstern and Savage Theorems for Causal Decision Making

    Causal thinking and decision making under uncertainty are fundamental aspects of intelligent reasoning. Decision making under uncertainty has been well studied when information is considered at the associative (probabilistic) level. The classical theorems of von Neumann-Morgenstern and Savage provide a formal criterion for rational choice using purely associative information. Causal inference often yields uncertainty about the exact causal structure, so we consider what kinds of decisions are possible under those conditions. In this work, we consider decision problems in which the available actions and consequences are causally connected. After recalling a previous causal decision-making result, which relies on a known causal model, we consider the case in which the causal mechanism that controls some environment is unknown to a rational decision maker. In this setting we state and prove a causal version of Savage's Theorem, which we then use to develop a notion of causal games with a corresponding causal Nash equilibrium. These results highlight the importance of causal models in decision making and the variety of potential applications.
    Comment: Submitted to the Journal of Causal Inference.
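    To make the setting concrete, the sketch below is an illustration under assumed names and numbers, not the paper's construction: a decision maker who is uncertain which of several candidate causal models governs the environment ranks interventions by averaging each model's interventional outcome distribution against a subjective belief over models, in the spirit of a Savage-style expected-utility criterion.

```python
from typing import Dict

Action, Outcome, Model = str, str, str

# P(outcome | do(action), model): each candidate causal model induces its own
# interventional distribution over outcomes. All numbers are made up.
INTERVENTIONAL: Dict[Model, Dict[Action, Dict[Outcome, float]]] = {
    "action_causes_outcome": {"treat": {"good": 0.8, "bad": 0.2},
                              "skip":  {"good": 0.3, "bad": 0.7}},
    "confounded_only":       {"treat": {"good": 0.5, "bad": 0.5},
                              "skip":  {"good": 0.5, "bad": 0.5}},
}

# Subjective belief over which causal model is the true one, and a utility
# function over outcomes (both hypothetical).
BELIEF: Dict[Model, float] = {"action_causes_outcome": 0.6, "confounded_only": 0.4}
UTILITY: Dict[Outcome, float] = {"good": 1.0, "bad": 0.0}

def expected_utility(action: Action) -> float:
    """Average the utility of do(action) over the belief on causal models."""
    return sum(
        BELIEF[m] * sum(p * UTILITY[o] for o, p in INTERVENTIONAL[m][action].items())
        for m in BELIEF
    )

if __name__ == "__main__":
    for a in ("treat", "skip"):
        print(a, round(expected_utility(a), 3))
    # With these made-up numbers, "treat" is preferred: it only helps under the
    # model in which the action actually causes the outcome, but that model
    # carries enough belief weight to dominate the comparison.
```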

    Game theory
