Perseus: Randomized Point-based Value Iteration for POMDPs
Partially observable Markov decision processes (POMDPs) form an attractive
and principled framework for agent planning under uncertainty. Point-based
approximate techniques for POMDPs compute a policy based on a finite set of
points collected in advance from the agent's belief space. We present a
randomized point-based value iteration algorithm called Perseus. The algorithm
performs approximate value backup stages, ensuring that in each backup stage
the value of each point in the belief set is improved; the key observation is
that a single backup may improve the value of many belief points. Unlike
other point-based methods, Perseus backs up only a (randomly selected) subset
of points in the belief set, sufficient to improve the value of every belief
point in the set. We show how the same idea can be extended to deal with
continuous action spaces. Experimental results show the potential of Perseus in
large-scale POMDP problems.
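
The randomized backup stage admits a compact rendering. Below is a minimal Python sketch under stated assumptions: `backup` stands in for the standard point-based Bellman backup (not shown), and all names are illustrative rather than taken from the paper.

```python
import numpy as np

def perseus_backup_stage(B, V, backup, rng=None):
    """One randomized Perseus backup stage (a minimal sketch).

    B      : list of belief points, each a 1-D numpy array over states
    V      : list of alpha-vectors (1-D numpy arrays) for the current value function
    backup : function(belief, V) -> alpha-vector; stand-in for the
             standard point-based Bellman backup

    Returns a vector set V_new whose value is at least V's at every b in B,
    while performing backups at only a random subset of B.
    """
    rng = rng or np.random.default_rng()
    value = lambda b, vecs: max(float(np.dot(b, a)) for a in vecs)
    V_new = []
    todo = list(B)                              # beliefs not yet improved this stage
    while todo:
        b = todo[rng.integers(len(todo))]       # sample a non-improved belief
        alpha = backup(b, V)                    # point-based backup at b
        if value(b, [alpha]) < value(b, V):
            # the backup did not improve b: keep b's best old vector, so
            # the new value function never decreases at b
            alpha = max(V, key=lambda a: float(np.dot(b, a)))
        V_new.append(alpha)
        # key observation: alpha may improve many beliefs at once, so every
        # belief already improved under V_new is dropped from the agenda
        todo = [bp for bp in todo if value(bp, V_new) < value(bp, V)]
    return V_new
```

Because each new vector typically dominates many beliefs, the agenda shrinks quickly and most stages back up far fewer points than a full sweep would.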
Dynamic Non-Bayesian Decision Making
The model of a non-Bayesian agent who faces a repeated game with incomplete
information against Nature is an appropriate tool for modeling general
agent-environment interactions. In such a model the environment state
(controlled by Nature) may change arbitrarily, and the feedback/reward function
is initially unknown. The agent is not Bayesian; that is, he forms a prior
probability neither on the state-selection strategy of Nature nor on his
reward function. A policy for the agent is a function that assigns an action
to every history of observations and actions. Two basic feedback structures are
considered. In one of them -- the perfect monitoring case -- the agent is able
to observe the previous environment state as part of his feedback, while in the
other -- the imperfect monitoring case -- all that is available to the agent is
the reward obtained. Both of these settings refer to partially observable
processes, where the current environment state is unknown. Our main result
refers to the competitive ratio criterion in the perfect monitoring case. We
prove the existence of an efficient stochastic policy that ensures that the
competitive ratio is obtained at almost all stages with an arbitrarily high
probability, where efficiency is measured in terms of rate of convergence. It
is further shown that such an optimal policy does not exist in the imperfect
monitoring case. Moreover, it is proved that in the perfect monitoring case
there does not exist a deterministic policy that satisfies our long-run
optimality criterion. In addition, we discuss the maxmin criterion and prove
that a deterministic efficient optimal strategy does exist in the imperfect
monitoring case under this criterion. Finally, we show that our approach to
long-run optimality can be viewed as qualitative, which distinguishes it from
previous work in this area.
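
To make the two feedback structures concrete, here is a minimal Python sketch of the interaction loop in each case; `policy`, `nature`, and `reward` are illustrative stand-ins, not names from the paper.

```python
import random

def run_perfect_monitoring(policy, nature, reward, horizon):
    """Interaction loop under perfect monitoring (a minimal sketch).

    policy : function(history) -> action, where history is the list of
             past feedback the agent has seen
    nature : function(stage) -> environment state; may change arbitrarily
    reward : the initially unknown reward function reward(state, action)
    """
    history, total = [], 0.0
    for t in range(horizon):
        a = policy(history)           # action depends only on past feedback
        s = nature(t)                 # Nature picks the state, unconstrained
        r = reward(s, a)
        total += r
        history.append((s, a, r))     # the previous state is revealed as feedback
    return total

def run_imperfect_monitoring(policy, nature, reward, horizon):
    """The same loop under imperfect monitoring: only the reward is observed."""
    history, total = [], 0.0
    for t in range(horizon):
        a = policy(history)
        s = nature(t)
        r = reward(s, a)
        total += r
        history.append((a, r))        # the state itself stays hidden
    return total

# Since the paper shows no deterministic policy meets the long-run criterion
# under perfect monitoring, a policy here would typically randomize, e.g.:
uniform_policy = lambda history: random.choice([0, 1])
```

The only difference between the two loops is what enters the history, which is exactly the distinction the paper's positive and negative results turn on.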
Sensor Synthesis for POMDPs with Reachability Objectives
Partially observable Markov decision processes (POMDPs) are widely used in
probabilistic planning problems in which an agent interacts with an environment
using noisy and imprecise sensors. We study a setting in which the sensors are
only partially defined and the goal is to synthesize "weakest" additional
sensors, such that in the resulting POMDP, there is a small-memory policy for
the agent that almost surely (with probability 1) satisfies a reachability
objective. We show that the problem is NP-complete, and present a symbolic
algorithm that encodes the problem into SAT instances. We illustrate trade-offs
between the amount of memory of the policy and the number of additional sensors
on a simple example. We have implemented our approach on three classical POMDP
examples from the literature, and show that in all of them the number of
sensors can be significantly decreased (compared to existing solutions in the
literature) without increasing the complexity of the policies.
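
As an illustration of the search structure around such an encoding, the sketch below iterates over sensor and memory budgets and queries an off-the-shelf SAT solver via the python-sat package. The function `encode` is a hypothetical stand-in for the paper's actual CNF encoding, which is not reproduced here.

```python
from pysat.solvers import Glucose3   # pip install python-sat

def synthesize_sensors(pomdp, encode, max_sensors, max_memory):
    """Search loop around a SAT encoding (a minimal sketch).

    encode(pomdp, k, m) is a hypothetical stand-in for the paper's CNF
    encoding: it should yield clauses (lists of signed integer literals)
    that are satisfiable iff k additional sensors admit an m-memory
    policy that reaches the target almost surely.
    """
    for k in range(max_sensors + 1):         # fewer sensors = "weaker" sensing
        for m in range(1, max_memory + 1):   # then minimize policy memory
            solver = Glucose3()
            for clause in encode(pomdp, k, m):
                solver.add_clause(clause)
            sat = solver.solve()
            model = solver.get_model() if sat else None
            solver.delete()                  # release the solver's native resources
            if sat:
                # the model assigns truth values to the encoding's variables,
                # from which the sensors and finite-state policy are read off
                return k, m, model
    return None   # no solution within the given bounds
```

Iterating k in the outer loop makes the first solution found the one with the fewest additional sensors, matching the "weakest sensing" objective; swapping the loops would instead prioritize small policy memory.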