485 research outputs found
A Model Approximation Scheme for Planning in Partially Observable Stochastic Domains
Partially observable Markov decision processes (POMDPs) are a natural model
for planning problems where effects of actions are nondeterministic and the
state of the world is not completely observable. It is difficult to solve
POMDPs exactly. This paper proposes a new approximation scheme. The basic idea
is to transform a POMDP into another one where additional information is
provided by an oracle. The oracle informs the planning agent that the current
state of the world is in a certain region. The transformed POMDP is
consequently said to be region observable. It is easier to solve than the
original POMDP. We propose to solve the transformed POMDP and use its optimal
policy to construct an approximate policy for the original POMDP. By
controlling the amount of additional information that the oracle provides, it
is possible to find a proper tradeoff between computational time and
approximation quality. In terms of algorithmic contributions, we study in
detail how to exploit region observability in solving the transformed POMDP.
To facilitate the study, we also propose a new exact algorithm for general
POMDPs. The algorithm is conceptually simple and yet is significantly more
efficient than all previous exact algorithms.
Comment: See http://www.jair.org/ for any accompanying file.
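The belief-state machinery underlying this approximation scheme can be sketched concretely. Below is a minimal Bayesian belief update for a POMDP, together with the conditioning step that a region oracle would induce: once the oracle reports that the true state lies in a region, the belief is restricted to that region. This is a generic illustration of the setting, not the paper's algorithm; all names and the two-state example are illustrative.

```python
import numpy as np

def belief_update(belief, a, o, T, O):
    """Bayesian belief update after taking action a and observing o.

    T[a][s, s'] are transition probabilities, O[a][s', o] observation
    probabilities (illustrative conventions, not from the paper).
    """
    predicted = belief @ T[a]            # predict: push belief through transitions
    unnormalized = predicted * O[a][:, o]  # correct: weight by observation likelihood
    norm = unnormalized.sum()
    if norm == 0.0:
        raise ValueError("observation has zero probability under this belief")
    return unnormalized / norm

def restrict_to_region(belief, region):
    """Condition the belief on an oracle report that the state lies in `region`
    (a 0/1 mask over states) -- the region-observable idea."""
    restricted = belief * region
    return restricted / restricted.sum()

# Two-state example: a noisy sensor shifts the belief toward state 0.
T = {0: np.array([[0.9, 0.1], [0.2, 0.8]])}
O = {0: np.array([[0.8, 0.2], [0.3, 0.7]])}
b = np.array([0.5, 0.5])
b_next = belief_update(b, a=0, o=0, T=T, O=O)

# Oracle reports the state is in region {0}: the belief collapses onto it.
region = np.array([1.0, 0.0])
b_region = restrict_to_region(b_next, region)
```

The coarser the regions the oracle reports, the less the belief collapses, which is one way to read the tradeoff between computation time and approximation quality described above.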
Decision-Theoretic Planning with Person Trajectory Prediction for Social Navigation
Robots navigating in a social way should reason about people's intentions
when acting. For instance, in applications like robot guidance or meeting with a
person, the robot has to consider the person's goals. Intentions are inherently non-observable,
and thus we propose Partially Observable Markov Decision Processes
(POMDPs) as a decision-making tool for these applications. One of the issues with
POMDPs is that the prediction models are usually handcrafted. In this paper, we use
machine learning techniques to build prediction models from observations. A novel
technique is employed to discover points of interest (goals) in the environment, and a
variant of Growing Hidden Markov Models (GHMMs) is used to learn the transition
probabilities of the POMDP. The approach is applied to an autonomous telepresence
robot.
Stochastic Shortest Path with Energy Constraints in POMDPs
We consider partially observable Markov decision processes (POMDPs) with a
set of target states and positive integer costs associated with every
transition. The traditional optimization objective (stochastic shortest path)
asks to minimize the expected total cost until the target set is reached. We
extend the traditional framework of POMDPs to model energy consumption, which
represents a hard constraint. The energy levels may increase and decrease with
transitions, and the hard constraint requires that the energy level must remain
positive in all steps till the target is reached. First, we present a novel
algorithm for solving POMDPs with energy levels, building on existing POMDP
solvers and using RTDP as its main method. Our second contribution is related
to policy representation. For larger POMDP instances the policies computed by
existing solvers are too large to be understandable. We present an automated
procedure based on machine learning techniques that extracts the
important decisions of a policy, allowing us to compute succinct, human-readable
policies. Finally, we show experimentally that our algorithm performs
well and computes succinct policies on a number of POMDP instances from the
literature that were naturally enhanced with energy levels.
Comment: Technical report accompanying a paper published in proceedings of AAMAS 201
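One standard way to realize a hard energy constraint of this kind is a product construction: each state is paired with its current energy level, and any transition that would drain the energy to zero is removed. The sketch below shows that construction on a fully observable successor function; it is a generic illustration under assumed names, not the paper's algorithm.

```python
def energy_product_successors(state, energy, cap, successors):
    """Successors of (state, energy): drop any move that drains energy to 0.

    `successors(state)` yields (next_state, energy_delta) pairs; deltas may
    be negative (consumption) or positive (recharge), capped at `cap`.
    """
    out = []
    for next_state, delta in successors(state):
        e = min(energy + delta, cap)
        if e > 0:  # hard constraint: energy must stay positive at every step
            out.append((next_state, e))
    return out

# Tiny example: from state "s" we can recharge (+2) or take a costly hop (-3).
succ = lambda s: [("charge", +2), ("hop", -3)]
moves = energy_product_successors("s", energy=2, cap=5, successors=succ)
# With energy 2, the hop would drop the level to -1, so only the recharge survives.
```

Running a shortest-path solver such as RTDP on this augmented space then optimizes expected cost while never violating the energy constraint, which matches the objective described in the abstract.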
Nonapproximability Results for Partially Observable Markov Decision Processes
We show that for several variations of partially observable Markov decision
processes, polynomial-time algorithms for finding control policies are unlikely
to, or simply do not, have guarantees of finding policies within a constant factor
or a constant summand of optimal. Here "unlikely" means "unless some complexity
classes collapse," where the collapses considered are P=NP, P=PSPACE, or P=EXP.
Until or unless these collapses are shown to hold, any control-policy designer
must choose between such performance guarantees and efficient computation.
Dec-POMDPs as Non-Observable MDPs
A recent insight in the field of decentralized partially observable Markov decision processes (Dec-POMDPs) is that it is possible to convert a Dec-POMDP to a non-observable MDP, which is a special case of POMDP. This technical report provides an overview of this reduction and pointers to related literature.
Verification and control of partially observable probabilistic systems
We present automated techniques for the verification and control of partially observable, probabilistic systems for both discrete and dense models of time. For the discrete-time case, we formally model these systems using partially observable Markov decision processes; for dense time, we propose an extension of probabilistic timed automata in which local states are partially visible to an observer or controller. We give probabilistic temporal logics that can express a range of quantitative properties of these models, relating to the probability of an event's occurrence or the expected value of a reward measure. We then propose techniques to either verify that such a property holds or synthesise a controller for the model which makes it true. Our approach is based on a grid-based abstraction of the uncountable belief space induced by partial observability and, for dense-time models, an integer discretisation of real-time behaviour. The former is necessarily approximate since the underlying problem is undecidable; however, we show how both lower and upper bounds on numerical results can be generated. We illustrate the effectiveness of the approach by implementing it in the PRISM model checker and applying it to several case studies from the domains of task and network scheduling, computer security and planning.
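The grid-based abstraction of the belief space mentioned above can be illustrated with a small sketch: enumerate the finitely many beliefs whose entries are multiples of 1/resolution. Value bounds are computed at these grid points and interpolated over the rest of the simplex. This uses a plain uniform grid for illustration; the actual tool may use a different triangulation, and all names here are assumptions.

```python
from itertools import product
import numpy as np

def belief_grid(n_states, resolution):
    """All beliefs over n_states whose entries are multiples of
    1/resolution and sum to 1 -- a finite abstraction of the simplex."""
    points = []
    for combo in product(range(resolution + 1), repeat=n_states):
        if sum(combo) == resolution:
            points.append(np.array(combo) / resolution)
    return points

# 3 states at resolution 4 gives C(4 + 2, 2) = 15 grid beliefs.
grid = belief_grid(n_states=3, resolution=4)
```

Refining the resolution shrinks the gap between the lower and upper bounds the abstraction yields, at the cost of a combinatorially larger grid.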