
    A Model Approximation Scheme for Planning in Partially Observable Stochastic Domains

    Partially observable Markov decision processes (POMDPs) are a natural model for planning problems where the effects of actions are nondeterministic and the state of the world is not completely observable. POMDPs are difficult to solve exactly. This paper proposes a new approximation scheme. The basic idea is to transform a POMDP into another one in which additional information is provided by an oracle. The oracle informs the planning agent that the current state of the world is in a certain region. The transformed POMDP is consequently said to be region observable. It is easier to solve than the original POMDP. We propose to solve the transformed POMDP and use its optimal policy to construct an approximate policy for the original POMDP. By controlling the amount of additional information that the oracle provides, it is possible to find a proper tradeoff between computation time and approximation quality. In terms of algorithmic contributions, we study in detail how to exploit region observability in solving the transformed POMDP. To facilitate the study, we also propose a new exact algorithm for general POMDPs. The algorithm is conceptually simple and yet significantly more efficient than all previous exact algorithms.
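    A minimal sketch of the oracle idea described above, assuming a tabular POMDP with transition tensor T[a] (S×S), observation matrix O[a] (S×|Ω|), and a region given as a boolean mask over states; the names and layout are illustrative, not the authors' implementation:

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """Standard POMDP belief update:
    b'(s') ∝ O[a][s', o] * sum_s T[a][s, s'] * b(s)."""
    b_next = O[a][:, o] * (T[a].T @ b)
    return b_next / b_next.sum()

def region_observable_update(b, a, o, region, T, O):
    """Belief update in the transformed POMDP: besides the ordinary
    observation o, the oracle reports that the true state lies in
    `region` (a boolean mask over states), so all probability mass
    outside that region is conditioned away."""
    b_next = belief_update(b, a, o, T, O) * region
    return b_next / b_next.sum()
```

    Coarser regions give the agent less extra information (approaching the original POMDP), while singleton regions make the state fully observable; tuning the partition realizes the tradeoff between computation time and approximation quality that the abstract describes.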

    Optimal control of continuous-time Markov chains with noise-free observation

    We consider an infinite-horizon optimal control problem for a continuous-time Markov chain X in a finite set I with noise-free partial observation. The observation process is defined as Y_t = h(X_t), t ≥ 0, where h is a given map defined on I. The observation is noise-free in the sense that the only source of randomness is the process X itself. The aim is to minimize a discounted cost functional and study the associated value function V. After transforming the control problem with partial observation into one with complete observation (the separated problem) by means of filtering equations, we provide a link between the value function v associated to the latter control problem and the original value function V. Then, we present two different characterizations of v (and indirectly of V): on the one hand as the unique fixed point of a suitably defined contraction mapping, and on the other as the unique constrained viscosity solution (in the sense of Soner) of an HJB integro-differential equation. Under suitable assumptions, we finally prove the existence of an optimal control.
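    The fixed-point characterization mentioned above can be illustrated in a simplified setting. The sketch below iterates the Bellman operator of a finite-state, discrete-time discounted problem to its unique fixed point; the paper's contraction acts on the filter (belief) space, so this is only an analogy under stated assumptions, not the paper's construction:

```python
import numpy as np

def bellman_operator(v, P, c, discount):
    """One application of the Bellman operator
    (Tv)(i) = min_a [ c(a, i) + discount * sum_j P[a, i, j] * v(j) ].
    T is a sup-norm contraction with modulus `discount`."""
    return np.min(c + discount * np.einsum('aij,j->ai', P, v), axis=0)

def solve_fixed_point(P, c, discount, tol=1e-8):
    """Iterate the contraction to its unique fixed point,
    which exists by the Banach fixed-point theorem."""
    v = np.zeros(P.shape[1])
    while True:
        v_new = bellman_operator(v, P, c, discount)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
```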

    Cost optimal control of Piecewise Deterministic Markov Processes under partial observation

    This work deals with the optimal control problem for Piecewise Deterministic Markov Processes (PDMPs) under Partial Observation (PO). The total expected discounted cost over the lifetime of the process is to be minimized, while neither the states of the PDMP nor the current or cumulated cost are observable. Only noisy measurements (with known noise distribution) of the post-jump states are observable. The cost function, however, depends on the trajectory of the unobservable PDMP as well as on the observable noisy measurements of the post-jump states. Admissible control strategies are history-dependent relaxed piecewise open-loop strategies: for each point in time, and depending on the observable history up to this time, a probability distribution on the action space is selected. This probability distribution defines an expected control action on the jump rate, the drift, and the transition kernel at jump times of the PDMP. We first transform the initial continuous-time optimization problem under PO into an equivalent discrete-time optimization problem under PO. For the latter, we obtain a recursive formulation for the filter: the probability distribution of the unobservable post-jump state of the PDMP given the observable history. This leads to an equivalent fully observable optimization problem in discrete time. Classical approaches of stochastic dynamic programming, in combination with results on measurable selection of optimizers, are then applied to prove the existence of optimal control strategies. We derive sufficient conditions for the existence of optimal control strategies for lower semi-continuous cost functions and in the case of finite-dimensional filters, i.e., if the set of possible post-jump states of the PDMP is finite. Finally, we apply the theory developed in this work to a concrete example of a three-state problem that could arise, e.g., from moving particles facing disturbances of their trajectories at random points in time.
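    A hedged sketch of one step of the recursive filter in the finite-dimensional case mentioned above (finitely many post-jump states), assuming an expected transition kernel Q at jump times and a known noise likelihood L; all names are illustrative, not the authors' notation:

```python
import numpy as np

def filter_update(pi, y, Q, L):
    """One step of the recursive filter in the finite case:
    pi -- current distribution over post-jump states (length n),
    Q  -- expected transition kernel at jump times, Q[i, j] = P(j | i),
    L  -- known noise likelihood, L[j, y] = P(measurement y | state j).
    Returns pi'(j) ∝ L[j, y] * sum_i pi(i) * Q[i, j]."""
    pred = pi @ Q            # predict through the jump transition
    post = pred * L[:, y]    # correct with the noisy measurement
    return post / post.sum()
```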

    Perseus: Randomized Point-based Value Iteration for POMDPs

    Partially observable Markov decision processes (POMDPs) form an attractive and principled framework for agent planning under uncertainty. Point-based approximate techniques for POMDPs compute a policy based on a finite set of points collected in advance from the agent's belief space. We present a randomized point-based value iteration algorithm called Perseus. The algorithm performs approximate value backup stages, ensuring that in each backup stage the value of each point in the belief set is improved; the key observation is that a single backup may improve the value of many belief points. In contrast to other point-based methods, Perseus backs up only a (randomly selected) subset of points in the belief set, sufficient for improving the value of each belief point in the set. We show how the same idea can be extended to deal with continuous action spaces. Experimental results show the potential of Perseus in large-scale POMDP problems.
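    A compact sketch of one Perseus backup stage in the flat tabular case, assuming alpha-vectors stored as NumPy arrays, transition tensor T[a] (S×S), observation matrix O[a] (S×|Ω|), and reward vectors R[a]; this follows the randomized scheme the abstract describes but is a simplified illustration, not the authors' code:

```python
import numpy as np

def backup(b, V, T, O, R, gamma):
    """Point-based backup at belief b: the best one-step-lookahead
    alpha-vector constructible from the current set V of alpha-vectors."""
    num_actions, num_obs = len(T), O[0].shape[1]
    best = None
    for a in range(num_actions):
        alpha_a = R[a].astype(float).copy()
        for o in range(num_obs):
            # back-project every alpha-vector through (a, o); keep the best for b
            g = [T[a] @ (O[a][:, o] * alpha) for alpha in V]
            alpha_a += gamma * max(g, key=lambda x: x @ b)
        if best is None or alpha_a @ b > best @ b:
            best = alpha_a
    return best

def perseus_stage(B, V, T, O, R, gamma, rng=None):
    """One randomized backup stage: back up randomly picked beliefs until
    every belief in B has been improved (or at least not worsened)."""
    if rng is None:
        rng = np.random.default_rng()
    value = lambda Vs, b: max(alpha @ b for alpha in Vs)
    V_new, todo = [], list(B)
    while todo:
        b = todo[rng.integers(len(todo))]
        alpha = backup(b, V, T, O, R, gamma)
        # keep the new vector only if it improves b; otherwise reuse the old best
        V_new.append(alpha if alpha @ b >= value(V, b)
                     else max(V, key=lambda a_: a_ @ b))
        # drop beliefs whose value has already improved in this stage
        todo = [b2 for b2 in todo if value(V_new, b2) < value(V, b2)]
    return V_new
```

    The stage terminates because each sampled belief's value never decreases: either the freshly backed-up vector improves it, or the best existing vector for that belief is retained, and improved beliefs are removed from the work list.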