
    Influence of State-Variable Constraints on Partially Observable Monte Carlo Planning

    Online planning methods for partially observable Markov decision processes (POMDPs) have recently gained much interest. In this paper, we propose the introduction of prior knowledge in the form of (probabilistic) relationships among discrete state-variables, for online planning based on the well-known POMCP algorithm. In particular, we propose the use of hard constraint networks and probabilistic Markov random fields to formalize state-variable constraints, and we extend the POMCP algorithm to take advantage of these constraints. Results on a case study based on Rocksample show that the use of this knowledge provides significant improvements to the performance of the algorithm. The extent of this improvement depends on the amount of knowledge encoded in the constraints and reaches 50% of the average discounted return in the most favorable cases that we analyzed.
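    One simple way to picture how a hard constraint network can inject prior knowledge into a particle-based belief is rejection sampling: discard sampled states that violate the constraints. The sketch below is a minimal illustration of that idea; the function names, the rejection-sampling strategy, and the toy constraint are assumptions for illustration, not the paper's actual extension of POMCP.

```python
import random

def satisfies_constraints(state, constraints):
    """Return True iff the assignment of discrete state variables in
    `state` satisfies every hard constraint in the network."""
    return all(c(state) for c in constraints)

def sample_constrained_particle(prior_sampler, constraints, max_tries=1000):
    """Rejection-sample a state particle consistent with the constraint
    network, as one simple way to bias a POMCP-style particle belief
    toward states the prior knowledge allows (illustrative only)."""
    for _ in range(max_tries):
        state = prior_sampler()
        if satisfies_constraints(state, constraints):
            return state
    raise RuntimeError("no consistent particle found")

# Toy example: two boolean state variables with one hard constraint x -> y,
# e.g. "if rock 1 is valuable then rock 2 is valuable".
constraints = [lambda s: (not s["x"]) or s["y"]]
prior = lambda: {"x": random.random() < 0.5, "y": random.random() < 0.5}
particle = sample_constrained_particle(prior, constraints)
```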

    Efficient POMDP Forward Search by Predicting the Posterior Belief Distribution

    Online, forward-search techniques have demonstrated promising results for solving problems in partially observable environments. These techniques depend on the ability to efficiently search and evaluate the set of beliefs reachable from the current belief. However, enumerating or sampling action-observation sequences to compute the reachable beliefs is computationally demanding; coupled with the need to satisfy real-time constraints, existing online solvers can only search to a limited depth. In this paper, we propose that policies can be generated directly from the distribution of the agent's posterior belief. When the underlying state distribution is Gaussian, and the observation function is an exponential family distribution, we can calculate this distribution of beliefs without enumerating the possible observations. This property not only enables us to plan in problems with large observation spaces, but also allows us to search deeper by considering policies composed of multi-step action sequences. We present the Posterior Belief Distribution (PBD) algorithm, an efficient forward-search POMDP planner for continuous domains, demonstrating that better policies are generated when we can perform deeper forward search.
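    The property the abstract exploits is easiest to see in the linear-Gaussian special case: after a Kalman update, the posterior covariance does not depend on the realized observation, and the posterior mean is itself Gaussian-distributed over the random observation, so the distribution of reachable posterior beliefs is available in closed form without enumerating observations. The sketch below is a minimal linear-Gaussian illustration of this fact, not the PBD algorithm itself; all names and the toy parameters are assumptions.

```python
import numpy as np

def kalman_step(mu, Sigma, A, Q, C, R):
    """One predict+update cycle for the linear-Gaussian system
    x' = A x + w,  z = C x' + v,  with w ~ N(0, Q), v ~ N(0, R).
    Returns the posterior covariance and the parameters of the
    distribution over posterior means, without drawing any z."""
    # Predict.
    mu_bar = A @ mu
    Sigma_bar = A @ Sigma @ A.T + Q
    # Kalman gain and posterior covariance: neither depends on z.
    S = C @ Sigma_bar @ C.T + R
    K = Sigma_bar @ C.T @ np.linalg.inv(S)
    Sigma_post = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    # The posterior mean mu_post = mu_bar + K (z - C mu_bar) is Gaussian
    # over the random z (which has predictive covariance S):
    # mean mu_bar, covariance K S K^T.
    return Sigma_post, mu_bar, K @ S @ K.T

# Tiny 1-D example with assumed toy parameters.
Sigma_post, mean_of_means, cov_of_means = kalman_step(
    mu=np.array([0.0]), Sigma=np.array([[1.0]]),
    A=np.eye(1), Q=np.array([[0.1]]),
    C=np.eye(1), R=np.array([[0.5]]),
)
```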