
    Tractable POMDP-planning for robots with complex non-linear dynamics

    Planning under partial observability is an essential capability of autonomous robots. While robots operate in the real world, they are inherently subject to various uncertainties, such as control and sensing errors, and limited information regarding the operating environment. Conceptually, these types of planning problems can be solved in a principled manner when framed as a Partially Observable Markov Decision Process (POMDP). POMDPs model the aforementioned uncertainties as conditional probability functions and estimate the state of the system as probability distributions over the state space, called beliefs. Instead of computing the best strategy with respect to single states, POMDP solvers compute the best strategy with respect to beliefs. Solving a POMDP exactly is computationally intractable in general. However, in the past two decades we have seen tremendous progress in the development of approximately optimal solvers that trade optimality for computational tractability. Despite this progress, approximately solving POMDPs for systems with complex non-linear dynamics remains challenging. Most state-of-the-art solvers rely on a large number of expensive forward simulations of the system to find an approximately optimal strategy. For systems with complex non-linear dynamics that admit no closed-form solution, this strategy can become prohibitively expensive. Another difficulty in applying POMDPs to physical robots with complex transition dynamics is that almost all implementations of state-of-the-art on-line POMDP solvers restrict the user to specific data structures for the POMDP model, and the model has to be hard-coded within the solver implementation. This, in turn, severely hinders the process of applying POMDPs to physical robots. In this thesis we aim to make POMDPs more practical for realistic robotic motion-planning tasks under partial observability. We show that systematic approximations of complex, non-linear transition dynamics can be used to design on-line POMDP solvers that are more efficient than current solvers. Furthermore, we propose a new software framework that supports the user in modeling complex planning problems under uncertainty with minimal implementation effort.
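    As a rough illustration of the belief machinery described in this abstract, the sketch below shows a particle-based Bayes filter driven purely by forward simulations of a generative model; it is not taken from the thesis, and the dynamics, noise model, and function names are hypothetical. It also hints at why solvers that rely on many such forward simulations become expensive when each simulation step is costly.

```python
import math
import random

def transition(state, action):
    """Forward-simulate one step of hypothetical noisy, non-linear dynamics."""
    return state + math.sin(state) * 0.1 + action + random.gauss(0.0, 0.1)

def observation_likelihood(obs, state, sigma=0.2):
    """Density of observing `obs` given `state`, assuming Gaussian sensor noise."""
    return math.exp(-0.5 * ((obs - state) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def update_belief(particles, action, obs):
    """One Bayes-filter step: predict via the transition model, weight each
    predicted particle by the observation likelihood, then resample."""
    predicted = [transition(s, action) for s in particles]
    weights = [observation_likelihood(obs, s) for s in predicted]
    if sum(weights) == 0.0:
        return predicted  # degenerate case: fall back to the prediction
    return random.choices(predicted, weights=weights, k=len(particles))

# Usage: start from an uncertain initial belief, act, observe, update.
belief = [random.gauss(0.0, 1.0) for _ in range(1000)]
belief = update_belief(belief, action=0.5, obs=0.6)
print(sum(belief) / len(belief))  # estimated mean position under the new belief
```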

    A software framework for planning under partial observability

    Planning under partial observability is both challenging and critical for reliable robot operation. The past decade has seen substantial advances in this domain: the mathematically principled approach for addressing such problems, namely the Partially Observable Markov Decision Process (POMDP), has started to become practical for various robotics tasks. Good approximate solutions for problems framed as POMDPs can now be computed on-line, with a few classes of problems being solved in near real-time. However, applications of these more recent advances are often hindered by the lack of easy-to-use software tools. Implementations of state-of-the-art algorithms exist, but most (if not all) require the POMDP model to be hard-coded inside the program, increasing the difficulty of applying them. To alleviate this problem, we propose a software toolkit, called the On-line POMDP Planning Toolkit (OPPT) (downloadable from http://robotics.itee.uq.edu.au/~oppt). By providing a well-defined and general abstract solver API, OPPT enables the user to quickly implement new POMDP solvers. Furthermore, OPPT provides an easy-to-use plug-in architecture with interfaces to the high-fidelity simulator Gazebo that, in conjunction with user-friendly configuration files, allows users to specify POMDP models of a standard class of robot motion-planning problems under partial observability with no additional coding effort.
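    OPPT itself is a C++ framework whose models are supplied as plug-ins and configuration files, so the snippet below is only a language-agnostic sketch, in Python, of the design idea the abstract describes: a solver that talks to the problem exclusively through an abstract model interface, so that models and solvers can be swapped independently. The class and method names here are hypothetical and are not OPPT's actual API.

```python
from abc import ABC, abstractmethod
import random

class POMDPModel(ABC):
    """Abstract problem definition the solver interacts with (hypothetical names)."""
    @abstractmethod
    def actions(self):
        """Return the available actions."""
    @abstractmethod
    def step(self, state, action):
        """Generative model: return (next_state, observation, reward, terminal)."""

class Solver(ABC):
    """Abstract solver: sees the model only through the POMDPModel interface."""
    def __init__(self, model: POMDPModel):
        self.model = model
    @abstractmethod
    def plan(self, belief, num_simulations):
        """Return the next action to execute, given the current belief."""

class RolloutSolver(Solver):
    """Toy solver: score each action by one-step Monte-Carlo simulations from the belief."""
    def plan(self, belief, num_simulations=100):
        best_action, best_value = None, float("-inf")
        for action in self.model.actions():
            value = 0.0
            for _ in range(num_simulations):
                state = random.choice(belief)          # sample a state from the belief
                _, _, reward, _ = self.model.step(state, action)
                value += reward
            if value > best_value:
                best_action, best_value = action, value
        return best_action
```

    Because the solver only depends on the abstract model interface, a new planning problem can be added by implementing that interface (in OPPT's case, via plug-ins and configuration files) without modifying any solver code, and vice versa.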