10 research outputs found

    Control Theory Meets POMDPs: A Hybrid Systems Approach

    Get PDF
    Partially observable Markov decision processes (POMDPs) provide a modeling framework for a variety of sequential decision-making-under-uncertainty scenarios in artificial intelligence (AI). Since the states are not directly observable in a POMDP, decision making has to be performed based on the output of a Bayesian filter (continuous beliefs), which makes POMDPs intractable to solve and analyze. To overcome the complexity challenge of POMDPs, we apply techniques from control theory. Our contributions are fourfold: (i) we begin by casting the problem of analyzing a POMDP as that of analyzing the behavior of a discrete-time switched system. Then, (ii) in order to estimate the reachable belief space of a POMDP, i.e., the set of all possible belief evolutions given an initial belief distribution over the states and a set of actions and observations, we find over-approximations in terms of sub-level sets of Lyapunov-like functions. Furthermore, (iii) in order to verify safety and performance requirements of a given POMDP, we formulate a barrier certificate theorem…
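
    As a rough, hypothetical illustration of the switched-system view (not code from the paper): for a finite-state POMDP, each (action, observation) pair induces one mode of a discrete-time switched system on the belief simplex, namely a linear map followed by renormalization. The transition matrix T and observation matrix O below are toy placeholders.

```python
import numpy as np

def belief_update(b, T, O, a, o):
    """One Bayesian filter step for a finite-state POMDP.

    b : (n,) current belief over states
    T : (A, n, n) transition matrices, T[a][s, s'] = P(s' | s, a)
    O : (A, n, m) observation matrices, O[a][s', o] = P(o | s', a)

    Each (a, o) pair defines one 'mode' of the discrete-time
    switched system acting on the belief simplex.
    """
    unnormalized = O[a][:, o] * (b @ T[a])    # P(o|s',a) * sum_s P(s'|s,a) b(s)
    return unnormalized / unnormalized.sum()  # renormalize onto the simplex

# Tiny two-state, one-action, two-observation example (all numbers illustrative).
T = np.array([[[0.9, 0.1], [0.2, 0.8]]])
O = np.array([[[0.7, 0.3], [0.1, 0.9]]])
b = np.array([0.5, 0.5])
print(belief_update(b, T, O, a=0, o=1))
```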

    Decision making in an uncertain world

    Get PDF

    Multi-Policy Decision Making for Reliable Navigation in Dynamic Uncertain Environments

    Full text link
    Navigating everyday social environments in the presence of pedestrians and other dynamic obstacles remains one of the key challenges preventing mobile robots from leaving carefully designed spaces and entering our daily lives. The complex and tightly coupled interactions between these agents make the environment dynamic and unpredictable, posing a formidable problem for robot motion planning. Trajectory planning methods, supported by models of typical human behavior and personal space, often produce reasonable behavior. However, they do not account for the future closed-loop interactions of other agents with the trajectory being constructed. As a consequence, the trajectories are unable to anticipate cooperative interactions (such as a human yielding) or adverse interactions (such as the robot blocking the way). Ideally, the robot must account for coupled agent-agent interactions while reasoning about possible future outcomes, and then take actions that advance it towards its navigational goal without inconveniencing nearby pedestrians.

    Multi-Policy Decision Making (MPDM) is a novel framework for autonomous navigation in dynamic, uncertain environments in which the robot's trajectory is not explicitly planned; instead, the robot dynamically switches between a set of candidate closed-loop policies, allowing it to adapt to the different situations encountered in such environments. The candidate policies are evaluated based on short-term (five-second) forward simulations of samples drawn from the estimated distribution of the agents' current states. These forward simulations, and thereby the cost function, capture agent-agent interactions as well as agent-robot interactions, which depend on the ego-policy being evaluated. In this thesis, we propose MPDM as a new method for navigation amongst pedestrians by dynamically switching among a library of closed-loop policies.

    Due to real-time constraints, the robot's emergent behavior is directly affected by the quality of policy evaluation. Approximating how good a policy is based on only a few forward roll-outs is difficult, especially given the large space of possible pedestrian configurations and the sensitivity of the forward simulation to the sampled configurations. Traditional methods based on Monte-Carlo sampling often missed likely, high-cost outcomes, resulting in an over-optimistic evaluation of a policy and unreliable emergent behavior. By re-formulating policy evaluation as an optimization problem and enabling the quick discovery of potentially dangerous outcomes, we make MPDM more reliable and risk-aware.

    Even with the increased reliability, a major limitation is that MPDM requires the system designer to provide a set of carefully hand-crafted policies, since only a few policies can be evaluated reliably in real time. We radically enhance the expressivity of MPDM by allowing policies to have continuous-valued parameters, while still satisfying real-time constraints by quickly discovering promising policy parameters through a novel iterative gradient-based algorithm. Overall, we reformulate the traditional motion planning problem and paint it in a very different light: as a bilevel optimization problem in which the robot repeatedly discovers likely high-cost outcomes and adapts its policy parameters to avoid these outcomes. We demonstrate significant performance benefits through extensive experiments in simulation as well as on a physical robot platform operating in a semi-crowded environment.
    PhD, Computer Science & Engineering
    University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/150017/1/dhanvinm_1.pd
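
    A minimal, self-contained sketch of the policy-election loop described above, under stated assumptions: the 1-D world, the constant-speed "policies", the cost weights, and all function names are toy stand-ins for illustration, not the thesis implementation (which replaces the plain Monte-Carlo average below with an optimization that actively seeks out likely high-cost outcomes).

```python
import numpy as np

rng = np.random.default_rng(0)

HORIZON_S = 5.0   # short-term forward-simulation horizon (five seconds)
DT = 0.25         # illustrative integration step
N_SAMPLES = 8     # draws from the estimated distribution of agent states

def sample_agents():
    # One sampled pedestrian configuration: [position, velocity] in 1-D.
    return rng.normal([4.0, -0.5], [0.5, 0.2])

def forward_simulate(speed, agent, horizon):
    # Roll the robot (constant-speed "policy") and pedestrian forward in time.
    robot_x, traj = 0.0, []
    pos, vel = agent
    for _ in range(int(horizon / DT)):
        robot_x += speed * DT
        pos += vel * DT
        traj.append((robot_x, pos))
    return traj

def cost(traj):
    # Penalize near-collisions, reward progress (illustrative weights).
    closest = min(abs(r - p) for r, p in traj)
    progress = traj[-1][0]
    return 10.0 * max(0.0, 1.0 - closest) - progress

def evaluate_policy(policy):
    # Average cost over sampled agent configurations (plain Monte-Carlo here).
    return np.mean([cost(forward_simulate(policy, sample_agents(), HORIZON_S))
                    for _ in range(N_SAMPLES)])

candidate_policies = [0.5, 1.0, 1.5]  # library of closed-loop policies (speeds)
best = min(candidate_policies, key=evaluate_policy)
print("selected policy (speed):", best)
```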