
    Graceful Navigation for Mobile Robots in Dynamic and Uncertain Environments.

    The ability to navigate in everyday environments is a fundamental and necessary skill for any autonomous mobile agent that is intended to work with human users. The presence of pedestrians and other dynamic objects, however, makes the environment inherently dynamic and uncertain. To navigate in such environments, an agent must reason about the near future and make an optimal decision at each time step so that it can move safely toward the goal. Furthermore, any application intended to carry passengers must also move smoothly and comfortably, and the robot's behavior needs to be customizable to match the preferences of individual users. Despite decades of progress in motion planning and control, this remains a difficult challenge for existing methods. In this dissertation, we show that safe, comfortable, and customizable mobile robot navigation in dynamic and uncertain environments can be achieved via stochastic model predictive control. We view navigation in dynamic and uncertain environments as a continuous decision-making process in which an agent with short-term predictive capability reasons about its situation and makes an informed decision at each time step. The problem is formulated as an on-line, finite-horizon policy and trajectory optimization problem under uncertainty. With our formulation, planning and control become fully integrated, which allows direct optimization of the performance measure. Furthermore, with our approach the problem becomes easy to solve, which allows our algorithm to run in real time on a single core of a typical laptop with off-the-shelf optimization packages. The work presented in this thesis extends the state of the art in analytic control of mobile robots, sampling-based optimal path planning, and stochastic model predictive control. We believe that our work is a significant step toward safe and reliable autonomous navigation that is acceptable to human users.
    PhD thesis, Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120760/1/jongjinp_1.pd
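
    As a rough illustration of the stochastic model predictive control idea described above (not the dissertation's actual formulation), the Python sketch below samples finite-horizon control sequences for a unicycle-model robot, scores each rollout by goal progress plus a collision-risk penalty against a Gaussian prediction of a pedestrian's position, and applies only the first control of the best sequence before replanning. The dynamics model, cost weights, and sampling ranges are all illustrative assumptions.

        import numpy as np

        def rollout(x, controls, dt=0.1):
            """Propagate a simple unicycle model (hypothetical stand-in dynamics) over a
            control sequence. State x = [px, py, heading]; control = (speed, turn rate)."""
            traj = []
            for v, w in controls:
                x = x + dt * np.array([v * np.cos(x[2]), v * np.sin(x[2]), w])
                traj.append(x.copy())
            return np.array(traj)

        def expected_cost(traj, goal, ped_mean, ped_cov, w_goal=1.0, w_risk=50.0):
            """Terminal distance-to-goal plus a soft collision-risk penalty against a
            Gaussian prediction of the pedestrian's position (a single mean and covariance
            here, purely for illustration)."""
            goal_cost = np.linalg.norm(traj[-1, :2] - goal)
            d = traj[:, :2] - ped_mean
            inv_cov = np.linalg.inv(ped_cov)
            risk = np.exp(-0.5 * np.einsum('ij,jk,ik->i', d, inv_cov, d)).sum()
            return w_goal * goal_cost + w_risk * risk

        def smpc_step(x, goal, ped_mean, ped_cov, horizon=10, n_samples=200, rng=None):
            """One receding-horizon step: sample candidate control sequences, score each
            rollout's expected cost under the pedestrian uncertainty, and return only the
            first control of the best sequence (the rest is replanned at the next step)."""
            if rng is None:
                rng = np.random.default_rng()
            best_u, best_c = None, np.inf
            for _ in range(n_samples):
                controls = np.column_stack([rng.uniform(0.0, 1.0, horizon),    # speeds [m/s]
                                            rng.uniform(-1.0, 1.0, horizon)])  # turn rates [rad/s]
                c = expected_cost(rollout(x, controls), goal, ped_mean, ped_cov)
                if c < best_c:
                    best_c, best_u = c, controls[0]
            return best_u

        # Example: one planning step toward a goal 5 m ahead, pedestrian predicted at (2.5, 0.2).
        # u0 = smpc_step(np.zeros(3), np.array([5.0, 0.0]),
        #                np.array([2.5, 0.2]), 0.2 * np.eye(2))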

    Real-Time Predictive Modeling and Robust Avoidance of Pedestrians with Uncertain, Changing Intentions

    To plan safe trajectories in urban environments, autonomous vehicles must be able to quickly assess the future intentions of dynamic agents. Pedestrians are particularly challenging to model, as their motion patterns are often uncertain and/or unknown a priori. This paper presents a novel changepoint detection and clustering algorithm that, when coupled with offline unsupervised learning of a Gaussian process mixture model (DPGP), enables quick detection of changes in intent and online learning of motion patterns not seen in prior training data. The resulting long-term movement predictions demonstrate improved accuracy relative to offline learning alone, in terms of both intent and trajectory prediction. By embedding these predictions within a chance-constrained motion planner, trajectories that are probabilistically safe with respect to pedestrian motions can be identified in real time. Hardware experiments demonstrate that this approach can accurately predict pedestrian motion patterns from onboard sensor/perception data and facilitate robust navigation within a dynamic environment.
    Comment: Submitted to the 2014 International Workshop on the Algorithmic Foundations of Robotics
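
    The sketch below is a simplified, hypothetical rendering of the pipeline this abstract describes, not the paper's method: a soft assignment of an observed pedestrian track to a library of motion patterns (standing in for the DPGP mixture posterior), a crude changepoint test on recent likelihoods, and a sample-based chance constraint that rejects robot trajectories whose estimated collision probability exceeds a risk bound delta. The Gaussian velocity likelihood, the thresholds, and the function names are assumptions made for illustration.

        import numpy as np

        def pattern_loglik(obs_vel, pattern_vel, sigma=0.3):
            """Log-likelihood of an observed velocity track under one motion pattern,
            modeled as an isotropic Gaussian around the pattern's mean velocity field."""
            d = np.asarray(obs_vel) - np.asarray(pattern_vel)
            return -0.5 * np.sum(d * d) / sigma**2 - d.size * np.log(sigma)

        def classify_intent(obs_vel, patterns):
            """Soft assignment of the observed track to each candidate motion pattern
            (a stand-in for posterior weights under a GP mixture model)."""
            logp = np.array([pattern_loglik(obs_vel, p) for p in patterns])
            w = np.exp(logp - logp.max())
            return w / w.sum()

        def intent_changed(recent_logliks, threshold=-5.0):
            """Crude changepoint test: flag a change in intent when the best pattern's
            recent average log-likelihood drops below a threshold."""
            return float(np.mean(recent_logliks)) < threshold

        def chance_constrained_ok(plan_xy, ped_samples, radius=0.5, delta=0.05):
            """Accept a candidate robot trajectory (plan_xy: horizon x 2) only if the
            fraction of sampled pedestrian futures (ped_samples: n x horizon x 2) that
            ever come within `radius` stays below the risk bound `delta`."""
            plan_xy = np.asarray(plan_xy)
            dists = np.linalg.norm(ped_samples - plan_xy[None, :, :], axis=-1)
            collide = dists.min(axis=1) < radius
            return collide.mean() <= delta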

    Combining Subgoal Graphs with Reinforcement Learning to Build a Rational Pathfinder

    In this paper, we present a hierarchical path planning framework called SG-RL (subgoal graphs-reinforcement learning) to plan rational paths for agents maneuvering in continuous and uncertain environments. By "rational", we mean paths that are (1) planned efficiently, eliminating first-move lag, and (2) collision-free and smooth, with the agents' kinematic constraints satisfied. SG-RL works at two levels. At the first level, SG-RL uses a geometric path-planning method, Simple Subgoal Graphs (SSG), to efficiently find optimal abstract paths, also called subgoal sequences. At the second level, SG-RL uses an RL method, Least-Squares Policy Iteration (LSPI), to learn near-optimal motion-planning policies that generate kinematically feasible and collision-free trajectories between adjacent subgoals. The first advantage of the proposed method is that SSG mitigates the sparse-reward and local-minimum problems faced by RL agents, so LSPI can be used to generate paths in complex environments. The second advantage is that when the environment changes slightly (e.g., unexpected obstacles appear), SG-RL does not need to reconstruct subgoal graphs and replan subgoal sequences with SSG, since LSPI can handle such uncertainty by exploiting its generalization ability. Simulation experiments in representative scenarios demonstrate that, compared with existing methods, SG-RL works well on large-scale maps, with relatively low action-switching frequencies and shorter path lengths, and that it can cope with small changes in the environment. We further demonstrate that the design of the reward function and the types of training environments are important factors for learning feasible policies.
    Comment: 20 pages
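
    A minimal two-level sketch of the SG-RL structure described above, under the assumptions that the high level can be approximated by Dijkstra search over a subgoal graph whose nodes are 2-D waypoints, and that the low level is an arbitrary learned local controller passed in as a callable (a stand-in for the LSPI policy); the actual SSG construction and LSPI training are not reproduced here.

        import heapq
        import numpy as np

        def shortest_subgoal_sequence(graph, start, goal):
            """First level: Dijkstra over a subgoal graph, graph[u] = [(v, cost), ...].
            This stands in for the Simple Subgoal Graphs (SSG) search in the abstract."""
            dist, prev = {start: 0.0}, {}
            pq = [(0.0, start)]
            while pq:
                d, u = heapq.heappop(pq)
                if u == goal:
                    break
                if d > dist.get(u, float('inf')):
                    continue
                for v, c in graph.get(u, []):
                    nd = d + c
                    if nd < dist.get(v, float('inf')):
                        dist[v], prev[v] = nd, u
                        heapq.heappush(pq, (nd, v))
            if goal != start and goal not in prev:
                return None                      # goal unreachable in the subgoal graph
            seq, node = [goal], goal
            while node != start:
                node = prev[node]
                seq.append(node)
            return seq[::-1]

        def follow_subgoals(state, subgoals, local_policy, steps_per_leg=100, tol=0.2):
            """Second level: a learned local controller (any callable mapping
            (state, target) to the next state, standing in for the LSPI policy)
            drives the agent between adjacent subgoals."""
            trajectory = [state]
            for target in subgoals[1:]:
                for _ in range(steps_per_leg):
                    state = local_policy(state, target)   # one kinematically feasible step
                    trajectory.append(state)
                    if np.linalg.norm(np.asarray(state[:2]) - np.asarray(target)) < tol:
                        break
            return trajectory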