
    Combining Subgoal Graphs with Reinforcement Learning to Build a Rational Pathfinder

    In this paper, we present a hierarchical path-planning framework called SG-RL (subgoal graphs + reinforcement learning) that plans rational paths for agents maneuvering in continuous and uncertain environments. By "rational" we mean (1) efficient planning that eliminates first-move lags, and (2) collision-free, smooth paths that satisfy the agents' kinematic constraints. SG-RL works at two levels. At the first level, SG-RL uses a geometric path-planning method, Simple Subgoal Graphs (SSG), to efficiently find optimal abstract paths, also called subgoal sequences. At the second level, SG-RL uses a reinforcement-learning method, Least-Squares Policy Iteration (LSPI), to learn near-optimal motion-planning policies that generate kinematically feasible, collision-free trajectories between adjacent subgoals. The first advantage of the proposed method is that SSG overcomes the sparse-reward and local-minima problems faced by RL agents, so LSPI can be used to generate paths in complex environments. The second advantage is that when the environment changes slightly (e.g., unexpected obstacles appear), SG-RL does not need to reconstruct subgoal graphs and replan subgoal sequences with SSG, since LSPI can handle such uncertainties by exploiting the generalization ability of the learned policy. Simulation experiments in representative scenarios demonstrate that, compared with existing methods, SG-RL works well on large-scale maps, with relatively low action-switching frequencies and shorter path lengths, and copes with small changes in the environment. We further show that the design of the reward function and the choice of training environments are important factors in learning feasible policies.
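
    Below is a minimal, self-contained Python sketch of the two-level SG-RL loop described in the abstract. It is an illustration under assumptions, not the authors' implementation: plan_subgoals and greedy_policy are hypothetical placeholders for the paper's Simple Subgoal Graphs planner and the LSPI-learned policy, respectively.

import math

def plan_subgoals(start, goal, grid):
    """Level 1 placeholder: in SG-RL this is Simple Subgoal Graphs (SSG),
    which returns an optimal abstract path (a subgoal sequence). Here we
    simply return the goal."""
    return [goal]

def greedy_policy(state, subgoal, step=0.5):
    """Level 2 placeholder for the LSPI-trained policy: take a fixed step
    toward the subgoal. The real policy is greedy with respect to a learned
    Q-function and respects the agent's kinematic constraints."""
    dx, dy = subgoal[0] - state[0], subgoal[1] - state[1]
    d = math.hypot(dx, dy) or 1.0
    return (step * dx / d, step * dy / d)

def sg_rl_plan(start, goal, grid, policy=greedy_policy, tol=0.5, max_steps=1000):
    """Two-level planning: compute a subgoal sequence once, then let the
    learned local policy steer between adjacent subgoals. Because the policy
    generalizes, small map changes do not force replanning at level 1."""
    subgoals = plan_subgoals(start, goal, grid)
    state, trajectory = start, [start]
    for sg in subgoals:
        for _ in range(max_steps):
            if math.hypot(sg[0] - state[0], sg[1] - state[1]) <= tol:
                break
            ax, ay = policy(state, sg)
            state = (state[0] + ax, state[1] + ay)
            trajectory.append(state)
    return trajectory

path = sg_rl_plan((0.0, 0.0), (10.0, 5.0), grid=None)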

    Asymptotically near-optimal RRT for fast, high-quality, motion planning

    We present Lower Bound Tree-RRT (LBT-RRT), a single-query sampling-based algorithm that is asymptotically near-optimal: the solution extracted from LBT-RRT converges to a solution within an approximation factor of 1+epsilon of the optimal one. Our algorithm allows for a continuous interpolation between the fast RRT algorithm and the asymptotically optimal RRT* and RRG algorithms. When the approximation factor is 1 (i.e., no approximation is allowed), LBT-RRT behaves like RRG. When the approximation factor is unbounded, LBT-RRT behaves like RRT. In between, LBT-RRT produces paths of higher quality than RRT while running faster than RRT*. This is done by maintaining a tree that is a sub-graph of the RRG roadmap together with a second, auxiliary graph, which we call the lower-bound graph. The combination of the two roadmaps, which is cheaper to maintain than the roadmap maintained by RRT*, efficiently guarantees asymptotic near-optimality. We suggest using LBT-RRT for high-quality, anytime motion planning. We demonstrate the performance of the algorithm in scenarios ranging from 3 to 12 degrees of freedom and show that even for small approximation factors, it produces high-quality solutions (comparable to RRG and RRT*) with little running-time overhead compared to RRT.
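
    The key mechanism of LBT-RRT, as described above, is an invariant tying each node's cost-to-come in the tree to its cost-to-come in the lower-bound graph. The Python sketch below illustrates that invariant only, under assumptions: Node and consider_edge are hypothetical names, and the full algorithm (sampling, steering, neighbor selection) is omitted.

import math

class Node:
    def __init__(self, conf):
        self.conf = conf            # configuration, e.g., a tuple of floats
        self.parent = None          # parent in the tree (sub-graph of RRG)
        self.cost_tree = math.inf   # cost-to-come in the tree
        self.cost_lb = math.inf     # cost-to-come in the lower-bound graph

def consider_edge(u, v, eps):
    """When RRG would add the edge (u, v): always relax the lower-bound
    graph, but rewire the tree only when needed to keep the invariant
    cost_tree(v) <= (1 + eps) * cost_lb(v). With eps = 0 every violation
    triggers a rewire (RRG-like quality); with eps unbounded the tree is
    never rewired (plain RRT behavior)."""
    c = math.dist(u.conf, v.conf)
    # Lower-bound graph: RRG-style relaxation, always take the cheaper route.
    v.cost_lb = min(v.cost_lb, u.cost_lb + c)
    if v.parent is None and v.cost_tree == math.inf:
        # First attachment of v to the tree: a plain RRT extend, for any eps.
        v.parent, v.cost_tree = u, u.cost_tree + c
    elif v.cost_tree > (1 + eps) * v.cost_lb and u.cost_tree + c < v.cost_tree:
        # Lazy rewiring: only when the (1 + eps) invariant would break.
        v.parent, v.cost_tree = u, u.cost_tree + c

root = Node((0.0, 0.0)); root.cost_tree = root.cost_lb = 0.0
v = Node((3.0, 4.0))
consider_edge(root, v, eps=0.1)   # attaches v with cost 5.0 in both graphs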