
    Sampling-Based Methods for Factored Task and Motion Planning

    This paper presents a general-purpose formulation of a large class of discrete-time planning problems, with hybrid state and control spaces, as factored transition systems. Factoring allows state transitions to be described as the intersection of several constraints, each affecting a subset of the state and control variables. Robotic manipulation problems with many movable objects involve constraints that each affect only a few variables at a time and therefore exhibit a large amount of factoring. We develop a theoretical framework for solving factored transition systems with sampling-based algorithms. The framework characterizes conditions on the submanifold in which solutions lie, leading to a characterization of robust feasibility that incorporates dimensionality-reducing constraints. It then connects those conditions to corresponding conditional samplers that can be composed to produce values on this submanifold. We present two domain-independent, probabilistically complete planning algorithms that take, as input, a set of conditional samplers. We demonstrate the empirical efficiency of these algorithms on a set of challenging task and motion planning problems involving picking, placing, and pushing.
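    The composition of conditional samplers described above can be illustrated with a short, hypothetical sketch: a pose sampler constrained to a support surface feeds a grasp sampler conditioned on that pose, so that the composed outputs lie on the intersection of both constraints. The names `sample_stable_pose`, `sample_grasp`, and the surface dictionary are illustrative assumptions, not the paper's API.

```python
# A minimal, hypothetical sketch of composing conditional samplers, loosely
# following the factored formulation described above. The samplers, the
# surface dictionary, and all numeric values are illustrative assumptions.
import random

def sample_stable_pose(surface):
    """Conditional sampler: object poses constrained to lie on a surface.

    The constraint fixes z to the surface height, so the set of valid poses
    is lower-dimensional than the free pose space and would (almost) never
    be hit by uniform sampling of the full space.
    """
    x = random.uniform(*surface["x_range"])
    y = random.uniform(*surface["y_range"])
    theta = random.uniform(-3.14159, 3.14159)
    return {"x": x, "y": y, "z": surface["height"], "theta": theta}

def sample_grasp(pose):
    """Conditional sampler: grasps conditioned on an already-sampled pose."""
    approach = random.choice(["top", "side"])
    return {"target": pose, "approach": approach, "standoff": 0.05}

def compose_samplers(surface, n=5):
    """Chain the samplers so the output of one conditions the next, producing
    values that satisfy the intersection of both constraints."""
    candidates = []
    for _ in range(n):
        pose = sample_stable_pose(surface)
        grasp = sample_grasp(pose)
        candidates.append((pose, grasp))
    return candidates

if __name__ == "__main__":
    table = {"x_range": (0.0, 1.0), "y_range": (0.0, 0.5), "height": 0.75}
    for pose, grasp in compose_samplers(table):
        print(pose, grasp)
```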

    Generalizing Informed Sampling for Asymptotically Optimal Sampling-based Kinodynamic Planning via Markov Chain Monte Carlo

    Asymptotically optimal motion planners such as RRT* have been shown to incrementally approximate the shortest path between start and goal states. Once an initial solution is found, their performance can be dramatically improved by restricting subsequent samples to regions of the state space that can potentially improve the current solution. When the motion planning problem lies in a Euclidean space, this region $X_{inf}$, called the informed set, can be sampled directly. However, when planning with differential constraints in non-Euclidean state spaces, no analytic solution exists for sampling $X_{inf}$ directly. State-of-the-art approaches to sampling $X_{inf}$ in such domains, such as Hierarchical Rejection Sampling (HRS), may still be slow in high-dimensional state spaces. This may cause the planning algorithm to spend most of its time trying to produce samples in $X_{inf}$ rather than exploring it. In this paper, we suggest an alternative approach to producing samples in the informed set $X_{inf}$ for a wide range of settings. Our main insight is to recast this problem as one of sampling uniformly within the sub-level set of an implicit non-convex function. This recasting enables us to apply Monte Carlo sampling methods, used very effectively in the Machine Learning and Optimization communities, to solve our problem. We show, for a wide range of scenarios, that using our sampler can accelerate the convergence rate to high-quality solutions in high-dimensional problems.
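    As a rough illustration of sampling a sub-level set with Markov Chain Monte Carlo, the sketch below runs a random-walk Metropolis chain whose target is uniform on $X_{inf}$: proposals that leave the set are rejected and the chain stays put. The Euclidean lower bound standing in for the implicit cost function, and all parameter values, are assumptions for this toy example, not the paper's cost model.

```python
# A minimal sketch (not the paper's method) of MCMC sampling restricted to an
# informed sub-level set X_inf = { x : f(x) <= c_best }, here with the simple
# Euclidean lower bound f(x) = |x - start| + |x - goal| standing in for the
# implicit, possibly non-convex cost function.
import numpy as np

def in_informed_set(x, start, goal, c_best):
    f = np.linalg.norm(x - start) + np.linalg.norm(x - goal)
    return f <= c_best

def mcmc_informed_samples(start, goal, c_best, n_samples=1000, step=0.2, seed=0):
    """Random-walk Metropolis with a uniform target on X_inf: proposals outside
    the sub-level set are rejected and the chain stays put, so the stationary
    distribution is uniform over the informed set."""
    rng = np.random.default_rng(seed)
    x = 0.5 * (start + goal)              # the midpoint is always inside X_inf
    samples = []
    while len(samples) < n_samples:
        proposal = x + step * rng.standard_normal(x.shape)
        if in_informed_set(proposal, start, goal, c_best):
            x = proposal                  # accept (the target is flat inside the set)
        samples.append(x.copy())
    return np.array(samples)

if __name__ == "__main__":
    start, goal = np.zeros(6), np.ones(6)  # a 6-D toy state space
    c_best = 1.5 * np.linalg.norm(goal - start)
    pts = mcmc_informed_samples(start, goal, c_best)
    print(pts.shape, pts.mean(axis=0))
```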

    Physics-based Motion Planning with Temporal Logic Specifications

    One of the main aims of robotics today is to provide robots with a great degree of autonomy. A fundamental step in this direction is to give them the ability to plan in discrete and continuous spaces in order to find the motions required to complete a complex task. In this line, some recent approaches describe tasks with Linear Temporal Logic (LTL) and reason on discrete actions to guide sampling-based motion planning, with the aim of finding dynamically feasible motions that satisfy the temporal-logic task specifications. The present paper proposes an LTL planning approach enhanced with the use of ontologies to describe and reason about the task, on the one hand, and that includes physics-based motion planning to allow the purposeful manipulation of objects, on the other hand. The proposal has been implemented and is illustrated with didactic examples involving a mobile robot in simple scenarios where some of the goal regions are occupied by objects that must be removed in order to fulfill the task. Comment: The 20th World Congress of the International Federation of Automatic Control, 9-14 July 201
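    To make the idea of guiding discrete planning with a temporal-logic task concrete, the sketch below hand-encodes the co-safe formula F(a & F b) ("eventually reach region a, then region b") as a tiny automaton and searches the product of a toy roadmap with that automaton. The roadmap, labels, and automaton are didactic assumptions, not the paper's ontology-based machinery or physics-based planner.

```python
# A didactic sketch of temporal-logic-guided discrete planning, not the
# paper's implementation: the co-safe LTL formula F(a & F b) is hand-encoded
# as a tiny automaton and a breadth-first search runs over the product of a
# toy roadmap and that automaton.
from collections import deque

# Toy roadmap: nodes labelled with the atomic propositions they satisfy.
edges = {"start": ["corridor"], "corridor": ["start", "a", "b"],
         "a": ["corridor"], "b": ["corridor"]}
labels = {"start": set(), "corridor": set(), "a": {"a"}, "b": {"b"}}

def automaton_step(q, props):
    """Automaton for F(a & F b): q0 --a--> q1 --b--> acc."""
    if q == "q0" and "a" in props:
        q = "q1"
    if q == "q1" and "b" in props:
        q = "acc"
    return q

def plan(start_node="start"):
    """BFS over (roadmap node, automaton state) product states."""
    init = (start_node, automaton_step("q0", labels[start_node]))
    queue, parents = deque([init]), {init: None}
    while queue:
        state = queue.popleft()
        node, q = state
        if q == "acc":                      # accepting: the task is satisfied
            path = []
            while state is not None:
                path.append(state[0])
                state = parents[state]
            return list(reversed(path))
        for nxt in edges[node]:
            succ = (nxt, automaton_step(q, labels[nxt]))
            if succ not in parents:
                parents[succ] = state
                queue.append(succ)
    return None

if __name__ == "__main__":
    print(plan())   # e.g. ['start', 'corridor', 'a', 'corridor', 'b']
```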

    Batch Informed Trees (BIT*): Sampling-based Optimal Planning via the Heuristically Guided Search of Implicit Random Geometric Graphs

    In this paper, we present Batch Informed Trees (BIT*), a planning algorithm that unifies graph-based and sampling-based planning techniques. By recognizing that a set of samples describes an implicit random geometric graph (RGG), we are able to combine the efficient ordered nature of graph-based techniques, such as A*, with the anytime scalability of sampling-based algorithms, such as Rapidly-exploring Random Trees (RRT). BIT* uses a heuristic to efficiently search a series of increasingly dense implicit RGGs while reusing previous information. It can be viewed as an extension of incremental graph-search techniques, such as Lifelong Planning A* (LPA*), to continuous problem domains, as well as a generalization of existing sampling-based optimal planners. BIT* is shown to be probabilistically complete and asymptotically optimal. We demonstrate the utility of BIT* on simulated random worlds in $\mathbb{R}^2$ and $\mathbb{R}^8$ and on manipulation problems on CMU's HERB, a 14-DOF two-armed robot. On these problems, BIT* finds better solutions faster than RRT, RRT*, Informed RRT*, and Fast Marching Trees (FMT*), with faster anytime convergence towards the optimum, especially in high dimensions. Comment: 8 pages, 6 figures. Video available at http://www.youtube.com/watch?v=TQIoCC48gp
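    A heavily simplified sketch of two of BIT*'s ingredients, processing samples in batches as an implicit geometric graph and expanding edges in order of the estimated solution cost g(v) + c(v, x) + h(x), is given below. It omits the algorithm's pruning, rewiring, and reuse of information across batches, and all names and parameter values are illustrative assumptions rather than the published implementation.

```python
# A heavily simplified sketch of BIT*-style search, not the published
# algorithm: samples are processed in batches as an implicit geometric graph
# and edges are expanded in order of the estimate g(v) + c(v, x) + h(x).
# Pruning, rewiring, and reuse of information across batches are omitted.
import heapq
import numpy as np

def batch_heuristic_search(start, goal, is_free, batch_size=200,
                           radius=0.3, batches=3, seed=0):
    rng = np.random.default_rng(seed)
    dist = lambda a, b: float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
    tree = {tuple(start): 0.0}                  # node -> cost-to-come g
    parent = {tuple(start): None}
    best = np.inf                               # cost of the best solution so far

    for _ in range(batches):
        samples = [tuple(p) for p in rng.uniform(0, 1, (batch_size, 2))]
        samples.append(tuple(goal))
        # Edge queue over the implicit graph, ordered by g(v) + c(v, x) + h(x),
        # where h is the Euclidean distance-to-goal heuristic.
        queue = []
        for v, g in tree.items():
            for x in samples:
                c = dist(v, x)
                if c <= radius:
                    heapq.heappush(queue, (g + c + dist(x, goal), v, x))
        while queue:
            f, v, x = heapq.heappop(queue)
            if f >= best:                       # this edge cannot improve the solution
                break
            g_new = tree[v] + dist(v, x)
            if g_new < tree.get(x, np.inf) and is_free(v, x):
                tree[x], parent[x] = g_new, v
                if x == tuple(goal):
                    best = g_new
                for y in samples:               # enqueue the new node's outgoing edges
                    c = dist(x, y)
                    if c <= radius:
                        heapq.heappush(queue, (g_new + c + dist(y, goal), x, y))
    return best, parent

if __name__ == "__main__":
    collision_free = lambda a, b: True          # toy world with no obstacles
    cost, _ = batch_heuristic_search((0.05, 0.05), (0.95, 0.95), collision_free)
    print("solution cost:", cost)
```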

    PRM-RL: Long-range Robotic Navigation Tasks by Combining Reinforcement Learning and Sampling-based Planning

    We present PRM-RL, a hierarchical method for long-range navigation task completion that combines sampling-based path planning with reinforcement learning (RL). The RL agents learn short-range, point-to-point navigation policies that capture robot dynamics and task constraints without knowledge of the large-scale topology. Next, the sampling-based planners provide roadmaps that connect robot configurations which can be successfully navigated by the RL agent. The same RL agents are used to control the robot under the direction of the planner, enabling long-range navigation. We use Probabilistic Roadmaps (PRMs) as the sampling-based planner. The RL agents are constructed using feature-based and deep neural net policies in continuous state and action spaces. We evaluate PRM-RL, both in simulation and on-robot, on two navigation tasks with non-trivial robot dynamics: end-to-end differential-drive indoor navigation in office environments, and aerial cargo delivery in urban environments with load displacement constraints. Our results show improvement in task completion over both the RL agents on their own and traditional sampling-based planners. In the indoor navigation task, PRM-RL successfully completes trajectories up to 215 m long under noisy sensor conditions, and the aerial cargo delivery task completes flights of over 1000 m without violating the task constraints, in an environment 63 million times larger than the one used in training. Comment: 9 pages, 7 figures
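    The defining choice in PRM-RL, adding a roadmap edge only when the learned point-to-point policy can actually traverse it, can be sketched as below. The `policy` function is a trivial go-to-goal stand-in for the paper's RL agents, and `rollout_succeeds` replaces a full dynamics simulator; both, along with all parameter values, are assumptions for illustration.

```python
# An illustrative sketch of the PRM-RL idea under stated assumptions: roadmap
# nodes are connected only if a learned point-to-point policy can drive the
# robot between them under noise. The policy and rollout below are stand-ins,
# not the paper's RL agents or simulator.
import numpy as np

def policy(state, goal):
    """Stand-in for a learned short-range policy: step toward the goal."""
    direction = goal - state
    norm = np.linalg.norm(direction)
    return direction / norm * min(norm, 0.05) if norm > 0 else np.zeros_like(state)

def rollout_succeeds(start, goal, noise=0.01, max_steps=200, tol=0.05, seed=0):
    """Accept an edge only if the noisy closed-loop rollout reaches the goal."""
    rng = np.random.default_rng(seed)
    state = start.copy()
    for _ in range(max_steps):
        state = state + policy(state, goal) + noise * rng.standard_normal(2)
        if np.linalg.norm(state - goal) < tol:
            return True
    return False

def build_prm_rl(n_nodes=30, connect_radius=0.4, seed=1):
    rng = np.random.default_rng(seed)
    nodes = rng.uniform(0, 1, (n_nodes, 2))
    edges = []
    for i in range(n_nodes):
        for j in range(i + 1, n_nodes):
            if np.linalg.norm(nodes[i] - nodes[j]) < connect_radius:
                # The policy rollout, not a straight-line collision check,
                # decides whether this edge goes into the roadmap.
                if rollout_succeeds(nodes[i], nodes[j]):
                    edges.append((i, j))
    return nodes, edges

if __name__ == "__main__":
    nodes, edges = build_prm_rl()
    print(f"{len(nodes)} nodes, {len(edges)} policy-verified edges")
```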