Batch Informed Trees (BIT*): Sampling-based Optimal Planning via the Heuristically Guided Search of Implicit Random Geometric Graphs
In this paper, we present Batch Informed Trees (BIT*), a planning algorithm
based on unifying graph- and sampling-based planning techniques. By recognizing
that a set of samples describes an implicit random geometric graph (RGG), we
are able to combine the efficient ordered nature of graph-based techniques,
such as A*, with the anytime scalability of sampling-based algorithms, such as
Rapidly-exploring Random Trees (RRT).
BIT* uses a heuristic to efficiently search a series of increasingly dense
implicit RGGs while reusing previous information. It can be viewed as an
extension of incremental graph-search techniques, such as Lifelong Planning A*
(LPA*), to continuous problem domains, as well as a generalization of existing
sampling-based optimal planners. BIT* is shown to be probabilistically
complete and asymptotically optimal.
We demonstrate the utility of BIT* on simulated random worlds and on
manipulation problems on CMU's HERB, a 14-DOF two-armed robot. On these
problems, BIT* finds better solutions faster than RRT, RRT*, Informed RRT*,
and Fast Marching Trees (FMT*), with faster anytime convergence towards the
optimum, especially in high dimensions.
Comment: 8 pages, 6 figures. Video available at
http://www.youtube.com/watch?v=TQIoCC48gp
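The core idea, treating a batch of samples as an implicit random geometric graph (RGG) and searching it in A*-like order of f = g + h, can be sketched as follows. This is an illustrative fragment under stated assumptions, not the full BIT* algorithm: the function name, sample count, and connection radius are all invented for the example, and there is no obstacle checking, pruning, or reuse of information across batches.

```python
import heapq
import math
import random

def heuristic_batch_search(start, goal, n_samples=200, radius=0.35, seed=0):
    """Search one batch of samples as an implicit RGG, expanding vertices in
    A*-like order of f = g + h. A hypothetical helper, not full BIT*
    (no obstacles, no pruning, no reuse across batches)."""
    rng = random.Random(seed)
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    # The sample set implicitly defines the graph; edges are never stored.
    samples = [start, goal] + [(rng.random(), rng.random()) for _ in range(n_samples)]
    g = {start: 0.0}                   # cost-to-come
    pq = [(dist(start, goal), start)]  # f = g + Euclidean heuristic h
    while pq:
        _, v = heapq.heappop(pq)
        if v == goal:
            return g[goal]
        for w in samples:              # neighbours in the implicit RGG
            if w != v and dist(v, w) <= radius:
                cand = g[v] + dist(v, w)
                if cand < g.get(w, float('inf')):
                    g[w] = cand
                    heapq.heappush(pq, (cand + dist(w, goal), w))
    return float('inf')                # batch too sparse to connect

cost = heuristic_batch_search((0.1, 0.1), (0.9, 0.9))
```

Because the Euclidean heuristic is admissible and consistent, the returned cost is optimal over the edges of this one batch; BIT* additionally refines such solutions across denser and denser batches.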
Landmark Guided Probabilistic Roadmap Queries
A landmark-based heuristic is investigated for reducing the query-phase
run-time of the probabilistic roadmap (PRM) motion planning method. The
heuristic is generated by storing minimum spanning trees from a small number
of vertices within the PRM graph and using these trees to approximate the
cost of a shortest path between any two vertices of the graph. The
intermediate step of preprocessing the graph increases the time and memory
requirements of the classical motion planning technique in exchange for
speeding up individual queries, making the method advantageous in multi-query
applications. This paper investigates these trade-offs on PRM graphs
constructed in randomized environments as well as in a practical manipulator
simulation. We conclude that the method is preferable to Dijkstra's algorithm
or the A* algorithm with conventional heuristics in multi-query applications.
Comment: 7 pages
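The preprocess-then-query pattern described above resembles the classic ALT (A*, landmarks, triangle inequality) construction. A minimal sketch, assuming exact shortest-path distances from each landmark (the paper stores minimum spanning trees, and the graph, landmark choice, and function names here are all illustrative):

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances from src over a weighted adjacency dict."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue                     # stale queue entry
        for v, w in adj[u].items():
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def landmark_heuristic(landmark_dists, u, t):
    """Admissible lower bound on d(u, t): by the triangle inequality,
    |d(L, u) - d(L, t)| <= d(u, t) for every landmark L."""
    return max(abs(d[u] - d[t]) for d in landmark_dists)

# Toy roadmap: preprocess two landmarks, then bound a query distance.
adj = {
    'a': {'b': 1.0, 'c': 4.0},
    'b': {'a': 1.0, 'c': 2.0, 'd': 5.0},
    'c': {'a': 4.0, 'b': 2.0, 'd': 1.0},
    'd': {'b': 5.0, 'c': 1.0},
}
landmark_dists = [dijkstra(adj, L) for L in ('a', 'd')]
bound = landmark_heuristic(landmark_dists, 'a', 'd')  # lower bound on d(a, d)
true_dist = dijkstra(adj, 'a')['d']
```

Feeding this lower bound to A* as the heuristic is what trades preprocessing time and memory for faster individual queries.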
Probabilistic completeness of RRT for geometric and kinodynamic planning with forward propagation
The Rapidly-exploring Random Tree (RRT) algorithm has been one of the most
prevalent and popular motion-planning techniques for two decades now.
Surprisingly, in spite of its centrality, there has been an active debate
about the conditions under which RRT is probabilistically complete. We
provide two new proofs of probabilistic completeness (PC) of RRT with a
reduced set of assumptions. The first applies to the purely geometric
setting, where we require only that the solution path has a certain clearance
from the obstacles. For the kinodynamic case with forward propagation of
random controls and durations, we additionally assume only mild
Lipschitz-continuity conditions. These proofs fill a gap in the study of RRT
itself. They also lay sound foundations for a variety of more recent and
alternative sampling-based methods, whose PC property relies on that of RRT.
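For concreteness, the basic geometric RRT loop whose completeness is at issue can be sketched as follows. This is a minimal unit-square version with a caller-supplied local planner; the function name, step size, and tolerances are illustrative assumptions, not taken from the paper.

```python
import math
import random

def rrt(start, goal, collision_free, max_iters=2000, step=0.05,
        goal_tol=0.05, seed=1):
    """Minimal geometric RRT in the unit square (an illustrative sketch).
    Returns the tree as a child -> parent dict and the goal-region vertex
    if one was reached, else None."""
    rng = random.Random(seed)
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    parent = {start: None}
    for _ in range(max_iters):
        x_rand = (rng.random(), rng.random())          # uniform sample
        x_near = min(parent, key=lambda v: dist(v, x_rand))
        d = dist(x_near, x_rand)
        if d == 0:
            continue
        t = min(1.0, step / d)                          # steer at most `step`
        x_new = (x_near[0] + t * (x_rand[0] - x_near[0]),
                 x_near[1] + t * (x_rand[1] - x_near[1]))
        if collision_free(x_near, x_new):               # local planner check
            parent[x_new] = x_near
            if dist(x_new, goal) <= goal_tol:
                return parent, x_new                    # goal region reached
    return parent, None

tree, reached = rrt((0.1, 0.1), (0.9, 0.9), lambda a, b: True)
```

The clearance assumption in the geometric proof guarantees that, with enough samples, such steered extensions can track a solution path without leaving free space.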
Generalizing Informed Sampling for Asymptotically Optimal Sampling-based Kinodynamic Planning via Markov Chain Monte Carlo
Asymptotically-optimal motion planners such as RRT* have been shown to
incrementally approximate the shortest path between start and goal states. Once
an initial solution is found, their performance can be dramatically improved by
restricting subsequent samples to regions of the state space that can
potentially improve the current solution. When the motion planning problem
lies in a Euclidean space, this region, called the informed set, can be
sampled directly. However, when planning with differential constraints in
non-Euclidean state spaces, no analytic solution exists for sampling it
directly.
State-of-the-art approaches to sampling in such domains, such as
Hierarchical Rejection Sampling (HRS), may still be slow in high-dimensional
state spaces. This may cause the planning algorithm to spend most of its time
trying to produce samples in the informed set rather than explore it. In this
paper, we suggest an alternative approach to produce samples in the informed
set for a wide range of settings. Our main insight is to recast this problem
as one of sampling uniformly within the sub-level set of an implicit
non-convex function. This recasting enables us to apply Monte Carlo sampling
methods, used very effectively in the Machine Learning and Optimization
communities, to solve our problem. We show for a wide range of scenarios that
using our sampler can accelerate the convergence rate to high-quality solutions
in high-dimensional problems.
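Uniform sampling over a sub-level set admits a particularly simple Markov chain: a random-walk Metropolis sampler whose target density is the indicator of the set, so a symmetric proposal is accepted exactly when it stays inside. A 2-D Euclidean sketch under that assumption (the paper's setting is non-Euclidean cost sub-level sets, and every name and parameter here is invented for illustration):

```python
import math
import random

def mcmc_informed_samples(start, goal, c_best, n=500, sigma=0.1,
                          burn=100, seed=2):
    """Random-walk Metropolis targeting the uniform density over the informed
    set {x : |x - start| + |x - goal| <= c_best}. Illustrative 2-D sketch;
    rejected proposals correctly repeat the current state."""
    rng = random.Random(seed)
    f = lambda x: math.dist(x, start) + math.dist(x, goal)
    x = start                          # start always lies inside the set
    out = []
    for i in range(burn + n):
        prop = (x[0] + rng.gauss(0, sigma), x[1] + rng.gauss(0, sigma))
        if f(prop) <= c_best:          # indicator density: accept iff inside
            x = prop
        if i >= burn:                  # discard burn-in, keep the rest
            out.append(x)
    return out

samples = mcmc_informed_samples((0.0, 0.0), (1.0, 0.0), c_best=1.5)
```

Unlike rejection sampling from a bounding region, each proposal here is a cheap local move, which is what makes this style of sampler attractive as the informed set shrinks or the dimension grows.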