5,528 research outputs found

    Sampling-based Algorithms for Optimal Motion Planning

    During the last decade, sampling-based path planning algorithms, such as Probabilistic RoadMaps (PRM) and Rapidly-exploring Random Trees (RRT), have been shown to work well in practice and possess theoretical guarantees such as probabilistic completeness. However, little effort has been devoted to the formal analysis of the quality of the solution returned by such algorithms, e.g., as a function of the number of samples. The purpose of this paper is to fill this gap, by rigorously analyzing the asymptotic behavior of the cost of the solution returned by stochastic sampling-based algorithms as the number of samples increases. A number of negative results are provided, characterizing existing algorithms, e.g., showing that, under mild technical conditions, the cost of the solution returned by broadly used sampling-based algorithms converges almost surely to a non-optimal value. The main contribution of the paper is the introduction of new algorithms, namely, PRM* and RRT*, which are provably asymptotically optimal, i.e., such that the cost of the returned solution converges almost surely to the optimum. Moreover, it is shown that the computational complexity of the new algorithms is within a constant factor of that of their probabilistically complete (but not asymptotically optimal) counterparts. The analysis in this paper hinges on novel connections between stochastic sampling-based path planning algorithms and the theory of random geometric graphs.
    Comment: 76 pages, 26 figures, to appear in the International Journal of Robotics Research
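
    The PRM* variant achieves asymptotic optimality by connecting each sample to all samples within a radius that shrinks as the sample count grows. As a rough illustration only, here is a minimal Python sketch of that construction loop, assuming a radius schedule of the form r(n) = gamma * (log n / n)^(1/d) as stated in the paper; the helper names (sample_free, collision_free) and the choice of gamma are hypothetical, and the paper gives the precise constant required for optimality.

        # Minimal sketch of a PRM*-style construction loop; not the authors'
        # reference implementation.  `sample_free`, `collision_free`, and
        # `gamma` are illustrative assumptions.
        import math

        def prm_star(n, d, gamma, sample_free, collision_free):
            """Build a PRM*-style graph over n samples in d dimensions.

            Returns an adjacency dict {vertex: [(neighbor, cost), ...]}.
            """
            vertices = [sample_free() for _ in range(n)]
            graph = {v: [] for v in vertices}
            # Shrinking connection radius: large enough to keep the random
            # geometric graph connected, small enough that the expected
            # number of neighbors per vertex stays modest.
            r = gamma * (math.log(n) / n) ** (1.0 / d)
            for v in vertices:
                for u in vertices:
                    if u is v:
                        continue
                    dist = math.dist(u, v)
                    if dist <= r and collision_free(u, v):
                        graph[v].append((u, dist))
            # A shortest-path query (e.g., Dijkstra) over this graph yields
            # the returned solution whose cost the paper analyzes.
            return graph

    In practice the quadratic pairwise scan above would be replaced by a spatial index such as a k-d tree; it is written naively here to keep the radius rule visible.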

    Incremental Sampling-based Algorithms for Optimal Motion Planning

    During the last decade, incremental sampling-based motion planning algorithms, such as Rapidly-exploring Random Trees (RRTs), have been shown to work well in practice and to possess theoretical guarantees such as probabilistic completeness. However, no theoretical bounds on the quality of the solution obtained by these algorithms have been established so far. The first contribution of this paper is a negative result: it is proven that, under mild technical conditions, the cost of the best path in the RRT converges almost surely to a non-optimal value. Second, a new algorithm is considered, called the Rapidly-exploring Random Graph (RRG), and it is shown that the cost of the best path in the RRG converges to the optimum almost surely. Third, a tree version of RRG is introduced, called the RRT* algorithm, which preserves the asymptotic optimality of RRG while maintaining a tree structure like RRT. The analysis of the new algorithms hinges on novel connections between sampling-based motion planning algorithms and the theory of random geometric graphs. In terms of computational complexity, it is shown that the number of simple operations required by both the RRG and RRT* algorithms is asymptotically within a constant factor of that required by RRT.
    Comment: 20 pages, 10 figures, this manuscript is submitted to the International Journal of Robotics Research; a short version is to appear at the 2010 Robotics: Science and Systems Conference
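
    To make the tree version concrete, the following is a hedged Python sketch of one RRT*-style iteration as summarized above: a new sample is wired to the cheapest parent among nearby tree vertices, and nearby vertices are then rewired through the new vertex whenever that lowers their cost-to-come. The helper names (steer, collision_free) and the fixed ball radius are illustrative assumptions, not the paper's pseudocode.

        # One RRT*-style iteration over a tree stored as {child: parent}
        # with cost-to-come in `cost`; the helpers are assumed here, not
        # taken from the paper.
        import math

        def rrt_star_extend(tree, cost, x_rand, radius, steer, collision_free):
            x_nearest = min(tree, key=lambda v: math.dist(v, x_rand))
            x_new = steer(x_nearest, x_rand)
            if not collision_free(x_nearest, x_new):
                return
            near = [v for v in tree if math.dist(v, x_new) <= radius]
            # Choose the parent minimizing cost-to-come for x_new.
            parent = x_nearest
            best = cost[x_nearest] + math.dist(x_nearest, x_new)
            for v in near:
                c = cost[v] + math.dist(v, x_new)
                if c < best and collision_free(v, x_new):
                    parent, best = v, c
            tree[x_new], cost[x_new] = parent, best
            # Rewire: reroute near vertices through x_new when cheaper,
            # the step that carries RRG's asymptotic optimality into a tree.
            for v in near:
                c = cost[x_new] + math.dist(x_new, v)
                if c < cost[v] and collision_free(x_new, v):
                    tree[v], cost[v] = x_new, c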

    PointMap: A real-time memory-based learning system with on-line and post-training pruning

    Also published in the International Journal of Hybrid Intelligent Systems, Volume 1, January 2004.
    A memory-based learning system called PointMap is a simple and computationally efficient extension of Condensed Nearest Neighbor that allows the user to limit the number of exemplars stored during incremental learning. PointMap evaluates the information value of coding nodes during training, and uses this index to prune uninformative nodes either on-line or after training. These pruning methods allow the user to control both a priori code size and sensitivity to detail in the training data, as well as to determine the code size necessary for accurate performance on a given data set. Coding and pruning computations are local in space, with only the nearest coded neighbor available for comparison with the input; and in time, with only the current input available during coding. Pruning helps solve common problems of traditional memory-based learning systems: large memory requirements, their accompanying slow on-line computations, and sensitivity to noise. PointMap copes with the curse of dimensionality by considering multiple nearest neighbors during testing without increasing the complexity of the training process or the stored code. The performance of PointMap is compared to that of a group of sixteen nearest-neighbor systems on benchmark problems.
    This research was supported by grants from the Air Force Office of Scientific Research (AFOSR F49620-98-1-0108, F49620-01-1-0397, and F49620-01-1-0423) and the Office of Naval Research (ONR N00014-01-1-0624).
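
    As a loose illustration of the coding-and-pruning idea (not PointMap's actual index), the sketch below extends Condensed Nearest Neighbor with a per-node counter that is incremented whenever a stored exemplar classifies an input correctly; when the code exceeds a user-set cap, the lowest-valued node is pruned on-line. The counter and the cap are assumptions standing in for PointMap's information-value measure.

        # Condensed-Nearest-Neighbor-style on-line coding with a simple
        # usefulness counter as a stand-in for PointMap's information value.
        import math

        class CondensedNN:
            def __init__(self, max_code_size):
                self.max_code_size = max_code_size
                self.code = []          # [point, label, value] triples

            def _nearest(self, x):
                return min(self.code, key=lambda node: math.dist(node[0], x))

            def train_one(self, x, y):
                """Store x only if the current code misclassifies it."""
                if self.code:
                    node = self._nearest(x)
                    if node[1] == y:
                        node[2] += 1    # this exemplar proved informative
                        return
                self.code.append([x, y, 0])
                if len(self.code) > self.max_code_size:
                    # On-line pruning: drop the least informative node.
                    self.code.remove(min(self.code, key=lambda n: n[2]))

            def predict(self, x):
                return self._nearest(x)[1]

    Post-training pruning follows the same pattern: rank coded nodes by accumulated value and discard the tail until the desired code size or sensitivity to detail is reached.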

    Searching for a trail of evidence in a maze

    Consider a graph with a set of vertices and oriented edges connecting pairs of vertices. Each vertex is associated with a random variable and these are assumed to be independent. In this setting, suppose we wish to solve the following hypothesis testing problem: under the null, the random variables have common distribution $N(0,1)$, while under the alternative there is an unknown path along which the random variables have distribution $N(\mu,1)$, $\mu > 0$, and distribution $N(0,1)$ away from it. For which values of the mean shift $\mu$ can one reliably detect the path, and for which values is this impossible? Consider, for example, the usual regular lattice with vertices of the form $\{(i,j) : 0 \le i,\ -i \le j \le i \text{ and } j \text{ has the parity of } i\}$ and oriented edges $(i,j) \to (i+1, j+s)$, where $s = \pm 1$. We show that for paths of length $m$ starting at the origin, the hypotheses become distinguishable (in a minimax sense) if $\mu_m \gg 1/\sqrt{\log m}$, while they are not if $\mu_m \ll 1/\log m$. We derive equivalent results in a Bayesian setting where one assumes that all paths are equally likely; there, the asymptotic threshold is $\mu_m \approx m^{-1/4}$. We obtain corresponding results for trees (where the threshold is of order 1 and independent of the size of the tree), for distributions other than the Gaussian, and for other graphs. The concept of the predictability profile, first introduced by Benjamini, Pemantle and Peres, plays a crucial role in our analysis.
    Comment: Published at http://dx.doi.org/10.1214/07-AOS526 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
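
    A natural test statistic for this problem is the maximum, over all oriented paths of length m from the origin, of the sum of the variables along the path, computable by dynamic programming on the lattice. The Python sketch below simulates that statistic under the null; re-indexing the vertices of row i as k = 0, ..., i, the predecessors of vertex (i, k) are (i-1, k-1) and (i-1, k). Threshold calibration is omitted; only the recursion reflects the setup above.

        # Scan statistic for the lattice detection problem: maximum path
        # sum over oriented length-m paths from the origin, via dynamic
        # programming.  Choosing the rejection threshold is not addressed.
        import random

        def max_path_sum(X, m):
            """X[i][k] is the variable at the k-th vertex of row i
            (row i has i + 1 vertices)."""
            best = [X[0][0]]                # best path sum ending in row 0
            for i in range(1, m + 1):
                best = [
                    max(best[p] for p in (k - 1, k) if 0 <= p < i) + X[i][k]
                    for k in range(i + 1)
                ]
            return max(best)

        # Simulate under the null: every vertex carries an N(0, 1) variable.
        m = 50
        X = [[random.gauss(0.0, 1.0) for _ in range(i + 1)] for i in range(m + 1)]
        print(max_path_sum(X, m))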

    Self-Improving Algorithms

    We investigate ways in which an algorithm can improve its expected performance by fine-tuning itself automatically with respect to an unknown input distribution D. We assume here that D is of product type. More precisely, suppose that we need to process a sequence I_1, I_2, ... of inputs I = (x_1, x_2, ..., x_n) of some fixed length n, where each x_i is drawn independently from some arbitrary, unknown distribution D_i. The goal is to design an algorithm for these inputs so that eventually the expected running time will be optimal for the input distribution D = D_1 * D_2 * ... * D_n. We give such self-improving algorithms for two problems: (i) sorting a sequence of numbers and (ii) computing the Delaunay triangulation of a planar point set. Both algorithms achieve optimal expected limiting complexity. The algorithms begin with a training phase during which they collect information about the input distribution, followed by a stationary regime in which the algorithms settle to their optimized incarnations.
    Comment: 26 pages, 8 figures, preliminary versions appeared at SODA 2006 and SoCG 2008. Thorough revision to improve the presentation of the paper
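
    The training-then-stationary structure can be made concrete for sorting with a simplified sketch: during training, record the observed values and fix bucket boundaries at empirical quantiles; afterwards, route each value to its learned bucket and sort the small buckets. This pools all coordinates into one empirical distribution for brevity, whereas the paper's algorithm learns a structure per coordinate distribution D_i, so treat this as the spirit of the method rather than its statement.

        # Simplified self-improving sorter: quantile buckets learned in a
        # training phase, then reused in the stationary regime.  Pooling
        # all coordinates is a simplification of the per-coordinate scheme.
        import bisect

        class SelfImprovingSorter:
            def __init__(self, n_buckets):
                self.n_buckets = n_buckets
                self.training_values = []
                self.boundaries = []

            def train(self, inputs):
                # Training phase: accumulate values, then place bucket
                # boundaries at empirical quantiles.
                for x in inputs:
                    self.training_values.extend(x)
                pooled = sorted(self.training_values)
                k = self.n_buckets
                self.boundaries = [pooled[(len(pooled) * j) // k]
                                   for j in range(1, k)]

            def sort(self, x):
                # Stationary regime: logarithmic routing per element;
                # buckets stay small in expectation when inputs follow the
                # learned distribution, giving near-linear expected time.
                buckets = [[] for _ in range(self.n_buckets)]
                for v in x:
                    buckets[bisect.bisect_left(self.boundaries, v)].append(v)
                out = []
                for b in buckets:
                    out.extend(sorted(b))
                return out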