
    Batch Informed Trees (BIT*): Informed Asymptotically Optimal Anytime Search

    Path planning in robotics often requires finding high-quality solutions to continuously valued and/or high-dimensional problems. These problems are challenging and most planning algorithms instead solve simplified approximations. Popular approximations include graphs and random samples, as respectively used by informed graph-based searches and anytime sampling-based planners. Informed graph-based searches, such as A*, traditionally use heuristics to search a priori graphs in order of potential solution quality. This makes their search efficient but leaves their performance dependent on the chosen approximation. If its resolution is too low then they may not find a (suitable) solution, but if it is too high then they may take a prohibitively long time to do so. Anytime sampling-based planners, such as RRT*, traditionally use random sampling to approximate the problem domain incrementally. This allows them to increase resolution until a suitable solution is found but makes their search dependent on the order of approximation. Arbitrary sequences of random samples approximate the problem domain in every direction simultaneously but may be prohibitively inefficient at containing a solution. This paper unifies and extends these two approaches to develop Batch Informed Trees (BIT*), an informed, anytime sampling-based planner. BIT* solves continuous path planning problems efficiently by using sampling and heuristics to alternately approximate and search the problem domain. Its search is ordered by potential solution quality, as in A*, and its approximation improves indefinitely with additional computational time, as in RRT*. It is shown analytically to be almost-surely asymptotically optimal and experimentally to outperform existing sampling-based planners, especially on high-dimensional planning problems.
    Comment: International Journal of Robotics Research (IJRR). 32 pages. 16 figures.
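    As a rough illustration of the A*-like ordering described above, the sketch below ranks candidate edges between tree vertices and unconnected samples by an admissible estimate of the cost of a solution constrained to pass through that edge. The function and variable names are illustrative and not taken from the paper's implementation; BIT* additionally prunes samples and reuses information across batches, which this sketch omits.

        # Minimal sketch: order candidate edges by estimated solution quality.
        import heapq
        import math

        def edge_value(g_cost, v, x, goal):
            # cost-to-come of v + admissible edge-cost estimate + heuristic cost-to-go of x
            return g_cost[v] + math.dist(v, x) + math.dist(x, goal)

        def build_edge_queue(tree_vertices, samples, g_cost, goal):
            queue = []
            for v in tree_vertices:
                for x in samples:
                    heapq.heappush(queue, (edge_value(g_cost, v, x, goal), (v, x)))
            return queue  # heapq.heappop(queue) yields edges in order of potential solution quality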

    Beyond Chance-Constrained Convex Mixed-Integer Optimization: A Generalized Calafiore-Campi Algorithm and the notion of $S$-optimization

    The scenario approach developed by Calafiore and Campi to attack chance-constrained convex programs utilizes random sampling on the uncertainty parameter to substitute the original problem with a representative continuous convex optimization with $N$ convex constraints, which is a relaxation of the original. Calafiore and Campi provided an explicit estimate on the size $N$ of the sampling relaxation needed to yield high-likelihood feasible solutions of the chance-constrained problem. They measured the probability that the original constraints are violated by the random optimal solution from the relaxation of size $N$. This paper has two main contributions. First, we present a generalization of the Calafiore-Campi results to both integer and mixed-integer variables. In fact, we demonstrate that their sampling estimates work naturally for variables restricted to some subset $S$ of $\mathbb{R}^d$. The key elements are generalizations of Helly's theorem where the convex sets are required to intersect $S \subset \mathbb{R}^d$. The sample size in both algorithms is directly determined by the $S$-Helly numbers. Motivated by the first half of the paper, for any subset $S \subset \mathbb{R}^d$, we introduce the notion of an $S$-optimization problem, where the variables take on values over $S$. It generalizes continuous, integer, and mixed-integer optimization. We illustrate with examples the expressive power of $S$-optimization to capture sophisticated combinatorial optimization problems with difficult modular constraints. We reinforce the evidence that $S$-optimization is "the right concept" by showing that the well-known randomized sampling algorithm of K. Clarkson for low-dimensional convex optimization problems can be extended to work with variables taking values over $S$.
    Comment: 16 pages, 0 figures. This paper has been revised and split into two parts. This version is the second part of the original paper. The first part of the original paper is arXiv:1508.02380 (the original article contained 24 pages, 3 figures).
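    To make the scenario relaxation concrete, the toy sketch below replaces a one-dimensional chance constraint with $N$ sampled constraints. The sample size $N$ is simply taken as given here, whereas the results above (and their $S$-Helly generalizations) concern how large $N$ must be; the toy problem and names are ours, not from the paper.

        # Toy sketch of the scenario relaxation of the chance-constrained problem
        #   minimize t  subject to  P(t >= xi) >= 1 - eps,   xi random.
        import numpy as np

        def scenario_relaxation(sample_xi, N, rng=np.random.default_rng(0)):
            # Draw N independent scenarios and replace the chance constraint by the
            # N sampled constraints t >= xi_i.  For this toy convex program the
            # optimum of the relaxation is simply the largest sampled value.
            scenarios = np.array([sample_xi(rng) for _ in range(N)])
            return scenarios.max(), scenarios

        # Example: xi ~ Normal(0, 1).  The returned value is feasible for the
        # original chance constraint with high probability once N is large enough.
        t_star, _ = scenario_relaxation(lambda rng: rng.normal(), N=200)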

    Batch Informed Trees (BIT*): Sampling-based Optimal Planning via the Heuristically Guided Search of Implicit Random Geometric Graphs

    In this paper, we present Batch Informed Trees (BIT*), a planning algorithm based on unifying graph- and sampling-based planning techniques. By recognizing that a set of samples describes an implicit random geometric graph (RGG), we are able to combine the efficient ordered nature of graph-based techniques, such as A*, with the anytime scalability of sampling-based algorithms, such as Rapidly-exploring Random Trees (RRT). BIT* uses a heuristic to efficiently search a series of increasingly dense implicit RGGs while reusing previous information. It can be viewed as an extension of incremental graph-search techniques, such as Lifelong Planning A* (LPA*), to continuous problem domains as well as a generalization of existing sampling-based optimal planners. It is shown to be probabilistically complete and asymptotically optimal. We demonstrate the utility of BIT* on simulated random worlds in $\mathbb{R}^2$ and $\mathbb{R}^8$ and manipulation problems on CMU's HERB, a 14-DOF two-armed robot. On these problems, BIT* finds better solutions faster than RRT, RRT*, Informed RRT*, and Fast Marching Trees (FMT*), with faster anytime convergence towards the optimum, especially in high dimensions.
    Comment: 8 pages. 6 figures. Video available at http://www.youtube.com/watch?v=TQIoCC48gp
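    The "implicit" RGG mentioned above can be illustrated by computing neighbours on demand from a connection radius instead of storing edges. The sketch below uses the usual (log n / n)^(1/d) radius scaling from the sampling-based planning literature, with an unspecified problem-dependent constant gamma; it is an illustration, not the paper's implementation.

        import numpy as np

        def connection_radius(n, d, gamma=1.0):
            # Usual RGG scaling: the radius shrinks as samples are added, keeping
            # the expected number of neighbours logarithmic in n.
            return gamma * (np.log(n) / n) ** (1.0 / d)

        def implicit_neighbours(samples, x, gamma=1.0):
            # samples: (n, d) array of states; x: (d,) query state.
            # Returns the edges of the implicit RGG incident to x, found lazily.
            n, d = samples.shape
            r = connection_radius(n, d, gamma)
            dists = np.linalg.norm(samples - x, axis=1)
            return samples[dists <= r]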

    Incremental Sampling-based Algorithms for Optimal Motion Planning

    During the last decade, incremental sampling-based motion planning algorithms, such as Rapidly-exploring Random Trees (RRTs), have been shown to work well in practice and to possess theoretical guarantees such as probabilistic completeness. However, no theoretical bounds on the quality of the solution obtained by these algorithms have been established so far. The first contribution of this paper is a negative result: it is proven that, under mild technical conditions, the cost of the best path in the RRT converges almost surely to a non-optimal value. Second, a new algorithm is considered, called the Rapidly-exploring Random Graph (RRG), and it is shown that the cost of the best path in the RRG converges to the optimum almost surely. Third, a tree version of the RRG is introduced, called the RRT* algorithm, which preserves the asymptotic optimality of the RRG while maintaining a tree structure like the RRT. The analysis of the new algorithms hinges on novel connections between sampling-based motion planning algorithms and the theory of random geometric graphs. In terms of computational complexity, it is shown that the number of simple operations required by both the RRG and RRT* algorithms is asymptotically within a constant factor of that required by the RRT.
    Comment: 20 pages, 10 figures. This manuscript is submitted to the International Journal of Robotics Research; a short version is to appear at the 2010 Robotics: Science and Systems Conference.
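    As a concrete reminder of the extra work RRT* does compared to RRT, the sketch below shows its two characteristic steps when a new sample is added: choosing the cheapest collision-free parent among nearby vertices, then rewiring those vertices through the new one. This is a simplified, illustrative fragment; steering and the propagation of cost updates to descendants are omitted.

        import numpy as np

        def extend_rrt_star(nodes, parents, costs, x_new, radius, collision_free):
            # nodes: (n, d) array of tree vertices; parents: dict child_index -> parent_index;
            # costs: (n,) array of cost-to-come values; collision_free: edge validity check.
            dists = np.linalg.norm(nodes - x_new, axis=1)
            near = np.where(dists <= radius)[0]
            if near.size == 0:
                near = np.array([int(np.argmin(dists))])
            # 1) Choose the parent giving the cheapest collision-free connection to x_new.
            best_parent, best_cost = None, np.inf
            for i in near:
                c = costs[i] + dists[i]
                if c < best_cost and collision_free(nodes[i], x_new):
                    best_parent, best_cost = int(i), c
            if best_parent is None:
                return nodes, parents, costs
            nodes = np.vstack([nodes, x_new])
            new_idx = len(nodes) - 1
            parents[new_idx] = best_parent
            costs = np.append(costs, best_cost)
            # 2) Rewire: route nearby vertices through x_new when that lowers their cost
            #    (descendant cost updates are not propagated in this sketch).
            for i in near:
                if best_cost + dists[i] < costs[i] and collision_free(x_new, nodes[i]):
                    parents[int(i)] = new_idx
                    costs[int(i)] = best_cost + dists[i]
            return nodes, parents, costs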

    OneMax in Black-Box Models with Several Restrictions

    Black-box complexity studies lower bounds for the efficiency of general-purpose black-box optimization algorithms such as evolutionary algorithms and other search heuristics. Different models exist, each one designed to analyze a different aspect of typical heuristics, such as the memory size or the variation operators in use. While most previous works focus on one particular such aspect, we consider in this work how the combination of several algorithmic restrictions influences the black-box complexity. Our testbed is the class of so-called OneMax functions, a classical set of test functions that is intimately related to classic coin-weighing problems and to the board game Mastermind. We analyze in particular the combined memory-restricted ranking-based black-box complexity of OneMax for different memory sizes. While its isolated memory-restricted as well as its ranking-based black-box complexity for bit strings of length $n$ is only of order $n/\log n$, the combined model does not allow for algorithms faster than linear in $n$, as can be seen by standard information-theoretic considerations. We show that this linear bound is indeed asymptotically tight. Similar results are obtained for other memory and offspring sizes. Our results also apply to the (Monte Carlo) complexity of OneMax in the recently introduced elitist model, in which only the best-so-far solution can be kept in the memory. Finally, we also provide improved lower bounds for the complexity of OneMax in the regarded models. Our result enlivens the quest for natural evolutionary algorithms optimizing OneMax in $o(n \log n)$ iterations.
    Comment: This is the full version of a paper accepted to GECCO 201
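    For concreteness, OneMax and the textbook (1+1) evolutionary algorithm on it can be written in a few lines; the (1+1) EA optimizes OneMax in expected Theta(n log n) iterations, the benchmark that the $o(n \log n)$ quest above refers to. The sketch below is a standard formulation and is not tied to any particular restricted model from the paper.

        import random

        def onemax(x):
            # OneMax simply counts the one-bits of a bit string.
            return sum(x)

        def one_plus_one_ea(n, rng=random.Random(0), max_iters=100_000):
            x = [rng.randint(0, 1) for _ in range(n)]
            fx = onemax(x)
            for t in range(max_iters):
                if fx == n:
                    return t
                # Standard bit mutation: flip each bit independently with probability 1/n.
                y = [b ^ (rng.random() < 1.0 / n) for b in x]
                fy = onemax(y)
                if fy >= fx:  # elitist acceptance: keep the better-or-equal offspring
                    x, fx = y, fy
            return max_iters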

    Importance mixing: Improving sample reuse in evolutionary policy search methods

    Deep neuroevolution, that is, evolutionary policy search based on deep neural networks, has recently emerged as a competitor to deep reinforcement learning algorithms due to its better parallelization capabilities. However, these methods still suffer from far worse sample efficiency. In this paper we investigate whether a mechanism known as "importance mixing" can significantly improve their sample efficiency. We provide a didactic presentation of importance mixing and explain how it can be extended to reuse more samples. Then, from an empirical comparison based on a simple benchmark, we show that although importance mixing does provide better sample efficiency, it is still far from the sample efficiency of deep reinforcement learning, though it is more stable.
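    As a rough illustration of the sample-reuse idea, the sketch below shows a simplified version of one common formulation of importance mixing for an isotropic Gaussian search distribution: samples drawn under the previous distribution are kept with a probability given by the density ratio under the new and old distributions, and only the remainder of the population requires fresh evaluations. Parameter names are illustrative and the full procedure is simplified here.

        import numpy as np

        def log_gauss(x, mean, sigma):
            # Log-density of an isotropic Gaussian with scalar standard deviation sigma.
            d = x.shape[-1]
            return (-0.5 * np.sum((x - mean) ** 2, axis=-1) / sigma**2
                    - d * np.log(sigma) - 0.5 * d * np.log(2 * np.pi))

        def importance_mixing(old_samples, old_mean, new_mean, sigma, pop_size,
                              alpha=0.2, rng=np.random.default_rng(0)):
            # Keep an old sample x with probability min(1, (1 - alpha) * p_new(x) / p_old(x)).
            ratio = np.exp(log_gauss(old_samples, new_mean, sigma)
                           - log_gauss(old_samples, old_mean, sigma))
            keep = rng.uniform(size=len(old_samples)) < np.minimum(1.0, (1 - alpha) * ratio)
            reused = old_samples[keep][:pop_size]
            # Fill the rest of the population with fresh draws from the new distribution;
            # only these fresh individuals need new fitness evaluations.
            n_new = pop_size - len(reused)
            fresh = new_mean + sigma * rng.standard_normal((n_new, len(new_mean)))
            return np.vstack([reused, fresh]), n_new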

    Informed RRT*: Optimal Sampling-based Path Planning Focused via Direct Sampling of an Admissible Ellipsoidal Heuristic

    Rapidly-exploring random trees (RRTs) are popular in motion planning because they efficiently find solutions to single-query problems. Optimal RRTs (RRT*s) extend RRTs to the problem of finding the optimal solution, but in doing so they asymptotically find the optimal path from the initial state to every state in the planning domain. This behaviour is not only inefficient but also inconsistent with their single-query nature. For problems seeking to minimize path length, the subset of states that can improve a solution can be described by a prolate hyperspheroid. We show that unless this subset is sampled directly, the probability of improving a solution becomes arbitrarily small in large worlds or high state dimensions. In this paper, we present an exact method to focus the search by directly sampling this subset. The advantages of the presented sampling technique are demonstrated with a new algorithm, Informed RRT*. This method retains the same probabilistic guarantees on completeness and optimality as RRT* while improving the convergence rate and final solution quality. We present the algorithm as a simple modification to RRT* that could be further extended by more advanced path-planning algorithms. We show experimentally that it outperforms RRT* in rate of convergence, final solution cost, and ability to find difficult passages, while demonstrating less dependence on the state dimension and range of the planning problem.
    Comment: 8 pages, 11 figures. Videos available at https://www.youtube.com/watch?v=d7dX5MvDYTc and https://www.youtube.com/watch?v=nsl-5MZfwu
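    Direct sampling of the prolate hyperspheroid described above is commonly realized by drawing from a unit ball, stretching to the hyperspheroid's axes, rotating into the world frame, and translating to its centre. The sketch below follows that recipe under those assumptions; the variable names and the SVD-based rotation are our illustration, not code from the paper.

        import numpy as np

        def sample_unit_ball(d, rng):
            # Uniform sample from the d-dimensional unit ball.
            x = rng.standard_normal(d)
            x /= np.linalg.norm(x)
            return x * rng.uniform() ** (1.0 / d)

        def rotation_to_world(x_start, x_goal):
            # Rotation aligning the hyperspheroid's major axis with the start-goal direction.
            d = len(x_start)
            a1 = (x_goal - x_start) / np.linalg.norm(x_goal - x_start)
            e1 = np.zeros(d); e1[0] = 1.0
            U, _, Vt = np.linalg.svd(np.outer(a1, e1))
            diag = np.ones(d); diag[-1] = np.linalg.det(U) * np.linalg.det(Vt)
            return U @ np.diag(diag) @ Vt

        def informed_sample(x_start, x_goal, c_best, rng=np.random.default_rng(0)):
            # c_best: cost of the current best solution; c_min: theoretical minimum cost.
            c_min = np.linalg.norm(x_goal - x_start)
            centre = (x_start + x_goal) / 2.0
            C = rotation_to_world(x_start, x_goal)
            r = np.empty(len(x_start))
            r[0] = c_best / 2.0
            r[1:] = np.sqrt(max(c_best**2 - c_min**2, 0.0)) / 2.0
            return C @ (r * sample_unit_ball(len(x_start), rng)) + centre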