3 research outputs found

    On the performance of sampling-based optimal motion planners

    Sampling-based algorithms provide efficient methods for solving the robot motion planning problem. The advantage of these approaches is the ease of their implementation and their computational efficiency. These algorithms are probabilistically complete, i.e., they will find a solution if one exists, given sufficient run time. The drawback of sampling-based planners is that there is no guarantee of the quality of their solutions; in fact, it has been proven that their probability of returning an optimal solution approaches zero. A breakthrough in sampling-based planning was the proposal of optimal sampling-based planners. Current optimal planners are characterized by asymptotic optimality, i.e., they converge to an optimal solution as run time approaches infinity. Motivated by the slow convergence of optimal planners, post-processing and heuristic approaches have been suggested. Due to the nature of sampling-based planners, their implementation requires tuning and selecting a large number of parameters that are often overlooked. This paper presents a performance study of an optimal planner under different parameters and heuristics. We also propose a modification to the algorithm to improve the convergence rate towards an optimal solution.
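
    To make the tuning problem above concrete, the following is a minimal Python sketch of an asymptotically optimal sampling-based planner in the RRT* style; it is illustrative only and is not the planner studied in the paper. The step size, rewiring radius, goal-bias probability, and iteration budget are exactly the kind of parameters and heuristics the abstract refers to, and their values here are arbitrary assumptions; collision checking is omitted for brevity.

    # Minimal 2D RRT*-style sketch (hypothetical, not the paper's planner).
    import math
    import random

    STEP = 0.5        # extension step size (tuning parameter)
    RADIUS = 1.5      # rewiring neighbourhood radius (tuning parameter)
    GOAL_BIAS = 0.05  # probability of sampling the goal directly (heuristic)
    ITERATIONS = 2000

    start, goal = (0.0, 0.0), (9.0, 9.0)

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def steer(a, b):
        # Move from a toward b by at most STEP.
        d = dist(a, b)
        if d <= STEP:
            return b
        t = STEP / d
        return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))

    nodes = [start]
    parent = {start: None}
    cost = {start: 0.0}

    for _ in range(ITERATIONS):
        sample = goal if random.random() < GOAL_BIAS else (random.uniform(0, 10), random.uniform(0, 10))
        nearest = min(nodes, key=lambda n: dist(n, sample))
        new = steer(nearest, sample)
        if new in cost:   # skip duplicate states (e.g. repeated goal samples)
            continue

        # Choose the lowest-cost parent among nearby nodes (RRT*-style).
        neighbours = [n for n in nodes if dist(n, new) <= RADIUS]
        best = min(neighbours, key=lambda n: cost[n] + dist(n, new))
        parent[new] = best
        cost[new] = cost[best] + dist(best, new)
        nodes.append(new)

        # Rewire: reroute neighbours through the new node if that lowers their cost.
        for n in neighbours:
            c = cost[new] + dist(new, n)
            if c < cost[n]:
                parent[n], cost[n] = new, c

    best_goal = min(nodes, key=lambda n: cost[n] + dist(n, goal))
    print("approx. path cost:", round(cost[best_goal] + dist(best_goal, goal), 3))

    Increasing ITERATIONS or RADIUS typically improves solution cost at the expense of run time, which is the convergence trade-off the paper studies.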

    On the implementation of single-query sampling-based motion planners


    Biasing Samplers to Improve Motion Planning Performance

    Abstract: With the success of randomized sampling-based motion planners such as Probabilistic Roadmap Methods, much work has been done to design new sampling techniques and distributions. To date, there is no sampling technique that outperforms all other techniques for all motion planning problems. Instead, each proposed technique has different strengths and weaknesses. However, little work has been done to combine these techniques to create new distributions. In this paper, we propose to bias one sampling distribution with another such that the resulting distribution outperforms either of its parent distributions. We present a general framework for biasing samplers that is easily extendable to new distributions and can handle an arbitrary number of parent distributions by chaining them together. Our experimental results show that by combining distributions, we can outperform existing planners. Our results also indicate that no single distribution combination performs best in all problems, and we identify which perform better for the specific application domains studied.
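
    As an illustration of the chaining idea described in the abstract, the Python sketch below biases a uniform parent distribution with a Gaussian parent distribution by feeding each sampler's output to the next. The sampler names, the seed-passing interface, and the noise parameter sigma are assumptions made for this example and are not taken from the paper's framework.

    # Sketch of chaining sampling distributions (illustrative assumptions only).
    import random

    def uniform_sampler(_seed=None):
        # Parent distribution 1: uniform samples over a 10x10 workspace.
        return (random.uniform(0, 10), random.uniform(0, 10))

    def gaussian_sampler(seed):
        # Parent distribution 2: perturbs the seed sample with Gaussian noise,
        # biasing new samples toward the region around the seed.
        sigma = 0.5  # assumed spread
        return (random.gauss(seed[0], sigma), random.gauss(seed[1], sigma))

    def chain(*samplers):
        # Bias one distribution with another: each sampler receives the previous
        # sampler's output as its seed, so later samplers refine earlier ones.
        def combined():
            sample = None
            for s in samplers:
                sample = s(sample)
            return sample
        return combined

    biased = chain(uniform_sampler, gaussian_sampler)
    print([tuple(round(c, 2) for c in biased()) for _ in range(3)])

    Because chain accepts any number of samplers, additional parent distributions can be appended without changing the planner that consumes the combined sampler.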