
    Improving Relaxation-based Constrained Path Planning via Quadratic Programming

    Many robotics tasks involve a set of constraints that limit the valid configurations the system can assume. Some of these constraints, such as loop-closure or orientation constraints to name a few, can be described by a set of implicit functions which cause the valid configuration space of the robot to collapse to a lower-dimensional manifold. Sampling-based planners, which have been extensively studied in the last two decades, need some adaptation to work in this context. A proposed approach, known as relaxation, introduces constraint-violation tolerances, thus approximating the manifold with a set of non-zero measure. The problem can then be solved using classical approaches from the randomized planning literature. The relaxation must, however, be large enough to let planners run in a reasonable amount of time, and the resulting violations are counterbalanced by controllers during actual motion. In this paper we present a new component for relaxation-based path planning under differentiable constraints. It exploits quadratic optimization to simultaneously move towards new samples and keep close to the constraint manifold. By properly guiding the exploration, both running time and constraint violation are substantially reduced.
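    The abstract gives no implementation details, but the core idea of moving toward a new sample while staying close to a differentiable constraint manifold can be illustrated with a small quadratic-optimization step. The Python/numpy sketch below is an assumption-laden illustration, not the authors' planner: the objective, the weight w, and the toy unit-circle constraint are made up for the example.

        import numpy as np

        def constrained_step(q, q_target, c, J, w=10.0):
            """One QP-style step: move toward q_target while softly
            penalizing the first-order constraint violation c(q) + J(q) dq.
            c: R^n -> R^m constraint residual, J: its Jacobian at q."""
            cq, Jq = c(q), J(q)
            n = q.size
            # The objective is an unconstrained quadratic in dq, so its
            # minimizer is the solution of a single linear system.
            H = np.eye(n) + w * Jq.T @ Jq
            g = (q_target - q) - w * Jq.T @ cq
            dq = np.linalg.solve(H, g)
            return q + dq

        # Toy example: keep configurations near the unit circle x^2 + y^2 = 1
        c = lambda q: np.array([q @ q - 1.0])
        J = lambda q: 2.0 * q.reshape(1, -1)
        q = np.array([1.0, 0.0])
        q = constrained_step(q, q_target=np.array([0.2, 1.2]), c=c, J=J)

    Because each step reduces to one linear solve, such a component is cheap enough to invoke at every extension attempt of a sampling-based planner.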

    A recursively feasible and convergent Sequential Convex Programming procedure to solve non-convex problems with linear equality constraints

    A computationally efficient method to solve non-convex programming problems with linear equality constraints is presented. The proposed method is based on a recursively feasible and descending sequential convex programming procedure proven to converge to a locally optimal solution. Assuming that the first convex problem in the sequence is feasible, these properties are obtained by convexifying the non-convex cost and inequality constraints with inner-convex approximations. Additionally, a computationally efficient method is introduced to obtain inner-convex approximations based on Taylor series expansions. These Taylor-based inner-convex approximations provide the overall algorithm with a quadratic rate of convergence. The proposed method is capable of solving problems of practical interest in real time. This is illustrated with a numerical simulation of an aerial vehicle trajectory optimization problem on commercial off-the-shelf embedded computers.
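    As a rough illustration of a recursively feasible SCP loop, the sketch below replaces the paper's Taylor-based inner-convex approximations with the simplest convex upper bound: a gradient term plus a fixed quadratic (L/2)||d||^2. The function names, toy cost, and choice of L are assumptions; this simplified surrogate yields only a linear, not quadratic, rate. If the initial point satisfies Ax = b and L dominates the Hessian, every iterate stays feasible and the cost descends.

        import numpy as np

        def scp_equality(grad, A, b, x0, L=10.0, iters=50, tol=1e-8):
            """Sketch of sequential convex programming for
            min f(x) s.t. Ax = b: at each iterate solve the convex
            surrogate min_d g'd + (L/2)||d||^2 s.t. A d = 0, so that
            feasibility is preserved and the true cost descends."""
            x = x0.copy()
            m, n = A.shape
            for _ in range(iters):
                g = grad(x)
                # KKT system of the surrogate QP: [L*I A'; A 0][d; lam] = [-g; 0]
                K = np.block([[L * np.eye(n), A.T],
                              [A, np.zeros((m, m))]])
                rhs = np.concatenate([-g, np.zeros(m)])
                d = np.linalg.solve(K, rhs)[:n]
                if np.linalg.norm(d) < tol:
                    break
                x = x + d
            return x

        # Toy non-convex cost restricted to the plane x1 + x2 = 1
        grad = lambda x: np.array([-np.sin(x[0]), x[1]])
        A, b = np.array([[1.0, 1.0]]), np.array([1.0])
        x = scp_equality(grad, A, b, x0=np.array([0.5, 0.5]))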

    Combining Model-Based and Model-Free Updates for Trajectory-Centric Reinforcement Learning

    Reinforcement learning (RL) algorithms for real-world robotic applications need a data-efficient learning process and the ability to handle complex, unknown dynamical systems. These requirements are handled well by model-based and model-free RL approaches, respectively. In this work, we aim to combine the advantages of these two types of methods in a principled manner. By focusing on time-varying linear-Gaussian policies, we enable a model-based algorithm based on the linear quadratic regulator (LQR) that can be integrated into the model-free framework of path integral policy improvement (PI2). We can further combine our method with guided policy search (GPS) to train arbitrary parameterized policies such as deep neural networks. Our simulation and real-world experiments demonstrate that this method can solve challenging manipulation tasks with comparable or better performance than model-free methods while maintaining the sample efficiency of model-based methods. A video presenting our results is available at https://sites.google.com/site/icml17pilqr. (Paper accepted to the International Conference on Machine Learning, ICML 2017.)
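    The paper's PILQR algorithm interleaves LQR-based model-based updates with PI2's cost-weighted averaging over sampled trajectories. The sketch below shows only the model-free PI2 ingredient on a toy double integrator; the update rule, temperature, noise scale, and toy dynamics are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def pi2_update(u, rollout_cost, n_samples=64, sigma=0.1, temperature=1.0):
            """One PI2-style update: perturb the nominal control sequence u,
            weight each sample by exp(-cost / temperature), and return the
            cost-weighted average. rollout_cost maps a control sequence to
            its total trajectory cost (e.g. via a simulator)."""
            rng = np.random.default_rng()
            eps = rng.normal(scale=sigma, size=(n_samples,) + u.shape)
            costs = np.array([rollout_cost(u + e) for e in eps])
            w = np.exp(-(costs - costs.min()) / temperature)  # stabilized softmax
            w /= w.sum()
            return u + np.tensordot(w, eps, axes=1)

        # Toy double-integrator reaching task: drive the position to 1.0
        def rollout_cost(u, dt=0.1):
            x = v = 0.0
            for a in u:
                v += a * dt
                x += v * dt
            return (x - 1.0) ** 2 + 1e-3 * float(u @ u)

        u = np.zeros(20)
        for _ in range(30):
            u = pi2_update(u, rollout_cost)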