
    Pseudospectral Model Predictive Control under Partially Learned Dynamics

    Trajectory optimization of a controlled dynamical system is an essential part of autonomy; however, many trajectory optimization techniques are limited by the fidelity of the underlying parametric model. In the field of robotics, a lack of model knowledge can be overcome with machine learning techniques, which use measurements to build a dynamical model from data. This paper takes the middle ground between these two approaches by introducing a semi-parametric representation of the underlying system dynamics. Our goal is to leverage the considerable information contained in a traditional physics-based model and combine it with a data-driven, non-parametric regression technique known as a Gaussian Process (GP). Integrating this semi-parametric model with pseudospectral model predictive control, we demonstrate the technique on both cart-pole and quadrotor simulations with unmodeled damping and parametric error. To manage parametric uncertainty, we introduce an algorithm that uses Sparse Spectrum Gaussian Processes (SSGPs) for online learning after each rollout. We implement this online learning technique on a cart-pole and a quadrotor, then demonstrate the use of online learning and obstacle avoidance for Dubins vehicle dynamics.
    Comment: Accepted but withdrawn from AIAA Scitech 201
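    The semi-parametric idea above (a physics prior plus a GP fit to its residuals) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `physics_model`, the toy scalar dynamics, and all names are our assumptions, and an exact GP stands in for the sparse spectrum approximation the paper uses.

```python
import numpy as np

def physics_model(x, u):
    # Toy parametric prior: linear dynamics with the damping term mis-modeled.
    return 0.9 * x + 0.5 * u

def rbf_kernel(A, B, lengthscale=1.0):
    # Squared-exponential kernel on scalar states.
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / lengthscale ** 2)

class SemiParametricModel:
    """Physics prior + exact GP regression on one-step prediction residuals."""

    def __init__(self, noise=1e-3):
        self.noise = noise

    def fit(self, X, U, X_next):
        # Fit the GP only to what the physics model gets wrong.
        self.Z = X
        residual = X_next - physics_model(X, U)
        K = rbf_kernel(X, X) + self.noise * np.eye(len(X))
        self.alpha = np.linalg.solve(K, residual)

    def predict(self, x, u):
        # Prediction = parametric prior + learned non-parametric correction.
        k = rbf_kernel(np.atleast_1d(x), self.Z)
        return physics_model(x, u) + (k @ self.alpha)[0]
```

    On a toy system with unmodeled cubic damping, the combined model predicts the next state more accurately than the physics prior alone, which is the property the paper's MPC relies on.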

    Data-efficient learning of feedback policies from image pixels using deep dynamical models

    Data-efficient reinforcement learning (RL) in continuous state-action spaces using very high-dimensional observations remains a key challenge in developing fully autonomous systems. We consider a particularly important instance of this challenge, the pixels-to-torques problem, where an RL agent learns a closed-loop control policy (torques) from pixel information only. We introduce a data-efficient, model-based reinforcement learning algorithm that learns such a closed-loop policy directly from pixel information. The key ingredient is a deep dynamical model that learns a low-dimensional feature embedding of images jointly with a predictive model in this low-dimensional feature space. Joint learning is crucial for long-term predictions, which lie at the core of the adaptive nonlinear model predictive control strategy that we use for closed-loop control. Compared to state-of-the-art RL methods for continuous states and actions, our approach learns quickly, scales to high-dimensional state spaces, is lightweight, and is an important step toward fully autonomous end-to-end learning from pixels to torques.
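    The "joint learning" point can be made concrete with a minimal linear stand-in for the deep dynamical model: an encoder E, decoder D, and latent predictor (P, B) trained together under a single loss that couples image reconstruction with one-step prediction in the feature space. The linear parameterization and all names here are assumptions for illustration; the paper uses deep networks for both maps.

```python
import numpy as np

def joint_loss_and_grads(X, Y, U, E, D, P, B, lam=1.0):
    """Joint objective: reconstruct observations AND predict the next latent.
    X, Y: observations at t and t+1 (d x N); U: controls (m x N)."""
    Z = E @ X
    R1 = D @ Z - X                 # reconstruction residual
    R2 = E @ Y - P @ Z - B @ U     # latent one-step prediction residual
    loss = np.sum(R1 ** 2) + lam * np.sum(R2 ** 2)
    gE = 2 * (D.T @ R1 @ X.T + lam * (R2 @ Y.T - P.T @ R2 @ X.T))
    gD = 2 * R1 @ Z.T
    gP = -2 * lam * R2 @ Z.T
    gB = -2 * lam * R2 @ U.T
    return loss, gE, gD, gP, gB

def train(X, Y, U, d, k, m, steps=200, lr=1e-4, seed=0):
    # Plain gradient descent on the joint objective; returns the loss history.
    rng = np.random.default_rng(seed)
    E, D = 0.1 * rng.standard_normal((k, d)), 0.1 * rng.standard_normal((d, k))
    P, B = 0.1 * rng.standard_normal((k, k)), 0.1 * rng.standard_normal((k, m))
    history = []
    for _ in range(steps):
        loss, gE, gD, gP, gB = joint_loss_and_grads(X, Y, U, E, D, P, B)
        E -= lr * gE; D -= lr * gD; P -= lr * gP; B -= lr * gB
        history.append(loss)
    return history
```

    Because one loss drives both maps, the embedding is shaped to be predictable, not just reconstructable, which is what makes the latent model usable for the multi-step predictions MPC needs.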

    MBMF: Model-Based Priors for Model-Free Reinforcement Learning

    Reinforcement learning is divided into two main paradigms: model-free and model-based. Each paradigm has strengths and limitations, and each has been successfully applied to real-world domains suited to its strengths. In this paper, we present a new approach aimed at bridging the gap between these two paradigms. We aim to take the best of both and combine them into an approach that is at once data-efficient and cost-savvy. We do so by learning a probabilistic dynamics model and leveraging it as a prior for the intertwined model-free optimization. As a result, our approach can exploit the generality and structure of the dynamics model, yet it is also capable of ignoring the model's inevitable inaccuracies by directly incorporating the evidence provided by direct observation of the cost. Preliminary results demonstrate that our approach outperforms purely model-based and model-free approaches, as well as the approach of simply switching from a model-based to a model-free setting.
    Comment: After we submitted the paper for consideration in CoRL 2017, we found a paper published in the recent past with a similar method (see related work for a discussion). Considering the similarities between the two papers, we have decided to retract our paper from CoRL 201
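    One concrete reading of "model as a prior" — our assumption for illustration, not necessarily the paper's exact construction — is a GP posterior over observed rollout cost whose prior mean is the cost predicted through the learned dynamics model. Near observed policies the direct cost evidence overrides model bias; far from data the prediction reverts to the model.

```python
import numpy as np

def rbf(A, B, lengthscale=0.5):
    # Squared-exponential kernel on scalar policy parameters.
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / lengthscale ** 2)

def posterior_cost(theta_train, cost_train, prior_mean, theta_query, noise=1e-4):
    """GP posterior mean over cost with a model-based prior mean function.

    The GP is fit to the residual between measured cost and the model's
    prediction, so observations correct the model only where data exists."""
    K = rbf(theta_train, theta_train) + noise * np.eye(len(theta_train))
    alpha = np.linalg.solve(K, cost_train - prior_mean(theta_train))
    return prior_mean(theta_query) + rbf(theta_query, theta_train) @ alpha
```

    For example, if the learned model predicts cost `theta**2` while every real rollout reveals a constant bias of +1, queries near the data return the corrected cost, while distant queries fall back to the model's estimate.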

    Gaussian Process Model Predictive Control of An Unmanned Quadrotor

    The Model Predictive Control (MPC) trajectory tracking problem of an unmanned quadrotor with input and output constraints is addressed. In this article, the dynamic models of the quadrotor are obtained purely from operational data in the form of probabilistic Gaussian Process (GP) models. This differs from conventional models obtained through Newtonian analysis. A hierarchical control scheme handles the trajectory tracking problem, with the translational subsystem in the outer loop and the rotational subsystem in the inner loop. Constrained GP-based MPC problems are formulated separately for both subsystems. The resulting MPC problems are typically nonlinear and non-convex. We derive a GP-based local dynamical model that allows these optimization problems to be relaxed to convex ones, which can be solved efficiently with a simple active-set algorithm. The performance of the proposed approach is compared with an existing unconstrained Nonlinear Model Predictive Control (NMPC) scheme. Simulation results show that the two approaches exhibit similar trajectory tracking performance; however, our approach has the advantage of incorporating constraints on the control inputs, and it requires only 20% of the computational time of NMPC.
    Comment: arXiv admin note: text overlap with arXiv:1612.0121
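    The local-model step can be illustrated as follows — a hedged sketch, not the paper's derivation: linearize the learned mean dynamics around the current operating point (here by finite differences) to obtain a local affine model x⁺ ≈ A x + B u + c, on which a constrained, convex MPC problem can then be posed. The function names are ours.

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-5):
    """Finite-difference linearization of mean dynamics f(x, u) -> next state.
    Returns (A, B, c) such that f(x, u) ~= A @ x + B @ u + c near (x0, u0)."""
    n, m = len(x0), len(u0)
    f0 = f(x0, u0)
    # Perturb each state and input coordinate in turn to build the Jacobians.
    A = np.column_stack([(f(x0 + eps * np.eye(n)[:, i], u0) - f0) / eps
                         for i in range(n)])
    B = np.column_stack([(f(x0, u0 + eps * np.eye(m)[:, j]) - f0) / eps
                         for j in range(m)])
    c = f0 - A @ x0 - B @ u0
    return A, B, c
```

    On a dynamics function that is already linear, the recovered (A, B) matrices match the true ones, confirming the relaxation is exact in that limit; for a GP posterior mean, the same call yields the local model the convex solver consumes.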