    Model-based approaches for learning control from multi-modal data

    Methods like deep reinforcement learning (DRL) have gained increasing attention for solving general continuous control tasks in a model-free, end-to-end fashion. However, these algorithms have proven difficult to apply to real-world systems due to poor sample efficiency and an inability to handle state and control constraints. We introduce and demonstrate a general paradigm that combines model learning with online planning for control, and which can handle a wide range of problems using both traditional and non-traditional sensor information. Rather than relying on popular model-free RL methods, learning a model from data and performing online planning in the form of model predictive control (MPC) can be far more data-efficient and practical for deployment on real robotic systems. In addition to a generally applicable sample-based planning strategy, we investigate a specific formulation of model learning whose linear structure can be exploited for efficient control. The algorithms are validated both in simulation and on real robotic platforms, namely an agricultural berry-picking robot with a soft continuum arm. The model-based method not only solves a challenging soft-body control task, but can also be deployed in a field setting where model-free RL is bottlenecked by data efficiency.
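    The abstract does not give implementation details, but the general recipe it describes (learn a dynamics model, then plan online with sampled action sequences) can be sketched with a simple random-shooting MPC loop. Everything below is illustrative: the `dynamics` function stands in for a learned model, and the double-integrator system, cost weights, horizon, and sample count are all assumptions, not details from the work itself.

    ```python
    import numpy as np

    def random_shooting_mpc(dynamics, cost, state, horizon=10, n_samples=500,
                            action_dim=1, action_low=-1.0, action_high=1.0,
                            rng=None):
        """Sample random action sequences, roll each one out through the
        (learned) dynamics model, and return the first action of the
        lowest-cost sequence. Hypothetical sketch of sample-based MPC."""
        rng = np.random.default_rng() if rng is None else rng
        # Candidate action sequences: shape (n_samples, horizon, action_dim).
        # Sampling inside [action_low, action_high] enforces control limits,
        # one practical advantage of MPC over unconstrained policies.
        actions = rng.uniform(action_low, action_high,
                              size=(n_samples, horizon, action_dim))
        total_cost = np.zeros(n_samples)
        states = np.repeat(state[None, :], n_samples, axis=0)
        for t in range(horizon):
            states = dynamics(states, actions[:, t, :])
            total_cost += cost(states, actions[:, t, :])
        best = np.argmin(total_cost)
        return actions[best, 0, :]  # execute only the first action (receding horizon)

    # Toy stand-in for a learned model: a 1-D double integrator
    # (state = [position, velocity], control = acceleration).
    def dynamics(s, a, dt=0.1):
        pos, vel = s[:, 0], s[:, 1]
        return np.stack([pos + dt * vel, vel + dt * a[:, 0]], axis=1)

    def cost(s, a):
        # Quadratic cost: drive position and velocity to zero with small effort.
        return s[:, 0] ** 2 + 0.1 * s[:, 1] ** 2 + 0.01 * a[:, 0] ** 2

    state = np.array([1.0, 0.0])
    for _ in range(100):
        u = random_shooting_mpc(dynamics, cost, state,
                                rng=np.random.default_rng(0))
        state = dynamics(state[None, :], u[None, :])[0]
    ```

    Replanning from the current state at every step is what makes this robust to model error, which is why a coarse learned model can still yield usable control.
    
    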