
    Maximum Likelihood Methods for Inverse Learning of Optimal Controllers

    This paper presents a framework, based on the Karush-Kuhn-Tucker (KKT) conditions, for inverse learning of objective functions in constrained optimal control problems. We discuss three variants corresponding to different model assumptions and computational complexities. The first method uses a convex relaxation of the KKT conditions and serves as the benchmark. The main contribution of this paper is two learning methods that combine the KKT conditions with maximum likelihood estimation. The key benefit of this combination is the systematic treatment of constraints for learning from noisy data with a branch-and-bound algorithm using likelihood arguments. This paper discusses theoretical properties of the learning methods and presents simulation results that highlight the advantages of using the maximum likelihood formulation for learning objective functions.
    Comment: 21st IFAC World Congress.
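
    To make the combination concrete, the following Python sketch treats each observation as a noisy one-step optimal decision, writes the KKT stationarity residual of an assumed quadratic objective, and fits the cost weights by maximizing a Gaussian likelihood over those residuals. The one-step objective, the matrix B, the noise model, and the data are illustrative assumptions, not the paper's setup.

        import numpy as np
        from scipy.optimize import minimize

        # Minimal sketch (not the paper's algorithm): each sample is a noisy
        # one-step optimum of the assumed cost w0*||x + B u||^2 + w1*||u||^2.
        rng = np.random.default_rng(0)
        n, m, T = 2, 1, 200
        B = np.ones((n, m))
        w_true = np.array([0.7, 0.3])          # ground-truth weights, sum to 1

        X = rng.normal(size=(T, n))
        # Closed-form optimizer of the toy objective, plus observation noise.
        K = np.linalg.solve(w_true[0] * B.T @ B + w_true[1] * np.eye(m),
                            -w_true[0] * B.T)
        U = X @ K.T + 0.05 * rng.normal(size=(T, m))

        def neg_log_lik(theta):
            # Weights normalized to sum to 1: the cost scale is unobservable.
            w0 = 1.0 / (1.0 + np.exp(-theta[0]))
            sigma = np.exp(theta[1])
            # KKT stationarity residual: gradient of the cost in u at (x, u).
            resid = w0 * (X + U @ B.T) @ B + (1.0 - w0) * U
            return 0.5 * np.sum(resid**2) / sigma**2 + resid.size * np.log(sigma)

        fit = minimize(neg_log_lik, x0=np.zeros(2), method="Nelder-Mead")
        w0_hat = 1.0 / (1.0 + np.exp(-fit.x[0]))
        print("estimated weights:", w0_hat, 1.0 - w0_hat)

    The weight normalization reflects the usual scale invariance of inverse problems: scaling the whole objective leaves the optimal behavior, and hence the data, unchanged.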

    Individual Human Behavior Identification Using an Inverse Reinforcement Learning Method

    Shared control techniques have great potential to create synergies in human-machine interaction for efficient and safe applications. However, optimal interaction requires the machine to account for the individual behavior of its human partner. A widespread approach to modeling human behavior is optimal control theory, in which the movement trajectories of a human arise from an optimized cost function. The aim of identification is thus to determine the parameters of a cost function that explains observed human motion. The central thesis of this paper is that individual cost function parameters which describe specific behavior can be determined by means of Inverse Reinforcement Learning. We show the applicability of the approach with a tracking control task example, in which the subject follows a reference trajectory using a steering wheel. The study confirms that optimal control is suitable for modeling individual human behavior and demonstrates the suitability of Inverse Reinforcement Learning for determining the cost function parameters that explain the measured data.
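
    As an illustration of the identification step, the Python sketch below casts the tracking task as a discrete-time LQR problem and searches for the cost weight ratio whose optimal controller best reproduces an observed trajectory. The steering model (A, B), the cost structure, and the synthetic stand-in for measured data are assumptions made for the example.

        import numpy as np
        from scipy.linalg import solve_discrete_are

        # Minimal sketch (assumptions throughout): the tracking task as a
        # discrete-time LQR problem with a hypothetical steering model (A, B);
        # x[0] is the tracking error, x[1] its rate.
        A = np.array([[1.0, 0.1], [0.0, 1.0]])
        B = np.array([[0.0], [0.1]])

        def lqr_gain(q, r):
            Q = np.diag([q, 0.0])          # penalize tracking error only
            R = np.array([[r]])
            P = solve_discrete_are(A, B, Q, R)
            return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

        def simulate(K, x0, T=100):
            X = [x0]
            for _ in range(T - 1):
                X.append(A @ X[-1] - B @ (K @ X[-1]))
            return np.array(X)

        # Synthetic stand-in for the measured steering trajectories, generated
        # with an unknown "individual" weight ratio q/r = 5.
        x0 = np.array([1.0, 0.0])
        X_obs = simulate(lqr_gain(q=5.0, r=1.0), x0)

        # Inverse step: pick the ratio whose optimal behavior fits the data.
        ratios = np.logspace(-1, 2, 50)
        errors = [np.sum((simulate(lqr_gain(q, 1.0), x0) - X_obs) ** 2)
                  for q in ratios]
        print("identified q/r:", ratios[int(np.argmin(errors))])

    Matching predicted optimal trajectories to measurements, rather than matching controller gains, keeps this kind of identification applicable when only motion data are recorded.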

    Model-Based Inverse Reinforcement Learning from Visual Demonstrations

    Scaling model-based inverse reinforcement learning (IRL) to real robotic manipulation tasks with unknown dynamics remains an open problem. The key challenges lie in learning good dynamics models, developing algorithms that scale to high-dimensional state spaces, and learning from both visual and proprioceptive demonstrations. In this work, we present a gradient-based inverse reinforcement learning framework that utilizes a pre-trained visual dynamics model to learn cost functions when given only visual human demonstrations. The learned cost functions are then used to reproduce the demonstrated behavior via visual model predictive control. We evaluate our framework on hardware on two basic object manipulation tasks.
    Comment: Accepted at the 4th Conference on Robot Learning (CoRL 2020), Cambridge, MA, USA.
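
    The following PyTorch sketch mirrors the gradient-based structure under heavy simplification: a fixed linear stub stands in for the pre-trained visual dynamics model, the cost is a weighted distance to a goal embedding, and the cost weights are updated by differentiating through a few inner gradient-based planning steps so that the planned trajectory matches the demonstration. All names and dimensions here are illustrative, not the paper's implementation.

        import torch

        # Minimal sketch (not the released code): bilevel, gradient-based
        # cost learning through a differentiable dynamics model. `dynamics`
        # is a fixed linear stub standing in for the pre-trained visual model.
        torch.manual_seed(0)
        d_state, d_act, H = 4, 2, 10
        A = torch.eye(d_state) + 0.05 * torch.randn(d_state, d_state)
        B = 0.1 * torch.randn(d_state, d_act)

        def dynamics(z, u):
            return z @ A.T + u @ B.T       # stand-in for the learned model

        def rollout(z0, U):
            zs = [z0]
            for t in range(H):
                zs.append(dynamics(zs[-1], U[t]))
            return torch.stack(zs)

        # Learned cost: weighted distance to a goal embedding (an assumption;
        # the paper defines costs on features of the visual model).
        w = torch.ones(d_state, requires_grad=True)
        z_goal = torch.randn(d_state)

        def cost(traj):
            return ((traj - z_goal) ** 2 * torch.softmax(w, 0)).sum()

        # "Demonstration": latent states from a synthetic expert rollout.
        z0 = torch.randn(d_state)
        traj_demo = rollout(z0, 0.1 * torch.randn(H, d_act)).detach()

        opt = torch.optim.Adam([w], lr=0.05)
        for it in range(100):
            # Inner planning: a few gradient steps on the action sequence.
            U = torch.zeros(H, d_act, requires_grad=True)
            for _ in range(5):
                g, = torch.autograd.grad(cost(rollout(z0, U)), U,
                                         create_graph=True)
                U = U - 0.5 * g
            # Outer IRL step: planned trajectory should match the demo.
            loss = ((rollout(z0, U) - traj_demo) ** 2).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
        print("learned cost weights:", torch.softmax(w, 0).detach())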

    Constrained Inverse Optimal Control with Application to a Human Manipulation Task

    This paper presents an inverse optimal control methodology and its application to training a predictive model of human motor control from a manipulation task. It introduces a convex formulation for learning both the objective function and the constraints of an infinite-horizon constrained optimal control problem with nonlinear system dynamics. The inverse approach uses Bellman's principle of optimality to formulate the infinite-horizon optimal control problem as a shortest-path problem, and Lagrange multipliers to identify the constraints. We highlight the key benefit of the shortest-path formulation: the predictive model can be trained with short, selected trajectory segments. The method is applied to training a predictive model of the movements of a human subject in a manipulation task. The study indicates that individual human movements can be predicted with low error using an infinite-horizon optimal control problem with constraints on shoulder movement.
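
    To illustrate why the shortest-path formulation helps, the Python sketch below recovers cost weights from a single demonstrated path segment on a toy graph via linear programming: Bellman equalities hold along the demonstration, any alternative edge must be worse by a margin t, and value variables V[s] encode optimality of the demonstrated segment. The graph, the edge features, and the lower bound on the weights are assumptions for the toy example.

        import numpy as np
        from scipy.optimize import linprog

        # Toy graph: edge cost = w . phi(edge), weights w unknown. The demo
        # segment 0 -> 1 -> 3 (goal 3) must be Bellman-optimal.
        edges = {(0, 1): [1.0, 0.0], (0, 2): [0.0, 1.0],
                 (1, 3): [1.0, 0.0], (2, 3): [0.0, 2.0]}
        demo = [(0, 1), (1, 3)]            # observed shortest-path segment

        # Variables x = [w0, w1, V0, V1, V2, t]; V at the goal is fixed to 0.
        nv = 6
        def vi(s):                         # index of value variable V[s]
            return 2 + s

        A_eq, b_eq = [], []
        # Bellman equalities along the demo, plus node 2's only outgoing edge.
        for (s, sp) in demo + [(2, 3)]:
            row = np.zeros(nv)
            row[vi(s)] = 1.0
            if sp != 3:
                row[vi(sp)] = -1.0
            row[0:2] = -np.array(edges[(s, sp)])
            A_eq.append(row); b_eq.append(0.0)
        row = np.zeros(nv); row[0:2] = 1.0  # normalization: w0 + w1 = 1
        A_eq.append(row); b_eq.append(1.0)

        # Margin inequality: leaving node 0 via the off-demo edge 0 -> 2 must
        # cost at least t more than the demonstrated value V[0].
        row = np.zeros(nv)
        row[vi(0)] = 1.0; row[5] = 1.0
        row[0:2] -= np.array(edges[(0, 2)]); row[vi(2)] -= 1.0
        A_ub, b_ub = [row], [0.0]

        # Maximize the margin t; w >= 0.1 guards against the degenerate
        # all-zero cost that trivially satisfies every constraint.
        c = np.zeros(nv); c[5] = -1.0
        bounds = [(0.1, 1.0)] * 2 + [(0.0, None)] * 4
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds)
        print("recovered weights:", res.x[:2], "margin:", res.x[5])

    Because each demonstrated step only adds local Bellman constraints, short trajectory segments suffice, which is the benefit highlighted above.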