4 research outputs found

    Automatic LQR Tuning Based on Gaussian Process Global Optimization

    Full text link
    This paper proposes an automatic controller tuning framework based on linear optimal control combined with Bayesian optimization. With this framework, an initial set of controller gains is automatically improved according to a pre-defined performance objective evaluated from experimental data. The underlying Bayesian optimization algorithm is Entropy Search, which represents the latent objective as a Gaussian process and constructs an explicit belief over the location of the objective minimum. This is used to maximize the information gain from each experimental evaluation. Thus, this framework shall yield improved controllers with fewer evaluations compared to alternative approaches. A seven-degree-of-freedom robot arm balancing an inverted pole is used as the experimental demonstrator. Results of two- and four-dimensional tuning problems highlight the method's potential for automatic controller tuning on robotic platforms. Comment: 8 pages, 5 figures, to appear in IEEE 2016 International Conference on Robotics and Automation. Video demonstration of the experiments available at https://am.is.tuebingen.mpg.de/publications/marco_icra_201
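    The tuning loop the abstract describes (propose gains from LQR design weights, evaluate a performance objective on the plant, keep the best) can be sketched minimally. This is not the paper's Entropy Search method: the GP-based acquisition is replaced here by a plain sweep over one hypothetical design weight `q`, and the robot experiment by a simulated double-integrator plant. All names and numbers below are illustrative assumptions, not from the paper.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Toy plant standing in for the experiment: double integrator.
    A = np.array([[0.0, 1.0], [0.0, 0.0]])
    B = np.array([[0.0], [1.0]])
    R = np.eye(1)

    def lqr_gain(q):
        """LQR gain for the candidate design weight Q = q * I."""
        P = solve_continuous_are(A, B, q * np.eye(2), R)
        return np.linalg.inv(R) @ B.T @ P  # shape (1, 2)

    def experiment_cost(K, dt=0.01, steps=2000):
        """Stand-in for the experimental objective: simulate the closed
        loop from a fixed initial state and accumulate x'x + u'u
        (forward Euler integration)."""
        x = np.array([1.0, 0.0])
        J = 0.0
        for _ in range(steps):
            u = -(K @ x)
            J += (x @ x + float(u @ u)) * dt
            x = x + (A @ x + B @ u) * dt
        return J

    # "Tuning": evaluate candidate design weights, keep the cheapest one.
    candidates = [0.1, 1.0, 10.0]
    costs = {q: experiment_cost(lqr_gain(q)) for q in candidates}
    best_q = min(costs, key=costs.get)
    ```

    The point of the paper's GP surrogate is precisely to avoid this kind of exhaustive sweep when each evaluation is a costly hardware experiment.
    
    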

    A revisit to inverse optimality of linear systems

    No full text
    In this article, we revisit the problem of inverse optimality for linear systems. By applying certain explicit formulae for coprime matrix fraction descriptions (CMFD) of linear systems, we propose a necessary and sufficient condition for a given stabilizing state feedback law to be optimal for some quadratic performance index. Compared to existing results in the literature, the proposed condition is simpler to check and interpret. Moreover, it reduces the redundancy in the solutions of the associated algebraic Riccati equation (ARE). As a direct application of the proposed results, we consider the inverse optimality of observer-based state feedback. Specifically, for the case where the state is not fully known, we consider the inverse optimality of an observer-based state feedback for the closed-loop system augmented by an observer. It is shown that the observer-based state feedback is inverse optimal for the closed-loop system with some general forms of cost functions only if the original state feedback is inverse optimal for the original system with certain cost functions, irrespective of the choice of the observer. This coincides with existing results in the literature. Some other applications of the proposed results are also discussed, and we illustrate the proposed results through an example.
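    The paper's condition is stated via coprime matrix fraction descriptions, which the abstract does not spell out. What can be sketched from standard LQR theory alone is the algebraic relation underlying any inverse-optimality certificate: if K = R⁻¹B'P for a symmetric P solving the ARE, then rearranging the ARE gives Q = -(A'P + PA) + K'RK, so a gain K is inverse optimal exactly when such a P exists with the recovered Q positive semidefinite. A minimal numerical check on a toy system, with all matrices chosen for illustration:

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Toy system and a quadratic cost (illustrative values).
    A = np.array([[0.0, 1.0], [0.0, 0.0]])
    B = np.array([[0.0], [1.0]])
    R = np.eye(1)
    Q = np.diag([2.0, 1.0])

    # Forward direction: design an optimal gain K from (Q, R).
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.inv(R) @ B.T @ P

    # Inverse direction: given (A, B, K, R) and the certificate P,
    # the rearranged ARE recovers the state weight:
    #   Q_rec = -(A'P + P A) + K' R K
    # K is inverse optimal iff Q_rec is positive semidefinite.
    Q_rec = -(A.T @ P + P @ A) + K.T @ R @ K
    ```

    For a gain produced by LQR design, `Q_rec` reproduces the original `Q`; the paper's contribution is a checkable condition for an arbitrary given gain, where no P is known in advance and the ARE solution set is redundant.
    
    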