    A simple solution to the finite-horizon LQ problem with zero terminal state.

    Constrained Finite Receding Horizon Linear Quadratic Control

    Issues of feasibility, stability and performance are considered for a finite-horizon formulation of receding horizon control (RHC) for linear systems under mixed linear state and control constraints. It is shown that for a sufficiently long horizon, a receding horizon policy will remain feasible and result in stability, even when no end constraint is imposed. In addition, offline finite-horizon calculations can be used to determine not only a stabilizing horizon length, but also guaranteed performance bounds for the receding horizon policy. These calculations are demonstrated on two examples.
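
    As a point of reference (not the constrained algorithm of the paper), a receding horizon LQ controller can be sketched as follows: at every step a finite-horizon Riccati recursion is solved over the horizon and only the first feedback move is applied. The constrained formulation in the abstract would instead solve a quadratic program over the horizon at each step; the function name finite_horizon_gain, the model A, B, the weights Q, R and all numerical values below are purely illustrative.

        import numpy as np

        def finite_horizon_gain(A, B, Q, R, N):
            """Backward Riccati recursion over N steps; returns the first-step feedback gain."""
            P = Q.copy()                                  # terminal weight (no end constraint)
            for _ in range(N):
                K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
                P = Q + A.T @ P @ (A - B @ K)
            return K

        A = np.array([[1.0, 0.1], [0.0, 1.0]])            # assumed double-integrator-like model
        B = np.array([[0.0], [0.1]])
        Q, R = np.eye(2), np.array([[0.1]])

        x = np.array([1.0, 0.0])
        for t in range(50):                               # receding-horizon loop
            K = finite_horizon_gain(A, B, Q, R, N=20)     # re-solve over the horizon
            x = A @ x - B @ (K @ x)                       # apply only the first move u = -K x

    In this unconstrained linear sketch the gain is the same at every step, so re-solving is redundant; with the mixed state and control constraints of the paper, each step solves a genuinely different constrained finite-horizon problem, and lengthening the horizon N plays the role of the "sufficiently long horizon" in the abstract.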

    A reduction technique for Generalised Riccati Difference Equations

    This paper proposes a reduction technique for the generalised Riccati difference equation arising in optimal control and optimal filtering. This technique relies on a study of the generalised discrete algebraic Riccati equation. In particular, an analysis of the eigenstructure of the corresponding extended symplectic pencil makes it possible to identify a subspace in which all the solutions of the generalised discrete algebraic Riccati equation are coincident. This subspace is the key to deriving a decomposition technique for the generalised Riccati difference equation that isolates its nilpotent part, which becomes constant in a number of steps equal to the nilpotency index of the closed-loop matrix, from another part that can be computed by iterating a reduced-order generalised Riccati difference equation.
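
    For context, one common form of the generalised Riccati difference equation is the backward recursion below, in which the ordinary inverse is replaced by a Moore-Penrose pseudo-inverse; the sign convention and the presence of the cross-weight S are assumptions, and the paper's reduced-order decomposition is not reproduced here.

        X_t = Q + A^\top X_{t+1} A
              - (A^\top X_{t+1} B + S)\,(R + B^\top X_{t+1} B)^{\dagger}\,(B^\top X_{t+1} A + S^\top),
        \qquad t = N-1, \ldots, 0,

    iterated backward from a terminal condition X_N, where ^\dagger denotes the Moore-Penrose pseudo-inverse; the generalised discrete algebraic Riccati equation studied in the paper is the corresponding fixed-point equation in X.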

    The turnpike property in finite-dimensional nonlinear optimal control

    Turnpike properties were established long ago for finite-dimensional optimal control problems arising in econometrics. They refer to the fact that, under quite general assumptions, the optimal solutions of a given optimal control problem posed over a large time horizon consist approximately of three pieces: the first and last are transient short-time arcs, and the middle piece is a long-time arc staying exponentially close to the optimal steady-state solution of an associated static optimal control problem. We provide in this paper a general version of a turnpike theorem, valid for nonlinear dynamics without any specific assumption and for very general terminal conditions. Not only is the optimal trajectory shown to remain exponentially close to a steady state, but so is the corresponding adjoint vector of the Pontryagin maximum principle. The exponential closeness is quantified with the use of appropriate normal forms of Riccati equations. We then show how the property on the adjoint vector can be used to successfully initialize a numerical direct method or a shooting method. In particular, we provide a variant of the usual shooting method in which the adjoint vector is initialized not at the initial time but at the middle of the trajectory.
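
    To make the shooting idea concrete, the following is a minimal single-shooting sketch for a scalar linear-quadratic example: the unknown initial adjoint p0 is chosen so that the terminal condition p(T) = 0 of the Pontryagin maximum principle is met. It does not implement the paper's variant, which initializes the adjoint at the middle of the trajectory; all names and numerical values are assumptions.

        import numpy as np

        a, b, q, r = -0.5, 1.0, 1.0, 0.1                  # assumed scalar problem data
        x0, T, n = 1.0, 5.0, 500
        dt = T / n

        def shoot(p0):
            """Integrate state and adjoint forward (explicit Euler); return the terminal adjoint p(T)."""
            x, p = x0, p0
            for _ in range(n):
                u = -b * p / r                            # stationarity of the Hamiltonian
                x, p = x + dt * (a * x + b * u), p + dt * (-q * x - a * p)
            return p

        # The residual p(T) is affine in p0 for this linear problem: one secant step finds the root.
        p_lo, p_hi = 0.0, 10.0
        p_star = p_lo - shoot(p_lo) * (p_hi - p_lo) / (shoot(p_hi) - shoot(p_lo))
        print("initial adjoint:", p_star, "terminal residual:", shoot(p_star))

    Because the example is linear-quadratic, a single secant step recovers the root; a nonlinear problem would require a Newton or quasi-Newton iteration on the same residual, and the quality of the initial guess for the adjoint becomes the decisive issue, which is exactly what the turnpike structure is exploited for in the paper.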

    The extended symplectic pencil and the finite-horizon LQ problem with two-sided boundary conditions

    This note introduces a new analytic approach to the solution of a very general class of finite-horizon optimal control problems formulated for discrete-time systems. This approach provides a parametric expression for the optimal control sequences, as well as the corresponding optimal state trajectories, by exploiting a new decomposition of the so-called extended symplectic pencil. Importantly, the results established in this paper hold under assumptions that are weaker than the ones considered in the literature so far. Indeed, this approach requires neither the regularity of the symplectic pencil nor the modulus controllability of the underlying system. In the development of the approach presented in this paper, several ancillary results of independent interest on generalised Riccati equations and on the eigenstructure of the extended symplectic pencil will also be presented.
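
    As background (using one common sign convention, which may differ from the paper's), the extended symplectic pencil arises from the stationarity conditions of the discrete-time LQ problem with cost \frac{1}{2}\sum_t (x_t^\top Q x_t + 2\,x_t^\top S u_t + u_t^\top R u_t) subject to x_{t+1} = A x_t + B u_t:

        x_{t+1} = A x_t + B u_t, \qquad
        \lambda_t = Q x_t + S u_t + A^\top \lambda_{t+1}, \qquad
        0 = S^\top x_t + R u_t + B^\top \lambda_{t+1},

        \text{i.e.}\quad M \xi_{t+1} = N \xi_t, \quad \xi_t = (x_t, \lambda_t, u_t), \quad
        M = \begin{bmatrix} I & 0 & 0 \\ 0 & A^\top & 0 \\ 0 & -B^\top & 0 \end{bmatrix}, \quad
        N = \begin{bmatrix} A & 0 & B \\ -Q & I & -S \\ S^\top & 0 & R \end{bmatrix},

    so that the extended symplectic pencil is zM - N; as the abstract notes, the approach of the paper does not require this pencil to be regular.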

    A Family of Iterative Gauss-Newton Shooting Methods for Nonlinear Optimal Control

    This paper introduces a family of iterative algorithms for unconstrained nonlinear optimal control. We generalize the well-known iLQR algorithm to different multiple-shooting variants, combining advantages such as straightforward initialization and closed-loop forward integration. All algorithms have similar computational complexity, i.e., linear complexity in the time horizon, and can be derived in the same computational framework. We compare the full-step variants of our algorithms and present several simulation examples, including a high-dimensional underactuated robot subject to contact switches. Simulation results show that our multiple-shooting algorithms can achieve faster convergence, better local contraction rates and much shorter runtimes than classical iLQR, which makes them a superior choice for nonlinear model predictive control applications.
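
    The core Gauss-Newton step shared by such methods can be sketched as follows: linearize the dynamics and quadratize the cost along a nominal trajectory, run an LQR-style backward pass, then roll the resulting affine feedback policy forward. The sketch below is a single-shooting, full-step version without regularization or line search and without the defect terms that distinguish the multiple-shooting variants; all names and numerical values are illustrative.

        import numpy as np

        def ilqr_step(f, fx, fu, lx, lu, lxx, luu, x_traj, u_traj):
            """One Gauss-Newton iteration: LQR backward pass + closed-loop forward rollout."""
            N = len(u_traj)
            Vx, Vxx = lx(x_traj[N]), lxx(x_traj[N])          # terminal cost expansion
            k, K = [None] * N, [None] * N
            for t in reversed(range(N)):                     # backward pass
                A, B = fx(x_traj[t], u_traj[t]), fu(x_traj[t], u_traj[t])
                Qx  = lx(x_traj[t]) + A.T @ Vx
                Qu  = lu(u_traj[t]) + B.T @ Vx
                Qxx = lxx(x_traj[t]) + A.T @ Vxx @ A
                Quu = luu(u_traj[t]) + B.T @ Vxx @ B
                Qux = B.T @ Vxx @ A
                k[t] = -np.linalg.solve(Quu, Qu)             # feedforward correction
                K[t] = -np.linalg.solve(Quu, Qux)            # feedback gain
                Vx  = Qx  - Qux.T @ np.linalg.solve(Quu, Qu)
                Vxx = Qxx - Qux.T @ np.linalg.solve(Quu, Qux)
            x_new, u_new = [x_traj[0]], []
            for t in range(N):                               # closed-loop forward rollout
                u = u_traj[t] + k[t] + K[t] @ (x_new[t] - x_traj[t])
                u_new.append(u)
                x_new.append(f(x_new[t], u))
            return x_new, u_new

        # Illustrative linear-quadratic instance (assumed data; converges in one iteration).
        A0 = np.array([[1.0, 0.1], [0.0, 1.0]]); B0 = np.array([[0.0], [0.1]])
        f  = lambda x, u: A0 @ x + B0 @ u
        fx = lambda x, u: A0
        fu = lambda x, u: B0
        lx  = lambda x: x;       lxx = lambda x: np.eye(2)        # running/terminal cost 0.5*x'x
        lu  = lambda u: 0.1 * u; luu = lambda u: 0.1 * np.eye(1)  # control cost 0.05*u'u
        us = [np.zeros(1) for _ in range(20)]
        xs = [np.array([1.0, 0.0])]
        for u in us:                                         # nominal open-loop rollout
            xs.append(f(xs[-1], u))
        xs, us = ilqr_step(f, fx, fu, lx, lu, lxx, luu, xs, us)

    A nonlinear model would simply supply a nonlinear f together with its Jacobians fx and fu, and the iteration would be repeated until convergence; the multiple-shooting variants described in the abstract additionally keep an explicit state trajectory that need not satisfy the dynamics between iterations.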