50 research outputs found

    Streamlining of the state-dependent Riccati equation controller algorithm for an embedded implementation

    In many practical control problems the dynamics of the plant to be controlled are nonlinear. However, in most cases the controller design is based on a linear approximation of the dynamics. One of the reasons for this is that, in general, nonlinear control design methods are difficult to apply to practical problems. The State-Dependent Riccati Equation (SDRE) control approach is a relatively new practical approach to nonlinear control that retains the simplicity of the classical Linear Quadratic control method. This approach has recently been applied to control experimental autonomous air vehicles with relative success. To make the SDRE approach practical in applications where computational resources are limited and the dynamic models are more complex, it is necessary to re-examine and streamline this control algorithm. The main objective of this work is to identify implementation improvements that raise the performance of the SDRE algorithm. This is accomplished by analyzing the structure of the algorithm and the underlying functions used to implement it. At the core of the SDRE algorithm is the solution, in real time, of an Algebraic Riccati Equation. The impact of selecting a suitable algorithm to solve the Riccati Equation is analyzed. Three different algorithms were studied. Experimental results indicate that the Kleinman algorithm performs better than two other algorithms based on Newton's method. This work also demonstrates that appropriately setting a maximum number of iterations for the Kleinman approach can improve overall system performance without significantly degrading accuracy. Finally, a software implementation of the SDRE algorithm was developed and benchmarked to study the potential performance improvements of a hardware implementation. The test plant was an inverted pendulum simulation based on experimental hardware. Bottlenecks in the software implementation were identified, and a possible hardware design to remove one such bottleneck was developed.
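The Kleinman approach named in the abstract is a Newton-type iteration that reduces each Riccati solve to a sequence of Lyapunov equations, with the iteration cap acting as the speed/accuracy knob the abstract describes. The sketch below is an illustrative reconstruction in NumPy/SciPy, not the paper's benchmarked embedded implementation; the double-integrator plant and the initial gain `K0` are assumptions chosen for the example.

```python
# Kleinman (Newton) iteration for the continuous-time algebraic Riccati
# equation: each step solves one Lyapunov equation for the current gain.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def kleinman(A, B, Q, R, K0, max_iter=50, tol=1e-10):
    """K0 must stabilize (A - B @ K0). max_iter caps the per-call work,
    the tuning knob highlighted in the abstract for real-time SDRE use."""
    K, P_prev = K0, None
    for _ in range(max_iter):
        Acl = A - B @ K
        # Solve Acl^T P + P Acl = -(Q + K^T R K) for P.
        P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
        K = np.linalg.solve(R, B.T @ P)           # next policy/gain
        if P_prev is not None and np.max(np.abs(P - P_prev)) < tol:
            break
        P_prev = P
    return P, K

# Double integrator with a stabilizing initial gain (assumed example).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
K0 = np.array([[1.0, 1.0]])   # closed-loop poles in the left half-plane
P, K = kleinman(A, B, Q, R, K0)
```

For this plant the exact ARE solution is known in closed form, which makes the iteration easy to sanity-check against `scipy.linalg.solve_continuous_are`.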

    Suboptimal Stabilization of Unknown Nonlinear Systems via Extended State Observers

    This paper introduces a locally optimal stabilizer for multi-input multi-output autonomous nonlinear systems of any order with totally unknown dynamics. The control scheme proposed in this paper lies at the intersection of active disturbance rejection control (ADRC) and the state-dependent Riccati equation (SDRE) technique. It is shown that, using an extended state observer (ESO), a state-dependent coefficient matrix for the nonlinear system is obtainable, which the SDRE technique uses to construct an SDRE+ESO controller. As the SDRE technique is not guaranteed to be globally asymptotically stable, for systems with a known linearization at the equilibrium an algorithmic method is proposed for an approximate estimation of its region of attraction (ROA). Then, it is shown that global asymptotic stability is achievable using a switching controller constructed from the SDRE+ESO method and ADRC, applied inside and outside the estimated ROA, respectively. Comment: 6 pages, 1 figure.
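The ESO that this scheme builds on augments the state with the lumped unknown dynamics ("total disturbance") and estimates both from the measured output alone. The following is a minimal sketch of a linear ESO for a second-order plant, not the paper's design: the plant, the bandwidth `omega0`, and the pole-placement gains are all assumptions for illustration.

```python
# Minimal linear extended state observer (ESO) for x1' = x2,
# x2' = f(x) + b*u, estimating [x1, x2, f] from the output y = x1.
import numpy as np

def simulate_eso(T=5.0, dt=1e-3, omega0=10.0, b=1.0):
    x = np.array([1.0, 0.0])      # true plant state
    z = np.zeros(3)               # observer state [x1_hat, x2_hat, f_hat]
    # Gains placing all observer poles at -omega0 (bandwidth tuning).
    l1, l2, l3 = 3*omega0, 3*omega0**2, omega0**3
    u = 0.0                       # open loop, for clarity
    for _ in range(int(T / dt)):
        f = -x[0] - 0.5*x[1]      # "unknown" dynamics, hidden from the ESO
        x = x + dt * np.array([x[1], f + b*u])          # plant (Euler step)
        e = x[0] - z[0]           # output estimation error
        z = z + dt * np.array([z[1] + l1*e,             # ESO integrator chain
                               z[2] + b*u + l2*e,       # driven by e
                               l3*e])
    return x, z, -x[0] - 0.5*x[1]

x, z, f_true = simulate_eso()
```

After a short transient the extended state `z[2]` tracks the unknown term `f(x)`, which is exactly the quantity the SDRE+ESO construction recasts as a state-dependent coefficient.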

    A Galerkin Method for Large-scale Autonomous Differential Riccati Equations based on the Loewner Partial Order


    Linear-like policy iteration based optimal control for continuous-time nonlinear systems

    We propose a novel strategy to construct optimal controllers for continuous-time nonlinear systems by means of linear-like techniques, provided that the optimal value function is differentiable and quadratic-like. This assumption covers a wide range of cases and holds locally around an equilibrium under mild assumptions. The proposed strategy does not require solving the Hamilton-Jacobi-Bellman equation, a nonlinear partial differential equation that is known to be hard or impossible to solve. Instead, the Hamilton-Jacobi-Bellman equation is replaced with an easily solvable state-dependent Lyapunov matrix equation. We exploit a linear-like factorization of the underlying nonlinear system and a policy-iteration algorithm to yield a linear-like policy iteration for nonlinear systems. The proposed control strategy solves optimal nonlinear control problems in an asymptotically exact, yet still linear-like, manner. We prove optimality of the resulting solution and illustrate the results via four examples.
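The linear-like factorization these abstracts share writes the dynamics as f(x) = A(x)x, so linear machinery can be applied pointwise in the state. A common concrete instance is the SDRE feedback law, sketched below for a pendulum about its upright equilibrium; the model, factorization, and weights are illustrative assumptions, not taken from any of the papers above, and the pointwise ARE solve here stands in for the state-dependent equations those papers develop.

```python
# SDRE-style feedback: factor f(x) = A(x) x and solve an ARE at each state.
import numpy as np
from scipy.linalg import solve_continuous_are

g_over_l = 9.81   # pendulum parameter g/l, assumed

def sdre_control(x, Q=np.eye(2), R=np.eye(1)):
    # Pendulum about the upright: x1' = x2, x2' = (g/l) sin(x1) + u.
    # Factorization: sin(x1) = (sin(x1)/x1) * x1, well defined at x1 = 0.
    a21 = g_over_l * np.sinc(x[0] / np.pi)   # np.sinc(t) = sin(pi t)/(pi t)
    A = np.array([[0.0, 1.0], [a21, 0.0]])
    B = np.array([[0.0], [1.0]])
    P = solve_continuous_are(A, B, Q, R)     # pointwise ARE solve
    u = -np.linalg.solve(R, B.T @ P @ x)     # u = -R^{-1} B^T P(x) x
    return float(u[0])

# Closed-loop simulation from a large initial angle (forward Euler).
x = np.array([1.0, 0.0])
dt = 0.01
for _ in range(1000):
    u = sdre_control(x)
    x = x + dt * np.array([x[1], g_over_l * np.sin(x[0]) + u])
```

Re-solving the ARE at every state is exactly the real-time cost that motivates the streamlining work in the first abstract; in an embedded setting the general-purpose solver would typically be replaced by an iteration like Kleinman's, warm-started from the previous step.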