304 research outputs found

    The generalised discrete algebraic Riccati equation in linear-quadratic optimal control

    This paper investigates the properties of the solutions of the generalised discrete algebraic Riccati equation arising from the classic infinite-horizon linear quadratic (LQ) control problem. In particular, a geometric analysis is used to study the relationship between the solutions of the generalised Riccati equation, the output-nulling subspaces of the underlying system, and the corresponding reachability subspaces. This analysis reveals the presence of a subspace that plays an important role in the solution of the related optimal control problem, which is reflected in the generalised eigenstructure of the corresponding extended symplectic pencil. In establishing the main results of this paper, several ancillary problems on the discrete Lyapunov equation and spectral factorisation are also addressed and solved.
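
    For orientation, the equation in question is usually written as below; the generalised version replaces the matrix inverse with a Moore-Penrose pseudoinverse, so that R + B'XB need not be invertible. The form shown is a sketch without cross-weighting terms, and sign conventions may differ from the paper's:

        % Standard discrete algebraic Riccati equation (regular case)
        X = A^{\top} X A - A^{\top} X B \bigl(R + B^{\top} X B\bigr)^{-1} B^{\top} X A + Q
        % Generalised version: the Moore-Penrose pseudoinverse (dagger) replaces the inverse
        X = A^{\top} X A - A^{\top} X B \bigl(R + B^{\top} X B\bigr)^{\dagger} B^{\top} X A + Q

    In the regular case this is the equation solved by standard numerical routines such as scipy.linalg.solve_discrete_are; the generalised, possibly singular case analysed in the paper requires the geometric machinery developed there.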

    The extended symplectic pencil and the finite-horizon LQ problem with two-sided boundary conditions

    This note introduces a new analytic approach to the solution of a very general class of finite-horizon optimal control problems formulated for discrete-time systems. This approach provides a parametric expression for the optimal control sequences, as well as the corresponding optimal state trajectories, by exploiting a new decomposition of the so-called extended symplectic pencil. Importantly, the results established in this paper hold under assumptions that are weaker than the ones considered in the literature so far. Indeed, this approach requires neither the regularity of the symplectic pencil nor the modulus controllability of the underlying system. In the development of the approach, several ancillary results of independent interest on generalised Riccati equations and on the eigenstructure of the extended symplectic pencil are also presented.
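
    As a point of reference, the classical dynamic-programming solution of the standard finite-horizon LQ problem (regular case, one-sided terminal condition) is the backward Riccati recursion sketched below; the paper's pencil-based parametrisation dispenses with the regularity assumption this sketch relies on and handles two-sided boundary conditions. The function name and interface are illustrative, not taken from the paper.

        import numpy as np

        def finite_horizon_lq(A, B, Q, R, S, x0, N):
            """Backward Riccati recursion for the standard finite-horizon LQ problem.

            Textbook dynamic-programming solution for the regular case
            (R + B'PB invertible) with terminal weight S; a sketch only.
            """
            n, m = B.shape
            P = S.copy()                      # terminal cost weight
            gains = [None] * N
            for k in reversed(range(N)):      # backward sweep
                G = R + B.T @ P @ B
                K = np.linalg.solve(G, B.T @ P @ A)
                P = Q + A.T @ P @ A - A.T @ P @ B @ K
                gains[k] = K
            # forward simulation of the optimal state/input trajectories
            xs, us = [x0], []
            x = x0
            for k in range(N):
                u = -gains[k] @ x
                x = A @ x + B @ u
                us.append(u); xs.append(x)
            return np.array(xs), np.array(us)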

    Non-linear predictive control for manufacturing and robotic applications

    The paper discusses predictive control algorithms in the context of applications to robotics and manufacturing systems. Special features of such systems, as compared to traditional process control applications, require that the algorithms are capable of dealing with faster dynamics, more significant instabilities, and a more significant contribution of non-linearities to the system performance. The paper presents a general framework for state-space design of predictive algorithms. Linear algorithms are introduced first; attention then moves to non-linear systems. Methods of predictive control are presented which are based on a state-dependent state-space system description. These are illustrated on examples of rather difficult mechanical systems.
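
    To make the state-space predictive-control setting concrete, the sketch below implements the unconstrained linear case with the batch (condensed) prediction formulation; a state-dependent coefficient approach of the kind described in the paper would recompute A and B from the current state at every step, and constraints are omitted entirely. The function name and the horizon parameter Np are illustrative.

        import numpy as np

        def mpc_gain(A, B, Q, R, Np):
            """Unconstrained linear predictive control, batch (condensed) form.

            Minimises sum_{k=1..Np} x_k'Q x_k + u_k'R u_k over the horizon and
            returns the receding-horizon feedback gain for the first move.
            """
            n, m = B.shape
            # Prediction matrices: X = F x0 + G U, with U = [u_0; ...; u_{Np-1}]
            F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(Np)])
            G = np.zeros((Np * n, Np * m))
            for i in range(Np):
                for j in range(i + 1):
                    G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
            Qbar = np.kron(np.eye(Np), Q)
            Rbar = np.kron(np.eye(Np), R)
            H = G.T @ Qbar @ G + Rbar
            # Optimal sequence U* = -H^{-1} G' Qbar F x0; keep only the first move
            K_seq = np.linalg.solve(H, G.T @ Qbar @ F)
            return K_seq[:m, :]               # u_0 = -K x0 (receding horizon)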

    Discrete-time optimal preview control

    There are many situations in which one can preview future reference signals or future disturbances. Optimal preview control is concerned with designing controllers which use this preview to improve closed-loop performance. In this thesis a general preview control problem is presented which includes previewable disturbances, dynamic weighting functions, output feedback and non-previewable disturbances. It is then shown how a variety of problems may be cast as special cases of this general problem; of particular interest are the robust preview tracking problem and the problem of disturbance rejection with uncertainty in the previewed signal. The general preview problem is solved in both the H2 and H∞ settings. The H2 solution is a relatively straightforward extension of previously known results; however, our contribution is to provide a single framework that may be used as a reference work when tackling a variety of preview problems. We also provide some new analysis concerning the maximum possible reduction in closed-loop H2 norm which accrues from the addition of preview action. The solution to the H∞ problem involves a completely new approach to H∞ preview control, in which the structure of the associated Riccati equation is exploited in order to find an efficient algorithm for computing the optimal controller. The problem tackled here is also more generic than those previously appearing in the literature. The above theory finds obvious applications in the design of controllers for autonomous vehicles; however, a particular class of nonlinearities found in typical vehicle models presents additional problems. The final chapters are concerned with a generic framework for implementing vehicle preview controllers, and also a case study on preview control of a bicycle.
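
    A minimal model of how preview enters a controller: the previewed reference samples are held in a shift register appended to the plant state, and a standard discrete-time Riccati equation for the augmented system yields separate feedback and feedforward (preview) gains. This is only the textbook LQ tracking construction, not the thesis's H2/H∞ synthesis or its efficient Riccati-structure-exploiting algorithm; the function below and its interface are illustrative and assume a stabilisable, detectable plant.

        import numpy as np
        from scipy.linalg import solve_discrete_are, block_diag

        def preview_lq_gain(A, B, C, Qe, R, n_prev):
            """LQ tracking with preview: a sketch, not the thesis's method.

            The previewed samples r_k, ..., r_{k+n_prev} sit in a shift register
            appended to the plant state; a standard DARE for the augmented system
            then yields feedback and preview (feedforward) gains.
            """
            n, m = B.shape
            p = C.shape[0]
            nr = p * (n_prev + 1)
            # Shift register for the preview buffer: the r-blocks move up one slot per step
            S = np.zeros((nr, nr))
            S[:-p, p:] = np.eye(nr - p)
            A_aug = block_diag(A, S)
            B_aug = np.vstack([B, np.zeros((nr, m))])
            # Tracking error e_k = C x_k - r_k (r_k is the first block of the buffer)
            C_err = np.hstack([C, -np.eye(p), np.zeros((p, nr - p))])
            Q_aug = C_err.T @ Qe @ C_err
            P = solve_discrete_are(A_aug, B_aug, Q_aug, R)
            K = np.linalg.solve(R + B_aug.T @ P @ B_aug, B_aug.T @ P @ A_aug)
            return K[:, :n], K[:, n:]         # feedback gain, preview (feedforward) gain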

    Iterative and doubling algorithms for Riccati-type matrix equations: a comparative introduction

    We review a family of algorithms for Lyapunov- and Riccati-type equations which are all related to each other by the idea of doubling: they construct the iterate $Q_k = X_{2^k}$ of another naturally arising fixed-point iteration $(X_h)$ via a sort of repeated squaring. The equations we consider are Stein equations $X - A^*XA = Q$, Lyapunov equations $A^*X + XA + Q = 0$, discrete-time algebraic Riccati equations $X = Q + A^*X(I+GX)^{-1}A$, continuous-time algebraic Riccati equations $Q + A^*X + XA - XGX = 0$, palindromic quadratic matrix equations $A + QY + A^*Y^2 = 0$, and nonlinear matrix equations $X + A^*X^{-1}A = Q$. We draw comparisons among these algorithms, highlight the connections between them and to other algorithms such as subspace iteration, and discuss open issues in their theory. Comment: Review article for GAMM Mitteilungen
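
    The doubling idea is easiest to see on the Stein equation: the plain fixed-point iteration $X_{h+1} = Q + A^* X_h A$ converges linearly when the spectral radius of A is below one, while squaring A at each step jumps straight to the iterate $X_{2^k}$. A minimal NumPy sketch of this special case, with illustrative names, is given below; the review covers the analogous constructions for the Riccati-type equations as well.

        import numpy as np

        def stein_doubling(A, Q, tol=1e-12, max_steps=60):
            """Doubling (repeated squaring) for the Stein equation X - A*XA = Q.

            The naive iteration X_{h+1} = Q + A* X_h A converges to
            X = sum_k (A*)^k Q A^k when rho(A) < 1; doubling produces the
            iterate X_{2^k} directly by squaring A at each step.
            """
            X = Q.copy()
            Ak = A.copy()
            for _ in range(max_steps):
                X_new = X + Ak.conj().T @ X @ Ak   # X_{2^{k+1}} = X_{2^k} + A_k* X_{2^k} A_k
                Ak = Ak @ Ak                       # A_{k+1} = A_k^2
                if np.linalg.norm(X_new - X, 1) <= tol * np.linalg.norm(X_new, 1):
                    return X_new
                X = X_new
            return X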

    Online optimal and adaptive integral tracking control for varying discrete‐time systems using reinforcement learning

    A conventional closed-form solution to the optimal control problem via optimal control theory is only available under the assumption that the system dynamics/models are known and described as differential equations. Without such models, reinforcement learning (RL) has been successfully applied as a candidate technique to iteratively solve the optimal control problem for unknown or varying systems. For the optimal tracking control problem, existing RL techniques in the literature assume either the use of a predetermined feedforward input for the tracking control, restrictive assumptions on the reference model dynamics, or discounted tracking costs. Furthermore, by using discounted tracking costs, zero steady-state error cannot be guaranteed by the existing RL methods. This article therefore presents an optimal online RL tracking control framework for discrete-time (DT) systems, which does not impose any of the restrictive assumptions of the existing methods and equally guarantees zero steady-state tracking error. This is achieved by augmenting the original system dynamics with the integral of the error between the reference inputs and the tracked outputs for use in the online RL framework. It is further shown that the resulting value function for the DT linear quadratic tracker using the augmented formulation with integral control is also quadratic. This enables the development of Bellman equations, which use only the system measurements to solve the corresponding DT algebraic Riccati equation and obtain the optimal tracking control inputs online. Two RL strategies are thereafter proposed, based on value function approximation and Q-learning respectively, along with bounds on excitation for the convergence of the parameter estimates. Simulation case studies show the effectiveness of the proposed approach.
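
    For comparison with the learned solution, the model-based counterpart of the augmented formulation can be sketched as follows: the plant state is extended with the integral of the tracking error and a standard DARE is solved for the augmented system. The article's point is that the same gain can be obtained online from measurements, via value-function approximation or Q-learning, without knowing A and B; the function below, its name and its weighting arguments are illustrative.

        import numpy as np
        from scipy.linalg import solve_discrete_are, block_diag

        def integral_lq_tracker_gain(A, B, C, Qx, Qz, R):
            """Model-based sketch of the integral-augmented LQ tracker.

            The plant state is augmented with the integral of the tracking error,
            z_{k+1} = z_k + (r_k - C x_k), and a standard DARE is solved for the
            augmented system; the article's RL schemes recover this gain online
            from measurements instead.
            """
            n, m = B.shape
            p = C.shape[0]
            A_aug = np.block([[A, np.zeros((n, p))],
                              [-C, np.eye(p)]])
            B_aug = np.vstack([B, np.zeros((p, m))])
            Q_aug = block_diag(Qx, Qz)
            P = solve_discrete_are(A_aug, B_aug, Q_aug, R)
            K = np.linalg.solve(R + B_aug.T @ P @ B_aug, B_aug.T @ P @ A_aug)
            return K          # u_k = -K [x_k; z_k] (reference feedthrough omitted)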
