    Finite-horizon optimal control of linear and a class of nonlinear systems

    Traditionally, optimal control of dynamical systems with known system dynamics is obtained in a backward-in-time and offline manner by using either the Riccati or the Hamilton-Jacobi-Bellman (HJB) equation. In contrast, this dissertation investigates finite-horizon optimal regulation of both linear and nonlinear systems in a forward-in-time manner when the system dynamics are uncertain. Value and policy iterations are not used; instead, the value function (or Q-function for linear systems) and the control input are updated once per sampling interval, consistent with standard adaptive control. First, Paper I presents the optimal adaptive control of linear discrete-time systems with unknown system dynamics by using Q-learning and the Bellman equation while satisfying the terminal constraint; a novel update law that uses history information of the cost-to-go is derived. Paper II considers the design of the linear quadratic regulator in the presence of state and input quantization: quantization errors are eliminated via a dynamic quantizer design, and the parameter update law of Paper I is redesigned accordingly. Furthermore, Paper III develops an optimal adaptive state-feedback controller for general nonlinear discrete-time systems in affine form without knowledge of the system dynamics. In Paper IV, a neural-network-based observer is proposed to reconstruct the state vector and identify the dynamics, so that the control scheme of Paper III is extended to output feedback. Finally, Paper V considers the optimal regulation of quantized nonlinear systems with input constraints by introducing a non-quadratic cost functional. Closed-loop stability is demonstrated for all the controller designs developed in this dissertation by using Lyapunov analysis, and all the proposed schemes operate online and forward in time, making them practically viable --Abstract, page iv
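    The Q-learning idea behind Paper I can be illustrated with a minimal, simplified sketch: Q-learning policy iteration for an infinite-horizon LQR, where a quadratic Q-function is fit to Bellman-equation data and the learning update never reads the system matrices. This is not the dissertation's update law (which is finite-horizon, iteration-free, and handles a terminal constraint); the matrices A and B below are illustrative and are used only to simulate transitions.

    ```python
    import numpy as np

    # Hypothetical stable system, used only to generate data;
    # the learning update itself never reads A or B.
    rng = np.random.default_rng(0)
    A = np.array([[0.9, 0.1],
                  [0.0, 0.8]])
    B = np.array([[0.0],
                  [0.1]])
    Qc, R = np.eye(2), np.eye(1)
    n, m = 2, 1

    def features(z):
        """Monomials z_i*z_j (upper triangle), so theta . features(z) = z^T H z."""
        outer = np.outer(z, z)
        i, j = np.triu_indices(n + m)
        scale = np.where(i == j, 1.0, 2.0)  # off-diagonal terms appear twice
        return scale * outer[i, j]

    K = np.zeros((m, n))                    # initial stabilizing policy (A is stable)
    for _ in range(20):                     # policy iteration on the Q-function
        Phi, y = [], []
        for _ in range(200):
            x = rng.standard_normal(n)
            u = -K @ x + 0.1 * rng.standard_normal(m)  # exploration noise
            xn = A @ x + B @ u
            z = np.concatenate([x, u])
            zn = np.concatenate([xn, -K @ xn])
            Phi.append(features(z) - features(zn))     # Bellman-residual basis
            y.append(x @ Qc @ x + u @ R @ u)           # one-step cost
        theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)
        H = np.zeros((n + m, n + m))
        H[np.triu_indices(n + m)] = theta
        H = H + H.T - np.diag(np.diag(H))   # rebuild symmetric Q-function matrix
        K = np.linalg.solve(H[n:, n:], H[n:, :n])      # greedy policy improvement

    # Reference solution from the model-based Riccati recursion.
    P = np.eye(n)
    for _ in range(500):
        K_opt = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Qc + A.T @ P @ (A - B @ K_opt)
    K_opt = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    ```

    At convergence the learned gain matches the Riccati gain even though the least-squares update saw only state-action-cost data, which is the sense in which a Q-function makes the design model-free.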

    Episodic Learning with Control Lyapunov Functions for Uncertain Robotic Systems

    Many modern nonlinear control methods aim to endow systems with guaranteed properties, such as stability or safety, and have been successfully applied to the domain of robotics. However, model uncertainty remains a persistent challenge, weakening theoretical guarantees and causing implementation failures on physical systems. This paper develops a machine learning framework centered around Control Lyapunov Functions (CLFs) to adapt to parametric uncertainty and unmodeled dynamics in general robotic systems. The proposed method proceeds by iteratively updating estimates of Lyapunov function derivatives and improving controllers, ultimately yielding a stabilizing model-based quadratic program controller. The approach is validated on a planar Segway simulation, demonstrating substantial performance improvements obtained by iteratively refining a base model-free controller.
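    As a concrete, hypothetical instance of the CLF-based quadratic-program controllers this line of work builds on: for a control-affine system with a single affine CLF constraint, the min-norm QP has a closed-form solution, so the filter can be sketched without a solver. The double-integrator dynamics, nominal gain, and decay rate below are illustrative choices, not values from the paper.

    ```python
    import numpy as np

    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])            # double integrator (illustrative)
    B = np.array([[0.0],
                  [1.0]])
    n = 2

    # Build a CLF V(x) = x^T P x from a nominal gain via the Lyapunov equation
    # A_cl^T P + P A_cl = -I, solved with Kronecker products (no SciPy needed).
    K_nom = np.array([[1.0, 2.0]])        # places closed-loop poles at -1, -1
    A_cl = A - B @ K_nom
    Lhs = np.kron(np.eye(n), A_cl.T) + np.kron(A_cl.T, np.eye(n))
    P = np.linalg.solve(Lhs, -np.eye(n).flatten()).reshape(n, n)
    P = (P + P.T) / 2                     # symmetrize against round-off

    lam = 0.5                             # required decay rate: Vdot <= -lam*V

    def clf_qp(x, u_nom):
        """min_u ||u - u_nom||^2  s.t.  LfV + LgV u <= -lam V
        (a single affine constraint gives a closed-form projection)."""
        V = x @ P @ x
        LfV = 2 * x @ P @ A @ x
        LgV = 2 * x @ P @ B               # shape (1,)
        slack = LfV + LgV @ u_nom + lam * V   # constraint violation at u_nom
        if slack <= 0:
            return u_nom                  # nominal input already decays V fast enough
        return u_nom - (slack / (LgV @ LgV + 1e-9)) * LgV

    # Rollout with a deliberately poor nominal input (zero): the QP filter
    # alone must enforce the decay of V.
    x, dt = np.array([1.0, 0.0]), 0.01
    for _ in range(1000):
        u = clf_qp(x, np.zeros(1))
        x = x + dt * (A @ x + B @ u)
    ```

    The learning step in the paper would replace the exact LfV and LgV terms above with iteratively improved estimates; the QP structure is what carries the stability guarantee over to the learned model.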

    Lyapunov based optimal control of a class of nonlinear systems

    Optimal control of nonlinear systems is difficult since it requires solving the Hamilton-Jacobi-Bellman (HJB) equation, which has no closed-form solution. In contrast to offline and/or online iterative schemes for optimal control, this dissertation, in the form of five papers, focuses on the design of iteration-free, online optimal adaptive controllers for nonlinear discrete- and continuous-time systems whose dynamics are completely or partially unknown, even when the states are not measurable. In Paper I, motivated by homogeneous charge compression ignition (HCCI) engine dynamics, a neural-network-based infinite-horizon robust optimal controller is introduced for uncertain nonaffine nonlinear discrete-time systems. First, the nonaffine system is transformed into an affine-like representation, and the resulting higher-order terms are mitigated by using a robust term. The optimal adaptive controller for the affine-like system solves the HJB equation and identifies the system dynamics, provided a target set point is given. Since it is difficult to define the set point a priori, Paper II designs an extremum-seeking control loop that maximizes an uncertain output function. Paper III, on the other hand, focuses on the infinite-horizon online optimal tracking control of known nonlinear continuous-time systems in strict-feedback form by using state and output feedback while relaxing the requirement of an initial admissible controller. Paper IV applies the optimal controller of Paper III to an underactuated helicopter attitude- and position-tracking problem. In Paper V, the optimal control of nonlinear continuous-time systems in strict-feedback form from Paper III is revisited by using state and output feedback when the internal dynamics are unknown. Closed-loop stability is demonstrated for all the controller designs developed in this dissertation by using Lyapunov analysis --Abstract, page iv
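    Non-quadratic cost functionals of the kind used for input-constrained optimal control (e.g. Paper V of the first dissertation above) are commonly built from an inverse-hyperbolic-tangent input penalty, in the style of Lyshevski and of Abu-Khalaf and Lewis; the dissertations' exact functionals may differ. A short sketch shows the closed-form penalty and the tanh-saturated control that the HJB stationarity condition then produces, which respects the bound |u| <= u_max by construction rather than by clipping:

    ```python
    import numpy as np

    u_max = 1.0                            # input bound (illustrative value)

    def input_penalty(u):
        """W(u) = 2*u_max * integral_0^u arctanh(v/u_max) dv, in closed form."""
        r = u / u_max
        return 2 * u_max * u * np.arctanh(r) + u_max**2 * np.log(1 - r**2)

    def saturated_control(s):
        """Setting d/du [ W(u) + s*u ] = 0, with s = (dV/dx)^T g(x) from the
        HJB equation, gives a tanh law: the optimal input never exceeds u_max."""
        return -u_max * np.tanh(s / (2 * u_max))

    s = np.linspace(-10.0, 10.0, 5)        # even very large HJB gradients...
    u = saturated_control(s)               # ...yield inputs strictly inside the bound
    ```

    The penalty W grows without bound as |u| approaches u_max, which is what forces the minimizing control to saturate smoothly instead of being clipped after the fact.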