
    Lyapunov based optimal control of a class of nonlinear systems

    Optimal control of nonlinear systems is difficult since it requires the solution to the Hamilton-Jacobi-Bellman (HJB) equation, which has no closed-form solution. In contrast to offline and/or online iterative schemes for optimal control, this dissertation, in the form of five papers, focuses on the design of iteration-free, online optimal adaptive controllers for nonlinear discrete and continuous-time systems whose dynamics are completely or partially unknown, even when the states are not measurable. Thus, in Paper I, motivated by homogeneous charge compression ignition (HCCI) engine dynamics, a neural network-based infinite horizon robust optimal controller is introduced for uncertain nonaffine nonlinear discrete-time systems. First, the nonaffine system is transformed into an affine-like representation while the resulting higher order terms are mitigated by using a robust term. The optimal adaptive controller for the affine-like system solves the HJB equation and identifies the system dynamics, provided a target set point is given. Since it is difficult to define the set point a priori, in Paper II an extremum seeking control loop is designed to maximize an uncertain output function. On the other hand, Paper III focuses on the infinite horizon online optimal tracking control of known nonlinear continuous-time systems in strict feedback form by using state and output feedback while relaxing the initial admissible controller requirement. Paper IV applies the optimal controller from Paper III to an underactuated helicopter attitude and position tracking problem. In Paper V, the optimal control of nonlinear continuous-time systems in strict feedback form from Paper III is revisited by using state and output feedback when the internal dynamics are unknown. Closed-loop stability is demonstrated for all the controller designs developed in this dissertation by using Lyapunov analysis. --Abstract, page iv
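    As context for why no closed-form solution exists, the discrete-time HJB (Bellman optimality) equation the abstract refers to can be written in the standard form below, with stage cost r(x_k, u_k); this is the generic textbook form, not necessarily the dissertation's exact notation:

```latex
% Discrete-time HJB (Bellman optimality) equation: the optimal value
% function V^* appears on both sides through x_{k+1} = f(x_k, u_k), so
% no closed-form solution exists for general nonlinear dynamics.
V^*(x_k) = \min_{u_k}\bigl[\, r(x_k, u_k) + V^*(x_{k+1}) \,\bigr],
\qquad
u_k^* = \arg\min_{u_k}\bigl[\, r(x_k, u_k) + V^*(x_{k+1}) \,\bigr]
```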

    Event sampled optimal adaptive regulation of linear and a class of nonlinear systems

    In networked control systems (NCS), wherein a communication network is used to close the feedback loop, the transmission of feedback signals and the execution of the controller are currently carried out at periodic sampling instants. This scheme therefore requires significant computational power and network bandwidth. In contrast, event-based aperiodic sampling and control, introduced recently, appears to relieve the computational burden and high network resource utilization. Therefore, in this dissertation, a suite of novel event-sampled adaptive regulation schemes in both the discrete and continuous time domains is designed for uncertain linear and nonlinear systems. Event-sampled Q-learning and adaptive/neuro dynamic programming (ADP) schemes without value and policy iterations are utilized for the linear and nonlinear systems, respectively, in both time domains. Neural networks (NN) are employed as approximators for nonlinear systems and, hence, the universal approximation property of NN in the event-sampled framework is introduced. The tuning of the parameters and the NN weights is carried out in an aperiodic manner at the event-sampled instants, leading to a further saving in computation when compared to traditional NN-based control. The adaptive regulator, when applied to a linear NCS with time-varying network delays and packet losses, shows a 30% and 56% reduction in computation and network bandwidth usage, respectively. In the case of a nonlinear NCS with the event-sampled ADP-based regulator, a reduction of 27% and 66% is observed when compared to periodic sampled schemes. The sampling and transmission instants are determined through adaptive event-sampling conditions derived using the Lyapunov technique by viewing the closed-loop event-sampled linear and nonlinear systems as switched and/or impulsive dynamical systems. --Abstract, page iii
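    To make the event-sampling idea concrete, the following is a minimal Python sketch of a relative-threshold trigger of the kind common in the event-triggered control literature; the function name, the sample states, and the constant sigma are all illustrative, and the dissertation derives an adaptive, Lyapunov-based condition rather than the fixed threshold used here:

```python
import numpy as np

def should_transmit(x_current, x_last_sent, sigma=0.1):
    """Illustrative relative-threshold event-trigger check.

    Transmit (and update the controller) only when the event error
    e = x_last_sent - x_current grows beyond a fraction of the current
    state norm. sigma is a hypothetical design constant; an adaptive
    trigger would adjust this threshold online.
    """
    event_error = np.linalg.norm(x_last_sent - x_current)
    return event_error > sigma * np.linalg.norm(x_current)

# Usage inside the plant-side sampler loop (states are made up):
x_last_sent = np.zeros(2)
for x_k in [np.array([1.0, -0.5]), np.array([0.9, -0.4])]:
    if should_transmit(x_k, x_last_sent):
        x_last_sent = x_k.copy()  # send x_k over the network
```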

    Decentralized adaptive neural network control of interconnected nonlinear dynamical systems with application to power system

    Traditional nonlinear techniques are not directly applicable to the control of large scale interconnected nonlinear dynamic systems due to their sheer size and the unavailability of system dynamics. Therefore, in this dissertation, the decentralized adaptive neural network (NN) control of a class of nonlinear interconnected dynamic systems is introduced, and its application to power systems is presented in the form of six papers. In the first paper, a new nonlinear dynamical representation, in the form of a large scale interconnected system for a power network free of algebraic equations with multiple UPFCs as nonlinear controllers, is presented. Then, oscillation damping for UPFCs using adaptive NN control is discussed by assuming that the system dynamics are known. Subsequently, the dynamic surface control (DSC) framework is proposed in continuous time, not only to overcome the need for the subsystem dynamics and interconnection terms, but also to relax the explosion of complexity problem normally observed in traditional backstepping. The application of DSC-based decentralized control to a power system with excitation control is shown in the third paper. On the other hand, a novel adaptive NN-based decentralized controller for a class of interconnected discrete-time systems with unknown subsystem and interconnection dynamics is introduced, since discrete time is preferred for implementation. The application of this decentralized controller is shown on a power network. Next, a near optimal decentralized discrete-time controller is introduced in the fifth paper for such systems in affine form, whereas the sixth paper proposes a method for obtaining the L2-gain near optimal control while keeping a tradeoff between accuracy and computational complexity. Lyapunov theory is employed to assess the stability of the controllers. --Abstract, page iv
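    As a pointer to how DSC relaxes the explosion of complexity mentioned above: in the standard DSC construction (generic notation, not necessarily these papers'), each virtual control \alpha_i is passed through a first-order filter, and the filter state z_{i+1} replaces \alpha_i as the command for the next design step, so the analytic derivative of \alpha_i is never computed:

```latex
% First-order filter used in dynamic surface control (DSC): the
% derivative needed at the next step is the cheap filter expression
% \dot{z}_{i+1} = (\alpha_i - z_{i+1})/\tau_{i+1}, not \dot{\alpha}_i.
\tau_{i+1}\,\dot{z}_{i+1} + z_{i+1} = \alpha_i,
\qquad z_{i+1}(0) = \alpha_i(0), \qquad \tau_{i+1} > 0
```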

    Event-triggered near optimal adaptive control of interconnected systems

    Increased interest in complex interconnected systems such as the smart grid and cyber manufacturing has attracted researchers to develop optimal adaptive control schemes that elicit a desired performance when the complex system dynamics are uncertain. In this dissertation, motivated by the fact that aperiodic event sampling saves network resources while ensuring system stability, a suite of novel event-sampled distributed near-optimal adaptive control schemes is introduced for uncertain linear and affine nonlinear interconnected systems in a forward-in-time and online manner. First, a novel stochastic hybrid Q-learning scheme is proposed to generate the optimal adaptive control law and to accelerate the learning process in the presence of random delays and packet losses resulting from the communication network for an uncertain linear interconnected system. Subsequently, a novel online reinforcement learning (RL) approach is proposed to solve the Hamilton-Jacobi-Bellman (HJB) equation by using neural networks (NNs) for generating distributed optimal control of nonlinear interconnected systems using state and output feedback. To relax the state vector measurements, distributed observers are introduced. Next, using RL, an improved NN learning rule is derived to solve the HJB equation for uncertain nonlinear interconnected systems with event-triggered feedback. Distributed NN identifiers are introduced both to approximate the uncertain nonlinear dynamics and to serve as a model for online exploration. Next, the control policy and the event-sampling errors are considered as non-cooperative players, and a min-max optimization problem is formulated for linear and affine nonlinear systems by using a zero-sum game approach for simultaneous optimization of both the control policy and the event-based sampling instants. The net result is the development of optimal adaptive event-triggered control of uncertain dynamic systems. --Abstract, page iv
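    For reference, the min-max problem mentioned above has the generic two-player zero-sum structure below, where the control u is the minimizing player and the event-sampling error plays the role of the maximizing adversarial input w; the weights Q, R and the attenuation level \gamma are the usual design quantities, not the dissertation's specific choices:

```latex
% Generic continuous-time zero-sum game value function: u minimizes,
% the adversarial input w (here, the event-sampling error) maximizes.
V^*(x_0) = \min_{u}\,\max_{w} \int_{0}^{\infty}
\left( x^{\top} Q x + u^{\top} R u - \gamma^{2} w^{\top} w \right)\mathrm{d}t
```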

    Online optimal and adaptive integral tracking control for varying discrete‐time systems using reinforcement learning

    A conventional closed-form solution to the optimal control problem using optimal control theory is available only under the assumption that the system dynamics/models are known and described as differential equations. Without such models, reinforcement learning (RL) has been successfully applied as a candidate technique to iteratively solve the optimal control problem for unknown or varying systems. For the optimal tracking control problem, existing RL techniques in the literature assume either the use of a predetermined feedforward input for the tracking control, restrictive assumptions on the reference model dynamics, or discounted tracking costs. Furthermore, by using discounted tracking costs, zero steady-state error cannot be guaranteed by the existing RL methods. This article therefore presents an optimal online RL tracking control framework for discrete-time (DT) systems which does not impose the restrictive assumptions of the existing methods and, equally, guarantees zero steady-state tracking error. This is achieved by augmenting the original system dynamics with the integral of the error between the reference inputs and the tracked outputs for use in the online RL framework. It is further shown that the resulting value function for the DT linear quadratic tracker using the augmented formulation with integral control is also quadratic. This enables the development of Bellman equations, which use only the system measurements to solve the corresponding DT algebraic Riccati equation and obtain the optimal tracking control inputs online. Two RL strategies are thereafter proposed, based on value function approximation and on Q-learning, along with bounds on excitation for the convergence of the parameter estimates. Simulation case studies show the effectiveness of the proposed approach.
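    The integral-augmentation step described above can be sketched numerically. The snippet below builds the augmented state X_k = [x_k; z_k] with the integrator z_{k+1} = z_k + (r_k - y_k) for an illustrative two-state plant (the matrices and weights are hypothetical) and solves the resulting DT algebraic Riccati equation offline with a known model as a sanity baseline; the article's contribution is to recover the same solution online from measurements via value function approximation or Q-learning:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical DT plant x_{k+1} = A x_k + B u_k, y_k = C x_k.
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
C = np.array([[1.0, 0.0]])

# Integral augmentation: z_{k+1} = z_k + (r_k - y_k). Stacking
# X_k = [x_k; z_k] gives the augmented pair (Aa, Ba); the reference
# r_k enters as an exogenous input and does not affect the gain.
Aa = np.block([[A, np.zeros((2, 1))],
               [-C, np.eye(1)]])
Ba = np.vstack([B, np.zeros((1, 1))])

# Quadratic cost on the augmented state (weights are illustrative);
# penalizing z drives the integral of the tracking error, and hence
# the steady-state error, to zero.
Q = np.diag([1.0, 0.1, 10.0])
R = np.array([[1.0]])

# Offline, model-based DARE solution as a sanity baseline; the RL
# schemes in the article recover P online from data instead.
P = solve_discrete_are(Aa, Ba, Q, R)
K = np.linalg.solve(R + Ba.T @ P @ Ba, Ba.T @ P @ Aa)
print("augmented-state feedback gain K =", K)
```

    Because the cost is quadratic in the augmented state, the value function remains quadratic, which is what makes the measurement-driven Bellman and Q-learning formulations in the article tractable.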