
    Event-triggered near optimal adaptive control of interconnected systems

    Increased interest in complex interconnected systems such as the smart grid and cyber manufacturing has attracted researchers to develop optimal adaptive control schemes that elicit a desired performance when the complex system dynamics are uncertain. In this dissertation, motivated by the fact that aperiodic event sampling saves network resources while ensuring system stability, a suite of novel event-sampled distributed near-optimal adaptive control schemes is introduced for uncertain linear and affine nonlinear interconnected systems in a forward-in-time and online manner. First, a novel stochastic hybrid Q-learning scheme is proposed to generate the optimal adaptive control law and to accelerate the learning process in the presence of random delays and packet losses resulting from the communication network for an uncertain linear interconnected system. Subsequently, a novel online reinforcement learning (RL) approach is proposed to solve the Hamilton-Jacobi-Bellman (HJB) equation by using neural networks (NNs), generating distributed optimal control of nonlinear interconnected systems using state and output feedback. To relax the need for state vector measurements, distributed observers are introduced. Next, using RL, an improved NN learning rule is derived to solve the HJB equation for uncertain nonlinear interconnected systems with event-triggered feedback. Distributed NN identifiers are introduced both to approximate the uncertain nonlinear dynamics and to serve as a model for online exploration. Finally, the control policy and the event-sampling errors are treated as non-cooperative players, and a min-max optimization problem is formulated for linear and affine nonlinear systems by using a zero-sum game approach for simultaneous optimization of both the control policy and the event-based sampling instants. The net result is the development of optimal adaptive event-triggered control of uncertain dynamic systems. --Abstract, page iv
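    The event-sampling idea behind this abstract can be illustrated with a toy sketch. The plant, gains, and the static relative-error trigger below are all illustrative assumptions, not the dissertation's scheme: the state is transmitted to the controller only when the gap between the true state and the last transmitted sample exceeds a fraction of the current state norm, so the control input is held constant between events.

```python
# Illustrative sketch (hypothetical scalar plant, not the dissertation's
# design): event-triggered state feedback for x' = a*x + b*u.
# A new sample is sent only when |x - x_last| > sigma*|x|; between
# events the controller holds the last transmitted sample.
a, b, k, sigma, dt = 0.5, 1.0, 2.0, 0.1, 0.01

x, x_last = 1.0, 1.0          # true state and last transmitted sample
events = 0
for _ in range(1000):
    if abs(x - x_last) > sigma * abs(x):   # triggering condition violated
        x_last = x                         # transmit (sample) the state
        events += 1
    u = -k * x_last                        # control uses last sample only
    x += dt * (a * x + b * u)              # Euler step of the plant

print(events, abs(x))
```

    The point of the sketch is the resource saving: far fewer than 1000 transmissions occur, yet the closed loop still converges because the triggering rule bounds the sampling-induced error relative to the state.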

    Reinforcement Learning Based Dual-Control Methodology for Complex Nonlinear Discrete-Time Systems with Application to Spark Engine EGR Operation

    A novel reinforcement-learning-based dual-control adaptive neural network (NN) methodology is developed to deliver a desired tracking performance for a class of complex feedback nonlinear discrete-time systems, consisting of a second-order nonlinear discrete-time system in nonstrict feedback form and an affine nonlinear discrete-time system, in the presence of bounded and unknown disturbances. For example, the exhaust gas recirculation (EGR) operation of a spark ignition (SI) engine is modeled by using such a complex nonlinear discrete-time system. A dual-controller approach is undertaken in which a primary adaptive critic NN controller is designed for the nonstrict feedback nonlinear discrete-time system and a secondary one for the affine nonlinear discrete-time system; together, the two controllers deliver the desired performance. The primary adaptive critic NN controller includes an NN observer for estimating the states and output, an NN critic, and two action NNs for generating the virtual and actual control inputs for the nonstrict feedback nonlinear discrete-time system, whereas an additional critic NN and an action NN are included for the affine nonlinear discrete-time system by assuming state availability. All NN weights adapt online to minimize a certain performance index, using a gradient-descent-based rule. Using Lyapunov theory, the uniform ultimate boundedness (UUB) of the closed-loop tracking error, weight estimates, and observer estimates is shown. The adaptive critic NN controller's performance is evaluated on an SI engine operating with high EGR levels, where the controller objective is to reduce cyclic dispersion in heat release while minimizing fuel intake. Simulation and experimental results indicate that engine-out emissions drop significantly at 20% EGR due to the reduction in heat-release dispersion, thus verifying the dual-control approach.
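    The online gradient-descent weight tuning described above can be sketched on an assumed toy system (a scalar nonlinear discrete-time plant, not the paper's engine model): a single-hidden-layer action NN produces the control input, and its output-layer weights are adapted each step by gradient descent on the squared one-step tracking error.

```python
import numpy as np

# Minimal sketch (assumed scalar plant, not the paper's SI-engine model):
# action NN for x_{k+1} = 0.5*sin(x_k) + u_k, output-layer weights W
# tuned online by gradient descent on the squared tracking error.
rng = np.random.default_rng(0)
W = rng.standard_normal((5, 1)) * 0.1     # output-layer weights (adapted)
V = rng.standard_normal((2, 5))           # hidden-layer weights (fixed)
alpha = 0.05                              # learning rate
x, x_ref = 1.0, 0.0                       # state and constant reference

for k in range(500):
    phi = np.tanh(V.T @ np.array([[x], [1.0]]))  # hidden-layer features
    u = (W.T @ phi).item()                # NN control input
    x_next = 0.5 * np.sin(x) + u          # plant step
    e = x_next - x_ref                    # one-step tracking error
    W -= alpha * e * phi                  # gradient of 0.5*e^2 (du/dW = phi)
    x = x_next

print(abs(x))
```

    Since the plant is affine in the input with unit gain, the error gradient with respect to the weights is simply `e * phi`; the state is driven to a small neighborhood of the reference, the boundedness (rather than exact convergence) being the flavor of the UUB result in the abstract.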

    Adaptive Predictive Control Using Neural Network for a Class of Pure-feedback Systems in Discrete-time

    DOI: 10.1109/TNN.2008.2000446. IEEE Transactions on Neural Networks, 19(9), 1599-1614.

    Near Optimal Neural Network-Based Output Feedback Control of Affine Nonlinear Discrete-Time Systems

    In this paper, a novel online reinforcement learning neural network (NN)-based optimal output feedback controller, referred to as an adaptive critic controller, is proposed for affine nonlinear discrete-time systems to deliver a desired tracking performance. The adaptive critic design consists of three entities: an observer to estimate the system states, an action network that produces the optimal control input, and a critic that evaluates the performance of the action network. The critic is termed adaptive because it adapts itself to output the optimal cost-to-go function, which is based on the standard Bellman equation. By using the Lyapunov approach, the uniform ultimate boundedness (UUB) of the estimation and tracking errors and the weight estimates is demonstrated. The effectiveness of the controller is evaluated for the task of nanomanipulation in a simulation environment.
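    The critic's role, adapting to output the cost-to-go that satisfies the standard Bellman equation, can be shown with a toy sketch. Everything below (the scalar plant, the fixed policy, the quadratic critic parameterization) is an illustrative assumption, not the paper's design: a single critic parameter p in J(x) = p*x^2 is tuned by gradient descent on the squared Bellman residual.

```python
# Toy sketch (not the paper's adaptive critic): critic J(x) = p*x^2 for
# x_{k+1} = 0.9*x + u with fixed policy u = -0.2*x and stage cost
# r = x^2 + u^2. p is adapted by gradient descent on the squared
# Bellman residual delta = p*x^2 - (r + p*x_next^2).
p, alpha, x = 0.0, 0.5, 1.0
for _ in range(2000):
    u = -0.2 * x
    x_next = 0.9 * x + u                            # closed loop: 0.7*x
    r = x * x + u * u                               # stage cost
    delta = p * x * x - (r + p * x_next * x_next)   # Bellman residual
    p -= alpha * delta * (x * x - x_next * x_next)  # gradient step on delta^2
    x = 1.0 if abs(x_next) < 1e-3 else x_next       # restart for excitation

# Analytic fixed point: p = 1.04 / (1 - 0.49) ≈ 2.0392
print(p)
```

    The restart keeps the trajectory excited; without it, x decays to zero and the residual carries no information, which is the practical reason adaptive critic schemes need exploration.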

    Composite Learning Control With Application to Inverted Pendulums

    Composite adaptive control (CAC), which integrates direct and indirect adaptive control techniques, can achieve smaller tracking errors and faster parameter convergence than either technique alone. However, the condition of persistent excitation (PE) still has to be satisfied to guarantee parameter convergence in CAC. This paper proposes a novel model reference composite learning control (MRCLC) strategy for a class of affine nonlinear systems with parametric uncertainties to guarantee parameter convergence without the PE condition. In the composite learning, an integral over a moving time window is utilized to construct a prediction error, a linear filter is applied to avoid the need for plant-state derivatives, and both the tracking error and the prediction error are applied to update the parameter estimates. It is proven that the closed-loop system achieves global exponential-like stability under interval excitation rather than PE of the regression functions. The effectiveness of the proposed MRCLC is verified by application to an inverted pendulum control problem.
    Comment: 5 pages, 6 figures, conference submission
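    The composite learning idea, a prediction error built from an integral over a moving time window, combined with the tracking error in the update law, can be sketched on an assumed scalar plant (not the paper's MRCLC design or its pendulum example). Integrating the plant equation over the window gives x(t) - x(t-tau) - ∫u = theta*∫phi, so the unknown parameter appears without differentiating the state.

```python
import numpy as np
from collections import deque

# Sketch (assumed scalar plant, illustrative gains; not the paper's
# MRCLC): composite learning for x' = theta*phi(x) + u, unknown theta.
# Prediction error from a moving-window integral; both tracking and
# prediction errors drive the estimate. A sinusoidal reference
# supplies the (interval) excitation.
theta, dt, tau = 2.0, 0.001, 0.2          # true parameter, step, window
gamma, kw, k = 5.0, 10.0, 3.0             # adaptation / feedback gains
x, theta_hat = 0.5, 0.0
N = int(tau / dt)
hist = deque(maxlen=N)                    # window of (phi, u, x) samples

for i in range(20000):
    t = i * dt
    phi = np.sin(x)
    x_ref, dx_ref = np.sin(t), np.cos(t)  # reference model trajectory
    e = x - x_ref                         # tracking error
    u = dx_ref - k * e - theta_hat * phi  # certainty-equivalence control
    eps, Phi = 0.0, 0.0
    if len(hist) == N:                    # window full: prediction error
        Phi = sum(h[0] for h in hist) * dt                 # ∫ phi dt
        Y = x - hist[0][2] - sum(h[1] for h in hist) * dt  # x(t)-x(t-tau)-∫u
        eps = Y - theta_hat * Phi         # equals (theta - theta_hat)*Phi
    theta_hat += dt * gamma * (e * phi + kw * Phi * eps)   # composite update
    hist.append((phi, u, x))
    x += dt * (theta * phi + u)           # Euler step of the plant

print(theta_hat, x)
```

    Note the derivative-free construction: only the state difference across the window and running integrals of phi and u are needed, which is the role the filtering plays in the abstract; the prediction-error term gives the estimate its own exponential-like convergence whenever the window integral Phi is nonzero.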