
    Optimal Adaptive Tracking Control Of Partially Uncertain Nonlinear Discrete-Time Systems Using Lifelong Hybrid Learning

    This article addresses multilayer neural network (MNN)-based optimal adaptive tracking of partially uncertain nonlinear discrete-time (DT) systems in affine form. An actor–critic neural network (NN) is employed to approximate the value function and optimal control policy, and the critic NN is updated via a novel hybrid learning scheme in which its weights are adjusted once at each sampling instant and also iteratively, a finite number of times, between instants to enhance the convergence rate. Moreover, to deal with the persistency of excitation (PE) condition, a replay buffer is incorporated into the critic update law through concurrent learning. To address the vanishing-gradient issue, the actor and critic MNN weights are tuned using control input and temporal difference errors (TDEs), respectively. In addition, a weight consolidation scheme is incorporated into the critic MNN update law to attain lifelong learning and overcome catastrophic forgetting, thus lowering the cumulative cost. The tracking error and the actor and critic weight estimation errors are shown to be bounded using Lyapunov analysis. Simulation results on a two-link robot manipulator show a significant reduction in tracking error (44%) and cumulative cost (31%) in a multitask environment.
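    As a concrete illustration, the critic update described above (an instantaneous temporal-difference step, a concurrent-learning term over a replay buffer, and a consolidation penalty against forgetting) can be sketched as follows. This is an illustrative reading of the abstract, not the authors' algorithm: the feature map phi, the gains alpha, beta, lam, the importance vector fisher, and the anchor weights W_star are all assumptions.

    ```python
    import numpy as np

    def phi(x):
        # Illustrative critic feature map for a 2-state system (assumed).
        return np.array([x[0] ** 2, x[0] * x[1], x[1] ** 2])

    def critic_update(W, x, x_next, cost, buffer, W_star, fisher,
                      gamma=0.95, alpha=0.05, beta=0.02, lam=0.1):
        """One hybrid critic step: instantaneous TD + replayed TD + consolidation."""
        # Instantaneous temporal-difference error at the current sample.
        td = cost + gamma * W @ phi(x_next) - W @ phi(x)
        grad = td * phi(x)
        # Concurrent-learning term: replay stored transitions to ease the
        # persistency-of-excitation requirement.
        for xb, xb_next, cb in buffer:
            td_b = cb + gamma * W @ phi(xb_next) - W @ phi(xb)
            grad += (beta / len(buffer)) * td_b * phi(xb)
        # Consolidation penalty pulls weights important to earlier tasks
        # (fisher: per-weight importance) back toward their old values W_star.
        grad -= lam * fisher * (W - W_star)
        return W + alpha * grad

    # Toy usage on synthetic data for a contracting system x_next = 0.9 x.
    rng = np.random.default_rng(0)
    W = np.zeros(3)
    W_star, fisher = np.zeros(3), np.ones(3)
    buffer = [(rng.standard_normal(2), rng.standard_normal(2), 1.0)
              for _ in range(10)]
    for _ in range(200):
        x = rng.standard_normal(2)
        x_next = 0.9 * x
        # Hybrid scheme: a few inner iterations per sampling instant.
        for _ in range(3):
            W = critic_update(W, x, x_next, x @ x, buffer, W_star, fisher)
    print("critic weights:", W)
    ```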

    Hamiltonian-Driven Adaptive Dynamic Programming with Efficient Experience Replay

    This article presents a novel efficient experience-replay-based adaptive dynamic programming (ADP) method for the optimal control problem of a class of nonlinear dynamical systems within the Hamiltonian-driven framework. The quasi-Hamiltonian is presented for the policy evaluation problem with an admissible policy. With the quasi-Hamiltonian, a novel composite critic learning mechanism is developed to combine instantaneous data with historical data. In addition, the pseudo-Hamiltonian is defined to deal with the performance optimization problem. Based on the pseudo-Hamiltonian, the conventional Hamilton–Jacobi–Bellman (HJB) equation can be represented in a filtered form, which can be implemented online. Theoretical analysis is provided for the convergence of the adaptive critic design and the stability of the closed-loop system, where parameter convergence is achieved under a weakened excitation condition. Simulation studies verify the efficacy of the presented design scheme.
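    For reference, the Hamiltonian that this framework is organized around can be stated in the standard continuous-time ADP setting (assumed here; the article's quasi- and pseudo-Hamiltonian are purpose-built variants of it and are not reproduced):

    ```latex
    % Assumed standard setting: dynamics \dot{x} = f(x) + g(x)u,
    % running cost r(x,u), value function V.
    H(x, u, \nabla V) = \nabla V(x)^{\top}\bigl(f(x) + g(x)u\bigr) + r(x, u)
    % Policy evaluation for an admissible policy \mu:
    %   H(x, \mu(x), \nabla V^{\mu}) = 0
    % Performance optimization (HJB):
    %   \min_{u} H(x, u, \nabla V^{*}) = 0
    ```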

    Optimal Adaptive Output Regulation of Uncertain Nonlinear Discrete-Time Systems using Lifelong Concurrent Learning

    This paper addresses neural network (NN)-based optimal adaptive regulation of uncertain nonlinear discrete-time systems in affine form using output feedback via lifelong concurrent learning. First, an adaptive NN observer is introduced to estimate both the state vector and the control coefficient matrix, and its NN weights are adjusted using both an output-error term and a concurrent-learning term to relax the persistency of excitation (PE) condition. Next, using an actor-critic framework to estimate the value function and control policy, the critic network weights are tuned via both temporal difference error and concurrent learning schemes through a replay buffer, while the actor NN weights are tuned using control policy errors. To attain lifelong learning and perform effectively across multiple tasks, an elastic weight consolidation term is added to the critic NN weight tuning law. The state estimation, regulation, and weight estimation errors of the observer, actor, and critic NNs are demonstrated to be bounded across tasks using Lyapunov analysis. Simulation results verify the effectiveness of the proposed approach on a Van der Pol oscillator. Finally, an extension to optimal tracking is given briefly.
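    A minimal sketch of the observer's weight-tuning idea follows: an output-error correction plus a concurrent-learning sum over recorded regressor/output-error pairs, which substitutes recorded data richness for the PE condition. The basis sigma, the gains, and the data-stack format are illustrative assumptions, not the paper's equations.

    ```python
    import numpy as np

    def sigma(xhat, u):
        # Hypothetical observer basis over the state estimate and input.
        return np.tanh(np.concatenate([xhat, u]))

    def observer_step(xhat, W, y, u, C, stack, k=0.5, alpha=0.1, beta=0.05):
        """One observer step: NN predictor + output-error and replayed corrections."""
        ytilde = y - C @ xhat                                  # output error
        xhat_next = W @ sigma(xhat, u) + k * (C.T @ ytilde)    # state update
        # Concurrent-learning sum over recorded (regressor, output-error) pairs.
        cl = sum(np.outer(C.T @ e, s) for s, e in stack)
        W_next = W + alpha * np.outer(C.T @ ytilde, sigma(xhat, u)) + beta * cl
        return xhat_next, W_next

    # Toy usage: 2 states, 1 input, 1 measured output.
    C = np.array([[1.0, 0.0]])
    W, xhat, stack = np.zeros((2, 3)), np.zeros(2), []
    for t in range(100):
        y, u = np.array([np.sin(0.1 * t)]), np.array([0.1])
        stack.append((sigma(xhat, u), y - C @ xhat))           # record data
        xhat, W = observer_step(xhat, W, y, u, C, stack[-20:])
    print("state estimate:", xhat)
    ```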

    Output-feedback online optimal control for a class of nonlinear systems

    In this paper, an output-feedback model-based reinforcement learning (MBRL) method for a class of second-order nonlinear systems is developed. The control technique uses exact model knowledge and integrates a dynamic state estimator within the model-based reinforcement learning framework to achieve output-feedback MBRL. Simulation results demonstrate the efficacy of the developed method.
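    The abstract gives few specifics, so the following sketch only illustrates the general shape of output-feedback control for a second-order system x1' = x2, x2' = f(x1, x2) + u with only x1 measured: a model-based estimator (exact model knowledge is assumed, as in the paper) reconstructs the unmeasured velocity so a state-feedback policy can run on the estimates. The gains l1, l2 and the fixed policy mu are hypothetical stand-ins; the paper learns its policy via reinforcement learning.

    ```python
    import numpy as np

    def f(x1, x2):
        # Assumed known drift dynamics (exact model knowledge, per the paper).
        return -x1 - 0.5 * x2

    def mu(x1, x2):
        # Hypothetical fixed stabilizing policy standing in for the learned one.
        return -2.0 * x1 - 1.0 * x2

    def estimator_step(xhat, y, u, dt=0.01, l1=20.0, l2=100.0):
        """Model-based estimator: corrects both states from the position error."""
        e = y - xhat[0]
        dx1 = xhat[1] + l1 * e
        dx2 = f(xhat[0], xhat[1]) + u + l2 * e
        return xhat + dt * np.array([dx1, dx2])

    # Closed loop: the policy is evaluated at the estimates (output feedback).
    x, xhat, dt = np.array([1.0, 0.0]), np.zeros(2), 0.01
    for _ in range(1000):
        u = mu(xhat[0], xhat[1])
        xhat = estimator_step(xhat, y=x[0], u=u, dt=dt)
        x = x + dt * np.array([x[1], f(x[0], x[1]) + u])
    print("state:", x, "estimate:", xhat)
    ```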

    Intelligent Learning Control System Design Based on Adaptive Dynamic Programming

    Adaptive dynamic programming (ADP) is a powerful neural-network-based control technique that has been investigated, designed, and tested in a wide range of applications for solving optimal control problems in complex systems. ADP controllers typically require long training periods because data usage efficiency is low: samples are discarded once used. Experience replay is a powerful technique that can accelerate training, but its existing design cannot be used directly in model-free ADP, because it relies on forward temporal difference (TD) information (e.g., the state-action pair) between the current time step and a future time step and therefore needs a model network to predict that future information. Moreover, uniform random sampling, as commonly used in experience replay, is not an efficient way to learn. Prioritized experience replay (PER) presents important transitions more frequently and has proven efficient in the learning process. To shorten the training period of the ADP controller, the first goal of this thesis is to avoid a model network or system identifier altogether: the experience tuple is designed with one-step-backward state-action information, so the TD error can be computed from the previous and current time steps. The proposed approach is tested on two case studies, cart-pole and triple-link pendulum balancing, and improves the average number of trials to succeed by 26.5% for the cart-pole and 43% for the triple-link task. The second goal is to integrate the efficient learning capability of PER into ADP; a detailed theoretical analysis verifies the stability of the proposed control technique. This approach improves the average number of trials to succeed over the traditional ADP controller by 60.56% for the cart-pole and 56.89% for the triple-link balancing task. The final goal is to validate the ADP controller in the smart grid: to improve the current-control performance of a virtual synchronous machine (VSM) under sudden load changes and a single line-to-ground fault, and to reduce harmonics in a shunt active filter (SAF) under different loading conditions. The ADP controller produced the fastest response time, low overshoot, and, in general, the best performance compared with the traditional current controller. In the SAF, the ADP controller reduced the total harmonic distortion (THD) of the source current by an average of 18.41% compared with a traditional current controller alone.
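    The two replay ideas in the first two goals can be illustrated briefly (a sketch, not the thesis code): tuples carry one-step-backward state-action information, so the TD error is formed from the previous and current steps without a model network, and PER draws tuples with probability proportional to the magnitude of their TD error.

    ```python
    import numpy as np

    class PrioritizedReplay:
        """Buffer of backward-looking tuples (s_prev, a_prev, r, s), sampled by |TD error|."""

        def __init__(self, capacity=1000, eps=1e-3):
            self.data, self.prios = [], []
            self.capacity, self.eps = capacity, eps

        def add(self, s_prev, a_prev, r, s, td_err):
            if len(self.data) >= self.capacity:        # drop the oldest tuple
                self.data.pop(0)
                self.prios.pop(0)
            self.data.append((s_prev, a_prev, r, s))
            self.prios.append(abs(td_err) + self.eps)  # eps keeps every p > 0

        def sample(self, k):
            p = np.array(self.prios)
            p /= p.sum()                               # priorities -> probabilities
            idx = np.random.choice(len(self.data), size=k, p=p)
            return [self.data[i] for i in idx]

    # Usage: push transitions with their TD errors, then draw a prioritized batch.
    buf = PrioritizedReplay()
    for t in range(50):
        buf.add(s_prev=t - 1, a_prev=0, r=1.0, s=t, td_err=np.sin(t))
    batch = buf.sample(8)
    ```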