
    Event-triggered near optimal adaptive control of interconnected systems

    Increased interest in complex interconnected systems, such as the smart grid and cyber-manufacturing, has motivated researchers to develop optimal adaptive control schemes that elicit a desired performance when the complex system dynamics are uncertain. In this dissertation, motivated by the fact that aperiodic event sampling saves network resources while ensuring system stability, a suite of novel event-sampled distributed near-optimal adaptive control schemes is introduced for uncertain linear and affine nonlinear interconnected systems in a forward-in-time and online manner. First, a novel stochastic hybrid Q-learning scheme is proposed to generate the optimal adaptive control law and to accelerate the learning process in the presence of random delays and packet losses introduced by the communication network for an uncertain linear interconnected system. Subsequently, a novel online reinforcement learning (RL) approach is proposed to solve the Hamilton-Jacobi-Bellman (HJB) equation by using neural networks (NNs), generating distributed optimal control of nonlinear interconnected systems with state and output feedback. To relax the requirement of full state vector measurements, distributed observers are introduced. Next, using RL, an improved NN learning rule is derived to solve the HJB equation for uncertain nonlinear interconnected systems with event-triggered feedback. Distributed NN identifiers are introduced both to approximate the uncertain nonlinear dynamics and to serve as a model for online exploration. Finally, the control policy and the event-sampling errors are treated as non-cooperative players, and a min-max optimization problem is formulated for linear and affine nonlinear systems by using a zero-sum game approach to simultaneously optimize both the control policy and the event-based sampling instants. The net result is the development of optimal adaptive event-triggered control of uncertain dynamic systems --Abstract, page iv
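The event-sampling mechanism underlying this line of work can be sketched in a few lines. The following is an illustrative event-triggered state-feedback loop for a simple linear system, not the dissertation's Q-learning scheme: the dynamics `A`, `B`, the gain `K`, and the threshold parameter `sigma` are all assumed for the example. The state is transmitted, and the control recomputed, only when the event-sampling error exceeds a state-dependent threshold, which saves network transmissions compared with periodic sampling.

```python
import numpy as np

# Illustrative event-triggered control loop (assumed example system, not the
# dissertation's scheme). The control input is held constant between events.
A = np.array([[0.9, 0.2], [0.0, 0.8]])  # example discrete-time dynamics
B = np.array([[0.0], [1.0]])
K = np.array([[0.5, 0.6]])              # assumed stabilizing feedback gain
sigma = 0.1                             # trigger threshold parameter

x = np.array([1.0, -1.0])               # current plant state
x_last = x.copy()                       # last transmitted state
u = -K @ x_last                         # control based on last transmission
events = 0
for k in range(50):
    e = x - x_last                      # event-sampling error
    if np.linalg.norm(e) > sigma * np.linalg.norm(x):
        x_last = x.copy()               # event: transmit state, update control
        u = -K @ x_last
        events += 1
    x = A @ x + B @ u                   # plant evolves with held input
print(events, np.linalg.norm(x))
```

Because the trigger fires only when the held state is stale relative to the current one, far fewer than 50 transmissions occur while the state still converges toward the origin.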

    Continual Learning-Based Optimal Output Tracking of Nonlinear Discrete-Time Systems with Constraints: Application to Safe Cargo Transfer

    This paper addresses a novel lifelong learning (LL)-based optimal output tracking control of uncertain nonlinear affine discrete-time (DT) systems with state constraints. First, to deal with optimal tracking and reduce the steady-state error, a novel augmented system, comprising the tracking error, its integral value, and the desired trajectory, is proposed. To guarantee safety, an asymmetric barrier function (BF) is incorporated into the utility function to keep the tracking error in a safe region. Then, an adaptive neural network (NN) observer is employed to estimate the state vector and the control input matrix of the uncertain nonlinear system. Next, an NN-based actor-critic framework is utilized to estimate the optimal control input and the value function by using the estimated state vector and control coefficient matrix. To achieve LL in a multitask environment and avoid the catastrophic forgetting issue, the exponential weight velocity attenuation (EWVA) scheme is integrated into the critic update law. Finally, the proposed tracker is applied to a safe cargo/crew transfer from a large cargo ship to a lighter surface effect ship (SES) in severe sea conditions.
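The asymmetric-barrier idea can be illustrated with a minimal sketch, under assumed bounds and weights rather than the paper's exact construction: a log-type barrier that vanishes at zero tracking error and grows without bound as the error approaches either edge of an asymmetric safe region (-a, b), added to a quadratic utility so that the learned policy is penalized heavily near the constraint boundary.

```python
import math

# Sketch of an asymmetric barrier function added to a quadratic utility.
# The bounds a, b and the weights q, r, mu are assumed for illustration.
def barrier(e, a=0.5, b=1.0):
    """Zero at e = 0; grows unboundedly as e -> -a or e -> b."""
    assert -a < e < b, "tracking error left the safe region"
    if e >= 0:
        return math.log(b / (b - e))
    return math.log(a / (a + e))

def utility(e, u, q=1.0, r=0.1, mu=1.0):
    # Quadratic cost on tracking error and input, plus the barrier penalty.
    return q * e**2 + r * u**2 + mu * barrier(e)

print(utility(0.0, 0.0))                      # -> 0.0 at zero error
print(utility(0.9, 0.0) > utility(0.1, 0.0))  # -> True: barrier dominates near the bound
```

Keeping the barrier inside the utility (rather than as a hard constraint) lets a standard actor-critic scheme minimize a single scalar cost while still respecting the safe region.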

    Adaptive Predictive Control Using Neural Network for a Class of Pure-feedback Systems in Discrete-time

    DOI: 10.1109/TNN.2008.2000446. IEEE Transactions on Neural Networks, 19(9), 1599-1614.

    From Uncertainty Data to Robust Policies for Temporal Logic Planning

    We consider the problem of synthesizing robust disturbance feedback policies for systems performing complex tasks. We formulate the tasks as linear temporal logic specifications and encode them into an optimization framework via mixed-integer constraints. Both the system dynamics and the specifications are known but affected by uncertainty. The distribution of the uncertainty is unknown; however, realizations can be obtained. We introduce a data-driven approach in which the constraints are fulfilled for a set of realizations, and we provide probabilistic generalization guarantees as a function of the number of considered realizations. We use separate chance constraints for the satisfaction of the specification and for the operational constraints, which allows us to quantify their violation probabilities independently. We compute disturbance feedback policies as solutions of mixed-integer linear or quadratic optimization problems. By using feedback, we can exploit information from past disturbance realizations and provide feasibility for a wider range of situations compared to static input sequences. We demonstrate the proposed method on two robust motion-planning case studies for autonomous driving.
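The data-driven idea behind this kind of guarantee can be shown with a minimal scenario-approach sketch, under assumed one-step dynamics and a Gaussian disturbance (the paper's actual setting involves LTL specifications and mixed-integer programs): a constraint is enforced for every one of N sampled disturbance realizations, and as N grows, unseen realizations satisfy the constraint with increasing probability.

```python
import random

# Scenario-approach sketch (assumed toy system, not the paper's formulation).
# One-step dynamics: x1 = x0 + u + w, with unknown disturbance distribution;
# we only have N sampled realizations of w.
random.seed(0)
limit = 1.0                 # operational constraint: x1 <= limit
x0 = 0.0
N = 100
samples = [random.gauss(0.0, 0.1) for _ in range(N)]  # disturbance data

# Maximize progress u subject to x0 + u + w <= limit for every sampled w,
# i.e. u <= limit - x0 - w for all w -> take the minimum over the samples.
u_robust = min(limit - x0 - w for w in samples)

# Empirically check how often fresh, unseen realizations violate the
# constraint; the scenario bound says this rate shrinks roughly like 1/N.
violations = sum(x0 + u_robust + random.gauss(0.0, 0.1) > limit
                 for _ in range(10000))
print(u_robust, violations / 10000)
```

Robustifying against the sampled set rather than a worst-case bound is what makes the resulting policy less conservative while still carrying a quantifiable violation probability.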