7 research outputs found

    Distributed Adaptive Control for Nonlinear Heterogeneous Multi-agent Systems with Different Dimensions and Time Delay

    A distributed neural-network adaptive feedback controller is designed for a class of nonlinear multi-agent systems with time delay and nonidentical state dimensions. In contrast to previous work on nonlinear heterogeneous multi-agent systems of identical dimension, characteristic features are first extracted for each agent despite its differing dimension, and similarity parameters are defined that are combined into the controller parameters. Second, a novel distributed control law based on these similarity parameters is derived using linear matrix inequality (LMI) techniques and Lyapunov stability theory, establishing that all signals in the closed-loop system are uniformly ultimately bounded and that the consensus tracking error converges to a small neighborhood of zero. Finally, simulation examples with different time delays are used to verify the effectiveness of the proposed control technique.
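
    To make the flavor of such a scheme concrete, the sketch below simulates distributed adaptive consensus tracking for scalar first-order followers, with each agent's unknown nonlinearity approximated online by a small Gaussian RBF network. This is a minimal illustration, not the paper's controller: the graph, gains, basis functions, and dynamics are all assumed for demonstration, and the heterogeneous-dimension and time-delay machinery is omitted.

```python
import numpy as np

# Minimal sketch of distributed adaptive consensus tracking (not the
# paper's controller): scalar first-order followers, an unknown
# nonlinearity approximated online by a Gaussian RBF network, and a
# leader known only to agent 0. All parameters are illustrative.

N = 4
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # follower adjacency (assumed)
b = np.array([1.0, 0.0, 0.0, 0.0])          # leader pinning gains (assumed)

centers = np.linspace(-2.0, 2.0, 7)         # RBF centers (assumed)

def phi(x):
    """Gaussian basis vector evaluated at a scalar state."""
    return np.exp(-(x - centers) ** 2)

def f_true(x):
    """Unknown agent nonlinearity -- used only to simulate the plant."""
    return 0.5 * np.sin(x)

x = np.array([1.0, -0.5, 0.3, 2.0])         # follower states
W = np.zeros((N, centers.size))             # per-agent NN weight estimates
c, gamma, dt = 2.0, 5.0, 0.01               # coupling gain, adaptation rate, step

for k in range(3000):
    x0 = np.sin(0.01 * k)                   # leader trajectory
    x_new = x.copy()
    for i in range(N):
        # local synchronization error built from neighbor states only
        e = A[i] @ (x[i] - x) + b[i] * (x[i] - x0)
        u = -c * e - W[i] @ phi(x[i])       # consensus feedback + NN compensation
        W[i] += dt * gamma * e * phi(x[i])  # gradient adaptation law
        x_new[i] = x[i] + dt * (f_true(x[i]) + u)
    x = x_new

print("final states vs leader:", np.round(x - np.sin(0.01 * 2999), 3))
```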

    Resilient Autonomous Control of Distributed Multi-agent Systems in Contested Environments

    An autonomous and resilient controller is proposed for leader-follower multi-agent systems under uncertainties and cyber-physical attacks. The leader is assumed non-autonomous with a nonzero control input, which allows the team behavior or mission to change in response to environmental conditions. A resilient learning-based control protocol is presented to find optimal solutions to the synchronization problem in the presence of attacks and system dynamic uncertainties. An observer-based distributed H_infinity controller is first designed to prevent the effects of attacks on sensors and actuators from propagating throughout the network, as well as to attenuate the effect of these attacks on the compromised agent itself. Non-homogeneous game algebraic Riccati equations are derived to solve the H_infinity optimal synchronization problem, and off-policy reinforcement learning is utilized to learn their solution without requiring any knowledge of the agents' dynamics. A trust-confidence-based distributed control protocol is then proposed to mitigate attacks that hijack an entire node as well as attacks on communication links. A confidence value is defined for each agent based solely on its local evidence. In the proposed resilient reinforcement learning algorithm, each agent's confidence value indicates the trustworthiness of its own information and is broadcast to its neighbors, which weight the data they receive from that agent accordingly during and after learning. If the confidence value of an agent is low, a trust mechanism is employed to identify compromised agents and remove the data received from them from the learning process. Simulation results are provided to show the effectiveness of the proposed approach.
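
    The confidence/trust weighting idea can be illustrated with a toy fusion step: each agent scores its own data via a local residual, broadcasts that confidence, and neighbors down-weight or discard low-confidence transmissions. The sketch below is an assumed, simplified rendering of that mechanism, not the paper's protocol; the residual model, confidence map, and trust threshold are illustrative choices.

```python
import numpy as np

# Toy rendering of confidence-weighted data fusion (not the paper's
# protocol): residual model, confidence map, and trust threshold are
# assumptions chosen only to show the mechanism.

rng = np.random.default_rng(0)
N = 5
x = rng.normal(size=N)                       # local estimates held by each agent
attacked = np.array([False, False, True, False, False])

# each agent scores itself from a local residual; a hijacked node's
# residual is large, so its self-reported confidence collapses
residual = np.abs(rng.normal(scale=0.1, size=N))
residual[attacked] += 3.0
confidence = np.exp(-residual)               # maps residual to (0, 1]

TRUST_MIN = 0.2                              # assumed trust cutoff
for i in range(N):
    neighbors = [j for j in range(N) if j != i]
    trusted = [j for j in neighbors if confidence[j] >= TRUST_MIN]
    w = confidence[trusted] / confidence[trusted].sum()
    fused = w @ x[trusted]                   # confidence-weighted neighbor fusion
    print(f"agent {i}: trusts {trusted}, fused value {fused:+.3f}")
```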

    Decentralized Optimal Control With Application In Power System

    An output-feedback decentralized optimal controller is proposed for power systems with renewable energy penetration. The renewable energy source is modeled similarly to the classical generator model and is equipped with a unified power flow controller (UPFC). The transient performance of the power system is considered and the stability of the dynamical states is investigated. An offline decentralized optimal controller is first designed that utilizes only local states; the network comprises conventional synchronous generators as well as renewable sources with UPFC-equipped inverters. The resulting optimal decentralized controller is compared with the initial stabilizing controller used to obtain it. An online decentralized optimal controller is then designed for the discrete-time system, in which two neural networks are utilized to estimate the value function and the optimal control policy. Furthermore, a novel observer-based decentralized optimal controller is developed for a small-scale discrete-time power system, trained by least-squares rules and successive approximation. Simulation results on the IEEE 14-, 30-, and 118-bus power system benchmarks show satisfactory performance of the online decentralized controller, and also demonstrate that the observer-based optimal controller performs well compared with the centralized optimal controller.
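
    The two-network (critic/actor) training loop described above can be sketched on a scalar discrete-time plant: a least-squares critic fit to the Bellman equation alternates with a closed-form policy-improvement step, i.e. successive approximation. The plant, cost weights, and initial stabilizing gain below are arbitrary assumptions, and the power-system modeling is omitted entirely.

```python
import numpy as np

# Toy critic/actor loop on a scalar discrete-time plant (assumed values;
# not the dissertation's power-system design). The critic V(x) = p*x^2 is
# fit by least squares on the Bellman equation; the actor gain is then
# improved in closed form.

a, b, q, r = 1.05, 0.5, 1.0, 0.2             # plant x+ = a*x + b*u, cost q*x^2 + r*u^2
K = 0.5                                      # initial stabilizing gain (assumed)

for it in range(20):
    xs = np.linspace(-2.0, 2.0, 21)          # sampled states for the critic fit
    us = -K * xs
    xn = a * xs + b * us
    cost = q * xs**2 + r * us**2
    # Bellman equation p*x^2 = cost + p*xn^2 rearranged to p*(x^2 - xn^2) = cost
    feat = xs**2 - xn**2
    p = (feat @ cost) / (feat @ feat)        # one-shot least-squares critic
    K = (p * a * b) / (r + p * b**2)         # greedy actor improvement

print(f"learned gain K = {K:.4f}, critic coefficient p = {p:.4f}")
```

    Each pass performs one policy evaluation (critic fit) and one policy improvement (actor update); for this scalar quadratic case the iteration converges to the discrete Riccati solution.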

    Event-triggered near optimal adaptive control of interconnected systems

    Increased interest in complex interconnected systems such as the smart grid and cyber manufacturing has motivated researchers to develop optimal adaptive control schemes that elicit a desired performance when the complex system dynamics are uncertain. In this dissertation, motivated by the fact that aperiodic event sampling saves network resources while ensuring system stability, a suite of novel event-sampled distributed near-optimal adaptive control schemes is introduced for uncertain linear and affine nonlinear interconnected systems in a forward-in-time and online manner. First, a novel stochastic hybrid Q-learning scheme is proposed to generate the optimal adaptive control law and to accelerate the learning process in the presence of random delays and packet losses resulting from the communication network for an uncertain linear interconnected system. Subsequently, a novel online reinforcement learning (RL) approach is proposed to solve the Hamilton-Jacobi-Bellman (HJB) equation by using neural networks (NNs) for generating distributed optimal control of nonlinear interconnected systems using state and output feedback. To relax the need for state vector measurements, distributed observers are introduced. Next, using RL, an improved NN learning rule is derived to solve the HJB equation for uncertain nonlinear interconnected systems with event-triggered feedback. Distributed NN identifiers are introduced both to approximate the uncertain nonlinear dynamics and to serve as a model for online exploration. Finally, the control policy and the event-sampling errors are considered as non-cooperative players, and a min-max optimization problem is formulated for linear and affine nonlinear systems using a zero-sum game approach for simultaneous optimization of both the control policy and the event-based sampling instants. The net result is the development of optimal adaptive event-triggered control of uncertain dynamic systems --Abstract, page iv
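
    As a small illustration of event-sampled feedback, the sketch below uses a standard relative-threshold trigger (an assumption, not the dissertation's game-theoretic trigger design): the controller holds the last transmitted state, and a new sample is released only when the event-sampling error exceeds a fraction of the current state norm.

```python
import numpy as np

# Event-sampled state feedback with a standard relative-error trigger
# (an assumption -- not the dissertation's optimized trigger). The
# controller holds the last transmitted state; a new transmission is
# released only when the event-sampling error grows past sigma*||x||.

A = np.array([[0.0, 1.0],
              [-1.0, -0.5]])                 # plant matrix (assumed)
B = np.array([[0.0], [1.0]])
K = np.array([[2.0, 1.0]])                   # stabilizing gain (assumed)
sigma, dt, steps = 0.1, 0.001, 10000

x = np.array([1.0, 0.0])
x_held = x.copy()                            # last state received by the controller
events = 0

for k in range(steps):
    e = x - x_held                           # event-sampling error
    if np.linalg.norm(e) > sigma * np.linalg.norm(x):
        x_held = x.copy()                    # trigger: transmit the fresh state
        events += 1
    u = -(K @ x_held)                        # control uses event-sampled data only
    x = x + dt * (A @ x + B @ u)

print(f"{events} transmissions over {steps} steps; final state {np.round(x, 4)}")
```

    A smaller sigma tracks the ideal continuous feedback more closely at the price of more transmissions; that trade-off between control performance and sampling frequency is precisely what the zero-sum formulation above optimizes.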

    IEEE Transactions on Neural Networks and Learning Systems: Vol. 25, No. 2, February 2014

    1. What are the differences between Bayesian classifiers and mutual-information classifiers?
    2. Multikernel least mean square algorithm.
    3. Quantum neural network-based EEG filtering for a brain-computer interface.
    4. Multiclass from binary: expanding one-versus-all, one-versus-one and ECOC-based approaches.
    5. Short-term load and wind power forecasting using neural network-based prediction intervals.
    6. HRLSim: a high-performance spiking neural network simulator for GPGPU clusters.
    7. Sliding-mode control design for nonlinear systems using probability density function shaping.
    8. Nanophotonic reservoir computing with photonic crystal cavities to generate periodic patterns.
    9. Efficient probabilistic classification vector machine with incremental basis function selection.
    10. Zhang neural network for online solution of time-varying linear matrix inequality aided with an equality conversion.
    11. Robust pole assignment for synthesizing feedback control systems using recurrent neural networks.
    12. Efficient dual approach to distance metric learning.
    13. Event-based visual flow.
    14. Decentralized stabilization for a class of continuous-time nonlinear interconnected systems using online learning optimal control approach.
    15. Novel adaptive strategies for synchronization of linearly coupled neural networks with reaction-diffusion terms. Etc.