Distributed Adaptive Control for a Class of Heterogeneous Nonlinear Multi-Agent Systems with Nonidentical Dimensions
A novel distributed adaptive feedback control strategy based on a radial basis function neural network (RBFNN) is proposed for the consensus control of a class of leaderless heterogeneous nonlinear multi-agent systems with identical and nonidentical state dimensions. The distributed controller, constructed from a sequence of compatible matrices and vectors, drives the states of all agents to a common consensus behavior defined by similarity parameters, even when the agents have nonidentical dimensions. The coupling-weight adaptation laws and the neural-network weight update laws ensure that all signals in the closed-loop system are uniformly ultimately bounded. Finally, two simulation examples are carried out to validate the effectiveness of the suggested control design strategy.
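The abstract above combines consensus feedback with an RBFNN that compensates for unknown agent nonlinearities. The following is a minimal sketch of that idea for scalar agents on an undirected leaderless graph; the dynamics, gains, basis centers, and the sigma-modified adaptation law are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def rbf(x, centers, width):
    # Gaussian radial basis activations phi_i(x) = exp(-(x - c_i)^2 / width^2)
    return np.exp(-np.sum((x - centers) ** 2, axis=1) / width ** 2)

# hypothetical 1-D agents: x_dot = f(x) + u, f unknown, approximated as W^T phi(x)
centers = np.linspace(-2, 2, 5).reshape(-1, 1)
adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])  # undirected leaderless topology
x = np.array([1.0, -0.5, 2.0])                     # agent states
W = np.zeros((3, 5))                               # NN weight estimates, one row per agent
dt, gamma, k = 0.01, 5.0, 2.0

for _ in range(2000):
    e = adj @ x - adj.sum(axis=1) * x              # local consensus errors sum_j a_ij (x_j - x_i)
    for i in range(3):
        phi = rbf(np.array([[x[i]]]), centers, 1.0)
        u = k * e[i] - W[i] @ phi                  # consensus feedback + NN compensation
        W[i] += dt * gamma * (-e[i] * phi - 0.1 * W[i])  # sigma-modified adaptation law (assumed)
        x[i] += dt * (0.5 * np.sin(x[i]) + u)      # "unknown" true nonlinearity f(x) = 0.5 sin x
```

The sigma-modification term keeps the weight estimates bounded even without persistent excitation, which is the usual route to the uniform ultimate boundedness claimed in the abstract.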
Learning-based Robust Bipartite Consensus Control for a Class of Multiagent Systems
This paper studies robust bipartite consensus problems for heterogeneous nonlinear nonaffine discrete-time multi-agent systems (MASs) with fixed and switching topologies, subject to data dropout and unknown disturbances. First, a virtual linear data model of the controlled system is developed using the pseudo partial derivative technique, and a distributed combined measurement error function is established using signed graph theory. Then, an input gain compensation scheme is formulated to mitigate the effects of data dropout in both the feedback and forward channels. Moreover, a data-driven learning-based robust bipartite consensus control (LRBCC) scheme based on a radial basis function neural network observer is proposed to estimate the unknown disturbance from online input/output data, without requiring any information about the mathematical dynamics. A stability analysis of the proposed LRBCC approach is given. Simulation and hardware testing also illustrate the correctness and effectiveness of the designed method.
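The distributed combined measurement error in the abstract is built on a signed graph: positive edges cooperate, negative edges compete, and under structural balance the protocol drives agents to values of equal magnitude and opposite sign (bipartite consensus). A minimal sketch with single-integrator agents and the standard signed-graph error (the topology and gains here are assumptions, not the paper's setup):

```python
import numpy as np

# signed adjacency: agents {0,1} and {2,3} cooperate internally, the groups compete
# (structurally balanced, so bipartite consensus is achievable)
A = np.array([[ 0,  1, -1,  0],
              [ 1,  0,  0, -1],
              [-1,  0,  0,  1],
              [ 0, -1,  1,  0]], dtype=float)
x = np.array([2.0, 1.0, -0.5, 0.5])
dt, k = 0.05, 1.0

for _ in range(400):
    # combined measurement error on the signed graph:
    # xi_i = sum_j |a_ij| * (sign(a_ij) * x_j - x_i)
    xi = np.array([sum(abs(a) * (np.sign(a) * x[j] - x[i])
                       for j, a in enumerate(A[i]))
                   for i in range(4)])
    x += dt * k * xi
```

After the run, agents 0 and 1 agree on a common value and agents 2 and 3 settle at its negative, which is exactly the bipartite consensus pattern the error function encodes.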
Resilient Autonomous Control of Distributed Multi-agent Systems in Contested Environments
An autonomous and resilient controller is proposed for leader-follower multi-agent systems under uncertainties and cyber-physical attacks. The leader is assumed to be non-autonomous with a nonzero control input, which allows the team behavior or mission to change in response to environmental changes. A resilient learning-based control protocol is presented to find optimal solutions to the synchronization problem in the presence of attacks and system dynamic uncertainties. An observer-based distributed H_infinity controller is first designed to prevent the effects of attacks on sensors and actuators from propagating throughout the network, as well as to attenuate the effect of these attacks on the compromised agent itself. Non-homogeneous game algebraic Riccati equations are derived to solve the H_infinity optimal synchronization problem, and off-policy reinforcement learning is utilized to learn their solution without requiring any knowledge of the agents' dynamics. A trust-confidence based distributed control protocol is then proposed to mitigate attacks that hijack an entire node as well as attacks on communication links. A confidence value is defined for each agent based solely on its local evidence. The proposed resilient reinforcement learning algorithm employs the confidence value of each agent to indicate the trustworthiness of its own information and broadcasts it to its neighbors, which use it to weight the data they receive from that agent during and after learning. If the confidence value of a neighboring agent is low, an agent employs a trust mechanism to identify compromised agents and remove the data received from them from the learning process. Simulation results are provided to show the effectiveness of the proposed approach.
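The trust-confidence mechanism in the abstract can be illustrated with a toy consensus loop: each agent broadcasts a confidence value, and neighbors both weight incoming data by it and drop data from agents whose confidence falls below a trust threshold. The fixed confidence values, threshold, and topology below are illustrative assumptions, not the paper's protocol.

```python
import numpy as np

# hypothetical confidence values broadcast by each agent (1.0 = fully trusted);
# agent 2 is suspected compromised and broadcasts a corrupted state
confidence = np.array([1.0, 0.9, 0.1, 1.0])
x = np.array([0.0, 0.2, 9.0, -0.1])
adj = 1 - np.eye(4)              # complete communication graph
trust_threshold = 0.5
dt, k = 0.05, 1.0

for _ in range(200):
    for i in range(4):
        # weight each neighbor's data by its broadcast confidence,
        # and discard data from low-confidence (untrusted) neighbors
        w = np.array([adj[i, j] * confidence[j] * (confidence[j] > trust_threshold)
                      for j in range(4)])
        x[i] += dt * k * np.sum(w * (x - x[i]))
```

The trusted agents reach consensus near their own initial values, unaffected by the corrupted state of agent 2, because its low confidence removes its data from every neighbor's update.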