H2 suboptimal containment control of homogeneous and heterogeneous multi-agent systems
This paper deals with the H2 suboptimal state containment control problem for
homogeneous linear multi-agent systems and the H2 suboptimal output containment
control problem for heterogeneous linear multi-agent systems. For both
problems, given multiple autonomous leaders and a number of followers, we
introduce suitable performance outputs and associated H2 cost functionals. The
aim is to design a distributed protocol by dynamic output
feedback that achieves state/output containment control while the associated H2
cost is smaller than an a priori given upper bound. To this end, we first show
that the H2 suboptimal state/output containment control problem can be
equivalently transformed into H2 suboptimal control problems for a set of
independent systems. Based on this, design methods are then provided to compute
such distributed dynamic output feedback protocols. Simulation examples are
provided to illustrate the performance of our proposed protocols.
Comment: 15 pages, 7 figures
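As a rough illustration of the kind of objective described above (a minimal sketch; the performance output z, the bound gamma, and the convex-hull condition below are standard textbook forms, not the paper's exact definitions):

```latex
% Minimal sketch of a generic H2 suboptimal containment objective.
% The performance output z and the bound \gamma are placeholders for the
% paper's problem-specific choices.
\[
  J(u) \;=\; \int_0^{\infty} z(t)^{\top} z(t)\, \mathrm{d}t
  \qquad \text{and require} \qquad J(u) \;<\; \gamma ,
\]
\[
  \text{while } \operatorname{dist}\!\bigl( x_i(t),\,
  \operatorname{conv}\{\, x_{\ell}(t) : \ell \text{ a leader} \,\} \bigr)
  \;\to\; 0 \quad \text{as } t \to \infty \text{ for every follower } i .
\]
```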
Resilient Autonomous Control of Distributed Multi-agent Systems in Contested Environments
An autonomous and resilient controller is proposed for leader-follower
multi-agent systems under uncertainties and cyber-physical attacks. The leader
is assumed non-autonomous with a nonzero control input, which allows changing
the team behavior or mission in response to environmental changes. A resilient
learning-based control protocol is presented to find optimal solutions to the
synchronization problem in the presence of attacks and system dynamic
uncertainties. An observer-based distributed H_infinity controller is first
designed to prevent the effects of attacks on sensors and actuators from
propagating throughout the network, as well as to attenuate the effect of these attacks on
the compromised agent itself. Non-homogeneous game algebraic Riccati equations
are derived to solve the H_infinity optimal synchronization problem and
off-policy reinforcement learning is utilized to learn their solution without
requiring any knowledge of the agent's dynamics. A trust-confidence-based
distributed control protocol is then proposed to mitigate attacks that hijack
the entire node and attacks on communication links. A confidence value is
defined for each agent based solely on its local evidence. In the proposed
resilient reinforcement learning algorithm, each agent uses its confidence value
to indicate the trustworthiness of its own information and broadcasts that value
to its neighbors, which weight the data they receive from the agent accordingly,
both during and after learning. If an agent's confidence value is low, it employs
a trust mechanism to identify compromised agents and excludes the data received
from them from the learning process. Simulation results are provided to show the
effectiveness of the proposed approach.
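As a loose sketch of the trust/confidence weighting idea described above (the variable names, threshold, and aggregation rule are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

# Illustrative sketch of confidence-weighted neighbor aggregation with a trust
# threshold. All names, update rules, and the threshold value are assumptions
# for illustration; the paper derives its own confidence and trust measures.

TRUST_THRESHOLD = 0.5  # assumed cutoff below which a neighbor's data is dropped

def aggregate_neighbor_states(own_state, neighbor_states, neighbor_confidences):
    """Weight each neighbor's broadcast state by the confidence value that
    neighbor attaches to its own information; discard low-confidence neighbors."""
    weighted_sum = np.zeros_like(own_state, dtype=float)
    total_weight = 0.0
    for state, confidence in zip(neighbor_states, neighbor_confidences):
        if confidence < TRUST_THRESHOLD:
            continue  # treat this neighbor's data as potentially compromised
        weighted_sum += confidence * np.asarray(state, dtype=float)
        total_weight += confidence
    if total_weight == 0.0:
        return np.asarray(own_state, dtype=float)  # no trustworthy neighbors
    return weighted_sum / total_weight
```

With this kind of weighting, data from a neighbor that reports low confidence in its own information simply stops influencing the local update, which is the mitigation effect the abstract describes for hijacked nodes and attacked communication links.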
- …