
    Cooperative optimal preview tracking for linear descriptor multi-agent systems

    © 2018 The Franklin Institute. In this paper, a cooperative optimal preview tracking problem is considered for continuous-time descriptor multi-agent systems with a directed topology containing a spanning tree. Using the acyclic assumption and a state augmentation technique, it is shown that the cooperative tracking problem is equivalent to local optimal regulation problems for a set of low-dimensional descriptor augmented subsystems. To design distributed optimal preview controllers, restricted system equivalence (r.s.e.) and preview control theory are first exploited to obtain optimal preview controllers for the reduced-order normal subsystems. Then, using the invertibility of the restricted equivalence relations, a constructive method for designing the distributed controllers is presented, which also yields an explicit admissible solution of the generalized algebraic Riccati equation. Sufficient conditions for achieving global cooperative preview tracking are proposed, and it is proven that the distributed controllers asymptotically stabilize the descriptor augmented subsystems. Finally, the validity of the theoretical results is illustrated via numerical simulation.
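    For intuition, the sketch below works through the building block the abstract reduces everything to: optimal preview tracking for a single reduced-order normal subsystem, i.e., after the descriptor-to-normal reduction via r.s.e. is assumed to have been carried out. All matrices, weights, time step, and preview horizon are illustrative assumptions rather than values from the paper, and the preview feedforward integral is approximated by a Riemann sum.

        import numpy as np
        from scipy.linalg import expm, solve_continuous_are

        # Reduced-order normal subsystem xdot = A x + B u, y = C x (toy values).
        A = np.array([[0.0, 1.0], [-2.0, -3.0]])
        B = np.array([[0.0], [1.0]])
        C = np.array([[1.0, 0.0]])
        n, m = A.shape[0], B.shape[1]

        # Standard preview-control augmentation: state X = (e, xdot) with
        # e = y - r, so  Xdot = A_aug X + B_aug udot + E_r rdot.
        A_aug = np.block([[np.zeros((1, 1)), C],
                          [np.zeros((n, 1)), A]])
        B_aug = np.vstack([np.zeros((1, m)), B])
        E_r = np.vstack([-np.ones((1, 1)), np.zeros((n, 1))])

        Q = np.diag([10.0] + [0.0] * n)   # penalize the tracking error only
        R = np.eye(m)

        # Algebraic Riccati equation of the augmented normal subsystem.
        P = solve_continuous_are(A_aug, B_aug, Q, R)
        K = np.linalg.solve(R, B_aug.T @ P)            # optimal feedback gain

        # Preview feedforward: approximate
        #   -R^{-1} B_aug^T  integral_0^{h} exp(Ac^T s) P E_r rdot(t+s) ds
        # by a Riemann sum over the preview horizon.
        Ac = A_aug - B_aug @ K                         # closed-loop matrix
        dt, steps = 0.05, 20                           # horizon h = steps * dt
        step_T = expm(Ac.T * dt)
        M = np.eye(n + 1)
        preview_gains = []
        for _ in range(steps):
            preview_gains.append(-np.linalg.solve(R, B_aug.T @ M @ P @ E_r) * dt)
            M = M @ step_T

        print("feedback gain K:", K)
        print("first preview gain:", preview_gains[0].ravel())

    In the paper's setting the ARE above is replaced by the generalized algebraic Riccati equation of the descriptor augmented subsystem; solving on the reduced normal system and mapping back through the (invertible) restricted equivalence is the constructive route the abstract describes.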

    Resilient Autonomous Control of Distributed Multi-agent Systems in Contested Environments

    An autonomous and resilient controller is proposed for leader-follower multi-agent systems under uncertainties and cyber-physical attacks. The leader is assumed non-autonomous with a nonzero control input, which allows the team behavior or mission to change in response to environmental conditions. A resilient learning-based control protocol is presented to find optimal solutions to the synchronization problem in the presence of attacks and system dynamic uncertainties. An observer-based distributed H_infinity controller is first designed to prevent the effects of attacks on sensors and actuators from propagating through the network, and to attenuate the effect of these attacks on the compromised agent itself. Non-homogeneous game algebraic Riccati equations are derived to solve the H_infinity optimal synchronization problem, and off-policy reinforcement learning is used to learn their solution without requiring any knowledge of the agents' dynamics. A trust-confidence based distributed control protocol is then proposed to mitigate attacks that hijack an entire node, as well as attacks on communication links. A confidence value is defined for each agent based solely on its local evidence. In the proposed resilient reinforcement learning algorithm, each agent uses its confidence value to indicate the trustworthiness of its own information and broadcasts it to its neighbors, which weight the data they receive from that agent accordingly during and after learning. If an agent's confidence value is low, it employs a trust mechanism to identify compromised neighbors and removes the data received from them from the learning process. Simulation results are provided to show the effectiveness of the proposed approach.
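    As a rough illustration of the trust-confidence idea (not the paper's learning-based protocol), the sketch below runs a confidence- and trust-weighted consensus step with one node broadcasting falsified data. The network, gains, confidence rule, and the median-based trust test are all simplified assumptions chosen to make the example self-contained.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 5                                    # agent 0 plays the leader
        adj = np.array([[0, 0, 0, 0, 0],         # adj[i, j] = 1: agent i receives data from j
                        [1, 0, 0, 0, 0],
                        [1, 1, 0, 0, 0],
                        [0, 1, 1, 0, 1],
                        [0, 0, 1, 0, 0]], dtype=float)

        x = rng.normal(size=N)                   # scalar synchronization states
        x[0] = 1.0                               # leader state (held constant here)
        compromised = 4                          # node 4's broadcast state is falsified
        kappa, forget, drop = 5.0, 0.95, 0.2     # tuning knobs (assumptions)

        conf = np.ones(N)                        # self-assessed confidence values
        trust = np.ones((N, N))                  # trust[i, j]: i's trust in neighbor j

        for step in range(200):
            broadcast = x.copy()
            broadcast[compromised] += 5.0        # attack injected into node 4's broadcast

            new_x = x.copy()
            for i in range(1, N):
                nbrs = np.nonzero(adj[i])[0]
                vals = broadcast[nbrs]
                # Confidence from purely local evidence: large discrepancy between
                # received data and the agent's own state lowers its confidence.
                conf[i] = float(np.exp(-kappa * np.abs(vals - x[i]).mean()))
                med = np.median(vals)            # robust local reference point
                num = den = 0.0
                for j, v in zip(nbrs, vals):
                    # Trust in a neighbor decays while its data deviates persistently
                    # from the neighborhood median; low-trust links are pruned.
                    trust[i, j] = forget * trust[i, j] \
                        + (1 - forget) * np.exp(-kappa * abs(v - med))
                    w = conf[j] * trust[i, j] if trust[i, j] >= drop else 0.0
                    num += w * (v - x[i])
                    den += w
                new_x[i] = x[i] + 0.3 * (num / den if den > 1e-9 else 0.0)
            x = new_x

        print("final states:", np.round(x, 3))   # healthy followers end near the leader's 1.0

    Note that in this toy setup the hijacked node can still report a high self-confidence; it is the neighbor-side trust test that eventually prunes its data, loosely mirroring the abstract's division of labor between the broadcast confidence value and the trust mechanism.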