1,989 research outputs found

    Resilient Autonomous Control of Distributed Multi-agent Systems in Contested Environments

    An autonomous and resilient controller is proposed for leader-follower multi-agent systems under uncertainties and cyber-physical attacks. The leader is assumed to be non-autonomous with a nonzero control input, which allows the team behavior or mission to change in response to environmental changes. A resilient learning-based control protocol is presented to find optimal solutions to the synchronization problem in the presence of attacks and system dynamic uncertainties. An observer-based distributed H∞ controller is first designed to prevent the effects of attacks on sensors and actuators from propagating throughout the network, as well as to attenuate the effect of these attacks on the compromised agent itself. Non-homogeneous game algebraic Riccati equations are derived to solve the H∞ optimal synchronization problem, and off-policy reinforcement learning is utilized to learn their solution without requiring any knowledge of the agents' dynamics. A trust-confidence based distributed control protocol is then proposed to mitigate attacks that hijack an entire node as well as attacks on communication links. A confidence value is defined for each agent based solely on its local evidence. The proposed resilient reinforcement learning algorithm employs each agent's confidence value to indicate the trustworthiness of its own information; the agent broadcasts this value to its neighbors, which use it to weight the data they receive from it during and after learning. If an agent's confidence value is low, it employs a trust mechanism to identify compromised agents and excludes the data received from them from the learning process. Simulation results are provided to show the effectiveness of the proposed approach.
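    To make the trust-confidence idea concrete, here is a minimal sketch in Python of a confidence-weighted consensus step. It is not the paper's algorithm: the threshold `tau`, step size `eps`, and the confidence values themselves are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's algorithm): each agent weights neighbor data
# by the neighbor's broadcast confidence and drops neighbors whose confidence
# falls below a trust threshold.
def confidence_weighted_step(x, A, conf, tau=0.3, eps=0.1):
    """x: agent states, A: adjacency matrix, conf: confidences in [0, 1];
    tau (trust threshold) and eps (step size) are assumed parameters."""
    x_next = x.copy()
    for i in range(len(x)):
        for j in range(len(x)):
            # Trust mechanism: ignore data from low-confidence neighbors.
            if A[i, j] > 0 and conf[j] >= tau:
                x_next[i] += eps * conf[j] * (x[j] - x[i])
    return x_next

# Example: the third agent is compromised and broadcasts a low confidence,
# so its corrupted state is excluded from the others' updates.
x = np.array([0.0, 1.0, 100.0])
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
conf = np.array([0.9, 0.8, 0.05])
x = confidence_weighted_step(x, A, conf)
```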

    Distributed Differential Graphical Game for Control of Double-Integrator Multi-Agent Systems with Input Delay

    This paper studies cooperative control of noncooperative double-integrator multi-agent systems (MASs) with input delay on connected directed graphs in the context of a differential graphical game (DGG). In the distributed DGG, each agent seeks a control policy that uses only distributed information from its graph neighbors by optimizing an individual local performance index (PI). The local PI, which quadratically penalizes the agent's deviations from cooperative behavior (e.g., consensus here), is constructed through the graph Laplacian matrix. For DGGs for double-integrator MASs, the existing literature lacks an explicit characterization of the Nash equilibrium actions and their associated state trajectories under distributed information. To address this issue, we first convert the N-player DGG with m communication links into m coupled optimal control problems (OCPs), which, in turn, are converted into a two-point boundary-value problem (TPBVP). We derive explicit solutions to the TPBVP that constitute explicit distributed-information expressions for the Nash equilibrium actions and their associated state trajectories in the DGG. An illustrative example verifies that the explicit solutions, using only local information, achieve fully distributed consensus.
    Comment: The revised version is accepted for publication in IEEE Transactions on Control of Network Systems.
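    As a simplified stand-in for the OCP-to-TPBVP conversion, the sketch below solves the two-point boundary-value problem arising from Pontryagin's conditions for a single double integrator with a minimum-energy cost, using SciPy's `solve_bvp`. This is a single-agent illustration under assumed boundary conditions, not the paper's coupled m-link game.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Pontryagin's conditions for min int u^2/2 dt subject to x1' = x2, x2' = u
# give a linear TPBVP in the state (x1, x2) and costate (lam1, lam2),
# with optimal control u = -lam2.
def odes(t, y):
    x1, x2, lam1, lam2 = y
    return np.vstack([x2, -lam2, np.zeros_like(lam1), -lam1])

def bc(ya, yb):
    # Assumed boundary conditions: move from (0, 0) at t = 0 to (1, 0) at t = 1.
    return np.array([ya[0], ya[1], yb[0] - 1.0, yb[1]])

t = np.linspace(0.0, 1.0, 50)
sol = solve_bvp(odes, bc, t, np.zeros((4, t.size)))
u_opt = -sol.sol(t)[3]   # recovered open-loop optimal control
```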

    Cooperative optimal preview tracking for linear descriptor multi-agent systems

    In this paper, a cooperative optimal preview tracking problem is considered for continuous-time descriptor multi-agent systems with a directed topology containing a spanning tree. By the acyclic assumption and a state augmentation technique, it is shown that the cooperative tracking problem is equivalent to local optimal regulation problems for a set of low-dimensional descriptor augmented subsystems. To design distributed optimal preview controllers, restricted system equivalence (r.s.e.) and preview control theory are first exploited to obtain optimal preview controllers for the reduced-order normal subsystems. Then, using the invertibility of the restricted-equivalence relations, a constructive method for designing the distributed controllers is presented, which also yields an explicit admissible solution to the generalized algebraic Riccati equation. Sufficient conditions for achieving global cooperative preview tracking are proposed, proving that the distributed controllers stabilize the descriptor augmented subsystems asymptotically. Finally, the validity of the theoretical results is illustrated via numerical simulation.
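    Once an r.s.e. transformation reduces a descriptor subsystem to a normal subsystem, the optimal feedback gain comes from a standard continuous-time algebraic Riccati equation. A minimal sketch under assumed matrices (A1, B1, Q, R are placeholders, not the paper's example):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed reduced-order normal subsystem x' = A1 x + B1 u and weights Q, R.
A1 = np.array([[0.0, 1.0], [-1.0, -0.5]])
B1 = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)

# Solve the continuous-time ARE and form the optimal state feedback u = -K x.
P = solve_continuous_are(A1, B1, Q, R)
K = np.linalg.solve(R, B1.T @ P)
```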

    Distributed Linear Quadratic Optimal Control: Compute Locally and Act Globally

    In this paper we consider the distributed linear quadratic control problem for networks of agents with single-integrator dynamics. We first establish a general formulation of the distributed LQ problem and show that the optimal control gain depends on global information about the network; thus, the optimal protocol can only be computed in a centralized fashion. To overcome this drawback, we propose the design of protocols that are computed in a decentralized way. We write the global cost functional as a sum of local cost functionals, each associated with one of the agents. In order to achieve 'good' performance of the controlled network, each agent then computes its own local gain using sampled information from its neighboring agents. This decentralized computation leads only to suboptimal global network behavior; however, we show that the resulting network still reaches consensus. A simulation example is provided to illustrate the performance of the proposed protocol.
    Comment: 7 pages, 2 figures.
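    A minimal sketch of the decentralized idea, with an assumed path graph and degree-based local gains (the paper's actual gain computation from sampled neighbor information is not reproduced here): each single-integrator agent applies u_i = -k_i * sum_j a_ij (x_i - x_j), and the states still converge to consensus.

```python
import numpy as np

# Minimal sketch (graph and gains are assumptions): each agent picks a local
# gain k_i from its own neighborhood only (here, its degree) and applies a
# suboptimal consensus protocol.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)     # adjacency matrix of a path graph
deg = A.sum(axis=1)
k = 1.0 / np.maximum(deg, 1.0)             # locally computed, suboptimal gains

x = np.array([1.0, -2.0, 3.0])             # initial states
dt = 0.01
for _ in range(2000):
    e = deg * x - A @ x                    # Laplacian action: neighborhood error
    x = x - dt * k * e                     # suboptimal, but drives consensus

print(x)   # all entries approach a common value
```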