
    Resilient Autonomous Control of Distributed Multi-agent Systems in Contested Environments

    Full text link
    An autonomous and resilient controller is proposed for leader-follower multi-agent systems subject to uncertainties and cyber-physical attacks. The leader is assumed to be non-autonomous, with a nonzero control input that allows the team behavior or mission to change in response to environmental changes. A resilient learning-based control protocol is presented to find optimal solutions to the synchronization problem in the presence of attacks and system dynamic uncertainties. An observer-based distributed H-infinity controller is first designed to prevent the effects of attacks on sensors and actuators from propagating throughout the network, as well as to attenuate the effect of these attacks on the compromised agent itself. Non-homogeneous game algebraic Riccati equations are derived to solve the H-infinity optimal synchronization problem, and off-policy reinforcement learning is used to learn their solution without requiring any knowledge of the agents' dynamics. A trust-confidence-based distributed control protocol is then proposed to mitigate attacks that hijack an entire node, as well as attacks on communication links. A confidence value is defined for each agent based solely on its local evidence. The proposed resilient reinforcement learning algorithm uses each agent's confidence value to indicate the trustworthiness of its own information; this value is broadcast to the agent's neighbors, which weight the data they receive from it accordingly during and after learning. If an agent's confidence value is low, it employs a trust mechanism to identify compromised agents and excludes the data received from them from the learning process. Simulation results are provided to show the effectiveness of the proposed approach.
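
    As a rough illustration of the trust-confidence idea described above, the Python sketch below weights neighbor data by the confidence values the neighbors broadcast and drops data from neighbors whose confidence falls below a threshold. The update rule, threshold, and gains are illustrative assumptions, not the paper's exact protocol.

        import numpy as np

        def confidence_weighted_update(x_i, neighbor_states, neighbor_confidences,
                                       trust_threshold=0.5, step=0.1):
            # Consensus-style update that discounts or discards low-confidence neighbor data.
            # x_i: this agent's state; neighbor_confidences: broadcast values in [0, 1].
            correction = np.zeros_like(x_i)
            for x_j, c_j in zip(neighbor_states, neighbor_confidences):
                if c_j < trust_threshold:          # neighbor deemed compromised: ignore it
                    continue
                correction += c_j * (x_j - x_i)    # weight received data by sender confidence
            return x_i + step * correction

        # Example: one trusted neighbor and one low-confidence (possibly attacked) neighbor
        x_next = confidence_weighted_update(np.array([0.0, 1.0]),
                                            [np.array([1.0, 1.0]), np.array([9.0, -9.0])],
                                            [0.9, 0.1])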

    Data Driven Distributed Bipartite Consensus Tracking for Nonlinear Multiagent Systems via Iterative Learning Control

    Get PDF
    This article explores a data-driven distributed bipartite consensus tracking (DBCT) problem for discrete-time multi-agent systems (MASs) with coopetition networks under repeatable operations. To solve this problem, a time-varying linearization model along the iteration axis is first established using the measured input and output (I/O) data of the agents. A data-driven distributed bipartite consensus iterative learning control (DBCILC) algorithm is then proposed for both fixed and switching topologies. Compared with existing bipartite consensus methods, the main characteristic is that the proposed control protocol is constructed without requiring any explicit or implicit information about the MASs' mathematical model. The difference from existing iterative learning control (ILC) approaches is that both cooperative and antagonistic interactions, as well as time-varying switching topologies, are considered. Rigorous theoretical analysis shows that the proposed DBCILC approach guarantees bipartite consensus, reducing the tracking errors within a limited number of iterations. Moreover, although not all agents receive information from the virtual leader directly, the proposed distributed scheme maintains performance while reducing communication costs. The results of three examples further illustrate the correctness, effectiveness, and applicability of the proposed algorithm.
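
    A minimal Python sketch of a P-type iterative learning update driven by a signed-graph (bipartite) consensus error is given below; the signed adjacency matrix, pinning vector, and learning gain are illustrative assumptions and do not reproduce the paper's DBCILC design or its switching-topology analysis.

        import numpy as np

        # Signed adjacency: positive entries are cooperative edges, negative are antagonistic.
        A_signed = np.array([[ 0.0, 1.0, -1.0],
                             [ 1.0, 0.0,  1.0],
                             [-1.0, 1.0,  0.0]])
        b = np.array([1.0, 0.0, 0.0])   # only agent 0 hears the virtual leader directly

        def bipartite_ilc_update(u_k, y_k, y_d, gamma=0.5):
            # One iteration of a P-type ILC law: u_{k+1} = u_k + gamma * xi_k,
            # where xi_k is the combined bipartite consensus error from measured outputs.
            n = len(u_k)
            xi = np.zeros(n)
            for i in range(n):
                for j in range(n):
                    a_ij = A_signed[i, j]
                    # sign(a_ij) flips the neighbor's output across an antagonistic edge
                    xi[i] += abs(a_ij) * (np.sign(a_ij) * y_k[j] - y_k[i])
                xi[i] += b[i] * (y_d - y_k[i])     # pinning term from the virtual leader
            return u_k + gamma * xi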

    Learning-based Robust Bipartite Consensus Control for a Class of Multiagent Systems

    Get PDF
    This paper studies robust bipartite consensus problems for heterogeneous, nonlinear, nonaffine discrete-time multi-agent systems (MASs) with fixed and switching topologies subject to data dropout and unknown disturbances. First, a virtual linear data model of the controlled system is developed using the pseudo partial derivative technique, and a distributed combined measurement error function is established using signed graph theory. Then, an input gain compensation scheme is formulated to mitigate the effects of data dropout in both the feedback and forward channels. Moreover, a data-driven learning-based robust bipartite consensus control (LRBCC) scheme based on a radial basis function neural network observer is proposed to estimate the unknown disturbance, using online input/output data without requiring any information about the mathematical dynamics. A stability analysis of the proposed LRBCC approach is given. Simulation and hardware tests further illustrate the correctness and effectiveness of the designed method.
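
    The pseudo partial derivative (PPD) technique mentioned above can be sketched for a single agent as follows; the projection-type estimator and gains are standard model-free adaptive control ingredients used here as assumptions, and the paper's LRBCC scheme additionally handles the signed network, data dropout, and the neural-network disturbance observer.

        def ppd_mfac_step(y, y_prev, u_prev, u_prev2, y_ref, phi,
                          eta=0.5, mu=1.0, rho=0.6, lam=1.0, eps=1e-5):
            # Estimate the pseudo partial derivative phi from measured I/O increments,
            # then compute the next input without any model of the plant dynamics.
            du = u_prev - u_prev2
            dy = y - y_prev
            phi = phi + eta * du / (mu + du**2) * (dy - phi * du)   # projection-type update
            if abs(phi) < eps or abs(du) < eps:                     # reset keeps the estimate well posed
                phi = 1.0
            u = u_prev + rho * phi / (lam + phi**2) * (y_ref - y)   # data-driven control update
            return u, phi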

    Optimized state feedback regulation of 3DOF helicopter system via extremum seeking

    Get PDF
    In this paper, optimized state feedback regulation of a 3 degree-of-freedom (3DOF) helicopter is designed via the extremum seeking (ES) technique. Multi-parameter ES is applied to optimize the tracking performance by tuning a State Vector Feedback with Integration of the Control Error (SVFBICE) controller. A discrete multivariable version of ES is developed to minimize a cost function that measures the performance of the controller; the cost function depends on the error between the actual and desired axis positions. The controller parameters are updated online as the optimization takes place, which significantly decreases the time required to obtain optimal controller parameters. Simulations were conducted for the online optimization under both fixed and varying operating conditions. The results demonstrate the usefulness of ES for preserving the maximum attainable performance.
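
    A compact Python sketch of a discrete multi-parameter extremum seeking loop of the kind described above is shown below; the dither amplitudes, frequencies, gains, and the quadratic example cost are illustrative assumptions rather than the tuned SVFBICE setup.

        import numpy as np

        def extremum_seeking(cost, theta0, a=0.1, gamma=0.05, iters=500):
            # Perturb-and-demodulate extremum seeking: each parameter gets its own dither
            # frequency so its gradient component can be separated by demodulation.
            theta = np.asarray(theta0, dtype=float)
            omega = 2 * np.pi * (np.arange(theta.size) + 1) / 20.0
            J_prev = cost(theta)
            for k in range(iters):
                dither = a * np.sin(omega * k)
                J = cost(theta + dither)                           # evaluate the perturbed tracking cost
                J_hp = J - J_prev                                  # crude washout of the slowly varying part
                theta = theta - gamma * np.sin(omega * k) * J_hp   # demodulate and descend
                J_prev = J
            return theta

        # Example: tune two gains of a hypothetical controller against a quadratic cost
        theta_opt = extremum_seeking(lambda th: (th[0] - 1.0)**2 + (th[1] + 2.0)**2,
                                     theta0=[0.0, 0.0])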

    High-Order Leader-Follower Tracking Control under Limited Information Availability

    Full text link
    Limited information availability represents a fundamental challenge for the control of multi-agent systems, since an agent often lacks the sensing capability to measure certain of its own states and can exchange data only with its neighbors. The challenge becomes even greater when the agents are governed by high-order dynamics. The present work addresses control design for linear and nonlinear high-order leader-follower multi-agent systems in a setting where only the first state of each agent is measured. To address this open challenge, we develop novel distributed observers that enable followers to reconstruct unmeasured or unknown quantities about themselves and the leader and, on that basis, build observer-based tracking control approaches. We analyze the convergence properties of the proposed approaches and validate their performance through simulation.
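
    The observer-based idea can be sketched as below for a single follower modeled, purely for illustration, as a third-order chain of integrators whose first state is the only measurement; the model, observer gain, and feedback gains are assumptions and not the paper's distributed design.

        import numpy as np

        # Third-order chain-of-integrators follower; only the first state x1 is measured.
        A = np.array([[0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0],
                      [0.0, 0.0, 0.0]])
        B = np.array([[0.0], [0.0], [1.0]])
        C = np.array([[1.0, 0.0, 0.0]])
        L = np.array([[6.0], [11.0], [6.0]])   # observer gain placing error poles at -1, -2, -3

        def observer_step(x_hat, u, y_meas, dt=0.01):
            # Luenberger observer: reconstruct the unmeasured x2, x3 from the measured x1.
            x_hat = x_hat.reshape(3, 1)
            x_hat_dot = A @ x_hat + B * u + L * (y_meas - (C @ x_hat).item())
            return (x_hat + dt * x_hat_dot).ravel()

        def tracking_control(x_hat, leader_state, K=np.array([6.0, 11.0, 6.0])):
            # State-feedback tracking law built on the observer estimate of the full state.
            return float(-K @ (x_hat - leader_state))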

    On Iterative Learning in Multi-agent Systems Coordination and Control

    Get PDF
    Ph.D. thesis (Doctor of Philosophy)

    Disturbance Observer-based Robust Control and Its Applications: 35th Anniversary Overview

    Full text link
    The Disturbance Observer has been one of the most widely used robust control tools since it was proposed in 1983. This paper introduces the origins of the Disturbance Observer and surveys the major results on Disturbance Observer-based robust control over the last thirty-five years. Furthermore, it explains the analysis and synthesis techniques of Disturbance Observer-based robust control for linear and nonlinear systems within a unified framework. In the last section, the paper presents concluding remarks on Disturbance Observer-based robust control and its engineering applications.
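
    A minimal Python sketch of the classical Disturbance Observer structure for a first-order plant is given below; the nominal model, first-order Q filter, disturbance signal, and gains are illustrative assumptions, not the survey's general framework.

        import numpy as np

        def simulate_dob(a=-1.0, b=1.0, dt=0.001, T=5.0, tau_q=0.05):
            # Plant: y' = a*y + b*(u + d). The lumped disturbance d is estimated by
            # inverting the nominal model and low-pass filtering the result
            # (first-order Q filter), then cancelled from the control input.
            n = int(T / dt)
            y, d_hat = 0.0, 0.0
            log = np.zeros((n, 2))
            for k in range(n):
                d = 0.5 * np.sin(2 * np.pi * 0.5 * k * dt)     # unknown disturbance (simulation only)
                u = -2.0 * y - d_hat                           # nominal feedback plus cancellation
                y_next = y + dt * (a * y + b * (u + d))        # true plant, Euler step
                y_dot_meas = (y_next - y) / dt                 # measured rate of change
                d_raw = (y_dot_meas - a * y - b * u) / b       # nominal-model inversion
                d_hat = d_hat + (dt / tau_q) * (d_raw - d_hat) # Q filter (low-pass) on the estimate
                log[k] = (d, d_hat)
                y = y_next
            return log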