
    A Survey and Analysis of Cooperative Multi-Agent Robot Systems: Challenges and Directions

    Research on cooperative multi-agent robot systems has received wide attention in recent years. The main concern is to find effective coordination among autonomous agents so that tasks are performed with a high overall quality. This paper therefore reviews selected literature, primarily from recent conference proceedings and journals, related to cooperation and coordination in multi-agent robot systems (MARS), and investigates the problems, issues, and directions of MARS research. Three main elements of MARS, namely the types of agents, control architectures, and communications, are discussed thoroughly at the beginning of the paper. A series of problems and issues is then analyzed and reviewed, including centralized and decentralized control, consensus, containment, formation, task allocation, intelligence, optimization, and communication in multi-agent robots. Since research in this field is expanding, open issues and future challenges in MARS are recalled, discussed, and clarified together with future directions. Finally, the paper concludes with some recommendations with respect to multi-agent systems.
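
    Consensus is among the most elementary of the coordination problems listed above. As a purely illustrative aid, not taken from any of the surveyed papers, a minimal sketch of a discrete-time consensus iteration over a fixed undirected ring graph, with an assumed step size and adjacency matrix, is:

        import numpy as np

        # Illustrative adjacency matrix for 4 agents on a ring graph (assumption).
        A = np.array([[0, 1, 0, 1],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0]], dtype=float)

        eps = 0.25                             # step size; must be below 1 / (max degree)
        x = np.array([1.0, 4.0, -2.0, 7.0])    # arbitrary initial scalar states

        for _ in range(50):
            # Each agent moves toward the states of its neighbours:
            # x_i <- x_i + eps * sum_j a_ij (x_j - x_i), i.e. x <- (I - eps*L) x.
            x = x + eps * (A @ x - A.sum(axis=1) * x)

        print(x)   # all entries approach the average of the initial states (2.5)

    Under these assumptions the iteration converges to the average of the initial states because the graph is connected and the step size is below the inverse of the maximum degree.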

    Cooperative optimal preview tracking for linear descriptor multi-agent systems

    © 2018 The Franklin Institute. In this paper, a cooperative optimal preview tracking problem is considered for continuous-time descriptor multi-agent systems with a directed topology containing a spanning tree. Using the acyclic assumption and a state augmentation technique, it is shown that the cooperative tracking problem is equivalent to local optimal regulation problems for a set of low-dimensional descriptor augmented subsystems. To design distributed optimal preview controllers, restricted system equivalence (r.s.e.) and preview control theory are first exploited to obtain optimal preview controllers for the reduced-order normal subsystems. Then, by using the invertibility of the restricted equivalence relations, a constructive method for designing the distributed controllers is presented, which also yields an explicit admissible solution of the generalized algebraic Riccati equation. Sufficient conditions for achieving global cooperative preview tracking are proposed by proving that the distributed controllers asymptotically stabilize the descriptor augmented subsystems. Finally, the validity of the theoretical results is illustrated via numerical simulation.
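
    The local design step described above reduces each descriptor augmented subsystem to a normal reduced-order subsystem and solves a linear-quadratic problem for it. Purely as a hedged sketch of that kind of step (the matrices A, B, Q, R below are illustrative assumptions, not taken from the paper, and the descriptor-specific reduction and preview terms are omitted), the stabilizing Riccati solution and feedback gain for one hypothetical reduced-order subsystem can be computed with SciPy:

        import numpy as np
        from scipy.linalg import solve_continuous_are

        # Illustrative reduced-order normal subsystem (assumption, not from the paper).
        A = np.array([[0.0, 1.0],
                      [-2.0, -0.5]])
        B = np.array([[0.0],
                      [1.0]])
        Q = np.eye(2)           # state weighting
        R = np.array([[1.0]])   # input weighting

        # Stabilizing solution P of A'P + P A - P B R^{-1} B' P + Q = 0.
        P = solve_continuous_are(A, B, Q, R)

        # Optimal state-feedback gain K = R^{-1} B' P.
        K = np.linalg.solve(R, B.T @ P)

        print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))

    In the paper's approach, the preview compensation and the explicit admissible solution of the generalized algebraic Riccati equation are then constructed from such local designs via the restricted equivalence relations.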

    Cooperative global optimal preview tracking control of linear multi-agent systems: an internal model approach

    © 2017 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group. This paper investigates the cooperative global optimal preview tracking problem for linear multi-agent systems under the assumptions that the output of the leader is a previewable periodic signal and that the topology graph contains a directed spanning tree. First, a type of distributed internal model is introduced, and the cooperative preview tracking problem is converted into a global optimal regulation problem for an augmented system. Second, an optimal controller that guarantees the asymptotic stability of the augmented system is obtained by means of standard linear quadratic optimal preview control theory. Third, on the basis of proving the existence conditions of the controller, sufficient conditions are given for the original problem to be solvable, and a cooperative global optimal controller with error-integral and preview compensation is derived. Finally, the validity of the theoretical results is demonstrated by a numerical simulation.
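
    To illustrate the flavor of the internal-model-plus-LQ construction (a hedged sketch only; the single-agent discrete-time model, the weights, and the restriction to an error integrator rather than a full periodic internal model are illustrative assumptions, not the paper's design), a state can be augmented with the tracking-error integral and a standard discrete algebraic Riccati equation solved for the augmented system:

        import numpy as np
        from scipy.linalg import solve_discrete_are

        # Illustrative single-agent discrete-time model (assumption, not from the paper).
        A = np.array([[1.0, 0.1],
                      [0.0, 1.0]])
        B = np.array([[0.0],
                      [0.1]])
        C = np.array([[1.0, 0.0]])

        # Augment with the integral of the tracking error, a simple internal model:
        # z(k+1) = z(k) + (r(k) - C x(k)).
        n, m = A.shape[0], B.shape[1]
        A_aug = np.block([[np.eye(1),        -C],
                          [np.zeros((n, 1)),  A]])
        B_aug = np.vstack([np.zeros((1, m)), B])

        Q = np.diag([10.0, 1.0, 1.0])   # weight on error integral and states
        R = np.array([[1.0]])

        # Standard discrete LQ design on the augmented system; preview terms for
        # previewable future reference samples would be appended to the state analogously.
        P = solve_discrete_are(A_aug, B_aug, Q, R)
        K = np.linalg.solve(R + B_aug.T @ P @ B_aug, B_aug.T @ P @ A_aug)

        print("augmented closed-loop spectral radius:",
              max(abs(np.linalg.eigvals(A_aug - B_aug @ K))))

    The printed spectral radius is below one, confirming that the augmented closed loop is asymptotically stable for this illustrative model.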

    Resilient Autonomous Control of Distributed Multi-agent Systems in Contested Environments

    An autonomous and resilient controller is proposed for leader-follower multi-agent systems subject to uncertainties and cyber-physical attacks. The leader is assumed to be non-autonomous, with a nonzero control input, which allows the team behavior or mission to change in response to environmental changes. A resilient learning-based control protocol is presented to find optimal solutions to the synchronization problem in the presence of attacks and system dynamic uncertainties. An observer-based distributed H_infinity controller is first designed to prevent the effects of attacks on sensors and actuators from propagating throughout the network, and to attenuate the effect of these attacks on the compromised agent itself. Non-homogeneous game algebraic Riccati equations are derived to solve the H_infinity optimal synchronization problem, and off-policy reinforcement learning is used to learn their solution without requiring any knowledge of the agents' dynamics. A trust-confidence based distributed control protocol is then proposed to mitigate attacks that hijack an entire node as well as attacks on communication links. A confidence value is defined for each agent based solely on its local evidence. The proposed resilient reinforcement learning algorithm uses each agent's confidence value to indicate the trustworthiness of its own information; the value is broadcast to the agent's neighbors, which weight the data they receive from it accordingly during and after learning. If an agent's confidence value is low, it employs a trust mechanism to identify compromised agents and removes the data received from them from the learning process. Simulation results are provided to show the effectiveness of the proposed approach.
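
    As a hedged, purely illustrative sketch of the confidence-weighting idea (scalar states, a complete communication graph, and the specific threshold, weights, and dynamics below are assumptions, not the paper's algorithm or attack model), neighbours' data can be weighted by their broadcast confidence and discarded when that confidence falls below a trust threshold:

        import numpy as np

        np.random.seed(0)

        N = 5
        A = np.ones((N, N)) - np.eye(N)   # illustrative complete communication graph
        x = np.random.randn(N)            # local scalar states to be synchronized
        confidence = np.ones(N)           # each agent's self-assessed confidence
        confidence[3] = 0.1               # agent 3 reports low confidence (e.g. compromised sensor)

        trust_threshold = 0.5
        eps = 0.1

        for _ in range(200):
            x_new = x.copy()
            for i in range(N):
                for j in range(N):
                    if A[i, j] == 0:
                        continue
                    # Discard data from neighbours whose broadcast confidence is low;
                    # otherwise weight their contribution by that confidence.
                    if confidence[j] < trust_threshold:
                        continue
                    x_new[i] += eps * confidence[j] * (x[j] - x[i])
            x = x_new

        print(np.round(x, 3))   # agents agree while discounting the low-confidence agent's data

    Under these assumptions, the healthy agents reach agreement using only trusted data, while the low-confidence agent still listens to its neighbours and is pulled toward the same value.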