
    On stability and controllability of multi-agent linear systems

    Recent advances in communication and computing have made the control and coordination of dynamic network agents an area of multidisciplinary research at the intersection of control systems theory, communication, and linear algebra. Research in multi-agent systems is strongly driven by critical applications in areas such as the consensus problem in communication networks and formation control of mobile robots. The consensus problem has mainly been studied from the point of view of stability; recently, however, some researchers have started to analyze controllability problems. The study of controllability is motivated by the fact that the communication network architecture of engineered multi-agent systems is usually adjustable, so it is meaningful to analyze how the controllability of a multi-agent system can be improved. In this work we analyze the stability and controllability of multi-agent systems consisting of k + 1 agents with dynamics ẋ_i = A_i x_i + B_i u_i, i = 0, 1, . . . , k.
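
    As a concrete illustration of the controllability question for a single agent with dynamics ẋ = Ax + Bu, here is a minimal NumPy sketch of the standard Kalman rank test; the matrices are arbitrary placeholders, not taken from the paper.

```python
import numpy as np

def controllability_matrix(A, B):
    """Stack [B, AB, A^2 B, ..., A^(n-1) B] for the Kalman rank test."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Illustrative agent dynamics x_dot = A x + B u (values are arbitrary).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])

C = controllability_matrix(A, B)
# The pair (A, B) is controllable iff the controllability matrix has full rank.
print("controllable:", np.linalg.matrix_rank(C) == A.shape[0])
```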

    Stability of Evolving Multi-Agent Systems

    A Multi-Agent System is a distributed system whose agents or nodes perform complex functions that cannot be written down in analytic form. Multi-Agent Systems are highly connected, and the information they contain is mostly stored in the connections. When agents update their state, they take into account the states of the other agents, which they access via the connections. There is also external, user-generated input into the Multi-Agent System. Because so much information is stored in the connections, agents are often memory-less. This memory-less property, together with the randomness of the external input, allows us to model Multi-Agent Systems using Markov chains. In this paper we look at Multi-Agent Systems that evolve, i.e. the number of agents varies according to the fitness of the individual agents. We extend our Markov chain model and define stability; this is the start of a methodology to control Multi-Agent Systems. We then build upon this to construct an entropy-based definition of the degree of instability (the entropy of the limit probabilities), which we use to perform a stability analysis. We investigate the stability of evolving agent populations through simulation and show that the results are consistent with the original definition of stability for non-evolving Multi-Agent Systems proposed by Chli and De Wilde. This paper forms the theoretical basis for the construction of Digital Business Ecosystems; applications have been reported elsewhere.
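
    The "entropy of the limit probabilities" mentioned above can be made concrete with a small sketch: compute the stationary distribution of a Markov chain and take its Shannon entropy. The 3-state transition matrix below is an invented toy, and the paper's exact construction may differ.

```python
import numpy as np

def stationary_distribution(P):
    """Left eigenvector of P for eigenvalue 1, normalized to a probability vector."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    pi = np.abs(pi)
    return pi / pi.sum()

def degree_of_instability(P):
    """Shannon entropy of the limit probabilities; 0 means a deterministic limit."""
    pi = stationary_distribution(P)
    nz = pi[pi > 0]
    return -np.sum(nz * np.log2(nz))

# Illustrative 3-state transition matrix for an agent's state updates.
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])
print("instability (bits):", degree_of_instability(P))
```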

    Coordination of Leader-Follower Multi-Agent System with Time-Varying Objective Function

    This thesis introduces a new framework for the distributed control of multi-agent systems with adjustable swarm control objectives. Our goal is twofold: 1) to provide an overview of how time-varying objectives in the control of autonomous systems may be applied to the distributed control of multi-agent systems with a variable autonomy level, and 2) to introduce a framework that incorporates the proposed concept into fundamental swarm behaviors such as aggregation and leader tracking. Leader-follower multi-agent systems are considered in this study, and a general form of time-dependent artificial potential function is proposed to describe the varying objectives of the system in the case of complete information exchange. Using Lyapunov methods, the stability and boundedness of the agents' trajectories under single-order and higher-order dynamics are analyzed. Illustrative numerical simulations are presented to demonstrate the validity of our results. We then extend these results to multi-agent systems with limited information exchange and switching communication topology. The first steps toward an experimental framework have been taken, with the ultimate goal of verifying the simulation results in practice.
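
    To give a feel for gradient-based control under a time-dependent artificial potential, here is a toy NumPy sketch in which followers descend a quadratic attraction potential with a time-varying gain toward a moving leader; the potential, gain schedule, and leader trajectory are illustrative assumptions, not the thesis' actual construction.

```python
import numpy as np

def potential_grad(x, leader, t, w=1.0):
    """Gradient of V(x, t) = 0.5 * w(t) * ||x - leader(t)||^2,
    a hypothetical time-dependent attraction potential."""
    weight = w * (1.0 + 0.5 * np.sin(0.1 * t))  # invented time-varying gain
    return weight * (x - leader)

dt, steps = 0.01, 2000
x = np.random.randn(5, 2)                       # 5 follower positions in the plane
for k in range(steps):
    t = k * dt
    leader = np.array([np.cos(0.05 * t), np.sin(0.05 * t)])  # moving leader
    x -= dt * potential_grad(x, leader, t)      # gradient-based control law
print("final distances to leader:", np.linalg.norm(x - leader, axis=1))
```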

    Identification of Hessian matrix in distributed gradient-based multi-agent coordination control systems

    Multi-agent coordination control usually involves a potential function that encodes a global control task, while the control input for each agent is designed by a gradient-based control law. The properties of the Hessian matrix associated with a potential function play an important role in the stability analysis of equilibrium points in gradient-based coordination control systems. The identification of the Hessian matrix in gradient-based multi-agent coordination systems is therefore a key step in multi-agent equilibrium analysis. However, identifying the Hessian matrix via entry-wise calculation is often a tedious task that can easily introduce calculation errors. In this paper we present general and fast approaches for identifying the Hessian matrix based on matrix differentials and calculus rules, which yield a compact form of the Hessian matrix for multi-agent coordination systems. We also present several examples of Hessian identification for typical potential functions involving edge-tension distance functions and triangular-area functions, and illustrate their applications in the context of distributed coordination and formation control.
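
    The entry-wise route the paper calls tedious can at least be automated numerically. The sketch below finite-differences the Hessian of a common edge-tension potential, V(x) = 1/4 · Σ_{(i,j)∈E} (||x_i − x_j||² − d_ij²)², at a unit-triangle formation and inspects its eigenvalues; the paper's actual contribution, a compact analytic form via matrix differentials, is not reproduced here.

```python
import numpy as np

def edge_tension_potential(x, edges, d):
    """V(x) = 1/4 * sum over edges of (||x_i - x_j||^2 - d_ij^2)^2."""
    return 0.25 * sum((np.sum((x[i] - x[j])**2) - d[(i, j)]**2)**2
                      for (i, j) in edges)

def numerical_hessian(f, x, eps=1e-5):
    """Entry-wise central-difference Hessian of f at the stacked state x."""
    n = x.size
    H = np.zeros((n, n))
    flat = x.ravel()
    for a in range(n):
        for b in range(n):
            for sa, sb, sign in [(eps, eps, 1), (eps, -eps, -1),
                                 (-eps, eps, -1), (-eps, -eps, 1)]:
                p = flat.copy(); p[a] += sa; p[b] += sb
                H[a, b] += sign * f(p.reshape(x.shape))
    return H / (4 * eps**2)

# A unit triangle in the plane is an equilibrium of this potential; its
# Hessian has zero eigenvalues along translations/rotations, positive otherwise.
edges = [(0, 1), (1, 2), (0, 2)]
d = {e: 1.0 for e in edges}
x = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
H = numerical_hessian(lambda z: edge_tension_potential(z, edges, d), x)
print("Hessian eigenvalues:", np.round(np.linalg.eigvalsh(H), 4))
```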

    Lyapunov-Based Reinforcement Learning for Decentralized Multi-Agent Control

    Decentralized multi-agent control has broad applications, ranging from multi-robot cooperation to distributed sensor networks. The systems involved are complex, with unknown or highly uncertain dynamics, so traditional model-based control methods can hardly be applied. Compared with model-based control, deep reinforcement learning (DRL) is promising because it learns the controller/policy from data without knowing the system dynamics. However, directly applying DRL to decentralized multi-agent control is challenging, as interactions among agents make the learning environment non-stationary. More importantly, existing multi-agent reinforcement learning (MARL) algorithms cannot ensure the closed-loop stability of a multi-agent system from a control-theoretic perspective, so the learned control policies are likely to generate abnormal or dangerous behaviors in real applications. Without a stability guarantee, applying existing MARL algorithms to real multi-agent systems such as UAVs, robots, and power systems is therefore of great concern. In this paper we propose a new MARL algorithm for decentralized multi-agent control with a stability guarantee. The new algorithm, termed multi-agent soft actor-critic (MASAC), is proposed under the well-known framework of centralized training with decentralized execution. Closed-loop stability is guaranteed by introducing a stability constraint, designed using Lyapunov's method from control theory, during the policy improvement step of MASAC. To demonstrate its effectiveness, we present a multi-agent navigation example showing the efficiency of the proposed MASAC algorithm. Accepted to The 2nd International Conference on Distributed Artificial Intelligence.
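
    The abstract does not spell out the stability constraint, but a common Lyapunov decrease condition in this line of work is L(s') − L(s) ≤ −α·L(s), enforced as a penalty during policy improvement. A minimal sketch under that assumption (the candidate-Lyapunov values and α are invented):

```python
import numpy as np

def lyapunov_constraint_penalty(L_next, L_curr, alpha=0.1):
    """Penalize violations of the decrease condition
    L(s') - L(s) <= -alpha * L(s), averaged over a batch of transitions."""
    violation = L_next - L_curr + alpha * L_curr
    return np.maximum(violation, 0.0).mean()

# Illustrative batch of candidate-Lyapunov values before/after transitions;
# in training this penalty would be added to the actor loss.
L_curr = np.array([1.00, 0.50, 0.20])
L_next = np.array([0.85, 0.55, 0.15])
print("constraint penalty:", lyapunov_constraint_penalty(L_next, L_curr))
```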