Consensus analysis of multiagent networks via aggregated and pinning approaches
This is the post-print version of the article. Copyright © 2011 IEEE.
In this paper, the consensus problem of multiagent nonlinear directed networks (MNDNs) is discussed for the case in which a MNDN does not have a spanning tree, so that consensus of all nodes cannot be reached through the network topology alone. Using Lie algebra theory, a linear node-and-node pinning method is proposed to achieve consensus of a MNDN for all nonlinear functions satisfying a given set of conditions. Using optimization algorithms, large-size networks are aggregated into small-size ones. Then, by applying principal minor theory to the small-size networks, a sufficient condition is given that reduces the number of controlled nodes. Finally, simulation results illustrate the effectiveness of the developed criteria.
This work was jointly supported by CityU under a research grant (7002355) and GRF funding (CityU 101109)
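The abstract above describes pinning control at a high level. A minimal numerical sketch of the idea follows; it uses linear dynamics rather than the paper's nonlinear MNDN model, and the network, pinned nodes, and gains are illustrative assumptions, not taken from the paper. The graph has two disconnected components (so no spanning tree), and pinning one node per component drives all agents to the target value.

```python
import numpy as np

# Hypothetical 4-node directed network with no spanning tree:
# two disconnected pairs {0,1} and {2,3} (assumed for illustration).
A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian

target = 1.0                             # desired consensus value
pins = np.array([1.0, 0.0, 1.0, 0.0])   # pin one node in each component
gain = 2.0                               # pinning feedback gain (assumed)

x = np.array([0.0, 3.0, -2.0, 5.0])     # initial agent states
dt = 0.01
for _ in range(5000):
    # diffusive coupling plus linear pinning feedback on pinned nodes
    x = x + dt * (-L @ x + gain * pins * (target - x))

print(np.round(x, 3))                    # all states settle near the target
```

Without the pinning term, each component would only agree internally; the pinned nodes anchor both components to the same external reference, which is the role the controlled nodes play in the paper's scheme.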
Differential Inequalities in Multi-Agent Coordination and Opinion Dynamics Modeling
Distributed algorithms of multi-agent coordination have attracted substantial
attention from the research community; the simplest and most thoroughly studied
of them are consensus protocols in the form of differential or difference
equations over general time-varying weighted graphs. These graphs are usually
characterized algebraically by their associated Laplacian matrices. Network
algorithms with similar algebraic graph-theoretic structures, referred to in
this paper as Laplacian-type, also arise in other related multi-agent control
problems, such as aggregation and containment control, target surrounding,
distributed optimization and modeling of opinion evolution in social groups. In
spite of their similarities, each such algorithm has often been studied
using separate mathematical techniques. In this paper, a novel approach is
offered, allowing a unified and elegant way to examine many Laplacian-type
algorithms for multi-agent coordination. This approach is based on the analysis
of differential or difference inequalities that have to be satisfied by
some "outputs" of the agents (e.g., the distances to the desired set in
aggregation problems). Although such inequalities may have many unbounded
solutions, under natural graphic connectivity conditions all their bounded
solutions converge (and even reach consensus), entailing the convergence of the
corresponding distributed algorithms. In the theory of differential equations
the absence of bounded non-convergent solutions is referred to as the
equation's dichotomy. In this paper, we establish the dichotomy criteria of
Laplacian-type differential and difference inequalities and show that these
criteria enable one to extend a number of recent results, concerned with
Laplacian-type algorithms for multi-agent coordination and modeling opinion
formation in social groups.
Comment: accepted to Automatica
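The Laplacian-type difference equations discussed above can be illustrated with a short simulation; this is a generic sketch, not the paper's analysis, and the graph model, step size, and edge probability are assumptions. States evolve by x(k+1) = W(k) x(k), where each W(k) is a stochastic matrix built from the Laplacian of a randomly drawn time-varying graph, and the spread of the states shrinks toward consensus.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
x = rng.uniform(-1, 1, n)                 # initial states/opinions

for k in range(2000):
    # random time-varying directed graph (edge probability assumed 0.3)
    A = (rng.random((n, n)) < 0.3).astype(float)
    np.fill_diagonal(A, 0.0)
    Lap = np.diag(A.sum(axis=1)) - A      # Laplacian of the current graph
    alpha = 0.1                            # step size < 1/(max degree) keeps
    W = np.eye(n) - alpha * Lap            # W row-stochastic with positive diagonal
    x = W @ x                              # Laplacian-type difference equation

print(np.ptp(x))                           # spread of states shrinks toward zero
```

The bounded solutions of the corresponding difference inequality converging to consensus, under connectivity conditions on the time-varying graph, is exactly the dichotomy property the paper establishes; the simulation only shows the equality case.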
Multiagent Systems for 3D Reconstruction Applications
3D models of scenes are used in many areas ranging from cultural heritage to video games. There are several techniques for modeling a scene; one of the best-known and most widely used is image-based reconstruction. An image-based reconstruction starts with a data acquisition step and ends with a 3D model of the scene. Data are collected from the scene in various ways. The chapter explains how the data acquisition step can be handled using a multiagent system. The explanation is supported by literature reviews and a study whose purpose is reconstructing an area in 3D using a multiagent UAV system.
Deep Reinforcement Learning for Swarm Systems
Recently, deep reinforcement learning (RL) methods have been applied
successfully to multi-agent scenarios. Typically, these methods rely on a
concatenation of agent states to represent the information content required for
decentralized decision making. However, concatenation scales poorly to swarm
systems with a large number of homogeneous agents as it does not exploit the
fundamental properties inherent to these systems: (i) the agents in the swarm
are interchangeable and (ii) the exact number of agents in the swarm is
irrelevant. Therefore, we propose a new state representation for deep
multi-agent RL based on mean embeddings of distributions. We treat the agents
as samples of a distribution and use the empirical mean embedding as input for
a decentralized policy. We define different feature spaces of the mean
embedding using histograms, radial basis functions and a neural network learned
end-to-end. We evaluate the representation on two well-known problems from the
swarm literature (rendezvous and pursuit evasion), in a globally and locally
observable setup. For the local setup we furthermore introduce simple
communication protocols. Of all approaches, the mean embedding representation
using neural network features enables the richest information exchange between
neighboring agents, facilitating the development of more complex collective
strategies.
Comment: 31 pages, 12 figures, version 3 (published in JMLR Volume 20
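The mean-embedding idea above can be sketched in a few lines; this is a simplified illustration with radial basis features (one of the feature spaces the abstract mentions), and the centers, bandwidth, and swarm states are assumptions, not the paper's setup. Because the per-agent features are averaged, the representation is invariant to agent ordering and has a fixed size regardless of swarm size.

```python
import numpy as np

def rbf_mean_embedding(states, centers, bandwidth=1.0):
    """Permutation-invariant mean embedding of a set of agent states.

    Each agent state is mapped through radial basis features and the
    features are averaged, so the result does not depend on agent
    ordering or on the number of agents.
    """
    # pairwise squared distances between agent states and RBF centers
    d2 = ((states[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    feats = np.exp(-d2 / (2 * bandwidth ** 2))   # (n_agents, n_centers)
    return feats.mean(axis=0)                    # (n_centers,)

# hypothetical 2-D feature centers and swarm of 3 agents
centers = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 1.0]])
swarm = np.array([[0.2, 0.1], [0.9, 1.1], [-0.1, 0.0]])

emb = rbf_mean_embedding(swarm, centers)
# shuffling the agents leaves the embedding unchanged
emb_shuffled = rbf_mean_embedding(swarm[::-1], centers)
print(np.allclose(emb, emb_shuffled))   # True
```

A decentralized policy would take such a fixed-size embedding of a neighborhood as input, which is how the representation sidesteps the poor scaling of state concatenation.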