Control of Networked Multiagent Systems with Uncertain Graph Topologies
Multiagent systems consist of agents that locally exchange information
through a physical network subject to a graph topology. Current control methods
for networked multiagent systems assume the knowledge of graph topologies in
order to design distributed control laws for achieving desired global system
behaviors. However, this assumption may not be valid for situations where graph
topologies are subject to uncertainties either due to changes in the physical
network or the presence of modeling errors especially for multiagent systems
involving a large number of interacting agents. Motivated by this
standpoint, this paper studies distributed control of networked multiagent
systems with uncertain graph topologies. The proposed framework involves a
controller architecture with the ability to adapt its feedback gains in
response to system variations. Specifically, we analytically show that the
proposed controller drives the trajectories of a networked multiagent system
subject to a graph topology with time-varying uncertainties to a close
neighborhood of the trajectories of a given reference model having a desired
graph topology. As a special case, we also show that a networked multiagent
system subject to a graph topology with constant uncertainties asymptotically
converges to the trajectories of a given reference model. Although the main
result of this paper is presented in the context of the average consensus
problem, the proposed framework can be used for many other problems involving
networked multiagent systems with uncertain graph topologies.
Comment: 14 pages, 2 figures
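To make the setup concrete, here is a minimal simulation sketch of average
consensus under an uncertain graph topology. It is not the paper's adaptive
controller: the ring graph, the sinusoidal edge-weight perturbation, and all
numerical values are illustrative assumptions.

    import numpy as np

    # Hypothetical 4-agent ring graph; L is its Laplacian (degrees minus adjacency).
    A = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
    L = np.diag(A.sum(axis=1)) - A

    # Illustrative time-varying perturbation of the edge weights (uncertain topology).
    def L_uncertain(t, eps=0.3):
        dA = eps * np.sin(t) * A          # keeps the graph structure, perturbs weights
        return np.diag((A + dA).sum(axis=1)) - (A + dA)

    # Euler integration of the consensus dynamics x_dot = -L(t) x,
    # alongside a reference model that uses the nominal (desired) Laplacian.
    dt, T = 0.01, 10.0
    x = np.array([1.0, -2.0, 3.0, 0.5])   # initial agent states (assumed)
    x_ref = x.copy()
    for k in range(int(T / dt)):
        t = k * dt
        x = x - dt * L_uncertain(t) @ x
        x_ref = x_ref - dt * L @ x_ref

    print("uncertain-topology states:", np.round(x, 3))
    print("reference-model states:   ", np.round(x_ref, 3))
    print("average (invariant):      ", np.round(x.mean(), 3))

Because both Laplacians are symmetric with zero row sums, the state average is
invariant and both trajectories converge to it; the paper's contribution is the
adaptive gain law that keeps the two trajectories close, which this sketch omits.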
A Survey and Critique of Multiagent Deep Reinforcement Learning
Deep reinforcement learning (RL) has achieved outstanding results in recent
years. This has led to a dramatic increase in the number of applications and
methods. Recent works have explored learning beyond single-agent scenarios and
have considered multiagent learning (MAL) scenarios. Initial results report
successes in complex multiagent domains, although there are several challenges
to be addressed. The primary goal of this article is to provide a clear
overview of current multiagent deep reinforcement learning (MDRL) literature.
Additionally, we complement the overview with a broader analysis: (i) we
revisit previous key components, originally presented in MAL and RL, and
highlight how they have been adapted to multiagent deep reinforcement learning
settings; (ii) we provide general guidelines for new practitioners in the area,
describing lessons learned from MDRL works, pointing to recent benchmarks, and
outlining open avenues of research; and (iii) we take a more critical tone,
raising practical challenges of MDRL (e.g., implementation and computational demands).
We expect this article will help unify and motivate future research to take
advantage of the abundant literature that exists (e.g., RL and MAL) in a joint
effort to promote fruitful research in the multiagent community.
Comment: Under review since Oct 2018. Earlier versions of this work had the
title: "Is multiagent deep reinforcement learning the answer or the question?
A brief survey"
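As a toy illustration of the multiagent learning setting the survey builds on,
the sketch below runs two independent Q-learners in a repeated 2x2 coordination
game. Independent learning is one of the baseline MAL approaches that MDRL
methods scale up with deep networks; the game, hyperparameters, and reward
structure here are assumptions for illustration, not an example from the survey.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative 2x2 coordination game: both agents are rewarded for matching actions.
    payoff = np.array([[1.0, 0.0],
                       [0.0, 1.0]])

    # Independent Q-learning: each agent treats the other as part of the environment.
    Q = np.zeros((2, 2))       # Q[agent, action] for a stateless repeated game
    alpha, eps = 0.1, 0.1      # learning rate and exploration rate (assumed values)

    for _ in range(5000):
        acts = [a if rng.random() > eps else rng.integers(2)
                for a in Q.argmax(axis=1)]
        r = payoff[acts[0], acts[1]]                      # shared reward
        for i in (0, 1):
            Q[i, acts[i]] += alpha * (r - Q[i, acts[i]])  # stateless TD update

    print("learned Q-values:\n", np.round(Q, 2))  # agents settle on one matching action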
Adaptive multiagent system for seismic emergency management
Presently, most multiagent frameworks are programmed in Java. Since the JADE platform has recently been ported to .NET, we used it to create an adaptive multiagent system in which the knowledge base of the agents is managed using the CLIPS language, also called from .NET. The multiagent system is applied to create seismic risk scenarios, simulations of emergency situations in which different parties, modeled as adaptive agents, interact and cooperate.
Keywords: adaptive systems, risk management, earthquakes.
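The paper's system targets JADE on .NET with CLIPS; as a loose, language-neutral
illustration of the rule-driven agent idea, here is a hypothetical
condition-action loop in Python. All facts, rule names, and thresholds are
invented for illustration and do not come from the paper.

    # Minimal sketch of a CLIPS-style knowledge base driving an emergency agent.
    facts = {"magnitude": 6.1, "hospitals_free": 2, "roads_blocked": True}

    rules = [
        # (name, condition over the fact base, action to emit)
        ("dispatch_rescue", lambda f: f["magnitude"] >= 5.5, "dispatch rescue teams"),
        ("reroute_traffic", lambda f: f["roads_blocked"], "activate alternate routes"),
        ("open_shelters",   lambda f: f["hospitals_free"] < 3, "open emergency shelters"),
    ]

    # Forward-chaining pass: fire every rule whose condition matches the current facts.
    for name, cond, action in rules:
        if cond(facts):
            print(f"rule {name} fired -> {action}")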
Multiagent cooperation for solving global optimization problems: an extendible framework with example cooperation strategies
This paper proposes the use of multiagent cooperation for solving global optimization problems through the introduction of a new multiagent environment, MANGO. The strength of the environment lies in its flexible structure, based on communicating software agents that attempt to solve a problem cooperatively. This structure allows the execution of a wide range of global optimization algorithms described as a set of interacting operations. At one extreme, MANGO welcomes an individual non-cooperating agent, which is basically the traditional way of solving a global optimization problem. At the other extreme, autonomous agents existing in the environment cooperate as they see fit during run time. We explain the development and communication tools provided in the environment, as well as examples of agent realizations and cooperation scenarios. We also show how the multiagent structure is more effective than having a single nonlinear optimization algorithm with randomly selected initial points.
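The cooperation idea can be sketched independently of MANGO's actual API: several
search agents run in parallel and periodically restart near the best solution any
of them has found, rather than from fresh random points. The test function,
cooperation schedule, and all parameters below are assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(1)

    # Illustrative test problem (not from the paper): the 2-D Rastrigin function.
    def rastrigin(x):
        return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    # Each "agent" is a simple stochastic hill-climber; cooperation means agents
    # periodically restart from a neighborhood of the best incumbent solution.
    n_agents, steps, sigma = 4, 2000, 0.3
    xs = rng.uniform(-5.12, 5.12, size=(n_agents, 2))   # independent starting points
    best_x, best_f = None, np.inf

    for step in range(steps):
        for i in range(n_agents):
            cand = xs[i] + sigma * rng.standard_normal(2)
            if rastrigin(cand) < rastrigin(xs[i]):
                xs[i] = cand
            if rastrigin(xs[i]) < best_f:
                best_f, best_x = rastrigin(xs[i]), xs[i].copy()
        if step % 500 == 499:                 # cooperation round: share the incumbent
            for i in range(n_agents):
                xs[i] = best_x + sigma * rng.standard_normal(2)

    print("best value found:", round(best_f, 4), "at", np.round(best_x, 3))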
Coupled Replicator Equations for the Dynamics of Learning in Multiagent Systems
Starting with a group of reinforcement-learning agents we derive coupled
replicator equations that describe the dynamics of collective learning in
multiagent systems. We show that, although agents model their environment in a
self-interested way without sharing knowledge, game dynamics emerge
naturally through environment-mediated interactions. An application to
rock-scissors-paper game interactions shows that the collective learning
dynamics exhibits a diversity of competitive and cooperative behaviors. These
include quasiperiodicity, stable limit cycles, intermittency, and deterministic
chaos: behaviors that should be expected in heterogeneous multiagent systems
described by the general replicator equations we derive.
Comment: 4 pages, 3 figures,
http://www.santafe.edu/projects/CompMech/papers/credlmas.html; updated
references, corrected typos, changed content
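A simplified instance of the dynamics described above can be integrated directly.
The sketch below runs standard two-population replicator equations for
rock-scissors-paper; it omits the learning-rate and memory-loss terms of the
paper's coupled equations, and the initial strategy mixes are assumed values.

    import numpy as np

    # Standard rock-scissors-paper payoff matrix (row player): win = +1, loss = -1.
    A = np.array([[ 0.0, -1.0,  1.0],
                  [ 1.0,  0.0, -1.0],
                  [-1.0,  1.0,  0.0]])

    # Two-population replicator equations:
    #   x_i' = x_i [ (A y)_i - x.(A y) ],   y_j' = y_j [ (A x)_j - y.(A x) ]
    def step(x, y, dt=0.01):
        fx = A @ y
        fy = A @ x        # zero-sum antisymmetry: the column player's -A^T equals A
        x = x + dt * x * (fx - x @ fx)
        y = y + dt * y * (fy - y @ fy)
        return x / x.sum(), y / y.sum()   # renormalize against integration drift

    x = np.array([0.5, 0.3, 0.2])   # initial strategy mixes (assumed)
    y = np.array([0.2, 0.3, 0.5])
    for _ in range(5000):
        x, y = step(x, y)

    print("population X mix:", np.round(x, 3))
    print("population Y mix:", np.round(y, 3))

With these payoffs the mixes orbit the uniform strategy (1/3, 1/3, 1/3) rather
than converging, a simple instance of the non-equilibrium behaviors the paper
catalogs.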
