
    Beyond Reynolds: A Constraint-Driven Approach to Cluster Flocking

    In this paper, we present an original set of flocking rules using an ecologically inspired paradigm for the control of multi-robot systems. We translate these rules into a constraint-driven optimal control problem in which the agents minimize energy consumption subject to safety and task constraints. We prove several properties of the feasible space of the optimal control problem and show that velocity consensus is an optimal solution. We also motivate the inclusion of slack variables in constraint-driven problems when the global state is only partially observable by each agent. Finally, we analyze the case where the communication topology is fixed and connected, and prove that our proposed flocking rules achieve velocity consensus. Comment: 6 pages
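    To make the constraint-driven formulation concrete, here is a minimal per-agent sketch in Python using cvxpy. This is not the paper's exact optimal control problem: the control-barrier-style safety condition, the consensus-ball task constraint, the slack penalty weight, and all gains are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

def flocking_control(p_i, v_i, neighbors, d_min=1.0, alpha=1.0, beta=0.5):
    """One agent's constraint-driven step: minimize control energy subject
    to pairwise safety constraints and a slackened velocity-consensus task.

    p_i, v_i  -- this agent's position and velocity (2-vectors)
    neighbors -- non-empty list of (p_j, v_j) pairs sensed locally
    """
    u = cp.Variable(2)            # control input (acceleration)
    s = cp.Variable(nonneg=True)  # slack: global state is only partially observed
    constraints = []
    for p_j, v_j in neighbors:
        # Safety: keep h = ||p_i - p_j||^2 - d_min^2 >= 0 by enforcing the
        # barrier-style condition dh/dt >= -alpha * h (linear in u).
        dp, dv = p_i - p_j, v_i - v_j
        h = dp @ dp - d_min ** 2
        constraints.append(2 * dp @ dv + 2 * dp @ u >= -alpha * h)
    # Task: steer toward the locally observed average velocity, softened by
    # the slack variable s since each agent sees only its own neighborhood.
    v_avg = np.mean([v_j for _, v_j in neighbors], axis=0)
    constraints.append(cp.norm(v_i + u - v_avg) <= beta + s)
    cp.Problem(cp.Minimize(cp.sum_squares(u) + 100.0 * s), constraints).solve()
    return u.value

# Usage: one control step for an agent with two sensed neighbors.
u = flocking_control(np.zeros(2), np.array([1.0, 0.0]),
                     [(np.array([1.5, 0.0]), np.array([0.5, 0.5])),
                      (np.array([0.0, 1.2]), np.array([0.8, -0.2]))])
```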

    Resilient Autonomous Control of Distributed Multi-agent Systems in Contested Environments

    An autonomous and resilient controller is proposed for leader-follower multi-agent systems under uncertainties and cyber-physical attacks. The leader is assumed to be non-autonomous, with a nonzero control input that allows the team behavior or mission to change in response to environmental changes. A resilient learning-based control protocol is presented to find optimal solutions to the synchronization problem in the presence of attacks and system dynamic uncertainties. An observer-based distributed H_infinity controller is first designed to prevent the effects of attacks on sensors and actuators from propagating throughout the network, and to attenuate the effect of these attacks on the compromised agent itself. Non-homogeneous game algebraic Riccati equations are derived to solve the H_infinity optimal synchronization problem, and off-policy reinforcement learning is used to learn their solution without requiring any knowledge of the agents' dynamics. A trust-confidence-based distributed control protocol is then proposed to mitigate attacks that hijack an entire node, as well as attacks on communication links. A confidence value is defined for each agent based solely on its local evidence. The proposed resilient reinforcement learning algorithm uses the confidence value of each agent to indicate the trustworthiness of its own information, and broadcasts it to the agent's neighbors, which weight the data they receive from it accordingly during and after learning. If an agent's confidence value is low, it employs a trust mechanism to identify compromised agents and removes the data received from them from the learning process. Simulation results are provided to show the effectiveness of the proposed approach.
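    The trust-confidence idea can be illustrated with a small sketch. The confidence map, step size, and thresholds below are assumptions for illustration, not the paper's H_infinity or Riccati machinery; the sketch only shows how broadcast confidence values weight, or exclude, neighbor data in a consensus-style update.

```python
import numpy as np

def confidence(innovation, kappa=1.0):
    """Map local evidence (e.g., an observer's innovation norm) to (0, 1]."""
    return float(np.exp(-kappa * np.linalg.norm(innovation)))

def trusted_consensus(x_i, conf_i, neighbor_msgs, step=0.5, threshold=0.3):
    """One consensus update that weights each neighbor's state by the
    confidence it broadcast; with low own confidence, data from low-trust
    (suspected compromised) senders is removed entirely.

    neighbor_msgs -- list of (x_j, conf_j, trust_ij) tuples
    """
    num, den = np.zeros_like(x_i), 0.0
    for x_j, conf_j, trust_ij in neighbor_msgs:
        w = conf_j
        if conf_i < threshold and trust_ij < threshold:
            w = 0.0  # exclude suspected compromised neighbor from learning
        num += w * (x_j - x_i)
        den += w
    return x_i + step * (num / den) if den > 0 else x_i

# Usage: low own confidence triggers the trust mechanism for the second sender.
x_next = trusted_consensus(np.array([0.0, 0.0]), 0.2,
                           [(np.array([1.0, 0.0]), 0.9, 0.8),
                            (np.array([5.0, 5.0]), 0.2, 0.1)])
```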

    Deep Reinforcement Learning for Swarm Systems

    Recently, deep reinforcement learning (RL) methods have been applied successfully to multi-agent scenarios. Typically, these methods rely on a concatenation of agent states to represent the information content required for decentralized decision making. However, concatenation scales poorly to swarm systems with a large number of homogeneous agents, as it does not exploit two fundamental properties of these systems: (i) the agents in the swarm are interchangeable, and (ii) the exact number of agents in the swarm is irrelevant. We therefore propose a new state representation for deep multi-agent RL based on mean embeddings of distributions. We treat the agents as samples of a distribution and use the empirical mean embedding as input for a decentralized policy. We define different feature spaces for the mean embedding using histograms, radial basis functions, and a neural network learned end-to-end. We evaluate the representation on two well-known problems from the swarm literature (rendezvous and pursuit evasion), in both a globally and a locally observable setup. For the local setup we furthermore introduce simple communication protocols. Of all approaches, the mean embedding representation using neural network features enables the richest information exchange between neighboring agents, facilitating the development of more complex collective strategies. Comment: 31 pages, 12 figures, version 3 (published in JMLR Volume 20)
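    A minimal sketch of the mean-embedding representation with radial basis features follows; the RBF centers, bandwidth, and 2D state dimension are illustrative assumptions. The key property, a fixed-size, permutation-invariant policy input independent of swarm size, falls out of averaging the per-agent features.

```python
import numpy as np

def rbf_features(x, centers, bandwidth=0.5):
    """Radial-basis feature map phi(x) for a single observed agent state x."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def mean_embedding(neighbor_states, centers):
    """Empirical mean embedding: average phi over the observed agents.
    Interchangeability and count-invariance follow from taking the mean."""
    feats = np.stack([rbf_features(x, centers) for x in neighbor_states])
    return feats.mean(axis=0)

# Usage: the policy input has the same size whether 5 or 50 agents are seen.
centers = np.random.uniform(-1, 1, size=(16, 2))  # 16 RBF centers in 2D
obs_5 = [np.random.randn(2) for _ in range(5)]
obs_50 = [np.random.randn(2) for _ in range(50)]
assert mean_embedding(obs_5, centers).shape == mean_embedding(obs_50, centers).shape
```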

    Local Communication Protocols for Learning Complex Swarm Behaviors with Deep Reinforcement Learning

    Swarm systems constitute a challenging problem for reinforcement learning (RL), as the algorithm needs to learn decentralized control policies that can cope with the limited local sensing and communication abilities of the agents. While it is often difficult to directly define the behavior of the agents, simple communication protocols can be defined more easily using prior knowledge about the given task. In this paper, we propose a number of simple communication protocols that can be exploited by deep reinforcement learning to find decentralized control policies in a multi-robot swarm environment. The protocols are based on histograms that encode the local neighborhood relations of the agents and can also transmit task-specific information, such as the shortest distance and direction to a desired target. In our framework, we use an adaptation of Trust Region Policy Optimization to learn complex collaborative tasks, such as formation building and establishing a communication link. We evaluate our findings in a simulated 2D-physics environment and compare the implications of the different communication protocols. Comment: 13 pages, 4 figures, version 2, accepted at ANTS 2018
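    A minimal sketch of such a histogram observation is given below; the (bearing, range) bin layout, sensing range, and normalization are assumptions, since the paper's exact encoding is not specified here.

```python
import numpy as np

def neighborhood_histogram(p_i, neighbor_positions, n_bearing_bins=8,
                           n_range_bins=3, sensing_range=2.0):
    """Fixed-size 2D histogram over (bearing, range) of locally sensed
    neighbors; out-of-range agents are invisible to the agent."""
    hist = np.zeros((n_bearing_bins, n_range_bins))
    for p_j in neighbor_positions:
        d = p_j - p_i
        r = np.linalg.norm(d)
        if r > sensing_range or r == 0.0:
            continue  # outside the local sensing range
        bearing = np.arctan2(d[1], d[0]) % (2 * np.pi)
        b = min(int(bearing / (2 * np.pi) * n_bearing_bins), n_bearing_bins - 1)
        k = min(int(r / sensing_range * n_range_bins), n_range_bins - 1)
        hist[b, k] += 1
    return hist.flatten() / max(len(neighbor_positions), 1)

# Usage: a 24-dimensional local observation from two sensed neighbors.
obs = neighborhood_histogram(np.zeros(2), [np.array([0.5, 0.5]),
                                           np.array([1.5, -0.3])])
```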

    Can geocomputation save urban simulation? Throw some agents into the mixture, simmer and wait ...

    There are indications that the current generation of simulation models in practical, operational use has reached the limits of its usefulness under existing specifications. The relative stasis in operational urban modeling contrasts with simulation efforts in other disciplines, where techniques, theories, and ideas drawn from computation and complexity studies are revitalizing the ways in which we conceptualize, understand, and model real-world phenomena. Many of these concepts and methodologies are applicable to operational urban systems simulation. Indeed, in many cases, ideas from computation and complexity studies, often clustered under the collective term geocomputation as they apply to geography, are ideally suited to the simulation of urban dynamics. However, several obstructions stand in the way of their successful use in operational urban geographic simulation, particularly as regards the capacity of these methodologies to handle top-down dynamics in urban systems. This paper presents a framework for developing a hybrid model for urban geographic simulation and discusses some of the significant barriers to innovation in this field. The framework infuses approaches derived from geocomputation and complexity with standard techniques that have been tried and tested in operational land-use and transport simulation. Macro-scale dynamics that operate from the top down are handled by traditional land-use and transport models, while micro-scale dynamics that work from the bottom up are delegated to agent-based models and cellular automata. The two methodologies are fused in a modular fashion using a system of feedback mechanisms. As a proof-of-concept exercise, a micro-model of residential location has been developed with a view to hybridization. The model mixes cellular automata and multi-agent approaches and is formulated so as to interface with meso-models at a higher scale.
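    As a rough illustration of the modular feedback coupling, the self-contained sketch below alternates a macro (top-down) demand step with a micro (bottom-up) cellular allocation step; all structures and numbers are illustrative assumptions, not the paper's residential-location model.

```python
import numpy as np

def macro_step(zone_demand, zone_occupancy):
    # Top-down: unmet demand per zone, as a land-use/transport model might set.
    return np.maximum(zone_demand - zone_occupancy, 0)

def micro_step(grid, zone_of_cell, targets, rng):
    # Bottom-up: agents settle in random vacant cells of zones with unmet demand.
    for cell in rng.permutation(grid.size):
        z = zone_of_cell[cell]
        if grid.flat[cell] == 0 and targets[z] > 0:
            grid.flat[cell] = 1  # an agent occupies this cell
            targets[z] -= 1
    return grid

rng = np.random.default_rng(0)
grid = np.zeros((10, 10), dtype=int)         # micro scale: residential cells
zone_of_cell = np.arange(100) // 25          # meso scale: 4 zones of 25 cells
zone_demand = np.array([30, 10, 5, 20])      # macro scale: zonal demand
for t in range(5):
    occupancy = np.bincount(zone_of_cell, weights=grid.ravel(), minlength=4)
    targets = macro_step(zone_demand, occupancy)   # feedback: micro -> macro
    grid = micro_step(grid, zone_of_cell, targets, rng)
```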