
    Coupled Replicator Equations for the Dynamics of Learning in Multiagent Systems

    Starting with a group of reinforcement-learning agents, we derive coupled replicator equations that describe the dynamics of collective learning in multiagent systems. We show that, although agents model their environment in a self-interested way without sharing knowledge, a game dynamics emerges naturally through environment-mediated interactions. An application to rock-scissors-paper game interactions shows that the collective learning dynamics exhibits a diversity of competitive and cooperative behaviors. These include quasiperiodicity, stable limit cycles, intermittency, and deterministic chaos--behaviors that should be expected in heterogeneous multiagent systems described by the general replicator equations we derive.
    Comment: 4 pages, 3 figures, http://www.santafe.edu/projects/CompMech/papers/credlmas.html; updated references, corrected typos, changed content
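    To make the dynamics concrete, the sketch below integrates the standard two-population replicator equations for a rock-scissors-paper game with small tie payoffs. It is a minimal illustration of coupled replicator dynamics in general, not the exact learning-derived equations of the paper; the tie biases `eps`, the step size, and the initial strategies are hypothetical choices.

```python
import numpy as np

def rsp_payoff(eps=0.0):
    """Rock-scissors-paper payoff matrix; eps biases the tie outcomes."""
    return np.array([[eps, 1.0, -1.0],
                     [-1.0, eps, 1.0],
                     [1.0, -1.0, eps]])

def replicator_step(x, y, A, B, dt=0.01):
    """One Euler step of the coupled two-population replicator equations:
       dx_i = x_i * ((A y)_i - x.A.y),  dy_j = y_j * ((B x)_j - y.B.x)."""
    fx = A @ y                       # fitness of X's pure strategies against y
    fy = B @ x                       # fitness of Y's pure strategies against x
    x = x + dt * x * (fx - x @ fx)   # growth relative to the mean payoff
    y = y + dt * y * (fy - y @ fy)
    return x / x.sum(), y / y.sum()  # renormalise back onto the simplex

if __name__ == "__main__":
    A = rsp_payoff(eps=0.1)      # X's tie bias (hypothetical)
    B = rsp_payoff(eps=-0.05)    # Y's tie bias (hypothetical)
    x = np.array([0.5, 0.3, 0.2])
    y = np.array([0.2, 0.3, 0.5])
    for _ in range(10_000):
        x, y = replicator_step(x, y, A, B)
    print("X strategy:", x.round(3), "Y strategy:", y.round(3))
```

    Varying the tie biases and step size changes whether the trajectories converge, cycle, or wander around the mixed equilibrium, which is the kind of qualitative behaviour the abstract catalogues.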

    A Study of AI Population Dynamics with Million-agent Reinforcement Learning

    We conduct an empirical study of the ordered collective dynamics that emerge in a population of intelligent agents driven by million-agent reinforcement learning. Our intention is to put intelligent agents into a simulated natural context and verify whether principles developed in the real world can also be used to understand an artificially created intelligent population. To achieve this, we simulate a large-scale predator-prey world whose laws are designed using only findings, or their logical equivalents, that have been discovered in nature. We endow the agents with intelligence based on deep reinforcement learning (DRL). To scale the population size up to millions of agents, we propose a large-scale DRL training platform with a redesigned experience buffer. Our results show that the population dynamics of AI agents, driven only by each agent's individual self-interest, reveal an ordered pattern similar to the Lotka-Volterra model studied in population biology. We further discover emergent collective adaptations by studying how the agents' grouping behaviors change with the environmental resources. Both findings can be explained by self-organization theory in nature.
    Comment: Full version of the paper presented at AAMAS 2018 (International Conference on Autonomous Agents and Multiagent Systems)
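    As a point of reference for the Lotka-Volterra comparison mentioned above, here is a minimal integration of the classic two-species model. All parameter values and initial populations are hypothetical; in the paper the analogous oscillations are measured from the simulated agent populations rather than produced by this equation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, state, alpha, beta, delta, gamma):
    """Classic predator-prey model:
       d(prey)/dt = alpha*prey - beta*prey*pred
       d(pred)/dt = delta*prey*pred - gamma*pred"""
    prey, pred = state
    return [alpha * prey - beta * prey * pred,
            delta * prey * pred - gamma * pred]

# Hypothetical rates: prey growth, predation, predator reproduction, predator death.
sol = solve_ivp(lotka_volterra, t_span=(0.0, 200.0), y0=[40.0, 9.0],
                args=(0.1, 0.02, 0.01, 0.1), dense_output=True)
t = np.linspace(0.0, 200.0, 1000)
prey, pred = sol.sol(t)
print("prey oscillates between", prey.min().round(1), "and", prey.max().round(1))
```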

    Bounded Distributed Flocking Control of Nonholonomic Mobile Robots

    There have been numerous studies on flocking control for multiagent systems whose simplified models are presented in terms of point-mass elements. In contrast, full dynamic models pose challenging problems for the flocking control of mobile robots due to their nonholonomic dynamic properties. Taking practical constraints into consideration, we propose a novel approach to distributed flocking control of nonholonomic mobile robots by bounded feedback. The flocking control objectives consist of velocity consensus, collision avoidance, and cohesion maintenance among mobile robots. A flocking control protocol based on the information of neighboring mobile robots is constructed. The theoretical analysis is conducted with the help of a Lyapunov-like function and graph theory. Simulation results demonstrate the efficacy of the proposed distributed flocking control scheme.
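    The sketch below illustrates the general shape of such a controller: a bounded (saturated) control law for a unicycle-type robot that combines cohesion/separation toward a reference spacing with velocity consensus among neighbours. It is a simplified stand-in under assumed gains (`k_c`, `k_a`, `d_ref`, `v_max`, `w_max`), not the control protocol analysed in the paper.

```python
import numpy as np

def bounded(u, u_max):
    """Smooth saturation keeping the control input within +/- u_max."""
    return u_max * np.tanh(u / u_max)

def flocking_control(i, pos, heading, vel, neighbors,
                     d_ref=2.0, k_c=1.0, k_a=1.0, v_max=1.0, w_max=1.5):
    """Bounded control (linear speed v, turn rate w) for unicycle robot i,
    combining cohesion/separation toward spacing d_ref and velocity
    consensus with its neighbors. All gains are hypothetical."""
    force = np.zeros(2)
    for j in neighbors:
        diff = pos[j] - pos[i]
        dist = np.linalg.norm(diff) + 1e-9
        force += k_c * (dist - d_ref) * diff / dist   # attract beyond d_ref, repel inside it
        force += k_a * (vel[j] - vel[i])              # velocity consensus term
    # project the planar force onto the nonholonomic (unicycle) inputs
    desired_angle = np.arctan2(force[1], force[0])
    v = bounded(np.dot(force, [np.cos(heading[i]), np.sin(heading[i])]), v_max)
    w = bounded(np.arctan2(np.sin(desired_angle - heading[i]),
                           np.cos(desired_angle - heading[i])), w_max)
    return v, w

pos = np.array([[0.0, 0.0], [3.0, 1.0], [1.5, -2.0]])
heading = np.array([0.0, np.pi / 2, -np.pi / 4])
vel = np.array([[0.2, 0.0], [0.0, 0.3], [0.1, 0.1]])
print(flocking_control(0, pos, heading, vel, neighbors=[1, 2]))
```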

    Soft behaviour modelling of user communities

    A soft modelling approach for describing behaviour in on-line user communities is introduced in this work. Behaviour models of individual users in dynamic virtual environments have been described in the literature in terms of timed transition automata, but they have various drawbacks. Soft multi-agent behaviour automata are defined and proposed to describe multiple user behaviours and to recognise larger classes of user group histories, such as group histories that contain unexpected behaviours. The notion of deviation from the user community model allows us to define a soft parsing process which assesses and evaluates the dynamic behaviour of a group of users interacting in virtual environments, such as e-learning and e-business platforms. The soft automaton model can describe virtually infinite sequences of actions due to multiple users and subject to temporal constraints. Soft measures assess a form of distance of observed behaviours by evaluating the amount of temporal deviation, the additional or omitted actions contained in an observed history, and the actions performed by unexpected users. The proposed model allows the soft recognition of user group histories even when the observed actions only partially meet the given behaviour model constraints. Compared with standard boolean model recognition, this approach is more realistic for real-time user community support systems, where more than one user model is potentially available, and the extent of deviation from community behaviour models can be used as a guide to generate system support by anticipation, projection, and other known techniques. Experiments based on logs from an e-learning platform and plan compilation of the soft multi-agent behaviour automaton show the expressiveness of the proposed model.
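    To make the notion of a soft deviation score concrete, the toy function below scores an observed group history against a model history by combining structural differences (additional or omitted actions), temporal drift beyond a tolerance, and actions by unexpected users. The tuple format, weights, and tolerance are illustrative assumptions, not the measures defined by the soft automaton model.

```python
from difflib import SequenceMatcher

def soft_deviation(observed, expected, known_users, time_tolerance=5.0):
    """Toy deviation score between an observed group history and a model history.
    Each history is a list of (user, action, timestamp) tuples; weights are
    hypothetical."""
    obs_actions = [a for _, a, _ in observed]
    exp_actions = [a for _, a, _ in expected]
    # additional/omitted actions via a longest-common-subsequence style ratio
    structural = 1.0 - SequenceMatcher(None, obs_actions, exp_actions).ratio()
    # temporal deviation: how far positionally matched actions drift beyond the tolerance
    temporal = 0.0
    for (_, a_o, t_o), (_, a_e, t_e) in zip(observed, expected):
        if a_o == a_e:
            temporal += max(0.0, abs(t_o - t_e) - time_tolerance) / time_tolerance
    # penalise actions performed by users outside the community model
    unexpected_users = sum(1 for u, _, _ in observed if u not in known_users)
    return structural + 0.1 * temporal + 0.2 * unexpected_users

history = [("alice", "open_course", 0.0), ("mallory", "post_forum", 12.0)]
model   = [("alice", "open_course", 0.0), ("alice", "submit_quiz", 10.0)]
print(soft_deviation(history, model, known_users={"alice", "bob"}))
```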

    Task-Based Information Compression for Multi-Agent Communication Problems with Channel Rate Constraints

    A collaborative task is assigned to a multiagent system (MAS) in which agents are allowed to communicate. The MAS runs over an underlying Markov decision process, and its task is to maximize the averaged sum of discounted one-stage rewards. Although knowing the global state of the environment is necessary for the optimal action selection of the MAS, agents are limited to individual observations. Inter-agent communication can tackle the issue of local observability; however, its limited rate prevents each agent from acquiring precise global state information. To overcome this challenge, agents need to communicate their observations in a compact way such that the MAS sacrifices as little of the sum of rewards as possible. We show that this problem is equivalent to a form of rate-distortion problem, which we call task-based information compression. We introduce a scheme for task-based information compression termed state aggregation for information compression (SAIC), for which a state aggregation algorithm is analytically designed. SAIC is shown to achieve near-optimal performance in terms of the achieved sum of discounted rewards. The proposed algorithm is applied to a rendezvous problem and its performance is compared with several benchmarks. Numerical experiments confirm the superiority of the proposed algorithm.
    Comment: 13 pages, 9 figures
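    A toy version of the idea is sketched below: each agent groups its possible local observations by an estimated task value into a small number of message indices, so only log2(n_messages) bits per step need to cross the channel. This is a simplified stand-in for SAIC's analytically designed aggregation rule; the value estimates and equal-size binning are assumptions for illustration.

```python
import numpy as np

def aggregate_states(observations, values, n_messages):
    """Toy state aggregation: group local observations with similar estimated
    task value into n_messages clusters and transmit only the cluster index."""
    order = np.argsort(values)                    # sort observations by value
    bins = np.array_split(order, n_messages)      # equal-size value bins
    label = np.empty(len(observations), dtype=int)
    for m, idx in enumerate(bins):
        label[idx] = m                            # message index per observation
    return label

obs = np.arange(16)                               # 16 possible local observations
vals = np.random.default_rng(0).normal(size=16)   # hypothetical value estimates
messages = aggregate_states(obs, vals, n_messages=4)  # a 2-bit message per step
print(messages)
```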

    Multiagent Bidirectionally-Coordinated Nets: Emergence of Human-level Coordination in Learning to Play StarCraft Combat Games

    Many artificial intelligence (AI) applications require multiple intelligent agents to work in a collaborative effort. Efficient learning of intra-agent communication and coordination is an indispensable step towards general AI. In this paper, we take the StarCraft combat game as a case study, where the task is to coordinate multiple agents as a team to defeat their enemies. To maintain a scalable yet effective communication protocol, we introduce a Multiagent Bidirectionally-Coordinated Network (BiCNet ['bIknet]) with a vectorised extension of the actor-critic formulation. We show that BiCNet can handle different types of combats with arbitrary numbers of AI agents on both sides. Our analysis demonstrates that, without any supervision such as human demonstrations or labelled data, BiCNet can learn various types of advanced coordination strategies that are commonly used by experienced game players. In our experiments, we evaluate our approach against multiple baselines under different scenarios; it shows state-of-the-art performance and possesses potential value for large-scale real-world applications.
    Comment: 10 pages, 10 figures. Previously titled "Multiagent Bidirectionally-Coordinated Nets for Learning to Play StarCraft Combat Games", Mar 2017
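    The coordination idea can be sketched as a bidirectional recurrent layer that runs across the agent dimension, so each agent's action depends on information flowing from teammates in both directions. The PyTorch module below is a minimal actor-only illustration with hypothetical layer sizes; the paper's full BiCNet also includes a vectorised critic and an actor-critic training loop, which are omitted here.

```python
import torch
import torch.nn as nn

class BiCoordinationActor(nn.Module):
    """Minimal sketch of a bidirectionally coordinated actor: each agent's
    observation is one step of a bidirectional GRU that runs across the
    agent dimension, letting information flow between teammates both ways."""
    def __init__(self, obs_dim, action_dim, hidden=64):
        super().__init__()
        self.encode = nn.Linear(obs_dim, hidden)
        self.coord = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
        self.policy = nn.Linear(2 * hidden, action_dim)

    def forward(self, obs):                  # obs: (batch, n_agents, obs_dim)
        h = torch.relu(self.encode(obs))     # per-agent encoding
        h, _ = self.coord(h)                 # pass information across agents
        return torch.tanh(self.policy(h))    # one continuous action per agent

actor = BiCoordinationActor(obs_dim=10, action_dim=3)
team_obs = torch.randn(2, 5, 10)             # 2 battles, 5 agents, 10-dim observations
print(actor(team_obs).shape)                 # -> torch.Size([2, 5, 3])
```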