
    Foraging swarms as Nash equilibria of dynamic games

    The question of whether foraging swarms can form as a result of a noncooperative game played by individuals is shown here to have an affirmative answer. A dynamic game played by N agents in 1-D motion is introduced; it models, for instance, a foraging ant colony. Each agent controls its velocity to minimize its total work done over a finite time interval. The game is shown to have a unique Nash equilibrium under two different foraging-location specifications, and both equilibria display many features of foraging swarm behavior observed in biological swarms. Explicit expressions are derived for the pairwise distances between individuals of the swarm, the swarm size, and the swarm center location during foraging. © 2013 IEEE
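
    A hedged sketch of the kind of formulation the abstract describes (the symbols below are illustrative assumptions, not the paper's notation): agent $i$ has position $x_i(t)$ on $[0,T]$, uses its velocity $u_i(t)$ as the control, and minimizes a work-like cost

        \dot{x}_i(t) = u_i(t), \qquad
        J_i(u_i) = \int_0^T \Big( \tfrac{r}{2}\, u_i(t)^2 + u_i(t)\, f_i\big(x_1(t), \dots, x_N(t)\big) \Big)\, dt ,

    where $f_i$ collects attraction toward the foraging location and repulsion from nearby agents. A Nash equilibrium is a control profile $(u_1^*, \dots, u_N^*)$ such that no agent can lower its own $J_i$ by unilaterally changing $u_i$; closed-form solutions of such a game are what yield explicit expressions for pairwise distances and swarm size.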

    Consensus as a Nash Equilibrium of a Dynamic Game

    Consensus formation in a social network is modeled by a dynamic game of a prescribed duration played by members of the network. Each member independently minimizes a cost function that represents his/her motive. An integral cost function penalizes a member's differences of opinion from the others as well as from his/her own initial opinion, weighted by influence and stubbornness parameters. Each member uses his/her rate of change of opinion as a control input. This defines a dynamic non-cooperative game that turns out to have a unique Nash equilibrium. Explicit analytic expressions are derived for the opinion trajectory of each member in two representative cases obtained by suitable assumptions on the graph topology of the network. These trajectories are then examined under different assumptions on the relative sizes of the influence and stubbornness parameters that appear in the cost functions. Comment: 7 pages, 9 figures; preprint from the Proceedings of the 12th International Conference on Signal Image Technology and Internet-based Systems (SITIS), 201
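
    A minimal sketch of a cost of this type (the notation is an assumption for illustration, not copied from the paper): member $i$ holds opinion $x_i(t)$ on $[0,T]$, controls its rate of change $u_i(t) = \dot{x}_i(t)$, and minimizes

        J_i(u_i) = \int_0^T \Big( \sum_{j \neq i} w_{ij}\, \big(x_i(t) - x_j(t)\big)^2 + k_i\, \big(x_i(t) - x_i(0)\big)^2 + u_i(t)^2 \Big)\, dt ,

    where the influence weights $w_{ij}$ penalize disagreement with the other members and the stubbornness parameter $k_i$ penalizes drifting from the initial opinion $x_i(0)$. The unique Nash equilibrium of the resulting non-cooperative game yields each member's opinion trajectory.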

    Constrained Mean Field Games Equilibria as Fixed Point of Random Lifting of Set-Valued Maps

    We introduce an abstract framework for the study of general mean field game and mean field control problems. Given a multiagent system, its macroscopic description is provided by a time-dependent probability measure, where at every instant of time the measure of a set represents the fraction of (microscopic) agents contained in it. The trajectories available to each of the microscopic agents are also affected by the overall state of the system. By using a suitable concept of random lift of set-valued maps, together with fixed-point arguments, we are able to derive properties of the macroscopic description of the system from properties of the set-valued map expressing the admissible trajectories of the microscopic agents. We apply the results to the case in which the admissible trajectories of the agents are the minimizers of a suitable integral functional that also depends on the macroscopic evolution of the system. © 2022 The Authors
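
    One common way to phrase the fixed-point condition in this Lagrangian setting (illustrative, not necessarily the paper's exact definition): write $\mu = (\mu_t)_{t \in [0,T]}$ for a candidate macroscopic evolution and $\Gamma(\mu) \subseteq C([0,T]; \mathbb{R}^d)$ for the set of trajectories admissible to a microscopic agent when the crowd evolves as $\mu$. Then $\mu$ is an equilibrium if

        \mu_t = (e_t)_{\#}\, \eta \quad \text{for all } t \in [0,T], \qquad \text{for some } \eta \in \mathcal{P}\big(C([0,T]; \mathbb{R}^d)\big) \text{ with } \operatorname{supp} \eta \subseteq \Gamma(\mu),

    where $e_t : \gamma \mapsto \gamma(t)$ is the evaluation map. Establishing the existence of such a fixed point is what the random lift of the set-valued map $\Gamma$, combined with fixed-point arguments, is used for.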

    A Dynamic Game Model of Collective Choice in Multi-Agent Systems

    Inspired by successful biological collective decision mechanisms such as honey bees searching for a new colony or the collective navigation of fish schools, we consider a mean field games (MFG)-like scenario where a large number of agents must choose among a set of different potential target destinations. Each individual both influences and is influenced by the group's decision, as well as by the mean trajectory of all the agents. The model can be interpreted as a stylized version of opinion crystallization in an election, for example. The agents' biases are dictated first by their initial spatial position and, in a subsequent generalization of the model, by a combination of initial position and a priori individual preference. The agents have linear dynamics and are coupled through a modified form of quadratic cost. Fixed-point-based finite-population equilibrium conditions are identified, and associated existence conditions are established. In general, multiple equilibria may exist, and the agents need to know all initial conditions to compute them precisely. However, as the number of agents becomes sufficiently large, we show that 1) the computed fixed-point equilibria qualify as epsilon-Nash equilibria, and 2) agents no longer require all initial conditions to compute the equilibria but can do so based on a representative probability distribution of these conditions, now viewed as random variables. Numerical results are reported.
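
    A hedged sketch of the consistency condition behind such fixed-point computations (all symbols are illustrative assumptions): suppose each agent has linear dynamics $\dot{x}_i = A x_i + B u_i$ and a quadratic cost coupled to the population mean trajectory $\bar{x}(\cdot)$ and to the chosen destination among $d_1, \dots, d_K$. Given a posited $\bar{x}(\cdot)$, each agent computes a best-response trajectory $x_i^*(\cdot\,; \bar{x})$; an equilibrium mean trajectory is one that reproduces itself:

        \bar{x}(t) = \frac{1}{N} \sum_{i=1}^{N} x_i^*\big(t; \bar{x}\big) \quad \text{for all } t \in [0,T].

    As the abstract notes, for a sufficiently large number of agents these fixed points are epsilon-Nash, and the average can be computed from a representative distribution of initial conditions rather than from every individual initial state.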

    Utility Design for Distributed Resource Allocation -- Part I: Characterizing and Optimizing the Exact Price of Anarchy

    Game theory has emerged as a fruitful paradigm for the design of networked multiagent systems. A fundamental component of this approach is the design of agents' utility functions so that their self-interested maximization results in a desirable collective behavior. In this work we focus on a well-studied class of distributed resource allocation problems where each agent is requested to select a subset of resources with the goal of optimizing a given system-level objective. Our core contribution is the development of a novel framework to tightly characterize the worst-case performance of any resulting Nash equilibrium (the price of anarchy) as a function of the chosen agents' utility functions. Leveraging this result, we identify how to design such utilities so as to optimize the price of anarchy through a tractable linear program. This provides a priori performance certificates applicable to any existing learning algorithm capable of driving the system to an equilibrium. Part II of this work specializes these results to submodular and supermodular objectives, discusses the complexity of computing Nash equilibria, and provides multiple illustrations of the theoretical findings. Comment: 15 pages, 5 figures
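
    For context, a hedged statement of the quantity being optimized (the normalization is an assumption; conventions vary): with $W(a)$ the system-level welfare of an allocation $a$ and $\mathrm{NE}(G)$ the Nash equilibria of an instance $G$ in the class $\mathcal{G}$ induced by a chosen utility design,

        \mathrm{PoA} = \inf_{G \in \mathcal{G}} \; \frac{\min_{a \in \mathrm{NE}(G)} W(a)}{\max_{a} W(a)} \;\in\; (0, 1],

    so every equilibrium of every admissible instance is guaranteed at least a $\mathrm{PoA}$ fraction of the optimal welfare. Designing the utility functions then amounts to maximizing this worst-case ratio, which the abstract states can be done via a tractable linear program.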