
    Blackwell-Optimal Strategies in Priority Mean-Payoff Games

    We examine perfect information stochastic mean-payoff games, a class of games containing as special sub-classes the usual mean-payoff games and parity games. We show that deterministic memoryless strategies that are optimal for discounted games with state-dependent discount factors close to 1 are also optimal for priority mean-payoff games, establishing a strong link between these two classes.
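    A minimal sketch of the link the abstract describes (an assumed toy example, not the paper's construction): value iteration for a one-player discounted game on a tiny graph. As the discount factor approaches 1, the optimal memoryless choice at each state stabilizes and matches the mean-payoff-optimal strategy, which is the Blackwell-optimality phenomenon.

    ```python
    # Value iteration for a one-player discounted payoff game on a small graph.
    def discounted_values(edges, rewards, beta, iters=3000):
        # edges[s] -> list of successors; rewards[(s, t)] -> payoff on edge (s, t)
        v = {s: 0.0 for s in edges}
        for _ in range(iters):
            v = {s: max(rewards[(s, t)] + beta * v[t] for t in succ)
                 for s, succ in edges.items()}
        return v

    def best_successor(v, edges, rewards, beta, s):
        # The memoryless strategy induced by the discounted values.
        return max(edges[s], key=lambda t: rewards[(s, t)] + beta * v[t])

    # Staying at s0 pays 1 per step; moving to s1 pays 0 once, then 2 per step,
    # so the mean-payoff-optimal play is to move to s1.
    edges = {"s0": ["s0", "s1"], "s1": ["s1"]}
    rewards = {("s0", "s0"): 1.0, ("s0", "s1"): 0.0, ("s1", "s1"): 2.0}
    v = discounted_values(edges, rewards, beta=0.99)
    ```

    For discount factors close enough to 1, `best_successor` at `s0` picks `s1` despite the lower immediate reward, mirroring how discounted-optimal strategies become mean-payoff optimal in the limit.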

    Deterministic Equations for Stochastic Spatial Evolutionary Games

    Spatial evolutionary games model individuals who are distributed in a spatial domain and update their strategies upon playing a normal form game with their neighbors. We derive integro-differential equations as deterministic approximations of the microscopic stochastic updating processes. This generalizes the known mean-field ordinary differential equations and provides a powerful tool to investigate spatial effects in population evolution. The deterministic equations allow us to identify many interesting features of the evolution of strategy profiles in a population, such as standing and traveling waves and pattern formation, especially in replicator-type evolutions.
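    A hedged sketch of the baseline these equations generalize (not the paper's spatial integro-differential model): the mean-field replicator ODE, integrated with forward Euler for a Hawk-Dove game whose mixed equilibrium is known in closed form.

    ```python
    # Forward-Euler integration of the replicator dynamics
    #   dx_i/dt = x_i * ((A x)_i - x^T A x)
    # for a 2-strategy normal form game with payoff matrix A.
    def replicator(A, x, dt=0.01, steps=5000):
        n = len(x)
        for _ in range(steps):
            fitness = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
            avg = sum(x[i] * fitness[i] for i in range(n))  # mean population fitness
            x = [x[i] + dt * x[i] * (fitness[i] - avg) for i in range(n)]
        return x

    # Hawk-Dove payoffs with V=2, C=4: the mixed equilibrium is V/C = 0.5 hawks.
    A = [[-1.0, 2.0], [0.0, 1.0]]
    x = replicator(A, [0.9, 0.1])
    ```

    Starting from 90% hawks, the trajectory converges to the interior equilibrium `[0.5, 0.5]`; the spatial equations in the paper add diffusion-like interaction kernels on top of exactly this local dynamic.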

    Learning Sparse Graphon Mean Field Games

    Although the field of multi-agent reinforcement learning (MARL) has made considerable progress in recent years, solving systems with a large number of agents remains a hard challenge. Graphon mean field games (GMFGs) enable the scalable analysis of MARL problems that are otherwise intractable. By the mathematical structure of graphons, this approach is limited to dense graphs, which are insufficient to describe many real-world networks such as power law graphs. Our paper introduces a novel formulation of GMFGs, called LPGMFGs, which leverages the graph-theoretical concept of L^p graphons and provides a machine learning tool to efficiently and accurately approximate solutions for sparse network problems. This especially includes power law networks, which are empirically observed in various application areas and cannot be captured by standard graphons. We derive theoretical existence and convergence guarantees and give empirical examples that demonstrate the accuracy of our learning approach for systems with many agents. Furthermore, we extend the Online Mirror Descent (OMD) learning algorithm to our setup to accelerate learning speed, empirically show its capabilities, and conduct a theoretical analysis using the novel concept of smoothed step graphons. In general, we provide a scalable, mathematically well-founded machine learning approach to a large class of otherwise intractable problems of great relevance in numerous research fields. (Accepted for publication at the International Conference on Artificial Intelligence and Statistics (AISTATS) 2023; code available at: https://github.com/ChrFabian/Learning_sparse_GMFG)
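    To make the OMD ingredient concrete, here is an illustrative sketch of the entropic Online Mirror Descent update on a policy simplex (the exponentiated-gradient form); the toy Q-values and step size are assumptions, not the paper's GMFG setup.

    ```python
    import math

    # One entropic OMD step: add scaled Q-values to the log-policy and
    # re-normalize via softmax (numerically stabilized with the max trick).
    def omd_step(policy, q_values, lr=0.5):
        logits = [math.log(p) + lr * q for p, q in zip(policy, q_values)]
        m = max(logits)
        exps = [math.exp(l - m) for l in logits]
        z = sum(exps)
        return [e / z for e in exps]

    # Repeated updates against fixed Q-values concentrate probability mass
    # on the highest-value action.
    policy = [1 / 3] * 3
    for _ in range(100):
        policy = omd_step(policy, [1.0, 0.0, 0.0])
    ```

    In a GMFG learner this step would be applied per state, with Q-values computed against the current mean field; the entropic mirror map is what keeps the iterates on the simplex without an explicit projection.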

    Large Banks and Systemic Risk: Insights from a Mean-Field Game Model

    This paper aims to investigate the impact of large banks on financial system stability. To achieve this, we employ a linear-quadratic-Gaussian (LQG) mean-field game (MFG) model of an interbank market, which involves one large bank and multiple small banks. Our approach involves utilizing the MFG methodology to derive the optimal trading strategies for each bank, resulting in an equilibrium for the market. Subsequently, we conduct Monte Carlo simulations to explore the role played by the large bank in systemic risk under various scenarios. Our findings indicate that while the major bank, if its size is not too large, can contribute positively to stability, it also has the potential to generate negative spillover effects in the event of default, leading to increased systemic risk. We also discover that as banks become more reliant on the interbank market, the overall system becomes more stable but the probability of a rare systemic failure increases. This risk is further amplified by the presence of a large bank, its size, and the speed of interbank trading. Overall, the results of this study provide important insights into the management of systemic risk.
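    A hedged sketch of the kind of Monte Carlo experiment described: small banks whose log-reserves mean-revert toward the market average at a rate representing interbank lending speed, in the spirit of linear-quadratic mean-field interbank models. All parameters, the default threshold, and the absence of the large bank are illustrative assumptions, not the paper's calibration.

    ```python
    import random

    # Euler-Maruyama simulation of n_banks coupled log-reserve processes:
    #   dX_i = a * (mean(X) - X_i) dt + sigma dW_i
    # A bank is counted as defaulted if its reserve ends below default_level.
    def simulate(n_banks=50, a=1.0, sigma=0.5, T=1.0, steps=200,
                 default_level=-0.7, seed=0):
        rng = random.Random(seed)
        dt = T / steps
        x = [0.0] * n_banks
        for _ in range(steps):
            mean = sum(x) / n_banks  # mean-field coupling term
            x = [xi + a * (mean - xi) * dt + sigma * (dt ** 0.5) * rng.gauss(0, 1)
                 for xi in x]
        return sum(1 for xi in x if xi < default_level)

    defaults = simulate()
    ```

    Increasing `a` (reliance on the interbank market) pulls individual reserves toward the common mean, which stabilizes typical outcomes but lets the whole ensemble drift down together in rare scenarios, the flocking-to-default effect the abstract alludes to.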

    Many-agent Reinforcement Learning

    Multi-agent reinforcement learning (RL) solves the problem of how each agent should behave optimally in a stochastic environment in which multiple agents are learning simultaneously. It is an interdisciplinary domain with a long history that lies at the intersection of psychology, control theory, game theory, reinforcement learning, and deep learning. Following the remarkable success of the AlphaGo series in single-agent RL, 2019 was a booming year that witnessed significant advances in multi-agent RL techniques; impressive breakthroughs have been made in developing AIs that outperform humans on many challenging tasks, especially multi-player video games. Nonetheless, one of the key challenges of multi-agent RL techniques is scalability; it is still non-trivial to design efficient learning algorithms that can solve tasks involving far more than two agents (N ≫ 2), which I call many-agent reinforcement learning (I use the word "MARL" to denote multi-agent reinforcement learning with a particular focus on the cases of many agents; otherwise, it is denoted as "Multi-Agent RL" by default). In this thesis, I contribute to tackling MARL problems from four aspects. Firstly, I offer a self-contained overview of multi-agent RL techniques from a game-theoretical perspective. This overview fills the research gap that most existing work either fails to cover the recent advances since 2010 or does not pay adequate attention to game theory, which I believe is the cornerstone of solving many-agent learning problems. Secondly, I develop a tractable policy evaluation algorithm, α^α-Rank, for many-agent systems. The critical advantage of α^α-Rank is that it can compute the α-Rank solution concept tractably in multi-player general-sum games with no need to store the entire pay-off matrix. This is in contrast to classic solution concepts such as Nash equilibrium, which is known to be PPAD-hard to compute even in two-player cases. α^α-Rank allows us, for the first time, to practically conduct large-scale multi-agent evaluations. Thirdly, I introduce a scalable policy learning algorithm, mean-field MARL, for many-agent systems. The mean-field MARL method takes advantage of the mean-field approximation from physics, and it is the first provably convergent algorithm that tries to break the curse of dimensionality for MARL tasks. With the proposed algorithm, I report the first result of solving the Ising model and multi-agent battle games through a MARL approach. Fourthly, I investigate the many-agent learning problem in open-ended meta-games (i.e., the game of a game in the policy space). Specifically, I focus on modelling the behavioural diversity in meta-games and on developing algorithms that are guaranteed to enlarge diversity during training. The proposed metric, based on determinantal point processes, serves as the first mathematically rigorous definition of diversity. Importantly, the diversity-aware learning algorithms beat the existing state-of-the-art game solvers in terms of exploitability by a large margin. On top of the algorithmic developments, I also contribute two real-world applications of MARL techniques. Specifically, I demonstrate the great potential of applying MARL to study the emergent population dynamics in nature, and to model diverse and realistic interactions in autonomous driving. Both applications embody the prospect that MARL techniques could achieve huge impact in the real physical world, beyond purely video games.
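    The core trick of mean-field MARL can be shown in a few lines (an illustrative sketch, not the thesis's implementation): each agent's Q-function conditions on the empirical mean of its neighbours' actions instead of the exponentially large joint action.

    ```python
    # Mean-field approximation of a joint action: replace the N-1 neighbour
    # actions by their empirical distribution over the action set, so the
    # Q-function input is Q(s, a_i, mean_a) rather than Q(s, a_1, ..., a_N).
    def mean_action(neighbour_actions, n_actions):
        counts = [0] * n_actions
        for a in neighbour_actions:
            counts[a] += 1
        return [c / len(neighbour_actions) for c in counts]

    # Four neighbours choosing among 3 actions collapse into one 3-vector.
    mu = mean_action([0, 1, 1, 2], n_actions=3)
    ```

    This is what breaks the curse of dimensionality: the Q-function's input size is fixed by the action-set size, independent of the number of agents, which is also why the approach scales to systems like the Ising model.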

    Multi Agent Reinforcement Learning for smart mobility and traffic scenarios

    Autonomous Driving is one of the most fascinating and stimulating fields in modern engineering. While some partially autonomous cars already exist on the industrial market, they are far from being completely independent in every situation. One of the things these vehicles lack the most is the ability to handle traffic scenarios and situations in which interaction with other road users is required. The purpose of this work is to investigate learning techniques that could be exploited to face the challenge described above. The focus will be put on the Multi-Agent Reinforcement Learning (MARL) paradigm, which seems particularly appropriate for this kind of problem, given its ability to learn and solve complex tasks without any prior knowledge requirement. The MARL paradigm has its roots both in the classical single-agent Reinforcement Learning setup and in the Game Theory field. For this reason, after an introduction to the problem and a literature review of related works, the project will begin with an introduction to those two topics. After that, the MARL paradigm will be analyzed, focusing both on the theoretical aspects and on the algorithmic point of view. The thesis will then proceed with an experimental section, where some state-of-the-art MARL algorithms will be adapted to the Autonomous Driving setup and tested, making use of a simulator, called SMARTS, specifically developed for this purpose. To conclude this work, the results obtained in simulation will be analyzed and discussed, and some ideas for future development will be presented.

    Statistical mechanics of competitive resource allocation using agent-based models

    Demand outstrips available resources in most situations, which gives rise to competition, interaction and learning. In this article, we review a broad spectrum of multi-agent models of competition (the El Farol Bar problem, the Minority Game, the Kolkata Paise Restaurant problem, the Stable Marriage problem, the Parking Space problem and others) and the methods used to understand them analytically. We emphasize the power of concepts and tools from statistical mechanics to understand and explain fully collective phenomena such as phase transitions and long memory, and the mapping between agent heterogeneity and physical disorder. As these methods can be applied to any large-scale model of competitive resource allocation made up of heterogeneous adaptive agents with non-linear interactions, they provide a prospective unifying paradigm for many scientific disciplines.
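    A minimal sketch of the competitive-allocation setting these models share, using the Minority Game as the example (agents here choose randomly; real Minority Game agents score inductive strategies over a memory of past outcomes, which this sketch omits for brevity):

    ```python
    import random

    # One round of a bare-bones Minority Game: an odd number of agents each
    # pick side 0 or 1; agents on the less crowded (minority) side win.
    def minority_game_round(n_agents, rng):
        choices = [rng.randrange(2) for _ in range(n_agents)]
        ones = sum(choices)
        minority = 1 if ones < n_agents - ones else 0
        return sum(1 for c in choices if c == minority)  # number of winners

    rng = random.Random(42)
    winners = minority_game_round(101, rng)
    ```

    With an odd number of agents the minority side is always strictly smaller than half, so the resource is never fully utilized; the statistical-mechanics analysis in the article characterizes how adaptive strategies shrink (or amplify) these fluctuations, with a phase transition as the ratio of strategy-space size to population size varies.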

    Game Theory and Femtocell Communications: Making Network Deployment Feasible

    ISBN 9781466600928. Femtocell is currently the most promising technology for supporting the increasing demand of data traffic in wireless networks. Femtocells provide an opportunity for enabling innovative mobile applications and services in home and office environments. Femtocell Communications and Technologies: Business Opportunities and Deployment Challenges is an extensive and thoroughly revised version of a collection of review- and research-based chapters on femtocell technology. This work focuses on mobility and security in femtocells, cognitive femtocells, and standardization and deployment scenarios. Several crucial topics addressed in this book are interference mitigation techniques, network integration options, cognitive optimization, and economic incentives to install femtocells that may have a larger impact on their ultimate success. The book is optimized for use by graduate researchers who are familiar with the fundamentals of wireless communication and cellular concepts.