
    Large-scale games in large-scale systems

    Many real-world problems modeled by stochastic games have huge state and/or action spaces, leading to the well-known curse of dimensionality. The complexity of analyzing such large-scale systems is dramatically reduced by exploiting mean field limit and dynamical system viewpoints. Under regularity assumptions and specific time-scaling techniques, the evolution of the mean field limit can be expressed in terms of a deterministic or stochastic equation or inclusion (difference or differential). In this paper, we overview recent advances in large-scale games in large-scale systems. We focus in particular on population games, stochastic population games, and mean field stochastic games. Considering long-term payoffs, we characterize the mean field systems using Bellman and Kolmogorov forward equations.
    Comment: 30 pages. Notes for the tutorial course on mean field stochastic games, March 201
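    For reference, the coupled system alluded to in the last sentence typically takes the following schematic form in continuous time; the notation (value function u, population law m, Hamiltonian H, noise level ν) is generic and not taken from the paper.

```latex
% Schematic mean field game system (finite horizon):
% a backward Hamilton--Jacobi--Bellman equation for the value function u,
% coupled with a forward Kolmogorov (Fokker--Planck) equation for the law m.
\begin{align*}
  -\partial_t u - \nu \Delta u + H\bigl(x,\nabla u, m(t)\bigr) &= 0,
    & u(T,x) &= g\bigl(x, m(T)\bigr),\\
  \partial_t m - \nu \Delta m
    - \operatorname{div}\!\bigl(m\,\partial_p H(x,\nabla u, m(t))\bigr) &= 0,
    & m(0,\cdot) &= m_0 .
\end{align*}
```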

    Mean Field Equilibrium in Dynamic Games with Complementarities

    We study a class of stochastic dynamic games that exhibit strategic complementarities between players; formally, in the games we consider, the payoff of a player has increasing differences between her own state and the empirical distribution of the states of other players. Such games can be used to model a diverse set of applications, including network security models, recommender systems, and dynamic search in markets. Stochastic games are generally difficult to analyze, and these difficulties are only exacerbated when the number of players is large (as might be the case in the preceding examples). We consider an approximation methodology called mean field equilibrium to study these games. In such an equilibrium, each player reacts to only the long run average state of other players. We find necessary conditions for the existence of a mean field equilibrium in such games. Furthermore, as a simple consequence of this existence theorem, we obtain several natural monotonicity properties. We show that there exist a "largest" and a "smallest" equilibrium among all those where the equilibrium strategy used by a player is nondecreasing, and we also show that players converge to each of these equilibria via natural myopic learning dynamics; as we argue, these dynamics are more reasonable than the standard best response dynamics. We also provide sensitivity results, where we quantify how the equilibria of such games move in response to changes in the parameters of the game (e.g., the introduction of incentives to players).
    Comment: 56 pages, 5 figures
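    The complementarity condition referenced above can be stated compactly; this is a generic form of increasing differences (with an order on distributions, such as first-order stochastic dominance, assumed for illustration), not a quotation from the paper.

```latex
% Increasing differences between a player's own state x and the empirical
% distribution \mu of the other players' states: for x' \ge x and \mu' \succeq \mu,
\pi(x',\mu') - \pi(x,\mu') \;\ge\; \pi(x',\mu) - \pi(x,\mu),
% where \pi denotes the per-period payoff and \succeq is the chosen order
% on distributions (e.g., first-order stochastic dominance).
```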

    Quadratic Mean Field Games

    Mean field games were introduced independently by J-M. Lasry and P-L. Lions, and by M. Huang, R.P. Malhamé and P. E. Caines, in order to bring a new approach to optimization problems with a large number of interacting agents. The description of such models splits into two parts, one describing the evolution of the density of players in some parameter space, the other the value of a cost functional that each player tries to minimize for himself, anticipating the rational behavior of the others. Quadratic Mean Field Games form a particular class among these systems, in which the dynamics of each player is governed by a controlled Langevin equation with an associated cost functional quadratic in the control parameter. In such cases, there exists a deep relationship with the non-linear Schrödinger equation in imaginary time, a connection which leads to effective approximation schemes as well as to a better understanding of the behavior of Mean Field Games. The aim of this paper is to serve as an introduction to Quadratic Mean Field Games and their connection with the non-linear Schrödinger equation, providing physicists with a good entry point into this new and exciting field.
    Comment: 62 pages, 4 figures
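    The Schrödinger connection mentioned above rests on a Cole–Hopf change of variables that linearizes the quadratic Hamilton–Jacobi–Bellman equation; the sketch below uses generic constants and sign conventions, which need not match the paper's.

```latex
% Quadratic HJB equation (generic conventions):
-\partial_t u - \tfrac{\sigma^2}{2}\Delta u + \tfrac{1}{2}\lvert\nabla u\rvert^2 = f(x,m).
% The Cole--Hopf substitution \phi = e^{-u/\sigma^2} linearizes it:
\partial_t \phi + \tfrac{\sigma^2}{2}\Delta \phi = \tfrac{1}{\sigma^2}\, f(x,m)\,\phi .
% Combined with the analogous substitution m = \phi\psi in the forward Kolmogorov
% equation, one obtains a pair of imaginary-time Schr\"odinger-type equations
% coupled through the nonlinearity f(\cdot,m).
```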

    Scaling up Mean Field Games with Online Mirror Descent

    We address scaling up equilibrium computation in Mean Field Games (MFGs) using Online Mirror Descent (OMD). We show that continuous-time OMD provably converges to a Nash equilibrium under a natural and well-motivated set of monotonicity assumptions. This theoretical result extends naturally to multi-population games and to settings involving common noise. A thorough experimental investigation of various single- and multi-population MFGs shows that OMD outperforms traditional algorithms such as Fictitious Play (FP). We empirically show that OMD scales up and converges significantly faster than FP, solving, for the first time to our knowledge, examples of MFGs with hundreds of billions of states. This study establishes the state of the art for learning in large-scale multi-agent and multi-population games.
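    As a concrete illustration of the kind of update the abstract describes, the sketch below runs OMD with an entropic mirror map (accumulate payoffs, then softmax) on a toy static mean field game; the congestion payoff and all parameters are illustrative assumptions, not the paper's benchmarks.

```python
# Minimal OMD sketch for a static mean field game with a monotone
# congestion payoff r(a, mu) = -mu[a]: an action gets worse the more
# crowded it is, so the equilibrium is the uniform distribution.
import numpy as np

def payoff(mu):
    # Congestion-style payoff: an action is worse the more crowded it is.
    return -mu

def omd_mfg(n_actions=5, lr=0.1, n_iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    y = rng.normal(size=n_actions)   # dual variable: cumulative payoffs
    for _ in range(n_iters):
        pi = np.exp(y - y.max())
        pi /= pi.sum()               # mirror step: softmax of cumulative payoffs
        mu = pi                      # mean field induced by the representative policy
        y += lr * payoff(mu)         # accumulate payoff against the current mean field
    return pi

if __name__ == "__main__":
    print(omd_mfg())                 # close to uniform, the equilibrium of this toy game
```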

    Evolutionary games on graphs

    Game theory is one of the key paradigms behind many scientific disciplines, from biology to the behavioral sciences to economics. In its evolutionary form, and especially when the interacting agents are linked in a specific social network, the underlying solution concepts and methods are very similar to those applied in non-equilibrium statistical physics. This review gives a tutorial-type overview of the field for physicists. The first three sections introduce the necessary background in classical and evolutionary game theory, from the basic definitions to the most important results. The fourth section surveys the topological complications implied by non-mean-field-type social network structures in general. The last three sections discuss in detail the dynamic behavior of three prominent classes of models: the Prisoner's Dilemma, the Rock-Scissors-Paper game, and Competing Associations. The major theme of the review is in what sense and how the graph structure of interactions can modify and enrich the picture of long-term behavioral patterns emerging in evolutionary games.
    Comment: Review, final version, 133 pages, 65 figures
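    To make the setting concrete, the sketch below simulates one of the model classes the review covers, the Prisoner's Dilemma on a square lattice with a stochastic imitation ("Fermi") update; the payoff parametrization (R = 1, S = P = 0, T = b) and the noise level K are illustrative choices, not the review's.

```python
# Spatial Prisoner's Dilemma on an L x L torus with Fermi-rule imitation.
import numpy as np

L_SIZE, B, K, SWEEPS = 50, 1.05, 0.1, 50   # lattice size, temptation b, noise, sweeps
rng = np.random.default_rng(0)
strat = rng.integers(0, 2, size=(L_SIZE, L_SIZE))   # 1 = cooperate, 0 = defect

def site_payoff(s, i, j):
    """Accumulated PD payoff of site (i, j) against its four neighbours."""
    me, total = s[i, j], 0.0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        other = s[(i + di) % L_SIZE, (j + dj) % L_SIZE]
        if me and other:
            total += 1.0          # mutual cooperation (R = 1)
        elif (not me) and other:
            total += B            # defector exploits cooperator (T = b)
        # S = P = 0 otherwise
    return total

for _ in range(SWEEPS * L_SIZE * L_SIZE):
    i, j = rng.integers(L_SIZE, size=2)
    di, dj = ((1, 0), (-1, 0), (0, 1), (0, -1))[rng.integers(4)]
    ni, nj = (i + di) % L_SIZE, (j + dj) % L_SIZE
    # Fermi rule: adopt the neighbour's strategy with probability increasing
    # in the payoff difference, at noise level K.
    p = 1.0 / (1.0 + np.exp((site_payoff(strat, i, j) - site_payoff(strat, ni, nj)) / K))
    if rng.random() < p:
        strat[i, j] = strat[ni, nj]

print("cooperator fraction:", strat.mean())
```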

    Deterministic Equations for Stochastic Spatial Evolutionary Games

    Spatial evolutionary games model individuals who are distributed in a spatial domain and update their strategies upon playing a normal form game with their neighbors. We derive integro-differential equations as deterministic approximations of the microscopic stochastic updating processes. This generalizes the known mean-field ordinary differential equations and provides a powerful tool for investigating spatial effects in the evolution of populations. The deterministic equations allow us to identify many interesting features of the evolution of strategy profiles in a population, such as standing and traveling waves and pattern formation, especially in replicator-type evolutions.
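    A schematic example of the integro-differential equations in question is a nonlocal replicator-type equation; the notation below (density ρ, payoff matrix A, interaction kernel K) is ours and is only meant to convey the general shape of such limits, not the paper's exact equations.

```latex
% Nonlocal replicator-type dynamics for the density \rho(t,x,s) of players at
% location x using strategy s (schematic):
\partial_t \rho(t,x,s)
  = \rho(t,x,s)\Bigl[\,U\bigl(x,s,\rho(t,\cdot,\cdot)\bigr)
      - \sum_{s'} \rho(t,x,s')\,U\bigl(x,s',\rho(t,\cdot,\cdot)\bigr)\Bigr],
\qquad
U(x,s,\rho) = \sum_{s'} \int K(x-y)\,A(s,s')\,\rho(t,y,s')\,dy,
% where A is the payoff matrix and K a spatial interaction kernel.
```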

    Hybrid Stochastic Systems: Numerical Methods, Limit Results, And Controls

    This dissertation is concerned with so-called stochastic hybrid systems, which are characterized by the coexistence of continuous dynamics and discrete events and their interactions. Such systems have drawn much attention in recent years, mainly because they can better reflect reality in a wide range of applications in networked systems, communication systems, economic systems, cyber-physical systems, and biological and ecological systems, among others. Our main interest is centered on one class of such hybrid systems known as switching diffusions. In such a system, in addition to the driving force of a Brownian motion, as in a stochastic system represented by a stochastic differential equation (SDE), there is an additional continuous-time switching process that models environmental changes due to random events.

    The first part develops numerical schemes for stochastic differential equations with Markovian switching (switching SDEs). By utilizing a special form of Itô's formula for switching SDEs and the special structure of the jumps of the switching component, we derive a new scheme for simulating switching SDEs in the spirit of Milstein's scheme for SDEs without switching. We also develop a new approach to establishing the convergence of the proposed algorithm, incorporating martingale methods, quadratic variations, and Markovian stopping times. Detailed and delicate analysis is carried out. Under suitable conditions, which are natural extensions of the classical ones, the convergence of the algorithm is established and its rate of convergence is ascertained.

    The second part is concerned with a limit theorem for general stochastic differential equations with Markovian regime switching. We consider a sequence of regime-switching systems in which the discrete switching processes are independent of the state of the systems, and the continuous-state components are governed by stochastic differential equations driven by continuous increasing processes and square-integrable martingales. We establish convergence of the sequence of systems to one described by a state-independent regime-switching diffusion process when the two driving processes converge, in a suitable sense, to the usual time process and a Brownian motion.

    The third part is concerned with controlled hybrid systems that are good approximations to controlled switching diffusion processes. In lieu of Brownian motion noise, we use a wide-band noise formulation, which facilitates the treatment of non-Markovian models. Wide-band noise is noise whose spectrum has a sufficiently wide bandwidth; we work with a basic stationary mixing-type process. On top of this wide-band noise process, we allow the system to be subject to the influence of random discrete events. The discrete-event process is a continuous-time Markov chain with a finite state space; although finite, the state space is assumed to be rather large, and the chain is irreducible. Using a two-time-scale formulation, assuming that the Markov chain is also subject to fast variations, and applying weak convergence and singular perturbation test-function methods, we first prove that, under nearly optimal and equilibrium controls, the states and the corresponding costs of the original systems converge to those of controlled diffusion systems. Using the limit controlled dynamical system as a guide, we then construct controls for the original problem and show that the controls so constructed are nearly optimal and nearly in equilibrium.
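    The object of study in the first part can be illustrated with a few lines of simulation code; the sketch below applies a Milstein-type correction between switches of a two-state Markov chain, with drift, diffusion, and generator chosen purely for illustration (it is not the scheme or the convergence analysis developed in the dissertation).

```python
# Simulate a regime-switching SDE
#   dX_t = b(X_t, alpha_t) dt + sigma(X_t, alpha_t) dW_t,
# where alpha_t is a continuous-time Markov chain with generator Q.
import numpy as np

Q = np.array([[-1.0, 1.0],
              [ 2.0, -2.0]])                  # generator of the switching chain

def b(x, a):      return -x if a == 0 else 0.5 * x     # regime-dependent drift
def sigma(x, a):  return 0.3 if a == 0 else 0.8 * x    # regime-dependent diffusion
def dsigma(x, a): return 0.0 if a == 0 else 0.8        # d(sigma)/dx, for the Milstein term

def simulate(x0=1.0, a0=0, T=1.0, h=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    x, a = x0, a0
    for _ in range(int(T / h)):
        # switch regimes with probability ~ -Q[a, a] * h (first-order approximation)
        if rng.random() < -Q[a, a] * h:
            a = 1 - a
        dw = rng.normal(scale=np.sqrt(h))
        x += (b(x, a) * h + sigma(x, a) * dw
              + 0.5 * sigma(x, a) * dsigma(x, a) * (dw**2 - h))   # Milstein correction
    return x, a

if __name__ == "__main__":
    print(simulate())
```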