
    Dynamics in near-potential games

    We consider discrete-time learning dynamics in finite strategic form games, and show that games that are close to a potential game inherit many of the dynamical properties of potential games. We first study the evolution of the sequence of pure strategy profiles under better/best response dynamics. We show that this sequence converges to a (pure) approximate equilibrium set whose size is a function of the “distance” to a given nearby potential game. We then focus on logit response dynamics, and provide a characterization of the limiting outcome in terms of the distance of the game to a given potential game and the corresponding potential function. Finally, we turn our attention to fictitious play, and establish that in near-potential games the sequence of empirical frequencies of player actions converges to a neighborhood of (mixed) equilibria, where the size of the neighborhood increases with the distance to the set of potential games.

    Dynamics in Near-Potential Games

    Except for special classes of games, there is no systematic framework for analyzing the dynamical properties of multi-agent strategic interactions. Potential games are one such special but restrictive class of games that allow for tractable dynamic analysis. Intuitively, games that are "close" to a potential game should share similar properties. In this paper, we formalize and develop this idea by quantifying to what extent the dynamic features of potential games extend to "near-potential" games. We study convergence of three commonly studied classes of adaptive dynamics: discrete-time better/best response, logit response, and discrete-time fictitious play dynamics. For better/best response dynamics, we focus on the evolution of the sequence of pure strategy profiles and show that this sequence converges to a (pure) approximate equilibrium set, whose size is a function of the "distance" from a close potential game. We then study logit response dynamics and provide a characterization of the stationary distribution of this update rule in terms of the distance of the game from a close potential game and the corresponding potential function. We further show that the stochastically stable strategy profiles are pure approximate equilibria. Finally, we turn our attention to fictitious play, and establish that the sequence of empirical frequencies of player actions converges to a neighborhood of (mixed) equilibria of the game, where the size of the neighborhood increases with the distance of the game to a potential game. Thus, our results suggest that games that are close to a potential game inherit the dynamical properties of potential games. Since a close potential game to a given game can be found by solving a convex optimization problem, our approach also provides a systematic framework for studying convergence behavior of adaptive learning dynamics in arbitrary finite strategic form games. Comment: 42 pages, 8 figures.
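    A small sketch may help fix the ideas in this abstract. The snippet below is a minimal illustration, not the paper's algorithm: it measures how far a finite game is from a candidate potential function by the largest mismatch between unilateral payoff changes and potential changes (an informal stand-in for the paper's "distance"), and runs round-robin best-response dynamics over pure strategy profiles. The function names, the perturbed coordination game, and the round-robin update order are assumptions made for the example.

    ```python
    import itertools
    import numpy as np

    def potential_mismatch(payoffs, phi):
        """Largest gap between a unilateral payoff change and the corresponding
        change of the candidate potential phi (an illustrative 'distance')."""
        n_players = len(payoffs)
        shape = phi.shape
        worst = 0.0
        for a in itertools.product(*(range(s) for s in shape)):   # joint profiles
            for i in range(n_players):
                for b_i in range(shape[i]):
                    b = a[:i] + (b_i,) + a[i + 1:]                # deviate player i
                    du = payoffs[i][b] - payoffs[i][a]
                    dphi = phi[b] - phi[a]
                    worst = max(worst, abs(du - dphi))
        return worst

    def best_response_path(payoffs, start, steps=50):
        """Round-robin best-response dynamics over pure strategy profiles."""
        n_players = len(payoffs)
        shape = payoffs[0].shape
        a, path = list(start), [tuple(start)]
        for t in range(steps):
            i = t % n_players
            a[i] = max(range(shape[i]),
                       key=lambda s: payoffs[i][tuple(a[:i] + [s] + a[i + 1:])])
            path.append(tuple(a))
        return path

    # Example: a 2x2 coordination game perturbed away from its exact potential.
    u1 = np.array([[2.0, 0.0], [0.1, 1.0]])
    u2 = np.array([[2.0, 0.0], [0.0, 1.0]])
    phi = np.array([[2.0, 0.0], [0.0, 1.0]])        # potential of the unperturbed game
    print(potential_mismatch([u1, u2], phi))        # 0.1, so the game is "near-potential"
    print(best_response_path([u1, u2], (1, 0), 6))  # settles at the pure equilibrium (0, 0)
    ```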

    Robustness of Dynamics in Games: A Contraction Mapping Decomposition Approach

    A systematic framework for analyzing the dynamical attributes of games has not been well studied except for the special class of potential or near-potential games. In particular, existing results fall short of determining the asymptotic behavior of a given dynamic in a designated game. Although there is a large body of literature on developing dynamics that converge to the Nash equilibrium (NE) of a game, in general the asymptotic behavior of an underlying dynamic may not even be close to a NE. In this paper, we initiate a new direction in the study of game dynamics by examining the fundamental properties of the map of dynamics in games. To this end, we first decompose the map of a given dynamic into contractive and non-contractive parts and then explore the asymptotic behavior of the dynamic using the proximity of this decomposition to a contraction mapping. In particular, we analyze the non-contractive behavior of better/best response dynamics in discrete-action-space sequential/repeated games and show that the non-contractive part of those dynamics is well behaved in a certain sense. This allows us to estimate the asymptotic behavior of such dynamics using a neighborhood around the fixed point of their contractive-part proxy. Finally, we demonstrate the practicality of our framework with an example from duopoly Cournot games.
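    As a concrete instance of the contraction viewpoint, the sketch below iterates best responses in a linear Cournot duopoly, the class of examples the abstract mentions; because the best-response map has slope -1/2 it is a contraction, so the iterates converge to the Cournot-Nash quantities. The demand and cost parameters are illustrative choices, not taken from the paper.

    ```python
    import numpy as np

    a, b, c = 10.0, 1.0, 2.0                  # inverse demand P = a - b*(q1 + q2), unit cost c

    def best_response(q_other):
        """Profit-maximizing quantity against the rival's output (interior solution)."""
        return max(0.0, (a - c - b * q_other) / (2.0 * b))

    q = np.array([0.0, 5.0])                  # arbitrary starting outputs
    for _ in range(30):
        q = np.array([best_response(q[1]), best_response(q[0])])  # simultaneous updates

    nash = (a - c) / (3.0 * b)                # closed-form Cournot-Nash quantity
    print(q, nash)                            # the best-response map contracts with factor 1/2,
                                              # so q approaches (nash, nash)
    ```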

    Quadratic Mean Field Games

    Mean field games were introduced independently by J-M. Lasry and P-L. Lions, and by M. Huang, R.P. Malhamé and P.E. Caines, in order to bring a new approach to optimization problems with a large number of interacting agents. The description of such models splits into two parts, one describing the evolution of the density of players in some parameter space, the other the value of a cost functional that each player tries to minimize for himself, anticipating the rational behavior of the others. Quadratic Mean Field Games form a particular class among these systems, in which the dynamics of each player is governed by a controlled Langevin equation with an associated cost functional quadratic in the control parameter. In such cases, there exists a deep relationship with the non-linear Schrödinger equation in imaginary time, a connection which leads to effective approximation schemes as well as a better understanding of the behavior of Mean Field Games. The aim of this paper is to serve as an introduction to Quadratic Mean Field Games and their connection with the non-linear Schrödinger equation, providing physicists a good entry point into this new and exciting field. Comment: 62 pages, 4 figures.
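    For orientation, one common way of writing a quadratic mean field game, and the change of variables behind the Schrödinger connection, is sketched below; the normalization (noise strength sigma, unit control cost) is an assumption made for the sketch and the constants differ across references.

    ```latex
    % Quadratic MFG system: a backward HJB equation coupled to a forward
    % Fokker-Planck equation, written with unit control cost and noise strength \sigma:
    \begin{aligned}
      -\partial_t u - \tfrac{\sigma^2}{2}\,\Delta u + \tfrac{1}{2}\,|\nabla u|^2 &= V[m](x), \\
      \partial_t m - \tfrac{\sigma^2}{2}\,\Delta m - \nabla\!\cdot\!\big(m\,\nabla u\big) &= 0 .
    \end{aligned}
    % The Cole-Hopf substitution \Phi = e^{-u/\sigma^2} turns the HJB equation into a linear
    % heat-type (imaginary-time Schrodinger) equation with potential V[m]/\sigma^2; since the
    % coupling V[m] depends on the density m, the resulting equation is nonlinear in \Phi,
    % which is the nonlinear-Schrodinger structure the abstract refers to.
    ```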

    Differentiable Game Mechanics

    Deep learning is built on the foundational guarantee that gradient descent on an objective function converges to local minima. Unfortunately, this guarantee fails in settings, such as generative adversarial nets, that exhibit multiple interacting losses. The behavior of gradient-based methods in games is not well understood -- and is becoming increasingly important as adversarial and multi-objective architectures proliferate. In this paper, we develop new tools to understand and control the dynamics in n-player differentiable games. The key result is to decompose the game Jacobian into two components. The first, symmetric component, is related to potential games, which reduce to gradient descent on an implicit function. The second, antisymmetric component, relates to Hamiltonian games, a new class of games that obey a conservation law akin to conservation laws in classical mechanical systems. The decomposition motivates Symplectic Gradient Adjustment (SGA), a new algorithm for finding stable fixed points in differentiable games. Basic experiments show SGA is competitive with recently proposed algorithms for finding stable fixed points in GANs -- while at the same time being applicable to, and having guarantees in, much more general cases. Comment: JMLR 2019, journal version of arXiv:1802.0564
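    To make the decomposition concrete, the sketch below applies the symplectic-adjustment idea to a two-player bilinear game whose Jacobian is purely antisymmetric (a Hamiltonian game in the abstract's terminology). The losses, step size, and lambda are illustrative choices, and the Jacobian is written out by hand rather than computed by automatic differentiation.

    ```python
    import numpy as np

    # Two-player bilinear game: L1(x, y) = x*y, L2(x, y) = -x*y.

    def simultaneous_grad(x, y):
        """xi = (dL1/dx, dL2/dy) for the bilinear game above."""
        return np.array([y, -x])

    def sga_step(x, y, lr=0.05, lam=1.0):
        xi = simultaneous_grad(x, y)
        # Jacobian of xi with respect to (x, y); here it is purely antisymmetric.
        H = np.array([[0.0, 1.0],
                      [-1.0, 0.0]])
        A = 0.5 * (H - H.T)                  # antisymmetric part of the game Jacobian
        adj = xi + lam * A.T @ xi            # adjusted (SGA-style) gradient
        x, y = np.array([x, y]) - lr * adj
        return x, y

    x, y = 1.0, 1.0
    for _ in range(200):
        x, y = sga_step(x, y)
    print(x, y)   # plain simultaneous gradient descent cycles around the origin;
                  # the adjusted updates spiral in to the stable fixed point (0, 0)
    ```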

    Inertial game dynamics and applications to constrained optimization

    Aiming to provide a new class of game dynamics with good long-term rationality properties, we derive a second-order inertial system that builds on the widely studied "heavy ball with friction" optimization method. By exploiting a well-known link between the replicator dynamics and the Shahshahani geometry on the space of mixed strategies, the dynamics are stated in a Riemannian geometric framework where trajectories are accelerated by the players' unilateral payoff gradients and they slow down near Nash equilibria. Surprisingly (and in stark contrast to another second-order variant of the replicator dynamics), the inertial replicator dynamics are not well-posed; on the other hand, it is possible to obtain a well-posed system by endowing the mixed strategy space with a different Hessian-Riemannian (HR) metric structure, and we characterize those HR geometries that do so. In the single-agent version of the dynamics (corresponding to constrained optimization over simplex-like objects), we show that regular maximum points of smooth functions attract all nearby solution orbits with low initial speed. More generally, we establish an inertial variant of the so-called "folk theorem" of evolutionary game theory and we show that strict equilibria are attracting in asymmetric (multi-population) games - provided of course that the dynamics are well-posed. A similar asymptotic stability result is obtained for evolutionarily stable strategies in symmetric (single-population) games. Comment: 30 pages, 4 figures; significantly revised paper structure and added new material on Euclidean embeddings and evolutionarily stable strategies.
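    The two ingredients the abstract combines are both classical and are recalled below for reference; the paper's actual inertial system is stated in a Hessian-Riemannian geometry rather than in these Euclidean/simplex coordinates, so the display is only meant to fix notation.

    ```latex
    % Heavy ball with friction for minimizing a smooth f (friction coefficient \gamma > 0):
    \ddot{x}(t) + \gamma\,\dot{x}(t) = -\nabla f\big(x(t)\big).
    % First-order replicator dynamics for a state x in the simplex, with payoff functions u_i:
    \dot{x}_i = x_i\Big(u_i(x) - \sum\nolimits_j x_j\,u_j(x)\Big).
    % The inertial replicator-type dynamics of the paper can be read as a second-order,
    % Shahshahani/Hessian-Riemannian analogue of the first equation, driven by the
    % players' unilateral payoff gradients rather than by a single objective f.
    ```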

    Irrational behavior in the Brown - von Neumann - Nash dynamics

    We present a class of games with a pure strategy being strictly dominated by another pure strategy such that the former survives along solutions of the Brown - von Neumann - Nash dynamics from an open set of initial conditions.
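    For reference, a standard single-population statement of the Brown - von Neumann - Nash dynamics is recalled below; the notation (payoff matrix A, excess payoffs k_i) is chosen here and is not taken from the paper.

    ```latex
    % Excess payoff of pure strategy i at population state x, for payoff matrix A:
    k_i(x) = \big[\,(Ax)_i - x^{\top} A x\,\big]_{+},
    % Brown - von Neumann - Nash dynamics:
    \dot{x}_i = k_i(x) - x_i \sum_{j} k_j(x).
    % Strictly dominated strategies need not be eliminated along these solutions,
    % which is the phenomenon the abstract establishes.
    ```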