
    Learning in nonatomic games, Part I: Finite action spaces and population games

    We examine the long-run behavior of a wide range of dynamics for learning in nonatomic games, in both discrete and continuous time. The class of dynamics under consideration includes fictitious play and its regularized variants, the best-reply dynamics (again, possibly regularized), as well as the dynamics of dual averaging / "follow the regularized leader" (which themselves include as special cases the replicator dynamics and Friedman's projection dynamics). Our analysis concerns both the actual trajectory of play and its time-average, and we cover potential and monotone games, as well as games with an evolutionarily stable state (global or otherwise). We focus exclusively on games with finite action spaces; nonatomic games with continuous action spaces are treated in detail in Part II of this paper.
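    To make the dual averaging / "follow the regularized leader" scheme concrete, here is a minimal numerical sketch (ours, not the authors' code). It uses the entropic regularizer, so the choice map is the logit/softmax map (equivalently, exponentiated-gradient online mirror descent on the simplex), together with an assumed congestion-type payoff field F(x) = -c * x, which makes this single-population game potential and strictly monotone; all names and parameters below are illustrative.

    # Minimal sketch of dual averaging / FTRL in a single population with finitely
    # many actions (illustrative example, not the paper's code).
    import numpy as np

    def payoff_field(x, c):
        # Payoff of each action when the population state is x (a point on the simplex).
        return -c * x

    def logit(y):
        # Entropic choice map: softmax of the cumulative payoff scores.
        z = np.exp(y - y.max())
        return z / z.sum()

    def dual_averaging(n_actions=4, steps=5000, seed=0):
        rng = np.random.default_rng(seed)
        c = rng.uniform(1.0, 2.0, size=n_actions)   # per-action congestion coefficients
        y = np.zeros(n_actions)                     # cumulative (stepped) payoff scores
        x = np.full(n_actions, 1.0 / n_actions)     # current population state
        x_avg = x.copy()                            # time-averaged state
        for t in range(1, steps + 1):
            y += payoff_field(x, c) / np.sqrt(t)    # accumulate payoffs with a vanishing step
            x = logit(y)                            # play the regularized best reply
            x_avg += (x - x_avg) / (t + 1)
        return x, x_avg, c

    if __name__ == "__main__":
        x, x_avg, c = dual_averaging()
        print("time-averaged state:", np.round(x_avg, 3))
        print("c * x_avg          :", np.round(c * x_avg, 3))  # roughly equal at equilibrium

    In this strictly monotone example the unique equilibrium equalizes c_a * x_a across actions and is a globally evolutionarily stable state, so both the trajectory and its time-average are expected to settle near it; the time-average is the more robust quantity to check numerically.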

    Learning in nonatomic anonymous games with applications to first-order mean field games

    We introduce a model of anonymous games with player-dependent action sets. We propose several learning procedures based on the well-known Fictitious Play and Online Mirror Descent schemes and prove their convergence to equilibrium under the classical monotonicity condition. Typical examples are first-order mean field games.
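    As an illustration of the Fictitious Play procedure in such a nonatomic setting, the sketch below (our own toy congestion game, not the paper's model) lets the whole population best-respond to the time-averaged state at every round and then updates the running average; the linear costs alpha_a * x_a are an assumed, monotone choice.

    # Fictitious play in a toy nonatomic congestion game (illustrative sketch).
    import numpy as np

    def best_response(costs):
        # Nonatomic best response: put all mass on a cheapest action.
        br = np.zeros_like(costs)
        br[np.argmin(costs)] = 1.0
        return br

    def fictitious_play(alpha, rounds=5000):
        x_bar = np.full(len(alpha), 1.0 / len(alpha))           # time-averaged population state
        for k in range(1, rounds + 1):
            costs = alpha * x_bar                               # congestion costs at the averaged state
            x_bar += (best_response(costs) - x_bar) / (k + 1)   # running average of best responses
        return x_bar

    if __name__ == "__main__":
        alpha = np.array([1.0, 2.0, 4.0])
        x_bar = fictitious_play(alpha)
        print("averaged state  :", np.round(x_bar, 3))
        print("equalized costs :", np.round(alpha * x_bar, 3))  # roughly equal at a Wardrop equilibrium

    Because these costs are monotone (they even derive from a convex potential), the averaged state converges to the Wardrop-type equilibrium in which the costs of all used actions are equalized; the paper establishes this kind of conclusion in far greater generality.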

    Learning in Mean Field Games

    Mean Field Games (MFG) are a class of differential games in which each agent is infinitesimal and interacts with a huge population of other agents. In this thesis, we raise the question of the actual formation of the MFG equilibrium. Indeed, the game being quite involved, it is unrealistic to assume that the agents can compute the equilibrium configuration. This seems to indicate that, if the equilibrium configuration arises, it is because the agents have learned how to play the game. Hence the main question is to find learning procedures in mean field games and to investigate whether they converge to an equilibrium. We take inspiration from learning schemes in static games and try to apply them to our dynamical model of MFG. We especially focus on applications of fictitious play and online mirror descent to different types of mean field games: potential, monotone, or discrete.
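    For context, the "equilibrium configuration" in question is a solution of the MFG system, written below in a generic schematic form: a backward Hamilton-Jacobi-Bellman equation for the value function u coupled with a forward Fokker-Planck equation for the population density m. The Hamiltonian H, coupling f, terminal cost g and viscosity parameter ν ≥ 0 are generic placeholders rather than the thesis's specific data.

    \[
    \begin{cases}
    -\partial_t u - \nu\,\Delta u + H(x, Du) = f(x, m(t)) & \text{in } (0,T)\times\mathbb{R}^d,\\
    \partial_t m - \nu\,\Delta m - \operatorname{div}\!\big(m\,D_p H(x, Du)\big) = 0 & \text{in } (0,T)\times\mathbb{R}^d,\\
    m(0) = m_0, \qquad u(T, x) = g(x, m(T)).
    \end{cases}
    \]

    Learning procedures such as fictitious play and online mirror descent iterate on the population density (or on the agents' controls) and never have to solve this coupled forward-backward system in one shot.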

    Learning in mean field games: The fictitious play

    Mean Field Game systems describe equilibrium configurations in differential games with infinitely many infinitesimal interacting agents. We introduce a learning procedure (similar to Fictitious Play) for these games and show its convergence when the Mean Field Game is potential.
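    Schematically, and with the same generic notation as the MFG system displayed above, one iteration of such a fictitious-play procedure best-responds to the time-averaged density \bar m^n and then updates the average; the paper's precise assumptions and function spaces are omitted here.

    \[
    \begin{aligned}
    &\text{best response:} && -\partial_t u^{n+1} - \nu\,\Delta u^{n+1} + H(x, Du^{n+1}) = f(x, \bar m^{n}(t)), \qquad u^{n+1}(T,\cdot) = g(\cdot, \bar m^{n}(T)),\\
    &\text{induced flow:} && \partial_t m^{n+1} - \nu\,\Delta m^{n+1} - \operatorname{div}\!\big(m^{n+1}\,D_p H(x, Du^{n+1})\big) = 0, \qquad m^{n+1}(0) = m_0,\\
    &\text{averaging:} && \bar m^{n+1} = \tfrac{n}{n+1}\,\bar m^{n} + \tfrac{1}{n+1}\,m^{n+1}.
    \end{aligned}
    \]

    Roughly speaking, convergence in the potential case comes from the fact that the potential functional is essentially nonincreasing along the averaged iterates.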

    Finite mean field games: fictitious play and convergence to a first order continuous mean field game

    In this article we consider finite Mean Field Games (MFGs), i.e. with finite time and finite state spaces. We adopt the framework introduced in [15] and study two seemingly unexplored subjects. In the first, we analyze the convergence of the fictitious play learning procedure, inspired by the results in continuous MFGs (see [12] and [19]). In the second, we consider the relation between some finite MFGs and continuous first-order MFGs. Namely, given a continuous first-order MFG problem and a sequence of refined space/time grids, we construct a sequence of finite MFGs whose solutions admit limit points, and every such limit point solves the continuous first-order MFG problem.
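    The sketch below shows what fictitious play looks like in a finite-state, finite-horizon setting. The model is our own toy instance (a ring of states, a quadratic moving cost and a congestion cost equal to the local mass), not the framework of [15]: each round, the best response to the time-averaged flow of distributions is computed by backward dynamic programming, the induced flow is pushed forward, and the average is updated.

    # Fictitious play in a toy finite-state, finite-horizon MFG (illustrative sketch).
    import numpy as np

    S, T = 8, 20                 # number of states (on a ring), number of time steps
    MOVES = (-1, 0, 1)           # move left, stay, move right
    MOVE_COST = 0.05             # quadratic cost per move

    def best_response(m_bar):
        """Backward dynamic programming against the averaged distribution flow m_bar[t, s]."""
        V = m_bar[T].copy()                       # terminal congestion cost
        policy = np.zeros((T, S), dtype=int)
        for t in range(T - 1, -1, -1):
            V_next = np.empty(S)
            for s in range(S):
                best = np.inf
                for a in MOVES:
                    s2 = (s + a) % S
                    val = MOVE_COST * a * a + V[s2]
                    if val < best:
                        best, policy[t, s] = val, s2
                V_next[s] = m_bar[t, s] + best    # congestion is paid at the current state
            V = V_next
        return policy

    def induced_flow(policy, m0):
        """Distribution flow obtained when the whole population follows the policy."""
        m = np.zeros((T + 1, S))
        m[0] = m0
        for t in range(T):
            for s in range(S):
                m[t + 1, policy[t, s]] += m[t, s]
        return m

    def fictitious_play(rounds=300):
        m0 = np.zeros(S); m0[0] = 1.0             # everyone starts in state 0
        m_bar = np.tile(m0, (T + 1, 1))           # initial guess for the averaged flow
        for k in range(1, rounds + 1):
            m_new = induced_flow(best_response(m_bar), m0)
            m_bar += (m_new - m_bar) / (k + 1)    # fictitious-play averaging
        return m_bar

    if __name__ == "__main__":
        m_bar = fictitious_play()
        print("final-time distribution:", np.round(m_bar[T], 3))  # mass spreads to reduce congestion

    The congestion cost m_t(s) derives from a convex potential, so this toy instance falls in the potential class for which fictitious play is expected to converge.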

    Schauder Estimates for a Class of Potential Mean Field Games of Controls

    An existence result for a class of mean field games of controls is provided. In the considered model, the cost functional to be minimized by each agent involves a price depending, at a given time, on the controls of all agents, as well as a congestion term. The existence of a classical solution is demonstrated with the Leray-Schauder theorem; the proof relies in particular on a priori bounds for the solution, which are obtained with the help of a potential formulation of the problem.
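    To indicate what a potential formulation means here, the display below gives one common schematic variational form for mean field games of controls; the Lagrangian L, price potential Φ and congestion potential F are generic placeholders (our notation, not necessarily the paper's exact data), and w stands for the control-weighted density (momentum).

    \[
    \min_{(m,\,w)} \; \int_0^T\!\!\int m\,L\!\Big(x,\tfrac{w}{m}\Big)\,dx\,dt
    \;+\; \int_0^T \Phi\!\Big(\int w\,dx\Big)\,dt
    \;+\; \int_0^T F\big(m(t)\big)\,dt
    \quad \text{subject to} \quad \partial_t m - \nu\,\Delta m + \operatorname{div}(w) = 0, \quad m(0) = m_0.
    \]

    Roughly, in the optimality system of such a convex problem the price appears as the derivative of Φ and the congestion cost as the derivative of F, and bounds on the variational problem yield a priori bounds for the solution of the game.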