3,066 research outputs found

    Deep Q-Learning for Nash Equilibria: Nash-DQN

    Model-free learning for multi-agent stochastic games is an active area of research. Existing reinforcement learning algorithms, however, are often restricted to zero-sum games and are applicable only in small state-action spaces or other simplified settings. Here, we develop a new data-efficient deep Q-learning methodology for model-free learning of Nash equilibria in general-sum stochastic games. The algorithm uses a local linear-quadratic expansion of the stochastic game, which leads to analytically solvable optimal actions. The expansion is parametrized by deep neural networks, giving it sufficient flexibility to learn the environment without the need to experience all state-action pairs. We study symmetry properties of the algorithm stemming from label-invariant stochastic games and, as a proof of concept, apply our algorithm to learning optimal trading strategies in competitive electronic markets.
    Comment: 16 pages, 4 figures
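    The abstract's key computational trick — a Q-function that is locally linear-quadratic in the action, so the greedy action is available in closed form — can be sketched as follows. This is only an illustrative reading of the abstract, not the paper's implementation; the functions `V`, `mu`, and `P` stand in for deep-network heads, and their toy definitions here are assumptions.

    ```python
    import numpy as np

    # Sketch of a locally linear-quadratic action-value function:
    #   Q(s, a) = V(s) - (a - mu(s))^T P(s) (a - mu(s)),
    # which is concave in a whenever P(s) is positive definite,
    # so the maximizing action a* = mu(s) is analytic (no grid search over actions).

    def V(s):
        # State-value head (a neural network in the paper's method; toy here).
        return -np.dot(s, s)

    def mu(s):
        # Optimal-action head (toy stand-in).
        return 0.5 * s

    def P(s):
        # Positive-definite curvature head (toy stand-in).
        return np.eye(len(s))

    def q_value(s, a):
        d = a - mu(s)
        return V(s) - d @ P(s) @ d

    def greedy_action(s):
        # Because Q is quadratic and concave in a, argmax_a Q(s, a) = mu(s).
        return mu(s)

    s = np.array([1.0, -2.0])
    a_star = greedy_action(s)
    # Any perturbation of a_star can only lower the Q-value.
    assert q_value(s, a_star) >= q_value(s, a_star + 0.1)
    ```

    Replacing the action maximization step of standard DQN with this closed-form argmax is what makes continuous (and multi-agent) action spaces tractable without discretizing them.
    
    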

    Translation invariant mean field games with common noise

    This note highlights a special class of mean field games in which the coefficients satisfy a convolution-type structural condition. A mean field game of this type with common noise is related to a certain mean field game without common noise by a simple transformation, which permits a tractable construction of a solution of the problem with common noise from a solution of the problem without common noise.
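    Schematically — and only as a plausible reading of the abstract, since the precise structural condition is stated in the note itself — the transformation shifts the state and the flow of measures along the common-noise path. If $(\tilde X_t, \tilde m_t)$ solves the problem without common noise, one looks for a solution of the problem with common noise $W^0$ of the form

    ```latex
    X_t = \tilde X_t + \sigma_0 W^0_t,
    \qquad
    m_t(A) = \tilde m_t\bigl(A - \sigma_0 W^0_t\bigr),
    ```

    so that under translation-invariant (convolution-type) coefficients the conditional law of $X_t$ given $W^0$ is the deterministic flow $\tilde m_t$ translated by the common noise. The symbols $\sigma_0$, $\tilde X_t$, $\tilde m_t$ are illustrative notation, not taken from the note.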

    Mean-Field-Type Games in Engineering

    A mean-field-type game is a game in which the instantaneous payoffs and/or the state dynamics functions involve not only the state and the action profile but also the joint distributions of state-action pairs. This article presents some engineering applications of mean-field-type games, including road traffic networks, multi-level building evacuation, millimeter-wave wireless communications, distributed power networks, virus spread over networks, virtual machine resource management in cloud networks, synchronization of oscillators, energy-efficient buildings, online meetings, and mobile crowdsensing.
    Comment: 84 pages, 24 figures, 183 references. To appear in AIMS 201

    Game theory


    Mean Field Games and Applications.

    This text is inspired by a “Cours Bachelier” held in January 2009 and taught by Jean-Michel Lasry. The course was based on the articles of the three authors and on unpublished materials they developed. Proofs that were not presented during the lectures are now included, as are some issues that were only briefly touched on in class.

    On the convergence problem in Mean Field Games: a two state model without uniqueness

    We consider N-player and mean field games in continuous time over a finite horizon, where the position of each agent belongs to {-1,1}. If there is uniqueness of mean field game solutions, e.g. under monotonicity assumptions, then the master equation possesses a smooth solution which can be used to prove convergence of the value functions and of the feedback Nash equilibria of the N-player game, as well as a propagation of chaos property for the associated optimal trajectories. We study here an example with anti-monotonous costs, and show that the mean field game has exactly three solutions. We prove that the value functions converge to the entropy solution of the master equation, which in this case can be written as a scalar conservation law in one space dimension, and that the optimal trajectories admit a limit: they select one mean field game solution, so there is propagation of chaos. Moreover, viewing the mean field game system as the necessary conditions for optimality of a deterministic control problem, we show that the N-player game selects the optimizer of this problem.
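    The reduction mentioned in the abstract can be sketched in hedged form. With state space {-1,1}, the mean field is characterized by a single scalar $m_t \in [-1,1]$, and the master equation for the difference of the value functions at the two states, $z(t,m) = U(t,1,m) - U(t,-1,m)$, takes (schematically) the form of a scalar conservation law

    ```latex
    \partial_t z + \partial_m F(m, z) = 0,
    ```

    a first-order PDE in one space dimension whose weak solutions are in general non-unique; the entropy condition then singles out the solution obtained in the $N$-player limit. The symbols $z$, $U$, and the flux $F$ are illustrative notation here — the exact flux is determined by the jump rates and costs specified in the paper.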