
    Quadratic Mean Field Games

    Mean field games were introduced independently by J-M. Lasry and P-L. Lions, and by M. Huang, R.P. Malhamé and P.E. Caines, in order to bring a new approach to optimization problems with a large number of interacting agents. The description of such models splits into two parts: one describes the evolution of the density of players in some parameter space; the other, the value of a cost functional that each player tries to minimize, anticipating the rational behavior of the others. Quadratic Mean Field Games form a particular class among these systems, in which the dynamics of each player is governed by a controlled Langevin equation with an associated cost functional that is quadratic in the control parameter. In such cases, there exists a deep relationship with the non-linear Schrödinger equation in imaginary time, a connection that leads to effective approximation schemes as well as a better understanding of the behavior of Mean Field Games. The aim of this paper is to serve as an introduction to Quadratic Mean Field Games and their connection with the non-linear Schrödinger equation, providing physicists with a good entry point into this new and exciting field.
    Comment: 62 pages, 4 figures
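    To make the Schrödinger connection concrete, here is a schematic version of the quadratic MFG system and the change of variables behind it. The notation (control cost weight $\mu$, noise intensity $\sigma$, interaction potential $V[m]$) is one common convention, not taken from the abstract itself, and sign conventions vary across the literature.

    ```latex
    % Quadratic MFG system (one common convention): value function u, density m.
    \begin{align}
      \partial_t u + \tfrac{\sigma^2}{2}\,\Delta u - \tfrac{1}{2\mu}\,|\nabla u|^2 &= V[m],
        && \text{(Hamilton--Jacobi--Bellman, backward)}\\
      \partial_t m - \tfrac{\sigma^2}{2}\,\Delta m - \tfrac{1}{\mu}\,\nabla\!\cdot\!\big(m\,\nabla u\big) &= 0.
        && \text{(Fokker--Planck, forward)}
    \end{align}
    % The Cole--Hopf change of variables
    %   \Phi = e^{-u/(\mu\sigma^2)}, \quad \Gamma = m\,e^{u/(\mu\sigma^2)}, \quad m = \Phi\,\Gamma,
    % turns this system into a pair of coupled equations of
    % non-linear-Schrödinger type in imaginary time:
    \begin{align}
      -\partial_t \Phi &= \tfrac{\sigma^2}{2}\,\Delta\Phi + \tfrac{1}{\mu\sigma^2}\,V[\Phi\Gamma]\,\Phi,\\
       \partial_t \Gamma &= \tfrac{\sigma^2}{2}\,\Delta\Gamma + \tfrac{1}{\mu\sigma^2}\,V[\Phi\Gamma]\,\Gamma.
    \end{align}
    ```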

    An Extended Mean Field Game for Storage in Smart Grids

    We consider a stylized model for a power network with distributed local power generation and storage. This system is modeled as a network connecting a large number of nodes, where each node is characterized by its local electricity consumption, its local electricity production (e.g. photovoltaic panels), and a local storage device that it manages. Depending on its instantaneous consumption and production rates, as well as its storage management decision, each node may either buy or sell electricity, impacting the electricity spot price. The objective at each node is to minimize energy and storage costs by optimally controlling the storage device. In a non-cooperative game setting, we are led to the analysis of a non-zero-sum stochastic game with N players, where the interaction takes place through the spot price mechanism. For an infinite number of agents, our model corresponds to an Extended Mean-Field Game (EMFG). In a linear-quadratic setting, we obtain an explicit solution to the EMFG, we show that it provides an approximate Nash equilibrium for the N-player game, and we compare this solution to the optimal strategy of a central planner.
    Comment: 27 pages, 5 figures. arXiv admin note: text overlap with arXiv:1607.02130 by other authors
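    As a toy illustration of the price-mediated interaction (not the paper's model, and not its explicit linear-quadratic solution), the following sketch simulates N nodes whose aggregate net demand moves a linear spot price. All dynamics, parameter values, and the naive threshold storage policy are invented for the example.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    N, T = 1000, 200           # nodes and time steps (hypothetical sizes)
    p0, impact = 30.0, 5.0     # base price and price-impact coefficient (invented)
    cap, rate = 10.0, 1.0      # storage capacity and max (dis)charge per step

    storage = np.zeros(N)
    prices = [p0]

    for t in range(T):
        consumption = rng.gamma(2.0, 1.0, N)                      # local demand
        production = np.clip(rng.normal(1.5, 1.0, N), 0.0, None)  # e.g. PV output

        # Naive decentralized policy (a stand-in for the optimal control):
        # charge when the last price was below p0, discharge when above.
        charge = np.where(prices[-1] < p0, rate, -rate)
        charge = np.clip(charge, -storage, cap - storage)
        storage += charge

        # Each node buys (net_demand > 0) or sells (net_demand < 0) electricity;
        # the aggregate imbalance moves the spot price -- the mean-field coupling.
        net_demand = consumption - production + charge
        prices.append(p0 + impact * net_demand.mean())

    print(f"average spot price over the run: {np.mean(prices):.2f}")
    ```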

    Viability analysis of the first-order mean field games

    The paper is concerned with the dependence of the solution of a deterministic mean field game on the initial distribution of players. The main object of study is the mapping that assigns, to an initial time and an initial distribution of players, the set of expected rewards of the representative player corresponding to solutions of the mean field game. This mapping can be regarded as a value multifunction. We obtain a sufficient condition for a multifunction to be a value multifunction: if a multifunction is viable with respect to the dynamics generated by the original mean field game, then it is a value multifunction. Furthermore, an infinitesimal variant of this condition is derived.
    Comment: 35 pages
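    In schematic terms (with notation invented here, not taken from the paper), the object of study can be written as follows:

    ```latex
    % Schematic form of the value multifunction: to an initial time t_0 and an
    % initial distribution of players m_0 it assigns the set of expected rewards
    % J of the representative player realized by solutions of the MFG,
    \[
      \mathcal{V} : (t_0, m_0) \;\mapsto\;
      \big\{\, J \in \mathbb{R} \;:\; J \text{ is realized by some solution of the
      mean field game started from } (t_0, m_0) \,\big\}.
    \]
    % The sufficient condition then reads: if the graph of a multifunction is
    % viable (forward invariant) under the flows generated by the mean field
    % game, then that multifunction is a value multifunction.
    ```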

    Mean-field-game model for Botnet defense in Cyber-security

    We initiate the analysis of the response of computer owners to various offers of defence systems against a cyber-hacker (for instance, a botnet attack), as a stochastic game of a large number of interacting agents. We introduce a simple mean-field game that models their behavior. It takes into account both the random process of the propagation of the infection (controlled by the botnet herder) and the decision-making process of the customers. Its stationary version turns out to be exactly solvable (but not at all trivial) under the additional natural assumption that the execution time of the customers' decisions (say, switching the defence system on or off) is much faster than the infection rates.
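    A minimal numerical sketch of the kind of stationary fixed point involved (an invented SIS-style toy, not the paper's exactly solvable model): each customer chooses whether to pay for defence given the stationary infection level, and the infection level in turn depends on the fraction of defended machines. All parameter values are hypothetical.

    ```python
    # Toy stationary mean-field fixed point for a botnet-defence game.
    beta = 0.6        # infection pressure exerted by the botnet herder
    recovery = 0.3    # cleanup rate of an infected machine
    protection = 0.9  # fraction of attacks blocked by the defence system
    price = 0.15      # per-period price of the defence system
    loss = 1.0        # per-period loss while infected

    def infected_fraction(defended):
        """Stationary SIS-style infection level given the defended share."""
        eff_beta = beta * (1.0 - protection * defended)
        return max(0.0, 1.0 - recovery / eff_beta) if eff_beta > 0 else 0.0

    defended = 0.5
    for _ in range(200):
        x = infected_fraction(defended)  # population infection level, used here
                                         # as a proxy for individual risk
        cost_undefended = loss * x
        cost_defended = loss * x * (1.0 - protection) + price
        target = 1.0 if cost_defended < cost_undefended else 0.0
        defended += 0.1 * (target - defended)  # damped best-response iteration

    print(f"defended share ~ {defended:.2f}, "
          f"infected share ~ {infected_fraction(defended):.2f}")
    ```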

    AI for Classic Video Games using Reinforcement Learning

    Deep reinforcement learning is a technique for teaching machines tasks based on trial-and-error experiences, in the way humans learn. In this paper, some preliminary research is done to understand how reinforcement learning and deep learning techniques can be combined to train an agent to play Archon, a classic video game. We compare two methods of estimating a Q function, the function used to compute the best action to take at each point in the game. In the first approach, we used a Q table to store the states and the weights of the corresponding actions. In our experiments, this method converged very slowly. Our second approach was similar to that of [1]: we used a convolutional neural network (CNN) to determine a Q function. This deep neural network model successfully learnt to control the Archon player using keyboard events that it generated. We observed that the second approach's Q function converged faster than the first. For the latter method, the neural net was trained using only periodic screenshots taken while it was playing. Experiments were conducted on a machine that did not have a GPU, so our training was slower compared to [1].
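    For reference, the Q-table variant described first corresponds to standard tabular Q-learning; a generic sketch (environment, actions, and hyperparameters are placeholders, nothing Archon-specific) looks like this:

    ```python
    import random
    from collections import defaultdict

    # Generic tabular Q-learning: the first approach described above.
    alpha, gamma, eps = 0.1, 0.99, 0.1   # learning rate, discount, exploration
    actions = ["up", "down", "left", "right", "fire"]
    Q = defaultdict(float)               # (state, action) -> estimated return

    def choose_action(state):
        """Epsilon-greedy action selection from the Q table."""
        if random.random() < eps:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])

    def update(state, action, reward, next_state):
        """One Q-learning step: move Q(s,a) toward r + gamma * max_a' Q(s',a')."""
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    ```

    Because every distinct game state gets its own table row, the table grows with the state space and updates generalize to nothing, which is consistent with the slow convergence the authors report; a CNN-based Q function, as in [1], shares parameters across visually similar states.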

    Mean-Field-Type Games in Engineering

    A mean-field-type game is a game in which the instantaneous payoffs and/or the state dynamics involve not only the state and the action profile but also the joint distributions of state-action pairs. This article presents some engineering applications of mean-field-type games, including road traffic networks, multi-level building evacuation, millimeter-wave wireless communications, distributed power networks, virus spread over networks, virtual machine resource management in cloud networks, synchronization of oscillators, energy-efficient buildings, online meetings, and mobile crowdsensing.
    Comment: 84 pages, 24 figures, 183 references. To appear in AIMS 201
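    The defining feature, in generic notation (invented here, not taken from the article), is a cost functional that depends on the joint law of state and control:

    ```latex
    % Generic mean-field-type cost functional: x(t) is the state, u(t) the
    % control, and \mathcal{L}(x(t), u(t)) the joint distribution of the
    % state-action pair at time t.
    \[
      J(u) \;=\; \mathbb{E}\!\left[
        \int_0^T \ell\big(x(t),\, u(t),\, \mathcal{L}(x(t), u(t))\big)\, dt
        \;+\; g\big(x(T),\, \mathcal{L}(x(T))\big)
      \right],
    \]
    % with the state dynamics allowed to depend on the same distributions, e.g.
    %   dx = b(x, u, \mathcal{L}(x, u))\, dt + \sigma(x, u, \mathcal{L}(x, u))\, dW.
    ```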