Ergodic Mean Field Games with Hörmander diffusions
We prove existence of solutions for a class of systems of subelliptic PDEs arising from Mean Field Game systems with Hörmander diffusion. These results are motivated by the feedback synthesis of Mean Field Game solutions and by the Nash equilibria of a large class of N-player differential games.
Stochastic Differential Games and Energy-Efficient Power Control
One of the contributions of this work is to formulate the problem of
energy-efficient power control in multiple access channels (namely, channels
which comprise several transmitters and one receiver) as a stochastic
differential game. The players are the transmitters who adapt their power level
to the quality of their time-varying link with the receiver, their battery
level, and the strategy updates of the others. The proposed model not only
allows one to take into account long-term strategic interactions but also
long-term energy constraints. A simple sufficient condition for the existence
of a Nash equilibrium in this game is provided and shown to be verified in a
typical scenario. As the uniqueness and characterization of equilibria are difficult issues in general, especially when the number of players grows large, we turn to two special cases: the single-player case, which yields useful insights of practical interest, and the case of a large number of players. The latter is treated with a mean-field game approach, for which reasonable sufficient conditions for convergence and uniqueness are provided. Remarkably, this recent approach to large-system analysis shows how scalability can be handled in large games while relying only on individual state information.
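A drastically simplified, static stand-in for the power-control game described above can be sketched as follows. All numbers and the efficiency function are our own illustrative assumptions, not the paper's model: each transmitter i picks a power p_i to maximize its energy efficiency u_i = f(SINR_i) / p_i, with f(x) = exp(-a / x), and we find the equilibrium by sequential best-response iteration over a power grid.

```python
import numpy as np

# Hypothetical parameters (illustrative only, not from the paper).
a = 0.3                      # efficiency parameter of f(x) = exp(-a / x)
g = np.array([1.0, 0.5])     # channel gains of the two transmitters
sigma2 = 0.1                 # receiver noise power
grid = np.linspace(1e-3, 1.0, 5000)   # candidate power levels

def sinr(i, p):
    # Signal-to-interference-plus-noise ratio of transmitter i.
    interference = sigma2 + np.dot(g, p) - g[i] * p[i]
    return g[i] * p[i] / interference

def best_response(i, p):
    # Maximize u_i = exp(-a / SINR_i) / p_i over the grid,
    # holding the other transmitter's power fixed.
    interference = sigma2 + np.dot(g, p) - g[i] * p[i]
    utils = np.exp(-a * interference / (g[i] * grid)) / grid
    return grid[np.argmax(utils)]

p = np.array([0.5, 0.5])
for _ in range(50):          # sequential best-response iteration
    for i in range(len(g)):
        p[i] = best_response(i, p)

# For this choice of f, each user's best response puts it exactly at
# SINR = a, so at the equilibrium both users operate at SINR a.
```

With this efficiency function the interior best response is p_i = a * I_i / g_i (I_i the interference-plus-noise seen by user i), so the iteration is a contraction and the equilibrium SINR equals a for every user, which is why the fixed point is easy to verify numerically.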
Quadratic Mean Field Games
Mean field games were introduced independently by J.-M. Lasry and P.-L. Lions, and by M. Huang, R. P. Malhamé and P. E. Caines, in order to bring a new approach to optimization problems with a large number of interacting agents. The description of such models splits into two parts: one describes the evolution of the density of players in some parameter space; the other, the value of a cost functional that each player tries to minimize for himself, anticipating the rational behavior of the others.
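In the standard PDE formulation, the two blocks above are a backward Hamilton-Jacobi-Bellman equation coupled with a forward Fokker-Planck equation (a generic sketch; sign and scaling conventions vary from paper to paper):

```latex
\begin{cases}
-\partial_t u - \nu\,\Delta u + H(x,\nabla u) = f(x,m), & u(T,\cdot) = g(\cdot, m(T)),\\[2pt]
\;\;\,\partial_t m - \nu\,\Delta m - \operatorname{div}\!\big(m\, D_p H(x,\nabla u)\big) = 0, & m(0) = m_0,
\end{cases}
```

where the HJB equation propagates the value u of a representative player's optimization backward in time, and the Fokker-Planck equation transports the population density m forward under the resulting optimal feedback drift.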
Quadratic Mean Field Games form a particular class among these systems, in which the dynamics of each player is governed by a controlled Langevin equation with an associated cost functional that is quadratic in the control parameter. In such cases, there exists a deep relationship with the non-linear Schrödinger equation in imaginary time, a connection which leads to effective approximation schemes as well as a better understanding of the behavior of Mean Field Games.
The aim of this paper is to serve as an introduction to Quadratic Mean Field Games and their connection with the non-linear Schrödinger equation, providing physicists with a good entry point into this new and exciting field.
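The connection can be sketched as follows, under one common sign convention (constants and signs vary across references). For a player with dynamics dX_t = a_t dt + sigma dW_t minimizing a cost with quadratic control term (mu/2)|a_t|^2 and mean-field potential V[m_t]:

```latex
% HJB equation (quadratic control cost, one common sign convention):
\partial_t u + \tfrac{\sigma^2}{2}\,\Delta u - \tfrac{1}{2\mu}\,|\nabla u|^2 + V[m_t](x) = 0.
% The Hopf--Cole change of variables
\Phi(t,x) = \exp\!\big(-u(t,x)/(\mu\sigma^2)\big)
% cancels the quadratic gradient term against the diffusion term, leaving
\mu\sigma^2\,\partial_t \Phi = -\tfrac{\mu\sigma^4}{2}\,\Delta\Phi + V[m_t](x)\,\Phi,
% i.e. the Schrodinger equation in imaginary time, with
% hbar <-> mu*sigma^2 and particle mass <-> mu.
```

For a frozen density m the transformed equation is linear in Phi, which is what makes spectral and semiclassical tools from quantum mechanics available.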
Game-theoretical control with continuous action sets
Motivated by the recent applications of game-theoretical learning techniques
to the design of distributed control systems, we study a class of control
problems that can be formulated as potential games with continuous action sets,
and we propose an actor-critic reinforcement learning algorithm that provably
converges to equilibrium in this class of problems. The method employed is to
analyse the learning process under study through a mean-field dynamical system
that evolves in an infinite-dimensional function space (the space of
probability distributions over the players' continuous controls). To do so, we
extend the theory of finite-dimensional two-timescale stochastic approximation
to an infinite-dimensional, Banach space setting, and we prove that the
continuous dynamics of the process converge to equilibrium in the case of
potential games. These results combine to give a provably-convergent learning
algorithm in which players do not need to keep track of the controls selected
by the other agents.
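A drastically simplified stand-in for this learning scheme can be sketched as follows. This is illustrative only: plain noisy gradient play on a common-interest potential game with continuous actions, not the paper's actor-critic method over probability distributions. Each player updates only its own action from a noisy partial derivative of the potential, never observing the other's control.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_potential(a):
    # Potential Phi(a) = (a1 - 1)^2 + (a2 + 0.5)^2 + 0.2 * a1 * a2
    # (strictly convex, so its unique minimizer is also the unique
    # Nash equilibrium of the associated potential game).
    return np.array([2.0 * (a[0] - 1.0) + 0.2 * a[1],
                     2.0 * (a[1] + 0.5) + 0.2 * a[0]])

a = np.zeros(2)
for k in range(20000):
    step = 1.0 / (k + 10.0)                  # Robbins-Monro step sizes
    noisy_grad = grad_potential(a) + rng.normal(0.0, 0.1, size=2)
    a -= step * noisy_grad                   # each player descends along
                                             # its own coordinate only

# The equilibrium solves 2*a1 + 0.2*a2 = 2 and 0.2*a1 + 2*a2 = -1,
# i.e. a* = (1.0606..., -0.6060...).
```

Because the potential is strongly convex and the step sizes are square-summable in the Robbins-Monro sense, the stochastic-approximation iterates converge to the equilibrium despite the gradient noise, mirroring in miniature the convergence statement of the paper.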
Nash equilibria for non-zero-sum ergodic stochastic differential games
In this paper we consider non-zero-sum games where multiple players control the drift of a process, and their payoffs depend on its ergodic behaviour. We establish their connection with systems of ergodic BSDEs, and prove the existence of a Nash equilibrium under generalised Isaacs conditions. We also study the case of interacting players of different types.
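Schematically, and only as a generic sketch of the objects involved (the paper's exact system may differ), an ergodic BSDE for player i couples an ergodic payoff constant lambda^i to a pair (Y^i, Z^i):

```latex
% Generic ergodic BSDE for player i, driven by a forward process X:
Y^i_t \;=\; Y^i_T \;+\; \int_t^T \Big( F^i\big(X_s, Z^1_s,\dots,Z^N_s\big)
  - \lambda^i \Big)\,ds \;-\; \int_t^T Z^i_s \, dW_s,
% where the constant lambda^i is player i's ergodic payoff, and a
% (generalised) Isaacs condition requires a measurable selection of
% controls attaining all the Hamiltonians F^i simultaneously, which is
% what ties the N equations together into a Nash equilibrium.
```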
A probabilistic weak formulation of mean field games and applications
Mean field games are studied by means of the weak formulation of stochastic
optimal control. This approach allows the mean field interactions to enter
through both state and control processes and take a form which is general
enough to include rank and nearest-neighbor effects. Moreover, the data may
depend discontinuously on the state variable, and more generally its entire
history. Existence and uniqueness results are proven, along with a procedure
for identifying and constructing distributed strategies which provide
approximate Nash equilibria for finite-player games. Our results are applied to
a new class of multi-agent price impact models and a class of flocking models
for which we prove existence of equilibria.