141 research outputs found
A Probabilistic Approach to Mean Field Games with Major and Minor Players
We propose a new approach to mean field games with major and minor players.
Our formulation involves a two-player game where the optimization of the
representative minor player is standard while the major player faces an
optimization over conditional McKean-Vlasov stochastic differential equations.
The definition of this limiting game is justified by proving that its solution
provides approximate Nash equilibria for large finite-player games. This
proof depends upon the generalization of standard results on the propagation of
chaos to conditional dynamics. Because it is of independent interest, we prove
this generalization in full detail. Using a conditional form of the Pontryagin
stochastic maximum principle (proven in the appendix), we reduce the solution
of the mean field game to a forward-backward system of stochastic differential
equations of the conditional McKean-Vlasov type, which we solve in the Linear
Quadratic setting. We use this class of models to show that Nash equilibria
in our formulation can differ from those of the formulations considered
so far in the literature.
LQG Mean Field Games with a Major Agent: Nash Certainty Equivalence versus Probabilistic Approach
Mean field game systems consisting of a major agent and a large population of
minor agents were introduced in (Huang, 2010) in an LQG setup. In the past
years several approaches towards major-minor mean field games have been
developed, principally (i) the Nash certainty equivalence (Huang, 2010), (ii)
master equations, (iii) asymptotic solvability, and (iv) the probabilistic
approach. In a recent work (Huang, 2020), the equivalence of the solutions
obtained via approaches (i)-(iii) was established for the LQG case. In this work
we demonstrate that the closed-loop Nash equilibria derived in the
infinite-population limit through approaches (i) and (iv) are identical.
A Class of Mean-field LQG Games with Partial Information
The large-population system consists of numerous small agents whose
individual behaviors and mass effect are interrelated via their state-average.
The mean-field game provides an efficient way to obtain the decentralized
strategies of a large-population system when studying its dynamic optimization.
Unlike other large-population literature, the present paper has the
following distinctive features. First, our setting includes a partial
information structure for the large-population system, which is practical
from a real-application standpoint. Specifically, two cases of partial
information structure are considered here: the partial filtration case
(see Sections 2 and 3), where the information available to agents is the
filtration generated by an observable component of the underlying Brownian
motion; and the noisy observation case (Section 4), where each individual
agent can access an additive white-noise observation of its own state. It is
also new in the filtering modeling that our sensor function may depend on the
state-average. Second, in both cases, the limiting state-averages become
random, and the filtering equations for the individual states must be
formulated in order to obtain the decentralized strategies. Moreover, it is
also new that the limiting average of the state filters must be analyzed here.
This makes our analysis very different from the full-information arguments for
large-population systems. Third, the consistency conditions are equivalent to
the well-posedness of some Riccati equations and do not involve the
fixed-point analysis used in other mean-field games. The ε-Nash equilibrium
properties are also presented.
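The reduction of consistency conditions to Riccati equations mentioned above can be illustrated in the simplest possible setting. The sketch below is not the paper's actual system: it integrates a generic scalar LQ Riccati ODE backward from its terminal condition with explicit Euler steps, and all coefficients (a, b, q, r, h, T) are hypothetical values chosen for illustration.

```python
import math

def solve_riccati(a, b, q, r, h, T, dt=1e-3):
    """Approximate P(0) for the scalar Riccati ODE
        -dP/dt = 2*a*P - (b**2/r)*P**2 + q,   P(T) = h,
    arising from dX = (a*X + b*u) dt + dW with cost
    integral of (q*X^2 + r*u^2) dt plus h*X(T)^2.
    Integrates backward in time by explicit Euler."""
    P = h
    for _ in range(int(T / dt)):
        dP = 2 * a * P - (b ** 2 / r) * P ** 2 + q
        P += dt * dP  # step from t down to t - dt
    return P

# Hypothetical coefficients; long horizon T drives P(0) toward the
# algebraic Riccati root r*(a + sqrt(a**2 + b**2 * q / r)) / b**2.
P0 = solve_riccati(a=1.0, b=1.0, q=1.0, r=1.0, h=0.0, T=10.0)
```

For these illustrative values the stationary root is 1 + sqrt(2), and the backward integration settles onto it well before t = 0; well-posedness here amounts to the Riccati solution staying bounded on [0, T].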