
    Remarks on Nash equilibria in mean field game models with a major player

    For a mean field game model with a major player and infinitely many minor players, we characterize a notion of Nash equilibrium via a system of so-called master equations, namely a system of nonlinear transport equations in the space of measures. Then, for games with a finite number N of minor players and a major player, we prove that the solution of the corresponding Nash system converges to the solution of the system of master equations as N tends to infinity.
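    As a schematic illustration (in generic notation chosen here, not the paper's), the convergence statement can be read as follows: if $v^{N,i}$ denotes the value function of minor player $i$ in the $N$-player Nash system and $U$ solves the master equation, then roughly

    \[
    v^{N,i}(t, x_0, x_1, \dots, x_N) \;\approx\; U\Big(t, x_0, x_i, \tfrac{1}{N-1}\textstyle\sum_{j \neq i} \delta_{x_j}\Big),
    \]

    with an error vanishing as $N \to \infty$; here $x_0$ stands for the major player's state and the empirical measure of the other minor players' states plays the role of the measure argument.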

    A Class of Mean-field LQG Games with Partial Information

    The large-population system consists of a large number of small agents whose individual behavior and mass effect are interrelated via their state-average. The mean-field game provides an efficient way to obtain decentralized strategies for a large-population system when studying its dynamic optimization. Unlike other large-population literature, this paper has the following distinctive features. First, our setting includes the partial information structure of the large-population system, which is practical from a real-application standpoint. Specifically, two cases of partial information structure are considered here: the partial filtration case (Sections 2 and 3), where the information available to the agents is the filtration generated by an observable component of the underlying Brownian motion; and the noisy observation case (Section 4), where each individual agent can access an additive white-noise observation of its own state. It is also new in the filtering modeling that our sensor function may depend on the state-average. Second, in both cases the limiting state-averages become random, and the filtering equations for the individual states must be formalized in order to obtain the decentralized strategies. Moreover, it is also new that the limiting average of the state filters must be analyzed here. This makes our analysis very different from the full-information arguments for large-population systems. Third, the consistency conditions are equivalent to the well-posedness of certain Riccati equations and do not involve the fixed-point analysis used in other mean-field games. The $\epsilon$-Nash equilibrium properties are also presented.
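    For orientation only, a generic mean-field LQG setup (with symbols chosen here for illustration and not taken from the paper) couples each agent's linear dynamics to the state-average $x^{(N)} = \frac{1}{N}\sum_{j} x_j$ through a quadratic cost,

    \[
    dx_i = \big(A x_i + B u_i + F x^{(N)}\big)\,dt + \sigma\, dW_i, \qquad
    J_i(u_i) = \mathbb{E}\int_0^T \Big( \big| x_i - \Phi\big(x^{(N)}\big) \big|_Q^2 + |u_i|_R^2 \Big)\,dt,
    \]

    and the decentralized feedback typically takes the form $u_i = -R^{-1}B^\top (P x_i + s)$, where $P$ solves a Riccati equation $\dot P + A^\top P + P A - P B R^{-1} B^\top P + Q = 0$. In the partial-information setting described above, $x_i$ would be replaced by its filtered estimate, which is why the consistency conditions involve Riccati equations rather than a fixed-point argument.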

    LQG Risk-Sensitive Mean Field Games with a Major Agent: A Variational Approach

    Risk sensitivity plays an important role in the study of finance and economics, as risk-neutral models cannot capture and justify all economic behaviors observed in reality. Risk-sensitive mean field game theory was developed recently for systems with a large number of indistinguishable, asymptotically negligible and heterogeneous risk-sensitive players, who are coupled via the empirical distribution of states across the population. In this work, we extend the theory of Linear Quadratic Gaussian risk-sensitive mean-field games to the setup where there exists one major agent as well as a large number of minor agents. The major agent has a significant impact on each minor agent, and its impact does not collapse as the number of minor agents increases. Each agent is subject to linear dynamics with an exponential-of-integral quadratic cost functional. Moreover, all agents interact via the average state of the minor agents (the so-called empirical mean field) and the major agent's state. We develop a variational analysis approach to derive the best-response strategies of the agents in the limiting case where the number of agents goes to infinity. We establish that the set of obtained best-response strategies yields a Nash equilibrium in the limiting case and an $\varepsilon$-Nash equilibrium in the finite-player case. We conclude the paper with an illustrative example.
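    As a hedged sketch of what an exponential-of-integral quadratic cost looks like (generic notation, not necessarily the paper's), a risk-sensitive LQG cost for minor agent $i$ with risk-sensitivity parameter $\delta > 0$ can be written as

    \[
    J_i(u_i) = \delta \,\log \mathbb{E}\Big[\exp\Big(\tfrac{1}{\delta}\int_0^T \big( |x_i - \phi(\bar x, x_0)|_Q^2 + |u_i|_R^2 \big)\,dt \Big)\Big],
    \]

    where $\bar x$ is the empirical mean field of the minor agents and $x_0$ is the major agent's state; as $\delta \to \infty$ the criterion formally reduces to the risk-neutral quadratic cost.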

    Mean Field Games in a Stackelberg problem with an informed major player

    We investigate a stochastic differential game in which a major player has private information (the knowledge of a random variable), which she discloses through her control to a population of small players playing in a Nash Mean Field Game equilibrium. The major player's cost depends on the distribution of the population, while the cost of the population depends on the random variable known by the major player. We show that the game has a relaxed solution and that the optimal control of the major player is approximately optimal in games with a large but finite number of small players.
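    To make "approximately optimal" concrete, the notion usually invoked in this literature is an $\varepsilon$-Nash equilibrium, stated here schematically in notation chosen for illustration: a strategy profile $(u_1^*, \dots, u_N^*)$ is an $\varepsilon_N$-Nash equilibrium if, for every player $i$,

    \[
    J_i^N(u_i^*, u_{-i}^*) \;\le\; \inf_{u_i} J_i^N(u_i, u_{-i}^*) + \varepsilon_N, \qquad \varepsilon_N \to 0 \ \text{as } N \to \infty,
    \]

    so that no player can gain more than $\varepsilon_N$ by unilaterally deviating from the profile.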

    Partially Observed Discrete-Time Risk-Sensitive Mean Field Games

    In this paper, we consider discrete-time partially observed mean-field games with the risk-sensitive optimality criterion. We introduce risk-sensitive behaviour for each agent via an exponential utility function. In the game model, each agent is weakly coupled with the rest of the population through its individual cost and state dynamics via the empirical distribution of states. We establish the mean-field equilibrium in the infinite-population limit by converting the underlying partially observed stochastic control problem to a fully observed one on the belief space and applying the dynamic programming principle. Then, we show that the mean-field equilibrium policy, when adopted by each agent, forms an approximate Nash equilibrium for games with sufficiently many agents. We first consider the finite-horizon cost function and then discuss the extension of the result to the infinite-horizon cost in the next-to-last section of the paper.
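    For reference, the discrete-time risk-sensitive criterion induced by an exponential utility (written here in generic notation, not necessarily the paper's) takes the form

    \[
    J(\pi) = \frac{1}{\lambda}\,\log \mathbb{E}^{\pi}\Big[\exp\Big(\lambda \sum_{t=0}^{T} c(x_t, u_t, e_t)\Big)\Big],
    \]

    where $\lambda > 0$ is the risk-sensitivity parameter, $c$ is the stage cost, and $e_t$ is the empirical distribution of states through which the agents are coupled; letting $\lambda \to 0$ formally recovers the risk-neutral expected total cost.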