19 research outputs found

    Alternating the Population and Control Neural Networks to Solve High-Dimensional Stochastic Mean-Field Games

    We present APAC-Net, an alternating population and agent control neural network for solving stochastic mean field games (MFGs). Our algorithm is geared toward high-dimensional instances of MFGs that are beyond reach with existing solution methods. We achieve this in two steps. First, we take advantage of the underlying variational primal-dual structure that MFGs exhibit and phrase it as a convex-concave saddle point problem. Second, we parameterize the value and density functions by two neural networks, respectively. By phrasing the problem in this manner, solving the MFG can be interpreted as a special case of training a generative adversarial network (GAN). We show the potential of our method on up to 100-dimensional MFG problems.
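    The alternating primal-dual training the abstract describes can be illustrated on a toy convex-concave saddle point, where descent steps on the primal variable alternate with ascent steps on the dual variable. This is a minimal sketch of that alternating scheme only; the function, step size, and variable names below are illustrative assumptions, not APAC-Net's actual networks or losses.

    ```python
    # Illustrative sketch: alternating descent/ascent on a toy
    # convex-concave saddle-point problem min_x max_y f(x, y).
    # APAC-Net applies the same alternating idea with two neural
    # networks (value and density) in place of the scalars x and y.

    def solve_saddle_point(grad_x, grad_y, x0, y0, lr=0.05, steps=2000):
        """Alternate a descent step in x with an ascent step in y."""
        x, y = x0, y0
        for _ in range(steps):
            x = x - lr * grad_x(x, y)   # primal (descent) update
            y = y + lr * grad_y(x, y)   # dual (ascent) update
        return x, y

    # Toy problem: f(x, y) = x**2 - y**2 + 2*x*y, saddle point at (0, 0).
    gx = lambda x, y: 2 * x + 2 * y    # df/dx
    gy = lambda x, y: 2 * x - 2 * y    # df/dy
    x_star, y_star = solve_saddle_point(gx, gy, x0=1.0, y0=-1.0)
    ```

    For this strongly convex-concave toy problem the iterates spiral into the saddle point; in the neural-network setting the same alternation is what makes the procedure resemble GAN training.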

    Deep Learning for Mean Field Games with non-separable Hamiltonians

    This paper introduces a new method based on Deep Galerkin Methods (DGMs) for solving high-dimensional stochastic Mean Field Games (MFGs). We achieve this by using two neural networks to approximate the unknown solutions of the MFG system and forward-backward conditions. Our method is efficient, even with a small number of iterations, and is capable of handling up to 300 dimensions with a single layer, which makes it faster than other approaches. In contrast, methods based on Generative Adversarial Networks (GANs) cannot solve MFGs with non-separable Hamiltonians. We demonstrate the effectiveness of our approach by applying it to a traffic flow problem, which was previously solved using the Newton iteration method only in the deterministic case. We compare the results of our method to analytical solutions and previous approaches, showing its efficiency. We also prove the convergence of our neural network approximation with a single hidden layer using the universal approximation theorem.
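    The Deep Galerkin idea underlying this approach is to sample collocation points and minimize the squared residual of the equation plus the boundary/initial conditions. The sketch below shows that loss on a toy 1D ODE, with a small polynomial ansatz standing in for the neural network; the equation, ansatz, and names are illustrative assumptions, not the paper's MFG system.

    ```python
    import random

    # Illustrative Deep-Galerkin-style residual loss for the toy ODE
    # u'(x) = u(x) on [0, 1] with u(0) = 1 (exact solution exp(x)).
    # A cubic polynomial ansatz stands in for the neural network; the
    # DGM idea is the same: sample collocation points and penalize the
    # equation residual plus the boundary condition.

    def u(x, a):
        """Polynomial ansatz u(x) = a0 + a1*x + a2*x^2 + a3*x^3."""
        return sum(c * x**k for k, c in enumerate(a))

    def du(x, a):
        """Derivative of the polynomial ansatz."""
        return sum(k * c * x**(k - 1) for k, c in enumerate(a) if k > 0)

    def residual_loss(a, n_samples=64):
        """Mean squared PDE residual at sampled points + boundary term."""
        random.seed(0)  # deterministic collocation points for this sketch
        xs = [random.random() for _ in range(n_samples)]
        pde = sum((du(x, a) - u(x, a))**2 for x in xs) / n_samples
        bc = (u(0.0, a) - 1.0)**2
        return pde + bc

    # Taylor coefficients of exp(x) give a small residual; zeros do not.
    good = residual_loss([1.0, 1.0, 0.5, 1.0 / 6.0])
    bad = residual_loss([0.0, 0.0, 0.0, 0.0])
    ```

    In the full method this loss is minimized over network weights by stochastic gradient descent, with automatic differentiation supplying the derivatives of the network.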

    Machine Learning architectures for price formation models

    Here, we study machine learning (ML) architectures to solve a mean-field games (MFGs) system arising in price formation models. We formulate a training process that relies on a min-max characterization of the optimal control and price variables. Our main theoretical contribution is the development of a posteriori estimates as a tool to evaluate the convergence of the training process. We illustrate our results with numerical experiments for a linear-quadratic model.

    Scaling up Mean Field Games with Online Mirror Descent

    We address scaling up equilibrium computation in Mean Field Games (MFGs) using Online Mirror Descent (OMD). We show that continuous-time OMD provably converges to a Nash equilibrium under a natural and well-motivated set of monotonicity assumptions. This theoretical result nicely extends to multi-population games and to settings involving common noise. A thorough experimental investigation on various single- and multi-population MFGs shows that OMD outperforms traditional algorithms such as Fictitious Play (FP). We empirically show that OMD scales up and converges significantly faster than FP by solving, for the first time to our knowledge, examples of MFGs with hundreds of billions of states. This study establishes the state of the art for learning in large-scale multi-agent and multi-population games.
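    The core OMD update with the entropic mirror map reduces to a multiplicative-weights step on each state's policy: reweight actions by the exponential of their (cumulative) Q-values and renormalize. The sketch below shows one such update on a single toy state; the learning rate, payoffs, and the use of instantaneous rather than cumulative Q-values are simplifying assumptions, not the paper's exact algorithm.

    ```python
    import math

    def omd_step(policy, q_values, lr=0.1):
        """One entropic mirror-descent (multiplicative-weights) step:
        new policy proportional to policy * exp(lr * Q), renormalized."""
        weights = [p * math.exp(lr * q) for p, q in zip(policy, q_values)]
        total = sum(weights)
        return [w / total for w in weights]

    # Toy state with two actions; action 0 has the higher payoff,
    # so its probability should grow toward 1 under repeated updates.
    policy = [0.5, 0.5]
    for _ in range(50):
        policy = omd_step(policy, q_values=[1.0, 0.0])
    print([round(p, 3) for p in policy])  # → [0.993, 0.007]
    ```

    The update never zeroes out an action's probability, which is what lets OMD keep exploring while still concentrating mass on better responses as Q-value estimates accumulate.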

    Normalia [October 1898]

    St. Cloud State University student newspaper, October 1898