Although the field of multi-agent reinforcement learning (MARL) has made
considerable progress in recent years, solving systems with a large number of
agents remains a difficult challenge. Graphon mean field games (GMFGs) enable
the scalable analysis of MARL problems that are otherwise intractable. However,
due to the mathematical structure of graphons, this approach is limited to
dense graphs, which are insufficient to describe many real-world networks such
as power-law
graphs. Our paper introduces a novel formulation of GMFGs, called LPGMFGs,
which leverages the graph theoretical concept of Lp graphons and provides a
machine learning tool to efficiently and accurately approximate solutions for
sparse network problems. This especially includes power-law networks, which are
empirically observed in various application areas and cannot be captured by
standard graphons. We derive theoretical existence and convergence guarantees
and give empirical examples that demonstrate the accuracy of our learning
approach for systems with many agents. Furthermore, we extend the Online Mirror
Descent (OMD) learning algorithm to our setup to accelerate learning,
empirically show its capabilities, and conduct a theoretical analysis using the
novel concept of smoothed step graphons. In general, we provide a scalable,
mathematically well-founded machine learning approach to a large class of
otherwise intractable problems of great relevance in numerous research fields.

Comment: accepted for publication at the International Conference on
Artificial Intelligence and Statistics (AISTATS) 2023; code available at:
https://github.com/ChrFabian/Learning_sparse_GMFG