We design and analyze reinforcement learning algorithms for Graphon
Mean-Field Games (GMFGs). In contrast to previous works that require the
precise values of the graphons, we aim to learn the Nash Equilibrium (NE) of
the regularized GMFGs when the graphons are unknown. Our contributions are
threefold. First, we propose the Proximal Policy Optimization for GMFG
(GMFG-PPO) algorithm and show that it converges at a rate of O(T^{-1/3})
after T iterations with an estimation oracle, improving on the previous work of
Xie et al. (ICML, 2021). Second, using kernel embedding of distributions, we
design efficient algorithms to estimate the transition kernels, reward
functions, and graphons from sampled agents. Convergence rates are then derived
when the positions of the agents are either known or unknown. We then provide
results for the combination of the GMFG-PPO optimization algorithm and the
estimation algorithm. These algorithms are the first specifically designed for
learning graphons from sampled agents. Finally, the efficacy of the proposed
algorithms is corroborated through simulations, which demonstrate that
learning the unknown graphons effectively reduces the exploitability.