This paper investigates posterior sampling algorithms for competitive
reinforcement learning (RL) in the context of general function approximation.
Focusing on zero-sum Markov games (MGs) under two critical settings, namely
self-play and adversarial learning, we first propose the self-play and
adversarial generalized eluder coefficient (GEC) as complexity measures for
function approximation, capturing the exploration-exploitation trade-off in
MGs. Based on the self-play GEC, we propose a model-based self-play posterior
sampling method that controls both players to learn a Nash equilibrium and can
handle the partial observability of states. Furthermore, we identify a class of
partially observable MG models suited to MG learning against the adversarial
policies of the opponent. Incorporating the adversarial GEC, we propose a
model-based posterior sampling method for learning adversarial MGs with
potential partial observability. We further provide regret bounds for the
proposed algorithms that can scale sublinearly with the proposed GEC and the
number of episodes T. To the best of our knowledge, we are the first to develop
generic model-based posterior sampling algorithms for competitive RL that can be
applied to a majority of tractable zero-sum MG classes, covering both fully
observable and partially observable MGs with self-play and adversarial learning.

Comment: NeurIPS 202
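
As an illustrative sketch only (not stated in the abstract; d_GEC below is a
placeholder symbol for the relevant self-play or adversarial GEC), the claimed
sublinear scaling in the GEC and the number of episodes T typically takes a
schematic form such as

% Schematic regret shape only; exact problem-dependent factors and
% logarithmic terms are deferred to the paper's theorems.
\[
  \mathrm{Reg}(T) \;\lesssim\; \sqrt{d_{\mathrm{GEC}} \cdot T}.
\]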