This paper studies policy optimization algorithms for multi-agent
reinforcement learning. We begin by proposing an algorithmic framework for
two-player zero-sum Markov Games in the full-information setting, where each
iteration consists of a policy update step at each state using a certain matrix
game algorithm, and a value update step with a certain learning rate. This
framework unifies many existing and new policy optimization algorithms. We show
that the state-wise average policy of this algorithm converges to an
approximate Nash equilibrium (NE) of the game, as long as the matrix game
algorithms achieve low weighted regret at each state, with respect to weights
determined by the speed of the value updates. Next, we show that this framework
instantiated with the Optimistic Follow-The-Regularized-Leader (OFTRL)
algorithm at each state (and smooth value updates) can find an
O(T^{-5/6}) approximate NE in T iterations, and a
similar algorithm with a slightly modified value update rule achieves a faster
O(T^{-1}) convergence rate. These improve over the
current best O(T^{-1/2}) rate of symmetric policy
optimization type algorithms. We also extend this algorithm to multi-player
general-sum Markov Games and show an O(T^{-3/4})
convergence rate to Coarse Correlated Equilibria (CCE). Finally, we provide a
numerical example to verify our theory and investigate the importance of smooth
value updates, and find that using "eager" value updates instead (equivalent to
the independent natural policy gradient algorithm) may significantly slow down
the convergence, even on a simple game with H = 2 layers.
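
Below is a minimal, illustrative sketch (in Python/NumPy) of one way the framework described above might be instantiated in a tabular finite-horizon setting with known transitions and rewards: at each (step, state) pair, both players run OFTRL with an entropy regularizer on the induced matrix game, and the value estimate is updated smoothly toward the current matrix-game value. The step size `eta`, the schedule `alpha = (H+1)/(H+t)`, and the uniform per-state policy averaging are simplifying assumptions for illustration, not the paper's exact tuned choices or averaging weights.

```python
import numpy as np

def oftrl_smooth_value_updates(P, r, T, eta=0.1):
    """Illustrative sketch: state-wise OFTRL policy updates with smooth
    (incremental) value updates for a tabular, finite-horizon, two-player
    zero-sum Markov game with known transitions.

    P: transition tensor of shape (H, S, A, B, S); P[h, s, a, b, s'] is the
       probability of reaching s' from state s at step h under actions (a, b).
    r: reward tensor of shape (H, S, A, B); reward to the max player.
    Returns per-state average policies over the T iterations.
    """
    H, S, A, B, _ = P.shape
    V = np.zeros((H + 1, S))                      # V[H] = 0 is the terminal value
    grad_sum_mu = np.zeros((H, S, A))             # cumulative payoff vectors seen by OFTRL
    grad_sum_nu = np.zeros((H, S, B))
    mu = np.full((H, S, A), 1.0 / A)              # max player's policy
    nu = np.full((H, S, B), 1.0 / B)              # min player's policy
    mu_avg = np.zeros_like(mu)
    nu_avg = np.zeros_like(nu)

    for t in range(1, T + 1):
        alpha = (H + 1.0) / (H + t)               # smooth value-update rate (illustrative schedule)
        for h in range(H):
            for s in range(S):
                # Payoff matrix of the induced matrix game at (h, s)
                Q = r[h, s] + P[h, s] @ V[h + 1]  # shape (A, B)
                g_mu = Q @ nu[h, s]               # payoff vector for the max player
                g_nu = -(Q.T @ mu[h, s])          # payoff vector for the min player
                grad_sum_mu[h, s] += g_mu
                grad_sum_nu[h, s] += g_nu
                # OFTRL with entropy regularizer: softmax of all past payoffs
                # plus one extra copy of the most recent one ("optimism")
                logits_mu = eta * (grad_sum_mu[h, s] + g_mu)
                logits_nu = eta * (grad_sum_nu[h, s] + g_nu)
                mu[h, s] = np.exp(logits_mu - logits_mu.max())
                mu[h, s] /= mu[h, s].sum()
                nu[h, s] = np.exp(logits_nu - logits_nu.max())
                nu[h, s] /= nu[h, s].sum()
                # Smooth value update; setting alpha = 1 here would give the
                # "eager" update (independent natural policy gradient).
                V[h, s] = (1 - alpha) * V[h, s] + alpha * mu[h, s] @ Q @ nu[h, s]
        mu_avg += mu / T
        nu_avg += nu / T
    return mu_avg, nu_avg
```

Setting `alpha = 1` in the value update corresponds to the "eager" variant (independent natural policy gradient) examined in the numerical example, whereas the decaying schedule implements the smooth value updates the theory relies on.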