Model-Free Algorithm with Improved Sample Efficiency for Zero-Sum Markov Games

Abstract

The problem of two-player zero-sum Markov games has recently attracted increasing interest in theoretical studies of multi-agent reinforcement learning (RL). In particular, for finite-horizon episodic Markov decision processes (MDPs), it has been shown that model-based algorithms can find an $\epsilon$-optimal Nash Equilibrium (NE) with a sample complexity of $O(H^3SAB/\epsilon^2)$, which is optimal in its dependence on the horizon $H$ and the number of states $S$ (where $A$ and $B$ denote the numbers of actions of the two players, respectively). However, none of the existing model-free algorithms achieves such optimality. In this work, we propose a model-free stage-based Q-learning algorithm and show that it achieves the same sample complexity as the best model-based algorithm, thereby demonstrating for the first time that model-free algorithms can enjoy the same optimality in the $H$ dependence as model-based algorithms. The improved dependence on $H$ arises from leveraging the popular variance-reduction technique based on the reference-advantage decomposition, previously used only for single-agent RL. However, this technique relies on a critical monotonicity property of the value function, which does not hold in Markov games due to the policy update via the coarse correlated equilibrium (CCE) oracle. To extend the technique to Markov games, our algorithm features a key novel design that updates the reference value functions as the pair of optimistic and pessimistic value functions whose value difference is the smallest in the history, which yields the desired improvement in sample efficiency.
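To make the reference-update idea concrete, the following is a minimal sketch of how one could maintain the reference values as the (optimistic, pessimistic) pair with the smallest gap seen so far. This is an illustrative reading of the abstract under stated assumptions, not the authors' implementation; all names (`update_reference`, `V_up`, `V_lo`, `best_gap`) are hypothetical.

```python
import numpy as np

def update_reference(V_up, V_lo, V_ref_up, V_ref_lo, best_gap):
    """Keep, per state, the (optimistic, pessimistic) value pair whose gap
    V_up - V_lo is the smallest observed so far; this pair serves as the
    reference in a variance-reduced (reference-advantage) Q-learning update."""
    gap = V_up - V_lo
    improved = gap < best_gap                      # states with a tighter pair
    V_ref_up = np.where(improved, V_up, V_ref_up)  # adopt new optimistic reference
    V_ref_lo = np.where(improved, V_lo, V_ref_lo)  # adopt new pessimistic reference
    best_gap = np.minimum(gap, best_gap)           # record the smallest gap so far
    return V_ref_up, V_ref_lo, best_gap

# Example with three states; references start with an infinite gap.
V_ref_up = np.full(3, np.inf)
V_ref_lo = np.full(3, -np.inf)
best_gap = np.full(3, np.inf)
V_ref_up, V_ref_lo, best_gap = update_reference(
    np.array([2.0, 3.0, 1.5]), np.array([1.0, 0.5, 1.0]),
    V_ref_up, V_ref_lo, best_gap)
```

Because the optimistic and pessimistic value estimates need not be monotone under CCE-based policy updates, selecting the historically tightest pair (rather than the latest estimates) is what allows the reference values to stabilize, in the spirit of the design described above.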
