Sampling from the Gibbs Distribution in Congestion Games

Abstract

Logit dynamics is a form of randomized game dynamics in which players are biased towards strategic deviations that yield a larger improvement in cost. It is used extensively in practice. In congestion (or potential) games, logit dynamics, interpreted as a Markov chain, converges to the so-called Gibbs distribution over the set of all strategy profiles. In general, logit dynamics might converge slowly to the Gibbs distribution, but beyond that, not much is known about its algorithmic aspects, nor about those of the Gibbs distribution itself. In this work, we are interested in the following two questions for congestion games: i) Is there an efficient algorithm for sampling from the Gibbs distribution? ii) If yes, do there also exist natural randomized dynamics that converge quickly to the Gibbs distribution? We first study these questions in extension-parallel congestion games, a well-studied special case of symmetric network congestion games. As our main result, we show that there is a simple variation on the logit dynamics (in which, in addition, we are allowed to randomly interchange the strategies of two players) that converges quickly to the Gibbs distribution in such games. This answers both questions above affirmatively. We also address the first question for the class of so-called capacitated k-uniform congestion games. To prove our results, we rely on the recent breakthrough work of Anari, Liu, Oveis Gharan and Vinzant (2019) on approximate sampling of the bases of a matroid according to a strongly log-concave probability distribution.
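The logit dynamics at the heart of the abstract is easy to state concretely. Below is a minimal illustrative sketch in Python, not taken from the paper: the toy two-link game, the function names, and the rationality parameter beta are our own assumptions, chosen only to make the standard revision rule explicit.

    import math
    import random

    def logit_step(profile, strategies, cost, beta):
        """One revision step of logit dynamics: a uniformly random player
        re-samples her strategy, choosing s with probability proportional
        to exp(-beta * cost_i(s, others)).  As beta -> 0 the choice is
        uniform; as beta -> infinity it tends to a best response."""
        i = random.randrange(len(profile))
        weights = [math.exp(-beta * cost(i, profile[:i] + [s] + profile[i + 1:]))
                   for s in strategies[i]]
        profile[i] = random.choices(strategies[i], weights=weights)[0]
        return profile

    # Toy symmetric congestion game: 4 players on two parallel links,
    # where each player's cost is the load of the link she uses.
    def cost(i, profile):
        return profile.count(profile[i])

    strategies = [[0, 1]] * 4
    profile = [0, 0, 0, 0]
    for _ in range(10_000):
        logit_step(profile, strategies, cost, beta=1.0)

Run long enough, the chain's state is (approximately) distributed according to the Gibbs distribution pi(s) proportional to exp(-beta * Phi(s)), with Phi the Rosenthal potential of the game. The paper's point is that this convergence may be slow in general, and that its swap-augmented variant of the dynamics converges quickly in extension-parallel games.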
