Improved Distributed Algorithms for Random Colorings

Abstract

Markov Chain Monte Carlo (MCMC) algorithms are a widely used algorithmic tool for sampling from high-dimensional distributions; a notable example is the equilibrium distribution of graphical models. The Glauber dynamics, also known as the Gibbs sampler, is the simplest example of an MCMC algorithm; the transitions of the chain update the configuration at a randomly chosen coordinate at each step. Several works have studied distributed versions of the Glauber dynamics, and we extend these efforts to a more general family of Markov chains. An important combinatorial problem in the study of MCMC algorithms is random colorings. Given a graph $G$ of maximum degree $\Delta$ and an integer $k \geq \Delta + 1$, the goal is to generate a random proper vertex $k$-coloring of $G$. Jerrum (1995) proved that the Glauber dynamics has $O(n \log n)$ mixing time when $k > 2\Delta$. Fischer and Ghaffari (2018), and independently Feng, Hayes, and Yin (2018), presented a parallel and distributed version of the Glauber dynamics which converges in $O(\log n)$ rounds for $k > (2 + \varepsilon)\Delta$ for any $\varepsilon > 0$. We improve this result to $k > (11/6 - \delta)\Delta$ for a fixed $\delta > 0$. This matches the state of the art for randomly sampling colorings of general graphs in the sequential setting. Whereas previous works focused on distributed variants of the Glauber dynamics, our work presents a parallel and distributed version of the more general flip dynamics introduced by Vigoda (2000) (and refined by Chen, Delcourt, Moitra, Perarnau, and Postle (2019)), which recolors local maximal two-colored components in each step.
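
For intuition, the following is a minimal Python sketch of the sequential Glauber dynamics for proper k-colorings described above, not of the paper's distributed flip dynamics; the function names and the small example graph are illustrative choices, not taken from the paper.

import random

def greedy_coloring(adj, k):
    # Build an initial proper coloring greedily; succeeds whenever k >= max degree + 1.
    coloring = {}
    for v in adj:
        used = {coloring[u] for u in adj[v] if u in coloring}
        coloring[v] = next(c for c in range(k) if c not in used)
    return coloring

def glauber_step(adj, coloring, k):
    # One transition of the Glauber dynamics: pick a uniformly random vertex and
    # recolor it with a uniformly random color not currently used by its neighbors.
    v = random.choice(list(adj))
    blocked = {coloring[u] for u in adj[v]}
    coloring[v] = random.choice([c for c in range(k) if c not in blocked])

def sample_coloring(adj, k, steps):
    # Run the chain for the given number of single-site update steps.
    coloring = greedy_coloring(adj, k)
    for _ in range(steps):
        glauber_step(adj, coloring, k)
    return coloring

# Example: a 5-cycle (maximum degree 2) with k = 5 colors, i.e. in Jerrum's k > 2*Delta regime.
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(sample_coloring(adj, k=5, steps=1000))

The distributed variants discussed in the abstract parallelize such updates across many vertices per round; this sketch only illustrates the single-site sequential chain.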
