11 research outputs found
Rapid Mixing for Lattice Colorings with Fewer Colors
We provide an optimally mixing Markov chain for 6-colorings of the square
lattice on rectangular regions with free, fixed, or toroidal boundary
conditions. This implies that the uniform distribution on the set of such
colorings has strong spatial mixing, so that the 6-state Potts antiferromagnet
has a finite correlation length and a unique Gibbs measure at zero temperature.
Four and five are now the only remaining values of q for which it is not known
whether there exists a rapidly mixing Markov chain for q-colorings of the
square lattice. (Appeared in Proc. LATIN 2004; to appear in JSTA.)
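The chain analysed in this abstract is the standard Glauber dynamics on proper colorings. As a minimal sketch (the vertex indexing, free-boundary handling, and function name are my own assumptions, not taken from the paper), one step on the n × n square lattice looks like this:

```python
import random

def glauber_step(coloring, n, q=6):
    """One step of Glauber dynamics on q-colorings of the n x n grid with
    free boundary: pick a vertex uniformly at random, then recolor it with
    a color chosen uniformly from those unused by its neighbors. Since
    every vertex has degree at most 4 < q, a legal color always exists."""
    i, j = random.randrange(n), random.randrange(n)
    nbrs = [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= i + di < n and 0 <= j + dj < n]
    used = {coloring[u] for u in nbrs}
    coloring[(i, j)] = random.choice([c for c in range(q) if c not in used])
```

Each step preserves properness, so starting from any proper 6-coloring the chain stays on the state space; the paper's contribution is that this chain mixes optimally fast for q = 6.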
Stopping times, metrics and approximate counting.
In this paper we examine the importance of the choice of metric in path coupling, and its relationship to stopping-time analysis. We give strong evidence that stopping-time analysis is no more powerful than standard path coupling. In particular, we prove a stronger theorem for path coupling with stopping times, using a metric which allows us to analyse a one-step path coupling. This approach provides insight for the design of better metrics for specific problems. We give illustrative applications to hypergraph independent sets and SAT instances, hypergraph colourings and colourings of bipartite graphs, obtaining improved results for all these problems.
Path coupling using stopping times.
We analyse the mixing time of Markov chains using path coupling with stopping times. We apply this approach to two hypergraph problems. We show that the Glauber dynamics for independent sets in a hypergraph mixes rapidly as long as the maximum degree Δ of a vertex and the minimum size m of an edge satisfy m ≥ 2Δ + 1. We also state results showing that the Glauber dynamics for proper q-colourings of a hypergraph mixes rapidly if m ≥ 4 and q > Δ, and if m = 3 and q ≥ 1.65Δ. We give related results on the hardness of exact and approximate counting for both problems.
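For the independent-set problem above, the Glauber dynamics can be sketched as follows (a hypothetical implementation under the usual convention that a vertex set of a hypergraph is independent when it contains no edge entirely; the data layout and names are my assumptions):

```python
import random

def glauber_is_step(occupied, edges, n, rng=random):
    """One step of Glauber dynamics on independent sets of a hypergraph:
    pick a vertex uniformly, propose 'occupied' or 'empty' with equal
    probability, and accept unless occupying the vertex would make some
    edge fully occupied. Removals are always accepted."""
    v = rng.randrange(n)
    want_in = rng.random() < 0.5
    if want_in and any(v in e and all(occupied[u] or u == v for u in e)
                       for e in edges):
        return  # rejected: adding v would fully occupy an edge
    occupied[v] = want_in
```

The paper's result is that this chain mixes rapidly whenever the minimum edge size m and maximum degree Δ satisfy m ≥ 2Δ + 1.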
Path coupling, Dobrushin uniqueness, and approximate counting
SIGLE: Available from British Library Document Supply Centre, DSC:7769.555(97/04) / BLDSC - British Library Document Supply Centre, United Kingdom
Faster random generation of linear extensions
SIGLE: Available from British Library Document Supply Centre, DSC:7769.555(97/41) / BLDSC - British Library Document Supply Centre, United Kingdom
Mathematical foundations of the Markov chain Monte Carlo method
Section 7.2 was jointly undertaken with Vivek Gore, and is published here for the first time. I also thank an anonymous referee for carefully reading and providing helpful comments on a draft of this chapter. 1. Introduction. The classical Monte Carlo method is an approach to estimating quantities that are hard to compute exactly. The quantity z of interest is expressed as the expectation z = Exp Z of a random variable (r.v.) Z for which some efficient sampling procedure is available. By taking the mean of a sufficiently large set of independent samples of Z, one may obtain an approximation to z. For example, suppose S = {(x, y) ∈ [0, 1]² : pᵢ(x, y) ≥ 0 for all i}
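The classical Monte Carlo recipe described here can be sketched concretely: take Z to be the indicator that a uniform point of the unit square lies in S, so Exp Z is the area of S, and average many samples. The function name and the quarter-disc instance are illustrative assumptions, not taken from the chapter:

```python
import random

def monte_carlo_area(constraints, n_samples, seed=0):
    """Estimate the area of S = {(x, y) in [0,1]^2 : p_i(x, y) >= 0 for
    all i}. The area equals Exp Z for the indicator Z of a uniform point
    landing in S, so the empirical mean of Z approximates it."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if all(p(x, y) >= 0 for p in constraints):
            hits += 1
    return hits / n_samples

# Hypothetical instance: the quarter disc p(x, y) = 1 - x^2 - y^2,
# whose true area is pi/4, roughly 0.785.
estimate = monte_carlo_area([lambda x, y: 1 - x * x - y * y], 200_000)
```

With n samples the standard error decays like 1/√n, which is why the method needs a "sufficiently large set of independent samples" to reach a given accuracy.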
Decentralized Dynamics for Finite Opinion Games
Game theory studies situations in which strategic players can modify the state of a given system in the absence of a central authority. Solution concepts, such as Nash equilibrium, are defined to predict the outcome of such situations. In the spirit of the field, we study the computation of solution concepts by means of decentralized dynamics: algorithms in which players move in turns to improve their own utility, in the hope that the system quickly reaches an "equilibrium". We study these dynamics for the class of opinion games recently introduced in [1]. These games, important in economics and sociology, model the formation of an opinion in a social network. We study best-response dynamics and show that convergence to Nash equilibria takes time polynomial in the number of players. We also study a noisy version of best-response dynamics, called logit dynamics, and prove a host of results about its convergence rate as the noise in the system varies. To obtain these results, we use a variety of techniques developed to bound the mixing time of Markov chains, including coupling, spectral characterizations, and the bottleneck ratio.
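The logit dynamics mentioned here can be sketched for a toy binary opinion game; the cost function, names, and parameters below are my own assumptions for illustration, not the model of [1]:

```python
import math
import random

def logit_step(opinions, beliefs, neighbors, beta, rng=random):
    """One step of logit dynamics on a binary opinion game: a player i
    chosen uniformly at random adopts opinion s in {0, 1} with probability
    proportional to exp(-beta * cost(s)), where cost(s) counts her
    disagreement with her internal belief and with her neighbors' current
    opinions. beta = 0 is pure noise (uniform choice); as beta grows the
    update approaches best response."""
    i = rng.randrange(len(opinions))

    def cost(s):
        return abs(s - beliefs[i]) + sum(abs(s - opinions[j])
                                         for j in neighbors[i])

    w0 = math.exp(-beta * cost(0))
    w1 = math.exp(-beta * cost(1))
    opinions[i] = 0 if rng.random() < w0 / (w0 + w1) else 1
```

This chain is a Markov chain on opinion profiles, which is why mixing-time machinery such as coupling and the bottleneck ratio applies to its convergence rate.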