
    Making Markov chains less lazy

    The mixing time of an ergodic, reversible Markov chain can be bounded in terms of the eigenvalues of the chain: specifically, the second-largest eigenvalue and the smallest eigenvalue. It has become standard to focus only on the second-largest eigenvalue, by making the Markov chain "lazy". (A lazy chain does nothing at each step with probability at least 1/2, and has only nonnegative eigenvalues.) An alternative approach to bounding the smallest eigenvalue was given by Diaconis and Stroock, and by Diaconis and Saloff-Coste. We give examples to show that, using this approach, it can be quite easy to obtain a bound on the smallest eigenvalue of a combinatorial Markov chain which is several orders of magnitude below the best-known bound on the second-largest eigenvalue.
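The laziness trick described above is easy to check numerically. The sketch below (an illustration, not code from the paper) builds the simple random walk on a 4-cycle, whose smallest eigenvalue is −1, and verifies that the lazy version (I + P)/2 has only nonnegative eigenvalues, since each eigenvalue λ maps to (1 + λ)/2:

```python
import numpy as np

# Simple random walk on a 4-cycle: reversible, with smallest eigenvalue -1.
n = 4
P = np.zeros((n, n))
for i in range(n):
    P[i, (i - 1) % n] = 0.5
    P[i, (i + 1) % n] = 0.5

# P is symmetric here, so eigvalsh applies; eigenvalues are cos(2*pi*k/n).
eig = np.sort(np.linalg.eigvalsh(P))

# The "lazy" chain stays put with probability 1/2:
P_lazy = 0.5 * (np.eye(n) + P)
eig_lazy = np.sort(np.linalg.eigvalsh(P_lazy))

print(eig)       # -> [-1., 0., 0., 1.]
print(eig_lazy)  # -> [0., 0.5, 0.5, 1.]
```

The periodic cycle walk is the extreme case: its eigenvalue −1 makes the non-lazy chain fail to converge at all, while the lazy chain mixes.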

    A polynomial-time algorithm to approximately count contingency tables when the number of rows is constant

    We consider the problem of counting the number of contingency tables with given row and column sums. This problem is known to be #P-complete, even when there are only two rows (Random Structures Algorithms 10(4) (1997) 487). In this paper we present the first fully polynomial randomized approximation scheme for counting contingency tables when the number of rows is constant. A novel feature of our algorithm is that it is a hybrid of an exact counting technique with an approximation algorithm, giving two distinct phases. In the first, the columns are partitioned into "small" and "large". We show that the number of contingency tables can be expressed as the weighted sum of a polynomial number of new instances of the problem, where each instance consists of some new row sums and the original large column sums. In the second phase, we show how to approximately count contingency tables when all the column sums are large. In this case, we show that the solution lies in approximating the volume of a single convex body, a problem which is known to be solvable in polynomial time (J. ACM 38 (1) (1991) 1).
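For intuition about the exact-counting side of the problem, here is a hedged sketch (not the paper's algorithm) for the two-row case: choosing the first-row entry of each column forces the second row, so the count reduces to a dynamic program over columns.

```python
def count_tables_two_rows(r1, col_sums):
    """Count 2-row contingency tables with first row sum r1 and the
    given column sums, by dynamic programming over the columns.

    Choose the first-row entry a_j of column j with 0 <= a_j <= c_j and
    sum(a_j) = r1; the second row is then forced to c_j - a_j, and the
    second row sum is automatically sum(col_sums) - r1.
    """
    counts = [0] * (r1 + 1)  # counts[s] = ways to reach partial sum s
    counts[0] = 1
    for c in col_sums:
        new = [0] * (r1 + 1)
        for s in range(r1 + 1):
            if counts[s]:
                for a in range(min(c, r1 - s) + 1):
                    new[s + a] += counts[s]
        counts = new
    return counts[r1]

# Row sums (2, 2), column sums (2, 2): the first-row entry of column 1
# can be 0, 1, or 2, and everything else is forced.
print(count_tables_two_rows(2, [2, 2]))  # -> 3
```

This exact count takes time polynomial in the row sum but says nothing about the hard general case, which is why the paper's second, volume-approximation phase is needed when column sums are large.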

    Bounding spectral gaps of Markov chains: a novel exact multi-decomposition technique

    We propose an exact technique to calculate lower bounds on the spectral gaps of discrete-time reversible Markov chains on finite state sets. Spectral gaps are a common tool for evaluating convergence rates of Markov chains. As an illustration, we successfully use this technique to evaluate the "absorption time" of the "Backgammon model", a paradigmatic model for glassy dynamics. We also discuss the application of this technique to the "contingency table problem", a notoriously difficult problem from probability theory. The interest of this technique is that it connects spectral gaps, which are quantities related to dynamics, with static quantities calculated at equilibrium.
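For context, the spectral gap of a small reversible chain can be computed directly by diagonalization; decomposition techniques like the one above are aimed at chains far too large for this. A minimal numerical sketch (illustrative, not from the paper), using the standard symmetrization D^(1/2) P D^(−1/2) of a reversible transition kernel P with stationary distribution π:

```python
import numpy as np

def spectral_gap(P, pi):
    """Spectral gap 1 - lambda_2 of a reversible transition matrix P
    with stationary distribution pi.

    Reversibility (pi_i P_ij = pi_j P_ji) makes S = D^{1/2} P D^{-1/2}
    symmetric with the same eigenvalues as P, so eigvalsh is applicable.
    """
    d = np.sqrt(pi)
    S = (d[:, None] * P) / d[None, :]
    eig = np.sort(np.linalg.eigvalsh(S))
    return 1.0 - eig[-2]  # gap between the top eigenvalue 1 and the next

# Birth-death chain on {0, 1, 2} with uniform stationary distribution.
P = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])
pi = np.array([1/3, 1/3, 1/3])
print(spectral_gap(P, pi))  # -> 0.5
```

A larger spectral gap means faster convergence to equilibrium, which is why lower bounds on the gap translate into upper bounds on relaxation and mixing times.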

    The mixing time of the switch Markov chains: a unified approach

    Since 1997, considerable effort has been spent on studying the mixing time of switch Markov chains on the realizations of graphic degree sequences of simple graphs. Several results were proved on rapidly mixing Markov chains on unconstrained, bipartite, and directed sequences, using different mechanisms. The aim of this paper is to unify these approaches. We illustrate the strength of the unified method by showing that on any P-stable family of unconstrained/bipartite/directed degree sequences the switch Markov chain is rapidly mixing. This is a common generalization of every known result that shows the rapid mixing nature of the switch Markov chain on a region of degree sequences. Two applications of this general result are presented. One is an almost uniform sampler for power-law degree sequences with exponent γ > 1 + √3. The other shows that the switch Markov chain on the degree sequence of an Erdős–Rényi random graph G(n, p) is asymptotically almost surely rapidly mixing if p is bounded away from 0 and 1 by at least (5 log n)/(n − 1).
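A single step of the switch chain is simple to state: pick two disjoint edges (a, b) and (c, d) and replace them with (a, c) and (b, d) whenever the result is still a simple graph; degrees are preserved by construction. A minimal sketch of one step (illustrative only; the function name and rejection convention are assumptions, not from the paper):

```python
import random
from collections import Counter

def switch_step(edges):
    """One step of a switch chain on a set of undirected edges
    (each edge a frozenset of two vertices): pick two edges (a, b) and
    (c, d); if they are vertex-disjoint and the swap to (a, c), (b, d)
    creates no parallel edge, perform it, otherwise stay put.
    Degrees are unchanged in either case."""
    (a, b), (c, d) = random.sample(list(edges), 2)
    if len({a, b, c, d}) < 4:
        return edges  # shared endpoint: rejected move, chain stays put
    e1, e2 = frozenset((a, c)), frozenset((b, d))
    if e1 in edges or e2 in edges:
        return edges  # would create a parallel edge: rejected
    return (edges - {frozenset((a, b)), frozenset((c, d))}) | {e1, e2}

# Degrees are invariant: run some steps starting from a 6-cycle.
random.seed(0)
G = {frozenset((i, (i + 1) % 6)) for i in range(6)}
for _ in range(100):
    G = switch_step(G)

deg = Counter(v for e in G for v in e)
print(sorted(deg.values()))  # -> [2, 2, 2, 2, 2, 2]
```

The question the paper addresses is not the validity of this move but how many such steps are needed before the chain's distribution is close to uniform over all realizations.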