
    Explicit expanders with cutoff phenomena

    The cutoff phenomenon describes a sharp transition in the convergence of an ergodic finite Markov chain to equilibrium. Of particular interest is understanding this convergence for the simple random walk on a bounded-degree expander graph. The first example of a family of bounded-degree graphs where the random walk exhibits cutoff in total variation was provided only very recently, when the authors showed this for a typical random regular graph. However, no example was known for an explicit (deterministic) family of expanders with this phenomenon. Here we construct a family of cubic expanders where the random walk from a worst-case initial position exhibits total-variation cutoff. Variants of this construction give cubic expanders without cutoff, as well as cubic graphs with cutoff at any prescribed time-point. Comment: 17 pages, 2 figures.
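
    As an illustration of the quantity being tracked, the total-variation distance of a simple random walk can be computed directly on a small cubic graph. The sketch below uses the Petersen graph, a 3-regular stand-in of our choosing (the paper's expander construction is different, and cutoff is a statement about a growing family of graphs, not a single one):

```python
import numpy as np

def petersen_adjacency():
    """Adjacency matrix of the Petersen graph, a classic cubic (3-regular) graph."""
    A = np.zeros((10, 10))
    for i in range(5):
        A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = 1                  # outer 5-cycle
        A[5 + i, 5 + (i + 2) % 5] = A[5 + (i + 2) % 5, 5 + i] = 1  # inner pentagram
        A[i, 5 + i] = A[5 + i, i] = 1                              # spokes
    return A

P = petersen_adjacency() / 3.0    # simple random walk transition matrix
mu = np.zeros(10); mu[0] = 1.0    # point mass at a fixed starting vertex
uniform = np.full(10, 0.1)        # stationary distribution (graph is regular)

for _ in range(15):
    mu = mu @ P
tv = 0.5 * np.abs(mu - uniform).sum()  # total-variation distance after 15 steps
print(tv)
```

    On a fixed graph the distance simply decays geometrically; cutoff concerns how this decay sharpens into an abrupt drop along a family of growing graphs.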

    A temporal Central Limit Theorem for real-valued cocycles over rotations

    We consider deterministic random walks on the real line driven by irrational rotations, or equivalently, skew product extensions of a rotation by α where the skewing cocycle is a piecewise constant, mean-zero function with a jump by one at a point β. When α is badly approximable and β is badly approximable with respect to α, we prove a temporal Central Limit Theorem (in the terminology recently introduced by D. Dolgopyat and O. Sarig): for any fixed initial point, the occupancy random variables, suitably rescaled, converge to a Gaussian random variable. This result generalizes and extends a theorem by J. Beck for the special case when α is a quadratic irrational, β is rational, and the initial point is the origin; that theorem was recently reproved, and then generalized to cover any initial point, using geometric renormalization arguments by Avila-Dolgopyat-Duryev-Sarig (Israel J., 2015) and Dolgopyat-Sarig (J. Stat. Physics, 2016). We also use renormalization, but to treat irrational values of β we replace the geometric arguments with the renormalization associated to the continued fraction algorithm and dynamical Ostrowski expansions. This yields a symbolic coding framework which allows us to reduce the main result to a CLT for non-homogeneous Markov chains. Comment: a few typos corrected, 28 pages, 4 figures.
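
    The objects in the theorem are straightforward to simulate: the occupancy statistics come from Birkhoff sums of the cocycle along the rotation orbit. A minimal sketch with illustrative parameters of our choosing (golden-mean α, β = √2 − 1, initial point at the origin; the theorem's precise rescaling is not performed here):

```python
import math

alpha = (math.sqrt(5) - 1) / 2   # golden mean: badly approximable
beta = math.sqrt(2) - 1          # illustrative irrational jump point

def f(x):
    """Piecewise constant, mean zero, with a unit jump at beta."""
    return (1.0 if x < beta else 0.0) - beta

N = 100_000
x, S, sums = 0.0, 0.0, []
for _ in range(N):
    S += f(x)
    sums.append(S)               # S_n: position of the deterministic walk
    x = (x + alpha) % 1.0

mean = sum(sums) / N             # statistics of S_k over a random time k <= N
var = sum((s - mean) ** 2 for s in sums) / N
print(mean, var)
```

    The temporal CLT concerns the distribution of S_k when the time k is drawn uniformly from {1, ..., N}: after centering and rescaling, it approaches a Gaussian as N grows.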

    Consistency of Markov chain quasi-Monte Carlo on continuous state spaces

    The random numbers driving Markov chain Monte Carlo (MCMC) simulation are usually modeled as independent U(0,1) random variables. Tribble [Markov chain Monte Carlo algorithms using completely uniformly distributed driving sequences (2007) Stanford Univ.] reports substantial improvements when those random numbers are replaced by carefully balanced inputs from completely uniformly distributed sequences. The previous theoretical justification for using anything other than i.i.d. U(0,1) points shows consistency for estimated means, but only applies to discrete stationary distributions. We extend those results to some MCMC algorithms for continuous stationary distributions. The main motivation is the search for quasi-Monte Carlo versions of MCMC. As a side benefit, the results also establish consistency for the usual method of using pseudo-random numbers in place of random ones. Comment: Published at http://dx.doi.org/10.1214/10-AOS831 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
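
    The substitution being analyzed is mechanical: an MCMC sampler consumes a stream of numbers in (0,1), and nothing in its code requires that stream to be i.i.d. A hypothetical sketch (the target, proposal, and all names are ours, not the paper's): a random-walk Metropolis chain written against an abstract driver, so a completely uniformly distributed sequence could be plugged into the same slot as the usual pseudo-random stream.

```python
import math
import random

def metropolis_normal(n, driver):
    """Random-walk Metropolis targeting N(0,1).

    Each step consumes exactly two driving numbers from `driver`: one for
    the proposal, one for the accept/reject decision.
    """
    x, out = 0.0, []
    for _ in range(n):
        u1, u2 = next(driver), next(driver)
        prop = x + (2.0 * u1 - 1.0)               # uniform step in [-1, 1)
        log_ratio = 0.5 * (x * x - prop * prop)   # log density ratio for N(0,1)
        if math.log(u2 + 1e-300) < log_ratio:
            x = prop
        out.append(x)
    return out

def iid_uniforms(seed=1):
    """The usual i.i.d. U(0,1) driver; a CUD sequence would replace this generator."""
    rng = random.Random(seed)
    while True:
        yield rng.random()

samples = metropolis_normal(20_000, iid_uniforms())
print(sum(samples) / len(samples))   # sample mean, near 0
```

    Fixing the number of driving inputs consumed per step, as above, is what makes the "replace the driving sequence" question well-posed.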

    How quickly can we sample a uniform domino tiling of the 2L x 2L square via Glauber dynamics?

    The prototypical problem we study here is the following. Given a 2L × 2L square, there are approximately exp(4KL²/π) ways to tile it with dominos, i.e. with horizontal or vertical 2 × 1 rectangles, where K ≈ 0.916 is Catalan's constant [Kasteleyn '61, Temperley-Fisher '61]. A conceptually simple (even if computationally not the most efficient) way of sampling uniformly one among so many tilings is to introduce a Markov chain algorithm (Glauber dynamics) where, with rate 1, two adjacent horizontal dominos are flipped to vertical dominos, or vice versa. The unique invariant measure is the uniform one, and a classical question [Wilson 2004, Luby-Randall-Sinclair 2001] is to estimate the time T_mix it takes to approach equilibrium (i.e. the running time of the algorithm). In [Luby-Randall-Sinclair 2001, Randall-Tetali 2000], fast mixing was proven: T_mix = O(L^C) for some finite C. Here, we go much beyond and show that c L² ≤ T_mix ≤ L^{2+o(1)}. Our result applies to rather general domain shapes (not just the 2L × 2L square), provided that the typical height function associated to the tiling is macroscopically planar in the large-L limit under the uniform measure (this is the case, for instance, for the Temperley-type boundary conditions considered in [Kenyon 2000]). Also, our method extends to some other types of tilings of the plane, for instance the tilings associated to dimer coverings of the hexagon or square-hexagon lattices. Comment: to appear in PTRF; 42 pages, 9 figures; v2: typos corrected, references added.
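
    The elementary move is simple to implement. A sketch of the flip dynamics, in discrete time (the cell-to-partner representation, the size L, the step count, and the all-horizontal initial tiling are our illustrative choices):

```python
import random

def glauber_domino(L, steps, seed=0):
    """Glauber dynamics on domino tilings of the 2L x 2L square.

    A tiling maps each cell (row, col) to its partner cell. Each step picks
    a uniformly random 2x2 block; if it contains two parallel dominos, they
    are rotated by 90 degrees, otherwise nothing happens.
    """
    rng = random.Random(seed)
    n = 2 * L
    tiling = {}
    for r in range(n):
        for c in range(0, n, 2):       # start from the all-horizontal tiling
            tiling[(r, c)], tiling[(r, c + 1)] = (r, c + 1), (r, c)
    for _ in range(steps):
        r, c = rng.randrange(n - 1), rng.randrange(n - 1)
        a, b = (r, c), (r, c + 1)
        d, e = (r + 1, c), (r + 1, c + 1)
        if tiling[a] == b and tiling[d] == e:     # two horizontals -> two verticals
            tiling[a], tiling[d] = d, a
            tiling[b], tiling[e] = e, b
        elif tiling[a] == d and tiling[b] == e:   # two verticals -> two horizontals
            tiling[a], tiling[b] = b, a
            tiling[d], tiling[e] = e, d
    return tiling

t = glauber_domino(L=4, steps=10_000)
```

    Each flip is its own inverse and the chain is symmetric, so the uniform measure on tilings is invariant; the mixing-time question is how many such steps are needed to come close to it.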

    Tight Bounds for Randomized Load Balancing on Arbitrary Network Topologies

    We consider the problem of balancing load items (tokens) in networks. Starting with an arbitrary load distribution, we allow nodes to exchange tokens with their neighbors in each round. The goal is to achieve a distribution where all nodes have nearly the same number of tokens. For the continuous case where tokens are arbitrarily divisible, most load balancing schemes correspond to Markov chains, whose convergence is fairly well-understood in terms of their spectral gap. However, in many applications, load items cannot be divided arbitrarily, and we need to deal with the discrete case where the load is composed of indivisible tokens. This discretization entails a non-linear behavior due to its rounding errors, which makes this analysis much harder than in the continuous case. We investigate several randomized protocols for different communication models in the discrete case. As our main result, we prove that for any regular network in the matching model, all nodes have the same load up to an additive constant in (asymptotically) the same number of rounds as required in the continuous case. This generalizes and tightens the previous best result, which only holds for expander graphs, and demonstrates that there is almost no difference between the discrete and continuous cases. Our results also provide a positive answer to the question of how well discrete load balancing can be approximated by (continuous) Markov chains, which has been posed by many researchers. Comment: 74 pages, 4 figures.
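
    A minimal sketch of the discrete setting, on a path rather than an arbitrary topology (the alternating matching schedule and the unbiased rounding rule below are illustrative choices of ours, not the paper's exact protocol): matched neighbors split their tokens evenly, and the indivisible leftover token goes to a random endpoint.

```python
import random

def balance_path(loads, rounds, seed=0):
    """Discrete load balancing in the matching model on a path of nodes.

    Round t uses one of the path's two natural edge matchings (alternating),
    and each matched pair averages its tokens, assigning any odd leftover
    token to a uniformly random endpoint of the edge.
    """
    rng = random.Random(seed)
    loads = list(loads)
    n = len(loads)
    for t in range(rounds):
        for i in range(t % 2, n - 1, 2):          # edges of this round's matching
            lo, extra = divmod(loads[i] + loads[i + 1], 2)
            if rng.random() < 0.5:                # randomized rounding of the odd token
                loads[i], loads[i + 1] = lo + extra, lo
            else:
                loads[i], loads[i + 1] = lo, lo + extra
    return loads

out = balance_path([100] + [0] * 15, rounds=400)
print(max(out) - min(out))   # remaining discrepancy
```

    In the continuous case the same schedule drives the loads exactly to the average; the discrete question is how large the leftover discrepancy from rounding can be, and the paper shows it stays an additive constant.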

    Fast Markov chain Monte Carlo sampling for sparse Bayesian inference in high-dimensional inverse problems using L1-type priors

    Sparsity has become a key concept for solving high-dimensional inverse problems using variational regularization techniques. Recently, encoding similar sparsity constraints in the prior distribution of the Bayesian framework for inverse problems has attracted attention. Important questions about the relation between regularization theory and Bayesian inference still need to be addressed when using sparsity-promoting inversion. A practical obstacle for these examinations is the lack of fast posterior sampling algorithms for sparse, high-dimensional Bayesian inversion: accessing the full range of Bayesian inference methods requires being able to draw samples from the posterior probability distribution in a fast and efficient way. This is usually done using Markov chain Monte Carlo (MCMC) sampling algorithms. In this article, we develop and examine a new implementation of a single-component Gibbs MCMC sampler for sparse priors relying on L1-norms. We demonstrate that the efficiency of our Gibbs sampler increases when the level of sparsity or the dimension of the unknowns is increased. This property is contrary to that of the most commonly applied Metropolis-Hastings (MH) sampling schemes: we demonstrate that the efficiency of MH schemes for L1-type priors dramatically decreases when the level of sparsity or the dimension of the unknowns is increased, to the point that Bayesian inversion for L1-type priors using MH samplers is practically infeasible. As this is commonly believed to be an intrinsic feature of MCMC sampling, the performance of our Gibbs sampler also challenges common beliefs about the applicability of sample-based Bayesian inference. Comment: 33 pages, 14 figures.
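
    What makes a single-component Gibbs step feasible here is that, for a Gaussian likelihood and an L1 prior, each one-dimensional full conditional is proportional to exp(−(x−y)²/(2σ²) − λ|x|): a Gaussian on each half-line, which can be sampled exactly. A sketch of that one-dimensional building block (parameter values and names are ours; the paper's sampler sweeps such updates over all components of a high-dimensional unknown):

```python
import math
import random
from statistics import NormalDist

_N = NormalDist()  # standard normal cdf / inverse cdf

def sample_l1_conditional(y, sigma, lam, rng):
    """Exact draw from p(x) proportional to exp(-(x-y)^2/(2 sigma^2) - lam*|x|).

    On x > 0 the density is N(y - lam*sigma^2, sigma^2) truncated to the
    positive half-line, and symmetrically on x < 0; pick a side according
    to its probability mass, then invert the truncated Gaussian CDF.
    """
    mp = y - lam * sigma ** 2                 # Gaussian mean on the positive side
    mm = y + lam * sigma ** 2                 # Gaussian mean on the negative side
    # log masses of the two sides (factors common to both are dropped)
    lwp = mp ** 2 / (2 * sigma ** 2) + math.log(max(_N.cdf(mp / sigma), 1e-300))
    lwm = mm ** 2 / (2 * sigma ** 2) + math.log(max(_N.cdf(-mm / sigma), 1e-300))
    shift = max(lwp, lwm)
    wp, wm = math.exp(lwp - shift), math.exp(lwm - shift)
    if rng.random() < wp / (wp + wm):         # positive side
        lo = _N.cdf(-mp / sigma)              # standardized mass below x = 0
        u = lo + max(rng.random(), 1e-12) * (1.0 - lo)
        return mp + sigma * _N.inv_cdf(u)
    else:                                     # negative side
        hi = _N.cdf(-mm / sigma)              # standardized mass below x = 0
        u = max(rng.random(), 1e-12) * hi
        return mm + sigma * _N.inv_cdf(u)

rng = random.Random(0)
xs = [sample_l1_conditional(y=0.3, sigma=1.0, lam=2.0, rng=rng) for _ in range(5000)]
print(sum(xs) / len(xs))
```

    Because each update is an exact draw from the conditional, no accept/reject step is involved; this is the contrast with MH schemes, whose acceptance rates collapse as sparsity or dimension grows.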