
    Explicit expanders with cutoff phenomena

    The cutoff phenomenon describes a sharp transition in the convergence of an ergodic finite Markov chain to equilibrium. Of particular interest is understanding this convergence for the simple random walk on a bounded-degree expander graph. The first example of a family of bounded-degree graphs where the random walk exhibits cutoff in total-variation was provided only very recently, when the authors showed this for a typical random regular graph. However, no example was known for an explicit (deterministic) family of expanders with this phenomenon. Here we construct a family of cubic expanders where the random walk from a worst-case initial position exhibits total-variation cutoff. Variants of this construction give cubic expanders without cutoff, as well as cubic graphs with cutoff at any prescribed time-point. Comment: 17 pages, 2 figures
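
    A minimal numerical illustration of the quantity involved: total-variation cutoff means the distance to the uniform stationary distribution stays near 1 and then drops to near 0 over a short window. The sketch below, assuming numpy and networkx, tracks this distance for a lazy simple random walk on a random cubic graph; the graph, laziness, and step counts are illustrative stand-ins, not the explicit construction from the paper.

```python
# Minimal sketch (not the paper's construction): track total-variation distance
# to the uniform stationary distribution for a lazy simple random walk on a
# random cubic graph, started from a single vertex (worst-case-style start).
import numpy as np
import networkx as nx

n = 2000
G = nx.random_regular_graph(3, n, seed=0)   # stand-in for a cubic expander
P = nx.to_numpy_array(G) / 3.0              # simple random walk kernel
P = 0.5 * (np.eye(n) + P)                   # lazy walk (avoids periodicity)
pi = np.full(n, 1.0 / n)                    # stationary distribution is uniform

dist = np.zeros(n)
dist[0] = 1.0                               # start concentrated at one vertex
for t in range(1, 81):
    dist = dist @ P
    if t % 10 == 0:
        tv = 0.5 * np.abs(dist - pi).sum()  # total-variation distance
        print(f"t = {t:3d}   TV distance = {tv:.3f}")
```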

    Cutoff for non-backtracking random walks on sparse random graphs

    A finite ergodic Markov chain is said to exhibit cutoff if its distance to stationarity remains close to 1 over a certain number of iterations and then abruptly drops to near 0 on a much shorter time scale. Discovered in the context of card shuffling (Aldous-Diaconis, 1986), this phenomenon is now believed to be rather typical among fast mixing Markov chains. Yet, establishing it rigorously often requires a challengingly detailed understanding of the underlying chain. Here we consider non-backtracking random walks on random graphs with a given degree sequence. Under a general sparsity condition, we establish the cutoff phenomenon, determine its precise window, and prove that the (suitably rescaled) cutoff profile approaches a remarkably simple, universal shape.
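
    For concreteness, the sketch below simulates a non-backtracking random walk (a walk that never immediately reverses its last edge) on a random graph generated from a given degree sequence via the configuration model. The degree sequence, graph size, and walk length are arbitrary choices for illustration; this is a simulation aid, not the paper's proof technique.

```python
# Minimal sketch (illustration only): a non-backtracking random walk on a
# random graph drawn from a fixed degree sequence via the configuration model.
import random
import networkx as nx

random.seed(1)
degrees = [3] * 500 + [4] * 500                 # example degree sequence (even sum)
G = nx.configuration_model(degrees, seed=1)
G = nx.Graph(G)                                 # collapse parallel edges
G.remove_edges_from(nx.selfloop_edges(G))       # drop self-loops

def non_backtracking_walk(G, start, steps):
    """Walk that never immediately reverses the edge it just traversed."""
    prev, cur = None, start
    path = [cur]
    for _ in range(steps):
        nbrs = [v for v in G.neighbors(cur) if v != prev]
        if not nbrs:                            # degree-1 dead end: must backtrack
            nbrs = list(G.neighbors(cur))
        prev, cur = cur, random.choice(nbrs)
        path.append(cur)
    return path

print(non_backtracking_walk(G, start=0, steps=20))
```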

    Simple Monte Carlo and the Metropolis Algorithm

    We study the integration of functions with respect to an unknown density. We compare the simple Monte Carlo method (which is almost optimal for a certain large class of inputs) with the Metropolis algorithm (based on a suitable ball walk). Using MCMC we prove (for certain classes of inputs) that adaptive methods are much better than nonadaptive ones. In fact, the curse of dimension (for nonadaptive methods) can be broken by adaptation. Comment: Journal of Complexity, to appear
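
    As an illustration of the two methods being compared, the sketch below estimates a weighted integral S(f) = ∫ f·ρ / ∫ ρ on the unit cube two ways: plain (nonadaptive) Monte Carlo with uniform samples reweighted by ρ, and a Metropolis chain driven by a ball-walk proposal. The density ρ, integrand f, ball radius, and chain length are illustrative assumptions, not the classes of inputs analyzed in the paper.

```python
# Minimal sketch (assumed setup, not the paper's exact model): estimate the
# weighted integral S(f) = int f*rho / int rho over the cube [0,1]^d two ways.
import numpy as np

rng = np.random.default_rng(0)
d = 10
rho = lambda x: np.exp(-8.0 * np.sum((x - 0.5) ** 2))    # unnormalized density
f = lambda x: np.sum(x)                                  # integrand

# 1) Simple (nonadaptive) Monte Carlo: uniform samples reweighted by rho.
X = rng.random((20000, d))
w = np.apply_along_axis(rho, 1, X)
simple_mc = np.sum(w * np.apply_along_axis(f, 1, X)) / np.sum(w)

# 2) Metropolis with a ball-walk proposal: move to a uniform point in the ball
#    of radius delta around the current state; reject moves leaving the cube or
#    failing the acceptance test min(1, rho(new)/rho(old)).
def ball_proposal(x, delta):
    v = rng.normal(size=d)
    v *= delta * rng.random() ** (1.0 / d) / np.linalg.norm(v)  # uniform in ball
    return x + v

x, delta, samples = np.full(d, 0.5), 0.3, []
for _ in range(20000):
    y = ball_proposal(x, delta)
    if np.all((y >= 0.0) & (y <= 1.0)) and rng.random() < min(1.0, rho(y) / rho(x)):
        x = y
    samples.append(f(x))
metropolis = np.mean(samples[2000:])                     # discard burn-in

print(f"simple MC: {simple_mc:.3f}   Metropolis ball walk: {metropolis:.3f}")
```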

    Randomized Search of Graphs in Log Space and Probabilistic Computation

    Reingold has shown that L = SL, that s-t connectivity in a poly-mixing digraph is complete for promise-RL, and that s-t connectivity for a poly-mixing out-regular digraph with known stationary distribution is in L. Several properties that bound the mixing times of random walks on digraphs have been identified, including the digraph conductance and the digraph spectral expansion. However, rapidly mixing digraphs can still have exponential cover time, so it is important to identify structural properties of digraphs that affect cover times. We examine the complexity of random walks on a basic parameterized family of unbalanced digraphs called Strong Chains (which model weakly symmetric logspace computations), and a special family of Strong Chains called Harps. We show that the worst-case hitting times of Strong Chain families vary smoothly with the number of asymmetric vertices and identify the necessary condition for non-polynomial cover time. This analysis also yields bounds on the cover times of general digraphs. Next we relate random walks on graphs to the random walks that arise in Monte Carlo methods applied to optimization problems. We introduce the notion of the asymmetric states of a Markov chain and use this definition to obtain several results about Markov chains, as well as results on the mixing times of Markov chain Monte Carlo methods. We then consider whether a single long random walk or many short walks, which reset to the start after a fixed number of steps, is the better strategy for exploration; we exhibit digraph families for which a few short walks are far superior to a single long walk. We also introduce an iterative deepening random search and use this strategy to estimate the cover time for poly-mixing subgraphs. Finally, we discuss complexity-theoretic implications and future work.
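
    The last question, one long walk versus many short restarting walks, is easy to experiment with. The sketch below compares the two strategies on a small undirected stand-in graph by counting distinct vertices visited within a fixed step budget; the graph, budget, and reset interval are arbitrary, and the Strong Chain and Harp families from the abstract are not reconstructed here.

```python
# Minimal sketch (illustration only): compare exploration by one long random
# walk versus many short walks that reset to the start, within a fixed budget.
import random
import networkx as nx

random.seed(2)
# A lollipop graph (clique + path) as a small stand-in for graphs where a walk
# can get trapped; the paper's Strong Chain / Harp digraphs are not built here.
G = nx.lollipop_graph(30, 30)

def distinct_vertices_visited(G, start, budget, reset_every=None):
    """Number of distinct vertices seen in `budget` steps, optionally resetting."""
    cur, seen = start, {start}
    for t in range(budget):
        if reset_every and t > 0 and t % reset_every == 0:
            cur = start                          # restart a fresh short walk
        cur = random.choice(list(G.neighbors(cur)))
        seen.add(cur)
    return len(seen)

budget = 5000
print("single long walk :", distinct_vertices_visited(G, 0, budget))
print("restarting walks :", distinct_vertices_visited(G, 0, budget, reset_every=250))
```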