58 research outputs found

    An asymptotic variance of the self-intersections of random walks

    We present a Darboux-Wiener type lemma and apply it to obtain exact asymptotics for the variance of the self-intersections of one- and two-dimensional random walks. As a corollary, we obtain a central limit theorem for random walk in random scenery conjectured by Kesten and Spitzer in 1979.
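    The quantity studied here can be simulated directly. Below is an illustrative sketch (our own, not from the paper) of the self-intersection local time of a planar simple random walk, counting pairs of times at which the walk occupies the same site; the function name, step count and seeding are arbitrary choices.

```python
import random
from collections import Counter

def self_intersections(n, seed=0):
    """Run n steps of a 2D simple random walk and return its
    self-intersection local time: the sum over visited sites of
    k*(k-1)//2, where k is the number of visits to that site."""
    rng = random.Random(seed)
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    x = y = 0
    visits = Counter({(0, 0): 1})  # the walk starts at the origin
    for _ in range(n):
        dx, dy = rng.choice(steps)
        x, y = x + dx, y + dy
        visits[(x, y)] += 1
    return sum(k * (k - 1) // 2 for k in visits.values())
```

    Averaging this statistic over many independent walks gives a Monte Carlo estimate of the mean and variance whose growth in n is the subject of the abstract above.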

    Variance of partial sums of stationary sequences

    Let $X_1, X_2, \ldots$ be a centred sequence of weakly stationary random variables with spectral measure $F$ and partial sums $S_n = X_1 + \cdots + X_n$. We show that $\operatorname{var}(S_n)$ is regularly varying of index $\gamma$ at infinity if and only if $G(x) := \int_{-x}^{x} F(\mathrm{d}x)$ is regularly varying of index $2 - \gamma$ at the origin ($0 < \gamma < 2$).
    Comment: Published in the Annals of Probability (http://www.imstat.org/aop/), http://dx.doi.org/10.1214/12-AOP772, by the Institute of Mathematical Statistics (http://www.imstat.org)

    Relative Complexity of Random Walks in Random Scenery in the absence of a weak invariance principle for the local times

    We answer the question of Aaronson about the relative complexity of random walks in random sceneries driven by aperiodic two-dimensional random walks, the two-dimensional simple random walk, or aperiodic random walks in the domain of attraction of the Cauchy distribution. A key step is proving that the range of the random walk satisfies the Følner property almost surely.
    Comment: 19 pages

    Which ergodic averages have finite asymptotic variance?

    We show that the class of $L^2$ functions for which ergodic averages of a reversible Markov chain have finite asymptotic variance is determined by the class of $L^2$ functions for which ergodic averages of its associated jump chain have finite asymptotic variance. This allows us to characterize completely which ergodic averages have finite asymptotic variance when the Markov chain is an independence sampler. In addition, we obtain a simple sufficient condition for all ergodic averages of $L^2$ functions of the primary variable in a pseudo-marginal Markov chain to have finite asymptotic variance.
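    A minimal sketch of the objects involved (our own illustration; all names and tuning values are arbitrary): an independence sampler draws proposals independently of the current state, and the asymptotic variance of an ergodic average can be crudely estimated by batch means. With a proposal whose tails dominate the target's, importance ratios are bounded and square-integrable functions have finite asymptotic variance.

```python
import math
import random
import statistics

def independence_sampler(target_logpdf, prop_sample, prop_logpdf, n, seed=0):
    """Metropolis-Hastings independence sampler: proposals are drawn
    independently of the current state and accepted with the usual
    Metropolis-Hastings log-ratio."""
    rng = random.Random(seed)
    x = prop_sample(rng)
    chain = []
    for _ in range(n):
        y = prop_sample(rng)
        log_a = (target_logpdf(y) - target_logpdf(x)
                 + prop_logpdf(x) - prop_logpdf(y))
        if math.log(rng.random()) < log_a:
            x = y
        chain.append(x)
    return chain

def batch_means_avar(xs, n_batches=20):
    """Crude batch-means estimate of the asymptotic variance of the
    ergodic average of xs."""
    b = len(xs) // n_batches
    means = [statistics.fmean(xs[i * b:(i + 1) * b]) for i in range(n_batches)]
    return b * statistics.variance(means)

# Target N(0, 1); the N(0, 4) proposal has heavier tails than the target,
# so this ergodic average has finite asymptotic variance.
chain = independence_sampler(
    target_logpdf=lambda x: -0.5 * x * x,
    prop_sample=lambda rng: rng.gauss(0.0, 2.0),
    prop_logpdf=lambda x: -x * x / 8.0,
    n=20000,
)
```

    Swapping in a proposal with lighter tails than the target (e.g. N(0, 1/4)) gives unbounded importance ratios, the regime in which the abstract's characterization identifies averages with infinite asymptotic variance.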

    Asymptotic variance of stationary reversible and normal Markov processes

    We obtain necessary and sufficient conditions for the regular variation of the variance of partial sums of functionals of discrete- and continuous-time stationary Markov processes with normal transition operators. We also construct a class of Metropolis-Hastings algorithms which satisfy a central limit theorem and invariance principle when the variance is not linear in $n$.

    Exponential Ergodicity of the Bouncy Particle Sampler

    Non-reversible Markov chain Monte Carlo schemes based on piecewise deterministic Markov processes have recently been introduced in applied probability, automatic control, physics and statistics. Although these algorithms demonstrate good performance experimentally and are accordingly increasingly used in a wide range of applications, geometric ergodicity results for such schemes have so far only been established under very restrictive assumptions. We give here verifiable conditions on the target distribution under which the Bouncy Particle Sampler algorithm introduced in \cite{P_dW_12} is geometrically ergodic. This holds whenever the target satisfies a curvature condition and has tails decaying at least as fast as an exponential and at most as fast as a Gaussian distribution. This allows us to provide a central limit theorem for the associated ergodic averages. When the target has tails thinner than a Gaussian distribution, we propose an original modification of this scheme that is geometrically ergodic. For thick-tailed target distributions, such as $t$-distributions, we extend the idea pioneered in \cite{J_G_12} in a random walk Metropolis context. We apply a change of variable to obtain a transformed target satisfying the tail conditions for geometric ergodicity. By sampling the transformed target using the Bouncy Particle Sampler and mapping back the Markov process to the original parameterization, we obtain a geometrically ergodic algorithm.
    Comment: 30 pages
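    The change-of-variable idea can be illustrated in one dimension (a generic sketch, not the specific transformation used in the paper): pushing a heavy-tailed target through $x = \sinh(y)$ produces a transformed density whose tails decay exponentially, which is the regime the tail conditions above require.

```python
import math

def log_cauchy(x):
    """Log-density of the standard Cauchy target, up to an additive constant."""
    return -math.log1p(x * x)

def log_transformed(y):
    """Log-density of the Cauchy target pulled back through the change of
    variable x = sinh(y): log pi(sinh(y)) + log cosh(y), the second term
    being the Jacobian. For large |y| this behaves like -|y|, i.e. the
    transformed target has exponential tails."""
    return log_cauchy(math.sinh(y)) + math.log(math.cosh(y))
```

    For large $y$, $\log\pi(\sinh y) \approx -2y + const$ while the Jacobian contributes $\approx +y$, so the transformed log-density decreases by about 1 per unit of $y$.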

    Efficient implementation of Markov chain Monte Carlo when using an unbiased likelihood estimator

    When an unbiased estimator of the likelihood is used within a Metropolis--Hastings chain, it is necessary to trade off the number of Monte Carlo samples used to construct this estimator against the asymptotic variances of averages computed under this chain. Using many Monte Carlo samples will typically result in Metropolis--Hastings averages with lower asymptotic variances than the corresponding Metropolis--Hastings averages using fewer samples. However, the computing time required to construct the likelihood estimator increases with the number of Monte Carlo samples. Under the assumption that the distribution of the additive noise introduced by the log-likelihood estimator is Gaussian with variance inversely proportional to the number of Monte Carlo samples and independent of the parameter value at which it is evaluated, we provide guidelines on the number of samples to select. We demonstrate our results by considering a stochastic volatility model applied to stock index returns.
    Comment: 34 pages, 9 figures, 3 tables
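    Under the stated assumption that the log-likelihood noise variance scales as $1/N$ in the number of Monte Carlo samples $N$, a target noise level translates directly into a sample count. The sketch below is our own illustration of that arithmetic; the default target standard deviation of 1.2 is a commonly cited guideline from this line of work, and should be treated as illustrative rather than as the paper's exact recommendation.

```python
import math

def samples_for_target_noise(var_at_one_sample, target_sd=1.2):
    """Given the noise variance of the log-likelihood estimator when a
    single Monte Carlo sample is used, return the smallest N such that
    the noise standard deviation is at most target_sd, under the
    assumption var(N) = var_at_one_sample / N."""
    return max(1, math.ceil(var_at_one_sample / target_sd ** 2))
```

    For example, if a pilot run suggests the one-sample log-likelihood noise variance is about 12, this rule suggests using roughly 9 samples per likelihood evaluation.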

    Non-Reversible Parallel Tempering: a Scalable Highly Parallel MCMC Scheme

    Parallel tempering (PT) methods are a popular class of Markov chain Monte Carlo schemes used to sample complex high-dimensional probability distributions. They rely on a collection of $N$ interacting auxiliary chains targeting tempered versions of the target distribution to improve the exploration of the state-space. We provide here a new perspective on these highly parallel algorithms and their tuning by identifying and formalizing a sharp divide in the behaviour and performance of reversible versus non-reversible PT schemes. We show theoretically and empirically that a class of non-reversible PT methods dominates its reversible counterparts and identify distinct scaling limits for the non-reversible and reversible schemes, the former being a piecewise-deterministic Markov process and the latter a diffusion. These results are exploited to identify the optimal annealing schedule for non-reversible PT and to develop an iterative scheme approximating this schedule. We provide a wide range of numerical examples supporting our theoretical and methodological contributions. The proposed methodology is applicable to sampling from a distribution $\pi$ with a density $L$ with respect to a reference distribution $\pi_0$ and to computing the normalizing constant. A typical use case is when $\pi_0$ is a prior distribution, $L$ a likelihood function and $\pi$ the corresponding posterior.
    Comment: 74 pages, 30 figures. The method is implemented in an open source probabilistic programming language available at https://github.com/UBC-Stat-ML/blangSD
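    A minimal sketch of the non-reversible swap mechanism (our own illustration; function names, the step size, and the inverse-temperature ladder are arbitrary choices): rather than attempting a randomly chosen neighbour swap, the deterministic even-odd (DEO) schedule alternates between attempting all even-indexed and all odd-indexed neighbour swaps, which is the discrete mechanism underlying the non-reversible schemes discussed above.

```python
import math
import random

def deo_parallel_tempering(log_target, betas, n_iter, step=0.5, seed=0):
    """Parallel tempering with a deterministic even-odd (DEO) swap
    schedule: chain k targets pi_k(x) proportional to
    exp(betas[k] * log_target(x)) and moves by random-walk Metropolis.
    At even iterations swaps are attempted between pairs (0,1), (2,3), ...;
    at odd iterations between (1,2), (3,4), ....
    Returns the trace of the betas[0] (coldest) chain."""
    rng = random.Random(seed)
    xs = [0.0] * len(betas)
    trace = []
    for it in range(n_iter):
        # Local exploration: one random-walk Metropolis step per chain.
        for k, beta in enumerate(betas):
            prop = xs[k] + rng.gauss(0.0, step)
            if math.log(rng.random()) < beta * (log_target(prop) - log_target(xs[k])):
                xs[k] = prop
        # Deterministic even-odd swap attempts between neighbouring chains.
        for k in range(it % 2, len(betas) - 1, 2):
            log_r = (betas[k] - betas[k + 1]) * (log_target(xs[k + 1]) - log_target(xs[k]))
            if math.log(rng.random()) < log_r:
                xs[k], xs[k + 1] = xs[k + 1], xs[k]
        trace.append(xs[0])
    return trace

# Sample a standard normal target through a three-level ladder.
trace = deo_parallel_tempering(lambda x: -0.5 * x * x,
                               betas=[1.0, 0.5, 0.25], n_iter=20000)
```

    The swap acceptance ratio follows from exchanging states between the two tempered targets; with a random instead of deterministic pairing the same code gives the reversible variant that the paper shows is dominated.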