An asymptotic variance of the self-intersections of random walks
We present a Darboux-Wiener type lemma and apply it to obtain an exact
asymptotic for the variance of the self-intersections of one- and
two-dimensional random walks. As a corollary, we obtain a central limit theorem
for random walk in random scenery conjectured by Kesten and Spitzer in 1979.
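The central object here, the self-intersection local time of a random walk, is straightforward to simulate. The sketch below (function name and setup are mine, not from the paper) estimates its mean and variance for a one-dimensional simple random walk by Monte Carlo.

```python
import numpy as np

def self_intersections(n_steps, rng):
    """Self-intersection local time V_n = #{(i,j): 0 <= i < j <= n, S_i = S_j}
    of a one-dimensional simple random walk."""
    steps = rng.choice([-1, 1], size=n_steps)
    path = np.concatenate(([0], np.cumsum(steps)))   # S_0, ..., S_n
    # A site visited k times contributes k*(k-1)/2 intersecting pairs.
    _, counts = np.unique(path, return_counts=True)
    return int((counts * (counts - 1) // 2).sum())

rng = np.random.default_rng(0)
samples = [self_intersections(1000, rng) for _ in range(200)]
print(np.mean(samples), np.var(samples))
```

Repeating this across growing walk lengths gives a numerical feel for the variance asymptotics the paper establishes exactly.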
Variance of partial sums of stationary sequences
Let $(X_n)$ be a centred sequence of weakly stationary random variables with
spectral measure $F$ and partial sums $S_n = X_1 + \cdots + X_n$. We show that
$\operatorname{Var}(S_n)$ is regularly varying of index $\gamma$ at infinity,
if and only if $G(x) := \int_{-x}^{x} F(\mathrm{d}x)$ is regularly varying of
index $2-\gamma$ at the origin ($0<\gamma<2$).
Comment: Published at http://dx.doi.org/10.1214/12-AOP772 in the Annals of
Probability (http://www.imstat.org/aop/) by the Institute of Mathematical
Statistics (http://www.imstat.org)
Relative Complexity of Random Walks in Random Scenery in the absence of a weak invariance principle for the local times
We answer the question of Aaronson about the relative complexity of random
walks in random sceneries driven by either aperiodic two-dimensional random
walks, the two-dimensional simple random walk, or aperiodic random walks in the
domain of attraction of the Cauchy distribution. A key step is proving that the
range of the random walk satisfies the Følner property almost surely.
Comment: 19 pages
Which ergodic averages have finite asymptotic variance?
We show that the class of functions for which ergodic averages of a
reversible Markov chain have finite asymptotic variance is determined by the
class of functions for which ergodic averages of its associated jump
chain have finite asymptotic variance. This allows us to characterize
completely which ergodic averages have finite asymptotic variance when the
Markov chain is an independence sampler. In addition, we obtain a simple
sufficient condition for all ergodic averages of functions of the primary
variable in a pseudo-marginal Markov chain to have finite asymptotic variance.
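As a concrete illustration (code mine, not from the paper), here is a minimal independence sampler for a standard normal target together with a crude batch-means estimate of the asymptotic variance of an ergodic average. The heavier-tailed proposal keeps the importance ratio $\pi/q$ bounded, which is the well-behaved regime where square-integrable averages have finite asymptotic variance.

```python
import numpy as np

def independence_sampler(logpi, logq, sample_q, n, rng):
    """Metropolis-Hastings with a state-independent (independence) proposal."""
    x = sample_q(rng)
    out = np.empty(n)
    for i in range(n):
        y = sample_q(rng)
        # Acceptance ratio pi(y) q(x) / (pi(x) q(y)), on the log scale.
        log_alpha = (logpi(y) - logpi(x)) + (logq(x) - logq(y))
        if np.log(rng.uniform()) < log_alpha:
            x = y
        out[i] = x
    return out

def batch_means_variance(xs, n_batches=50):
    """Crude batch-means estimate of the asymptotic variance of the mean."""
    b = len(xs) // n_batches
    means = xs[: b * n_batches].reshape(n_batches, b).mean(axis=1)
    return b * means.var(ddof=1)

rng = np.random.default_rng(0)
logpi = lambda x: -0.5 * x**2                  # N(0,1) target (unnormalised)
logq = lambda x: -0.5 * (x / 2.0) ** 2         # N(0,4) proposal: bounded weights
sample_q = lambda rng: 2.0 * rng.standard_normal()
xs = independence_sampler(logpi, logq, sample_q, 20000, rng)
print(xs.mean(), batch_means_variance(xs))
```

Swapping in a lighter-tailed proposal (unbounded weights) is an easy way to watch the batch-means estimate destabilise.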
Asymptotic variance of stationary reversible and normal Markov processes
We obtain necessary and sufficient conditions for the regular variation of
the variance of partial sums of functionals of discrete and continuous-time
stationary Markov processes with normal transition operators. We also construct
a class of Metropolis-Hastings algorithms which satisfy a central limit theorem
and an invariance principle when the variance is not linear in $n$.
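In the benchmark case where the variance of partial sums is linear in $n$ (regular variation of index 1), the limit can be checked in closed form for a reversible AR(1) chain. A small numerical sketch (example mine, not from the paper):

```python
import numpy as np

def ar1_var_partial_sum(n, rho, sigma2=1.0):
    """Exact Var(S_n) for a stationary AR(1), using gamma_k = gamma_0 * rho^|k|
    and Var(S_n) = sum_{|k|<n} (n - |k|) gamma_k."""
    gamma0 = sigma2 / (1.0 - rho**2)
    k = np.arange(1, n)
    return n * gamma0 + 2.0 * np.sum((n - k) * gamma0 * rho**k)

rho = 0.5
longrun = 1.0 / (1.0 - rho) ** 2      # limit of Var(S_n)/n for sigma2 = 1
for n in (10, 100, 1000):
    print(n, ar1_var_partial_sum(n, rho) / n, "->", longrun)
```

Here Var(S_n)/n converges to the long-run variance sigma2/(1-rho)^2, the index-1 case; the paper characterises when such regular variation holds with a non-linear index.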
Exponential Ergodicity of the Bouncy Particle Sampler
Non-reversible Markov chain Monte Carlo schemes based on piecewise
deterministic Markov processes have been recently introduced in applied
probability, automatic control, physics and statistics. Although these
algorithms demonstrate experimentally good performance and are accordingly
increasingly used in a wide range of applications, geometric ergodicity results
for such schemes have only been established so far under very restrictive
assumptions. We give here verifiable conditions on the target distribution
under which the Bouncy Particle Sampler algorithm introduced in \cite{P_dW_12}
is geometrically ergodic. This holds whenever the target satisfies a curvature
condition and has tails decaying at least as fast as an exponential and at most
as fast as a Gaussian distribution. This allows us to provide a central limit
theorem for the associated ergodic averages. When the target has tails thinner
than a Gaussian distribution, we propose an original modification of this
scheme that is geometrically ergodic. For thick-tailed target distributions,
such as $t$-distributions, we extend the idea pioneered in \cite{J_G_12} in a
random walk Metropolis context. We apply a change of variable to obtain a
transformed target satisfying the tail conditions for geometric ergodicity. By
sampling the transformed target using the Bouncy Particle Sampler and mapping
back the Markov process to the original parameterization, we obtain a
geometrically ergodic algorithm.
Comment: 30 pages
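The change-of-variable step can be illustrated in one dimension: an exponential map sends a heavy-tailed Student-t target to a transformed target whose log-density decays linearly, i.e. exponential tails, inside the stated window for geometric ergodicity. A minimal sketch (the particular transformation and names are illustrative choices, not taken from the paper):

```python
import numpy as np

nu = 1.0  # Cauchy-like Student-t: heavy tails in the original parameterization

def log_target(x):
    """log density (up to a constant) of a Student-t with nu degrees of freedom."""
    return -0.5 * (nu + 1.0) * np.log1p(x**2 / nu)

def h(y):
    """Exponential change of variable mapping y-space back to x-space."""
    return np.sign(y) * (np.exp(np.abs(y)) - 1.0)

def log_transformed_target(y):
    """log density of the pulled-back variable: pi_Y(y) = pi_X(h(y)) |h'(y)|,
    with |h'(y)| = exp(|y|)."""
    return log_target(h(y)) + np.abs(y)

# For large |y| this behaves like -nu * |y|: linear decay, exponential tails.
for y in (5.0, 10.0, 15.0):
    print(y, log_transformed_target(y))
```

Sampling in y-space and mapping draws back through h recovers draws from the original heavy-tailed target.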
Efficient implementation of Markov chain Monte Carlo when using an unbiased likelihood estimator
When an unbiased estimator of the likelihood is used within a
Metropolis-Hastings chain, it is necessary to trade off the number of Monte
Carlo samples used to construct this estimator against the asymptotic variances
of averages computed under this chain. Using many Monte Carlo samples typically
yields Metropolis-Hastings averages with lower asymptotic variances than using
fewer samples. However,
the computing time required to construct the likelihood estimator increases
with the number of Monte Carlo samples. Under the assumption that the
distribution of the additive noise introduced by the log-likelihood estimator
is Gaussian with variance inversely proportional to the number of Monte Carlo
samples and independent of the parameter value at which it is evaluated, we
provide guidelines on the number of samples to select. We demonstrate our
results by considering a stochastic volatility model applied to stock index
returns.
Comment: 34 pages, 9 figures, 3 tables
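Under the stated Gaussian-noise assumption the estimator's variance scales as 1/N, so a pilot run at N = 1 determines the N achieving any desired noise level. A toy helper sketching this (the target standard deviation of roughly 1.0 to 1.7 is a paraphrase of the paper's guidance, and the function itself is mine):

```python
import math

def recommended_num_samples(var_loglik_single, target_sd=1.2):
    """Choose N so that the log-likelihood estimator's standard deviation,
    which scales as sqrt(var_loglik_single / N) under the Gaussian noise
    assumption, is approximately target_sd.

    target_sd in roughly 1.0-1.7 is the range discussed in this literature;
    1.2 is a common rule-of-thumb value (treat the exact number as a
    paraphrase, not a quotation of the paper).
    """
    return max(1, math.ceil(var_loglik_single / target_sd**2))

# e.g. a pilot run estimates the variance of the log-likelihood at 45.0 for N = 1
print(recommended_num_samples(45.0))   # -> 32, since 45 / 1.2^2 = 31.25
```

In a particle-filter setting, var_loglik_single would be estimated by repeatedly evaluating the log-likelihood estimator at a fixed, representative parameter value.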
Non-Reversible Parallel Tempering: a Scalable Highly Parallel MCMC Scheme
Parallel tempering (PT) methods are a popular class of Markov chain Monte
Carlo schemes used to sample complex high-dimensional probability
distributions. They rely on a collection of interacting auxiliary chains
targeting tempered versions of the target distribution to improve the
exploration of the state-space. We provide here a new perspective on these
highly parallel algorithms and their tuning by identifying and formalizing a
sharp divide in the behaviour and performance of reversible versus
non-reversible PT schemes. We show theoretically and empirically that a class
of non-reversible PT methods dominates its reversible counterparts and identify
distinct scaling limits for the non-reversible and reversible schemes, the
former being a piecewise-deterministic Markov process and the latter a
diffusion. These results are exploited to identify the optimal annealing
schedule for non-reversible PT and to develop an iterative scheme approximating
this schedule. We provide a wide range of numerical examples supporting our
theoretical and methodological contributions. The proposed methodology is
applicable to sampling from a distribution $\pi$ with a density with respect to
a reference distribution $\pi_0$ and to computing the normalizing constant. A
typical use case is when $\pi_0$ is a prior distribution, $L$ a likelihood
function and $\pi$ the corresponding posterior.
Comment: 74 pages, 30 figures. The method is implemented in an open-source
probabilistic programming language available at
https://github.com/UBC-Stat-ML/blangSD
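The non-reversible swap mechanism can be sketched with the deterministic even-odd pattern: even iterations attempt swaps between pairs (0,1), (2,3), ...; odd iterations between (1,2), (3,4), .... The toy Gaussian annealing path, step sizes and schedule below are illustrative choices of mine, not the paper's examples.

```python
import numpy as np

def tempered_logpdf(x, beta):
    """Annealing path between a N(0,1) reference (beta=0) and a sharper
    'posterior-like' N(0, 1/10) target (beta=1)."""
    return -0.5 * ((1.0 - beta) + 10.0 * beta) * x**2

def deo_parallel_tempering(betas, n_iter, rng, step=0.5):
    """Non-reversible PT with deterministic even-odd (DEO) swaps."""
    K = len(betas)
    xs = rng.standard_normal(K)
    target_trace = []
    for t in range(n_iter):
        # Local random-walk Metropolis move on every chain.
        for k in range(K):
            y = xs[k] + step * rng.standard_normal()
            if np.log(rng.uniform()) < (tempered_logpdf(y, betas[k])
                                        - tempered_logpdf(xs[k], betas[k])):
                xs[k] = y
        # DEO swaps: the parity of t fixes which adjacent pairs communicate.
        for k in range(t % 2, K - 1, 2):
            log_ratio = (tempered_logpdf(xs[k + 1], betas[k])
                         + tempered_logpdf(xs[k], betas[k + 1])
                         - tempered_logpdf(xs[k], betas[k])
                         - tempered_logpdf(xs[k + 1], betas[k + 1]))
            if np.log(rng.uniform()) < log_ratio:
                xs[k], xs[k + 1] = xs[k + 1], xs[k]
        target_trace.append(xs[-1])   # the beta = 1 chain tracks the target
    return np.array(target_trace)

rng = np.random.default_rng(0)
trace = deo_parallel_tempering(np.linspace(0.0, 1.0, 6), 5000, rng)
print(trace.mean(), trace.var())
```

Because the swap attempts are scheduled deterministically rather than chosen at random, a state can travel monotonically along the temperature ladder, which is the mechanism behind the non-reversible scheme's dominance discussed above.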