A piecewise deterministic scaling limit of Lifted Metropolis-Hastings in the Curie-Weiss model
In Turitsyn, Chertkov, Vucelja (2011) a non-reversible Markov Chain Monte
Carlo (MCMC) method on an augmented state space was introduced, here referred
to as Lifted Metropolis-Hastings (LMH). A scaling limit of the magnetization
process in the Curie-Weiss model is derived for LMH, as well as for
Metropolis-Hastings (MH). The required jump rate in the high (supercritical)
temperature regime equals n^{1/2} for LMH, which should be compared to n
for MH. At the critical temperature the required jump rate equals n^{3/4} for
LMH and n^{3/2} for MH, in agreement with experimental results of Turitsyn,
Chertkov, Vucelja (2011). The scaling limit of LMH turns out to be a
non-reversible piecewise deterministic exponentially ergodic `zig-zag' Markov
process.
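As a toy illustration of the lifting idea (a sketch under stated assumptions, not the scaling-limit construction analysed in the paper), the following snippet runs a Gustafson / Diaconis-Holmes-Neal style lifted Metropolis chain on the Curie-Weiss magnetization marginal: the augmented state carries a direction variable, the chain keeps moving in that direction while proposals are accepted, and a rejection reverses the direction instead of leaving the chain in place. The parameter values (N = 50, beta = 0.5) are arbitrary choices for the example.

```python
import math
import random

def curie_weiss_logpi(k, N, beta):
    # Unnormalized log-density of the number of up-spins k in the
    # Curie-Weiss model (zero field): binom(N, k) * exp(beta*(2k-N)^2 / (2N)).
    return (math.lgamma(N + 1) - math.lgamma(k + 1) - math.lgamma(N - k + 1)
            + beta * (2 * k - N) ** 2 / (2 * N))

def lifted_mh(logpi, N, steps, seed=0):
    # Lifted Metropolis chain on {0, ..., N}: the state is (k, eps), with
    # eps in {+1, -1} the lifting (direction) variable.  Proposals always
    # move in direction eps; a rejection flips eps instead of staying put.
    rng = random.Random(seed)
    k, eps = N // 2, 1
    traj = []
    for _ in range(steps):
        kp = k + eps
        if 0 <= kp <= N and math.log(rng.random()) < logpi(kp) - logpi(k):
            k = kp                  # accepted: keep the current direction
        else:
            eps = -eps              # rejected (or boundary): reverse direction
        traj.append(k)
    return traj

# Subcritical temperature (beta < 1): magnetization concentrates around 0.
N, beta = 50, 0.5
traj = lifted_mh(lambda k: curie_weiss_logpi(k, N, beta), N, 50_000)
mean_mag = sum(2 * k - N for k in traj) / (N * len(traj))
```

One-sided proposals with flip-on-reject satisfy a skew detailed balance condition, so the chain still targets the Curie-Weiss marginal while suppressing the diffusive back-and-forth of reversible MH.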
A large deviation principle for the empirical measures of Metropolis-Hastings chains
To sample from a given target distribution, Markov chain Monte Carlo (MCMC)
sampling relies on constructing an ergodic Markov chain with the target
distribution as its invariant measure. For any MCMC method, an important
question is how to evaluate its efficiency. One approach is to consider the
associated empirical measure and how fast it converges to the stationary
distribution of the underlying Markov process. Recently, this question has been
considered from the perspective of large deviation theory, for different types
of MCMC methods, including, e.g., non-reversible Metropolis-Hastings on a
finite state space, non-reversible Langevin samplers, the zig-zag sampler, and
parallel tempering. This approach, based on large deviations, has proven
successful in analysing existing methods and designing new, efficient ones.
However, for the Metropolis-Hastings algorithm on more general state spaces,
the workhorse of MCMC sampling, the same techniques have not been available for
analysing performance, as the underlying Markov chain dynamics violate the
conditions used to prove existing large deviation results for empirical
measures of a Markov chain. This also extends to methods built on the same idea
as Metropolis-Hastings, such as the Metropolis-Adjusted Langevin Method or
ABC-MCMC. In this paper, we take the first steps towards such a
large-deviations based analysis of Metropolis-Hastings-like methods, by proving
a large deviation principle for the empirical measures of
Metropolis-Hastings chains. In addition, we characterize the rate function and
its properties in terms of the acceptance and rejection parts of the
Metropolis-Hastings dynamics.
Comment: 31 pages; updated assumptions, added references and acknowledgment
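As an elementary illustration of the object under study (the empirical measure, not the paper's large deviation analysis), one can watch the occupation fractions of a simple Metropolis-Hastings chain approach its target; the finite state space, nearest-neighbour proposal, and step counts below are choices made for the example.

```python
import random
from collections import Counter

def mh_empirical_measure(pi, steps, seed=0):
    # Metropolis chain on {0, ..., n-1} with a symmetric nearest-neighbour
    # proposal; proposals outside the state space are rejected.
    rng = random.Random(seed)
    n, x = len(pi), 0
    visits = Counter()
    for _ in range(steps):
        y = x + rng.choice((-1, 1))
        if 0 <= y < n and rng.random() < min(1.0, pi[y] / pi[x]):
            x = y
        visits[x] += 1
    # Empirical measure: fraction of time the chain spent in each state.
    return [visits[i] / steps for i in range(n)]

pi = [0.1, 0.2, 0.3, 0.4]
for steps in (100, 10_000, 1_000_000):
    emp = mh_empirical_measure(pi, steps)
    tv = 0.5 * sum(abs(e - p) for e, p in zip(emp, pi))
    print(steps, round(tv, 4))  # total variation distance to pi, typically shrinking
```

Large deviation theory quantifies exactly how unlikely it is for this empirical measure to remain far from pi as the number of steps grows.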
Spectral Bounds for Certain Two-Factor Non-Reversible MCMC Algorithms
We prove that the Markov operator corresponding to the two-variable, non-reversible Gibbs sampler has a spectrum that is entirely real and non-negative, thus providing a first step towards the spectral analysis of MCMC algorithms in the non-reversible case. We also provide an extension to Metropolis-Hastings components, and connect the spectrum of an algorithm to the spectrum of its marginal chain.
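For concreteness (an illustrative sketch, not the operator-theoretic argument of the paper), a two-variable deterministic-scan Gibbs sampler looks as follows for a standard bivariate normal target with correlation rho, an assumed example target; the fixed x-then-y scan order is what makes the chain non-reversible.

```python
import random

def scan_gibbs(rho, steps, seed=0):
    # Deterministic-scan (two-variable) Gibbs sampler for a standard
    # bivariate normal with correlation rho: always update x, then y.
    rng = random.Random(seed)
    x = y = 0.0
    s = (1.0 - rho * rho) ** 0.5
    xs = []
    for _ in range(steps):
        x = rng.gauss(rho * y, s)   # x | y ~ N(rho*y, 1 - rho^2)
        y = rng.gauss(rho * x, s)   # y | x ~ N(rho*x, 1 - rho^2)
        xs.append(x)
    return xs

xs = scan_gibbs(rho=0.5, steps=50_000)
mean = sum(xs) / len(xs)
var = sum((v - mean) ** 2 for v in xs) / len(xs)
```

Each full scan is a composition of two reversible updates, and such a composition is in general not reversible, which is why the real, non-negative spectrum established in the paper is not automatic.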
Metropolis Sampling
Monte Carlo (MC) sampling methods are widely applied in Bayesian inference,
system simulation and optimization problems. The Markov Chain Monte Carlo
(MCMC) algorithms are a well-known class of MC methods which generate a Markov
chain with the desired invariant distribution. In this document, we focus on
the Metropolis-Hastings (MH) sampler, which can be considered the atom of
MCMC techniques, and introduce the basic notions and main properties. We
describe in detail all the elements involved in the MH algorithm and the most
relevant variants. Several improvements and recent extensions proposed in the
literature are also briefly discussed, providing a quick but exhaustive
overview of the current world of Metropolis-based sampling.
Comment: Wiley StatsRef-Statistics Reference Online, 201
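The basic sampler this reference describes can be sketched in a few lines; the Gaussian random-walk proposal, step size, and standard-normal target below are illustrative choices, not prescribed by the document.

```python
import math
import random

def random_walk_metropolis(logpi, x0, steps, scale=1.0, seed=0):
    # Random-walk Metropolis-Hastings: the Gaussian proposal is symmetric,
    # so the acceptance probability reduces to min(1, pi(y) / pi(x)).
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(steps):
        y = x + scale * rng.gauss(0.0, 1.0)
        if math.log(rng.random()) < logpi(y) - logpi(x):
            x = y               # accept the proposal
        samples.append(x)       # on rejection the chain stays at x
    return samples

# Example target: standard normal, log pi(t) = -t^2 / 2 up to a constant.
samples = random_walk_metropolis(lambda t: -0.5 * t * t, 0.0, 50_000, scale=2.0)
```

Working with log-densities, as above, avoids numerical underflow and only requires the target up to a normalizing constant, the property that makes MH so broadly applicable.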
Which ergodic averages have finite asymptotic variance?
We show that the class of functions for which ergodic averages of a
reversible Markov chain have finite asymptotic variance is determined by the
class of functions for which ergodic averages of its associated jump
chain have finite asymptotic variance. This allows us to characterize
completely which ergodic averages have finite asymptotic variance when the
Markov chain is an independence sampler. In addition, we obtain a simple
sufficient condition for all ergodic averages of functions of the primary
variable in a pseudo-marginal Markov chain to have finite asymptotic variance.
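The two objects related by the result above can be sketched directly (an illustrative toy on a finite state space with arbitrarily chosen weights, not the paper's general setting): an independence sampler, and the associated jump chain of successively distinct states with their holding times. For illustration, an accepted proposal equal to the current state is counted as holding time as well.

```python
import random

def independence_sampler(pi, q, steps, seed=0):
    # Independence sampler on {0, ..., n-1}: proposals are drawn from q
    # independently of the current state, and accepted with probability
    # min(1, (pi[y] * q[x]) / (pi[x] * q[y])).
    rng = random.Random(seed)
    n, x = len(pi), 0
    path = [x]
    for _ in range(steps - 1):
        y = rng.choices(range(n), weights=q)[0]
        if rng.random() < min(1.0, pi[y] * q[x] / (pi[x] * q[y])):
            x = y
        path.append(x)
    return path

def jump_chain(path):
    # Associated jump chain: the sequence of successively distinct states,
    # each paired with the holding time spent there before the next move.
    jumps, hold = [], 1
    for prev, cur in zip(path, path[1:]):
        if cur == prev:
            hold += 1
        else:
            jumps.append((prev, hold))
            hold = 1
    jumps.append((path[-1], hold))
    return jumps

path = independence_sampler(pi=[0.1, 0.2, 0.3, 0.4], q=[0.25] * 4, steps=2_000)
jumps = jump_chain(path)
```

The decomposition is lossless: expanding each jump-chain state by its holding time recovers the original path, which is the structural link the paper exploits to transfer finite-variance statements between the two chains.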