Variance bounding and geometric ergodicity of Markov chain Monte Carlo kernels for approximate Bayesian computation
Approximate Bayesian computation has emerged as a standard computational tool
when dealing with the increasingly common scenario of completely intractable
likelihood functions in Bayesian inference. We show that many common Markov
chain Monte Carlo kernels used to facilitate inference in this setting can fail
to be variance bounding, and hence geometrically ergodic, which can have
consequences for the reliability of estimates in practice. This phenomenon is
typically independent of the choice of tolerance in the approximation. We then
prove that a recently introduced Markov kernel in this setting can inherit
variance bounding and geometric ergodicity from its intractable
Metropolis--Hastings counterpart, under reasonably weak and manageable
conditions. We show that the computational cost of this alternative kernel is
bounded whenever the prior is proper, and present indicative results on an
example where spectral gaps and asymptotic variances can be computed, as well
as an example involving inference for a partially and discretely observed,
time-homogeneous, pure jump Markov process. We also supply two general
theorems, one of which provides a simple sufficient condition for lack of
variance bounding for reversible kernels and the other provides a positive
result concerning inheritance of variance bounding and geometric ergodicity for
mixtures of reversible kernels. Comment: 26 pages, 10 figures
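For readers unfamiliar with the setting, a minimal sketch of a common ABC-MCMC kernel of the kind analysed above (a Marjoram-style move that accepts only when a freshly simulated pseudo-dataset falls within a tolerance of the observed data) might look as follows; the toy model, function names and parameters are illustrative, not taken from the paper:

```python
import numpy as np

def abc_mcmc(simulate, observed, log_prior, theta0, n_iters, eps, step=0.5, rng=None):
    """Sketch of a common ABC-MCMC kernel: propose theta, simulate a fresh
    pseudo-dataset, and accept only if its summary is eps-close to the
    observed one (illustrative names, not from the abstract)."""
    rng = np.random.default_rng(1) if rng is None else rng
    theta = float(theta0)
    chain = np.empty(n_iters)
    for i in range(n_iters):
        prop = theta + step * rng.standard_normal()  # symmetric random walk
        x = simulate(prop, rng)
        # likelihood replaced by the indicator that the simulated summary
        # is within eps of the data; prior ratio handled as in MH
        if abs(x - observed) < eps and np.log(rng.uniform()) < log_prior(prop) - log_prior(theta):
            theta = prop
        chain[i] = theta
    return chain

# toy model: one observation ~ N(theta, 1), observed summary 0,
# standard-normal prior on theta
chain = abc_mcmc(lambda t, rng: t + rng.standard_normal(), 0.0,
                 lambda t: -0.5 * t * t, 0.0, 20000, eps=0.5)
```

The rejection-when-far mechanism is exactly what can make such kernels sticky in the tails, which is the mixing behaviour the paper studies.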
Metropolis Sampling
Monte Carlo (MC) sampling methods are widely applied in Bayesian inference,
system simulation and optimization problems. Markov Chain Monte Carlo (MCMC)
algorithms are a well-known class of MC methods which generate a Markov chain
with the desired invariant distribution. In this document, we focus on the
Metropolis-Hastings (MH) sampler, which can be considered the atom of MCMC
techniques, introducing the basic notions and different properties. We describe
in detail all the elements involved in the MH algorithm and the most relevant
variants. Several improvements and recent extensions proposed in the
literature are also briefly discussed, providing a quick but exhaustive
overview of the current world of Metropolis-based sampling. Comment: Wiley StatsRef-Statistics Reference Online, 201
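A minimal sketch of the basic MH sampler this overview covers, in the symmetric random-walk case where the Hastings ratio reduces to a ratio of target densities (the toy target and all names are illustrative):

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_samples, step=1.0, rng=None):
    """Random-walk Metropolis-Hastings with a Gaussian proposal.
    `log_target` is an unnormalised log-density (illustrative sketch)."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = float(x0)
    logp = log_target(x)
    samples = np.empty(n_samples)
    accepted = 0
    for i in range(n_samples):
        prop = x + step * rng.standard_normal()  # symmetric proposal
        logp_prop = log_target(prop)
        # symmetric proposal => acceptance ratio is just the target ratio
        if np.log(rng.uniform()) < logp_prop - logp:
            x, logp = prop, logp_prop
            accepted += 1
        samples[i] = x
    return samples, accepted / n_samples

# toy target: standard normal
samples, acc = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000, step=2.4)
```

The variants the overview discusses (independent proposals, adaptive scalings, and so on) change only the proposal step and the corresponding acceptance ratio.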
Geometric ergodicity of the Random Walk Metropolis with position-dependent proposal covariance
We consider a Metropolis-Hastings method with a proposal kernel whose
covariance depends on the current state. After discussing specific cases from
the literature, we analyse the ergodicity properties of the resulting Markov
chains. In one dimension we find that a suitable choice of this
position-dependent covariance can change the ergodicity properties compared to
the fixed-covariance Random Walk Metropolis case, either for the better or the
worse. In higher dimensions we use a specific example to show that a judicious
choice of proposal covariance can produce a chain which will converge at a
geometric rate to its limiting distribution when probability concentrates on an
ever narrower ridge as the dimension grows, something which is not true for
the Random Walk Metropolis. Comment: 15 pages + appendices, 4 figures
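A one-dimensional sketch of such a position-dependent proposal (the scale function here is an arbitrary illustrative choice, not one analysed in the paper); note the Hastings correction that the state-dependent covariance makes necessary, since the proposal is no longer symmetric:

```python
import numpy as np

def rwm_position_dependent(log_target, scale_fn, x0, n_iters, rng=None):
    """Random Walk Metropolis whose Gaussian proposal scale depends on the
    current state. Because q(x, y) != q(y, x), the acceptance ratio must
    include both proposal densities (illustrative sketch)."""
    rng = np.random.default_rng(2) if rng is None else rng

    def log_q(frm, to):
        # log-density (up to a constant) of proposing `to` from `frm`
        s = scale_fn(frm)
        return -0.5 * ((to - frm) / s) ** 2 - np.log(s)

    x = float(x0)
    chain = np.empty(n_iters)
    for i in range(n_iters):
        prop = x + scale_fn(x) * rng.standard_normal()
        log_alpha = (log_target(prop) - log_target(x)
                     + log_q(prop, x) - log_q(x, prop))  # Hastings correction
        if np.log(rng.uniform()) < log_alpha:
            x = prop
        chain[i] = x
    return chain

# illustrative choice: larger proposal steps farther from the mode
chain = rwm_position_dependent(lambda x: -0.5 * x * x,
                               lambda x: 1.0 + 0.5 * abs(x), 0.0, 20000)
```

With the correction in place the chain still targets the intended distribution; the paper's question is how the choice of scale function affects the rate at which it converges.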
Adaptive Gibbs samplers
We consider various versions of adaptive Gibbs and Metropolis-within-Gibbs
samplers, which update their selection probabilities (and perhaps also their
proposal distributions) on the fly during a run, by learning as they go in an
attempt to optimise the algorithm. We present a cautionary example of how even
a simple-seeming adaptive Gibbs sampler may fail to converge. We then present
various positive results guaranteeing convergence of adaptive Gibbs samplers
under certain conditions.
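One way to sketch an adaptive Metropolis-within-Gibbs sampler of the kind discussed, using diminishing adaptation of per-coordinate step sizes toward a roughly 0.44 acceptance rate (a Roberts-Rosenthal-style rule; the target, tuning constants and names are all illustrative):

```python
import numpy as np

def adaptive_mwg(log_target, x0, n_iters, rng=None):
    """Adaptive Metropolis-within-Gibbs sketch: each coordinate keeps its
    own log step size, nudged toward ~0.44 acceptance with a diminishing
    adaptation schedule (illustrative, not from the abstract)."""
    rng = np.random.default_rng(3) if rng is None else rng
    x = np.array(x0, dtype=float)
    d = x.size
    log_step = np.zeros(d)            # per-coordinate log proposal scales
    chain = np.empty((n_iters, d))
    for i in range(n_iters):
        delta = min(0.25, 1.0 / np.sqrt(i + 1))  # diminishing adaptation
        for j in range(d):
            prop = x.copy()
            prop[j] += np.exp(log_step[j]) * rng.standard_normal()
            if np.log(rng.uniform()) < log_target(prop) - log_target(x):
                x = prop
                log_step[j] += delta * (1 - 0.44)  # accepted: enlarge step
            else:
                log_step[j] -= delta * 0.44        # rejected: shrink step
        chain[i] = x
    return chain

# toy target: independent normals with very different scales
chain = adaptive_mwg(lambda v: -0.5 * (v[0] ** 2 + (v[1] / 10.0) ** 2),
                     [0.0, 0.0], 20000)
```

The diminishing `delta` schedule is one of the standard conditions under which such adaptation can be shown not to destroy convergence; the paper's cautionary example shows that adaptation without such care can fail.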
Stability of Noisy Metropolis-Hastings
Pseudo-marginal Markov chain Monte Carlo methods for sampling from
intractable distributions have recently gained interest and have been
studied theoretically in considerable depth. Their main appeal is that they
are exact, in the sense that they marginally target the correct invariant
distribution. However, the pseudo-marginal Markov chain can exhibit poor mixing
and slow convergence towards its target. As an alternative, a subtly different
Markov chain can be simulated, where better mixing is possible but the
exactness property is sacrificed. This is the noisy algorithm, initially
conceptualised as Monte Carlo within Metropolis (MCWM), which has also been
studied but to a lesser extent. The present article provides a further
characterisation of the noisy algorithm, with a focus on fundamental stability
properties like positive recurrence and geometric ergodicity. Sufficient
conditions for inheriting geometric ergodicity from a standard
Metropolis-Hastings chain are given, as well as for convergence of the
invariant distribution towards the true target distribution.
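The defining feature of the noisy (MCWM) algorithm described above is that the intractable likelihood is re-estimated at both the current and the proposed point at every iteration, rather than recycling the current-point estimate as the exact pseudo-marginal chain does. A minimal sketch, with a synthetic noisy log-likelihood standing in for an intractable one (all names and the toy target are illustrative):

```python
import numpy as np

def noisy_mh(log_lik_hat, log_prior, theta0, n_iters, step=1.0, rng=None):
    """Noisy Metropolis-Hastings (Monte Carlo within Metropolis) sketch:
    fresh noisy estimates at BOTH points each iteration, so the invariant
    distribution is only approximately the true target."""
    rng = np.random.default_rng(4) if rng is None else rng
    theta = float(theta0)
    chain = np.empty(n_iters)
    for i in range(n_iters):
        prop = theta + step * rng.standard_normal()
        # re-estimate at both points -- the defining MCWM step
        num = log_lik_hat(prop, rng) + log_prior(prop)
        den = log_lik_hat(theta, rng) + log_prior(theta)
        if np.log(rng.uniform()) < num - den:
            theta = prop
        chain[i] = theta
    return chain

# toy stand-in: N(0,1) log-likelihood plus small Gaussian noise on the log
# scale, flat prior; the chain then targets roughly N(0,1)
noisy = lambda t, rng: -0.5 * t * t + 0.1 * rng.standard_normal()
chain = noisy_mh(noisy, lambda t: 0.0, 0.0, 20000)
```

Re-estimating the current point discards the occasional lucky over-estimate that can trap a pseudo-marginal chain, which is why better mixing is possible at the cost of exactness.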