2,810 research outputs found
Adaptive independent Metropolis--Hastings
We propose an adaptive independent Metropolis--Hastings algorithm with the
ability to learn from all previous proposals in the chain except the current
location. It is an extension of the independent Metropolis--Hastings algorithm.
Convergence is proved provided a strong Doeblin condition is satisfied, which
essentially requires that all the proposal functions have uniformly heavier
tails than the stationary distribution. The proof also holds if proposals
depending on the current state are used intermittently, provided the
information from these iterations is not used for adaptation. The algorithm gives
samples from the exact distribution within a finite number of iterations with
probability arbitrarily close to 1. The algorithm is particularly useful when a
large number of samples from the same distribution is needed, as in Bayesian
estimation, and in CPU-intensive applications such as inverse problems and
optimization.
Comment: Published in the Annals of Applied Probability
(http://www.imstat.org/aap/), http://dx.doi.org/10.1214/08-AAP545, by the
Institute of Mathematical Statistics (http://www.imstat.org).
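The idea in this abstract can be illustrated with a minimal sketch: an independent Metropolis--Hastings sampler whose Gaussian proposal is refit to past states of the chain while always excluding the current location, with a floor on the proposal standard deviation as a crude stand-in for the uniformly-heavier-tails (Doeblin) condition. The target, the moment-matching adaptation rule, and all constants here are illustrative choices, not the paper's algorithm (which adapts on all past proposals).

```python
import math
import random

random.seed(0)

def log_target(x):
    # Unnormalized log-density of a standard normal; stands in for the
    # distribution of interest.
    return -0.5 * x * x

def log_q(z, mu, sigma):
    # Log-density of the Gaussian independent proposal (up to a constant).
    return -0.5 * ((z - mu) / sigma) ** 2 - math.log(sigma)

def adaptive_imh(n_iters, x0=0.0):
    # Start with a deliberately wide proposal so its tails dominate the
    # target's, echoing the heavier-tails condition in the abstract.
    mu, sigma = 0.0, 5.0
    x = x0
    chain = [x]
    for _ in range(n_iters):
        y = random.gauss(mu, sigma)
        log_alpha = (log_target(y) + log_q(x, mu, sigma)) \
                  - (log_target(x) + log_q(y, mu, sigma))
        if math.log(random.random()) < log_alpha:
            x = y
        chain.append(x)
        # Refit the proposal to past states only, excluding the current
        # location, and keep a lower bound on sigma so the proposal tails
        # stay heavier than the target's.
        past = chain[:-1]
        if len(past) >= 20:
            mu = sum(past) / len(past)
            var = sum((p - mu) ** 2 for p in past) / len(past)
            sigma = max(math.sqrt(var), 1.5)
    return chain

chain = adaptive_imh(5000)
```

Because the proposal never depends on the current state, each accepted draw is an ordinary independent MH move, which is what makes adaptation of this kind compatible with convergence.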
Adaptive Incremental Mixture Markov chain Monte Carlo
We propose Adaptive Incremental Mixture Markov chain Monte Carlo (AIMM), a
novel approach to sample from challenging probability distributions defined on
a general state-space. While adaptive MCMC methods usually update a parametric
proposal kernel with a global rule, AIMM locally adapts a semiparametric
kernel. AIMM is based on an independent Metropolis-Hastings proposal
distribution which takes the form of a finite mixture of Gaussian
distributions. Central to this approach is the idea that the proposal
distribution adapts to the target by locally adding a mixture component when
the discrepancy between the proposal mixture and the target is deemed to be too
large. As a result, the number of components in the mixture proposal is not
fixed in advance. Theoretically, we prove that there exists a process that can
be made arbitrarily close to AIMM and that converges to the correct target
distribution. We also illustrate that it performs well in practice in a variety
of challenging situations, including high-dimensional and multimodal target
distributions.
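The incremental rule can be sketched in a toy one-dimensional setting: a Gaussian-mixture proposal gains a new local component whenever the gap between the log-target and the log-proposal at the current point exceeds a threshold. The bimodal target, equal component weights, and the threshold value are all illustrative assumptions; the paper's convergence guarantee applies to an approximating process, not to this naive loop.

```python
import math
import random

random.seed(1)

def log_target(x):
    # Unnormalized bimodal target: equal mixture of N(-3, 1) and N(3, 1).
    return math.log(math.exp(-0.5 * (x + 3) ** 2)
                    + math.exp(-0.5 * (x - 3) ** 2))

class IncrementalMixture:
    """Gaussian-mixture proposal that grows a component wherever the
    target/proposal discrepancy is large -- a toy version of the
    incremental adaptation described in the abstract."""

    def __init__(self):
        self.comps = [(0.0, 5.0)]  # (mean, sd): one wide initial component

    def log_pdf(self, x):
        dens = sum(math.exp(-0.5 * ((x - m) / s) ** 2) / s
                   for m, s in self.comps)
        return math.log(dens / len(self.comps))

    def sample(self):
        m, s = random.choice(self.comps)  # equal component weights
        return random.gauss(m, s)

    def maybe_add(self, x, threshold=1.0):
        # Add a local component when the proposal underweights the target
        # at x by more than the (illustrative) threshold.
        if log_target(x) - self.log_pdf(x) > threshold:
            self.comps.append((x, 1.0))

def aimm_like(n_iters):
    q = IncrementalMixture()
    x, chain = 0.0, []
    for _ in range(n_iters):
        y = q.sample()
        log_alpha = (log_target(y) + q.log_pdf(x)) \
                  - (log_target(x) + q.log_pdf(y))
        if math.log(random.random()) < log_alpha:
            x = y
        chain.append(x)
        q.maybe_add(x)
    return chain, len(q.comps)

chain, n_components = aimm_like(3000)
```

Note how the number of components is not fixed in advance: growth is self-limiting, since once the mixture covers a mode the local discrepancy falls below the threshold.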
Ensemble Transport Adaptive Importance Sampling
Markov chain Monte Carlo methods are a powerful and commonly used family of
numerical methods for sampling from complex probability distributions. As
applications of these methods increase in size and complexity, the need for
efficient methods increases. In this paper, we present a particle ensemble
algorithm. At each iteration, an importance sampling proposal distribution is
formed using an ensemble of particles. A stratified sample is taken from this
distribution and weighted under the posterior; a state-of-the-art ensemble
transport resampling method is then used to create an evenly weighted sample
ready for the next iteration. We demonstrate that this ensemble transport
adaptive importance sampling (ETAIS) method outperforms MCMC methods with
equivalent proposal distributions for low dimensional problems, and in fact
shows better than linear improvements in convergence rates with respect to the
number of ensemble members. We also introduce a new resampling strategy,
multinomial transformation (MT), which, while not as accurate as the ensemble
transport resampler, is substantially less costly for large ensemble sizes, and
can then be used in conjunction with ETAIS for complex problems. We further
show how algorithmic parameters governing the mixture proposal can be quickly
tuned to optimise performance. In particular, we demonstrate this methodology's
superior sampling for multimodal problems, such as those arising from inference
for mixture models, and for problems with expensive likelihoods requiring the
solution of a differential equation, for which speed-ups of orders of magnitude
are demonstrated. Likelihood evaluations of the ensemble could be computed in a
distributed manner, suggesting that this methodology is a good candidate for
parallel Bayesian computations.
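The iteration described above (ensemble proposal, importance weighting, resampling) can be sketched as follows. Plain multinomial resampling stands in for the paper's ensemble transport (or MT) resampler, the kernel draws are unstratified, and the toy posterior and all parameters are assumptions for illustration only.

```python
import math
import random

random.seed(2)

def log_post(x):
    # Unnormalized log-posterior: standard normal toy problem.
    return -0.5 * x * x

def etais_like(n_particles=150, n_iters=40, kernel_sd=1.0):
    # Initial ensemble drawn from a deliberately wide distribution.
    ensemble = [random.gauss(0.0, 3.0) for _ in range(n_particles)]
    for _ in range(n_iters):
        # Importance proposal: mixture of Gaussian kernels centred on the
        # current particles (simple i.i.d. draws; the paper stratifies).
        draws = [random.gauss(random.choice(ensemble), kernel_sd)
                 for _ in range(n_particles)]

        def log_q(z):
            dens = sum(math.exp(-0.5 * ((z - m) / kernel_sd) ** 2)
                       for m in ensemble)
            return math.log(dens / (n_particles * kernel_sd))

        # Importance weights under the posterior, stabilised in log space.
        logw = [log_post(z) - log_q(z) for z in draws]
        mx = max(logw)
        w = [math.exp(lw - mx) for lw in logw]
        total = sum(w)
        # Resample back to an evenly weighted ensemble; multinomial
        # resampling replaces the ensemble transport step here.
        ensemble = random.choices(draws, weights=[v / total for v in w],
                                  k=n_particles)
    return ensemble

ensemble = etais_like()
```

The posterior evaluations inside the weight computation are independent across particles, which is the structure that makes the method attractive for distributed likelihood evaluation.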
Improved Adaptive Rejection Metropolis Sampling Algorithms
Markov Chain Monte Carlo (MCMC) methods, such as the Metropolis-Hastings (MH)
algorithm, are widely used for Bayesian inference. One of the most important
issues for any MCMC method is the convergence of the Markov chain, which
depends crucially on a suitable choice of the proposal density. Adaptive
Rejection Metropolis Sampling (ARMS) is a well-known MH scheme that generates
samples from one-dimensional target densities making use of adaptive piecewise
proposals constructed using support points taken from rejected samples. In this
work we pinpoint a crucial drawback in the adaptive procedure in ARMS: support
points might never be added inside regions where the proposal is below the
target. When this happens in many regions it leads to a poor performance of
ARMS, with the proposal never converging to the target. In order to overcome
this limitation we propose two improved adaptive schemes for constructing the
proposal. The first one is a direct modification of the ARMS procedure that
incorporates support points inside regions where the proposal is below the
target, while satisfying the diminishing adaptation property, one of the
required conditions to assure the convergence of the Markov chain. The second
one is an adaptive independent MH algorithm with the ability to learn from all
previous samples except for the current state of the chain, thus also
guaranteeing convergence to the invariant density. These two new schemes
improve the adaptive strategy of ARMS and simplify the construction of the
proposals. Numerical results show that the new techniques
provide better performance than the standard ARMS.
Comment: Matlab code provided at http://a2rms.sourceforge.net
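The support-point issue and its remedy can be illustrated with a toy sampler. Here a crude piecewise-constant proposal replaces the ARMS piecewise construction, and a support point is added wherever the proposal falls below the target, with probability shrinking as the fit improves, mimicking diminishing adaptation. The target, the bounded support, and the addition rule are illustrative simplifications, not the paper's algorithms (for which the linked Matlab code is the reference).

```python
import math
import random

random.seed(3)

def target(x):
    # Unnormalized one-dimensional target (standard normal).
    return math.exp(-0.5 * x * x)

def build_proposal(support, lo=-6.0, hi=6.0):
    # Piecewise-constant proposal through the support points: each interval
    # takes the larger of the target values at its endpoints. A crude
    # stand-in for the adaptive piecewise construction in ARMS.
    pts = sorted(set(support + [lo, hi]))
    heights = [max(target(a), target(b)) for a, b in zip(pts, pts[1:])]
    return pts, heights

def sample_proposal(pts, heights):
    areas = [h * (b - a) for h, a, b in zip(heights, pts, pts[1:])]
    i = random.choices(range(len(areas)), weights=areas)[0]
    return random.uniform(pts[i], pts[i + 1])

def proposal_pdf(x, pts, heights):
    # Unnormalized; the normalizing constant cancels in the MH ratio.
    for a, b, h in zip(pts, pts[1:], heights):
        if a <= x <= b:
            return h
    return min(heights)

def a2rms_like(n_iters):
    support = [-2.0, 2.0]
    x, chain = 0.0, []
    for _ in range(n_iters):
        pts, heights = build_proposal(support)
        y = sample_proposal(pts, heights)
        qy = proposal_pdf(y, pts, heights)
        qx = proposal_pdf(x, pts, heights)
        alpha = min(1.0, (target(y) * qx) / (target(x) * qy))
        if random.random() < alpha:
            x = y
        # The abstract's key fix: also add support points where the
        # proposal lies BELOW the target, with probability that vanishes
        # as the proposal approaches the target (diminishing adaptation).
        if qy < target(y) and random.random() < 1.0 - qy / target(y):
            support.append(y)
        chain.append(x)
    return chain

chain = a2rms_like(4000)
```

Without the extra rule, no point would ever be placed inside the interval around the mode where the piecewise proposal underestimates the target, which is exactly the pathology the abstract pinpoints.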