Importance Tempering
Simulated tempering (ST) is an established Markov chain Monte Carlo (MCMC)
method for sampling from a multimodal density π. Typically, ST
involves introducing an auxiliary variable k taking values in a finite subset
of (0, 1] and indexing a set of tempered distributions, say π_k ∝ π^k. In this
case, small values of k encourage better mixing, but samples from π are only
obtained when the joint chain for (θ, k) reaches k = 1. However, the entire
chain can be used to estimate expectations under π of functions of interest, provided that importance
sampling (IS) weights are calculated. Unfortunately this method, which we call
importance tempering (IT), can disappoint. This is partly because the most
immediately obvious implementation is naïve and can lead to high variance
estimators. We derive a new optimal method for combining multiple IS estimators
and prove that the resulting estimator has a highly desirable property related
to the notion of effective sample size. We briefly report on the success of the
optimal combination in two modelling scenarios requiring reversible-jump MCMC,
where the naïve approach fails.
Comment: 16 pages, 2 tables, significantly shortened from version 4 in response to referee comments, to appear in Statistics and Computing
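As an illustration of the importance-tempering idea, here is a minimal Python sketch that forms one self-normalized IS estimator per temperature and combines them with weights proportional to each estimator's effective sample size. This ESS-proportional combination is a common heuristic, not necessarily the optimal combination derived in the paper, and all function names are hypothetical:

```python
import numpy as np

def ess(w):
    """Kish effective sample size of a vector of importance weights."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

def it_estimate(samples_by_temp, log_target, temps, h):
    """Combine per-temperature self-normalized IS estimators of E_pi[h].
    Assumes tempered targets pi_k proportional to pi^k, so the IS weight
    for a draw theta from pi_k is pi(theta)^(1-k), up to a constant."""
    estimates, sizes = [], []
    for k, theta in zip(temps, samples_by_temp):
        logw = (1.0 - k) * log_target(theta)   # log pi^(1-k), unnormalized
        w = np.exp(logw - logw.max())          # stabilize before exponentiating
        estimates.append(np.sum(w * h(theta)) / np.sum(w))
        sizes.append(ess(w))
    lam = np.array(sizes) / np.sum(sizes)      # ESS-proportional combination
    return float(lam @ np.array(estimates))
```

Note that the naïve alternative of pooling all draws into one weighted average can be dominated by a few large weights from the coldest temperatures, which is the high-variance failure mode the abstract alludes to.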
Subsampling MCMC - An introduction for the survey statistician
The rapid development of computing power and efficient Markov Chain Monte
Carlo (MCMC) simulation algorithms have revolutionized Bayesian statistics,
making it a highly practical inference method in applied work. However, MCMC
algorithms tend to be computationally demanding, and are particularly slow for
large datasets. Data subsampling has recently been suggested as a way to make
MCMC methods scalable to massive datasets, utilizing efficient sampling
schemes and estimators from the survey sampling literature. These developments
tend to be unknown by many survey statisticians who traditionally work with
non-Bayesian methods, and rarely use MCMC. Our article explains the idea of
data subsampling in MCMC by reviewing one strand of work, Subsampling MCMC, a
so-called pseudo-marginal MCMC approach to speeding up MCMC through data
subsampling. The review is written for a survey statistician without previous
knowledge of MCMC methods since our aim is to motivate survey sampling experts
to contribute to the growing Subsampling MCMC literature.
Comment: Accepted for publication in Sankhya A. A previously uploaded version contained a bug in generating the figures and references
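The survey-sampling connection can be made concrete with a small sketch: the expansion (Horvitz-Thompson) estimator of the full-data log-likelihood from a simple random sample drawn without replacement. The function and variable names are illustrative, not from the reviewed papers:

```python
import numpy as np

def subsample_loglik(loglik_terms_fn, theta, n, m, rng):
    """Estimate sum_{i=1}^n log p(y_i | theta) from a simple random
    sample of m out of n observations, scaling by n/m (the expansion
    estimator from survey sampling). The estimate is unbiased for the
    full-data log-likelihood; its variance shrinks as the per-observation
    terms become more homogeneous, which is why control variates are
    often subtracted first."""
    idx = rng.choice(n, size=m, replace=False)   # simple random sample
    return (n / m) * np.sum(loglik_terms_fn(theta, idx))
```

Plugging such an estimator into an MH acceptance ratio perturbs the invariant distribution, which is why the pseudo-marginal machinery reviewed in the article is needed to control or correct the resulting error.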
Speeding Up MCMC by Delayed Acceptance and Data Subsampling
The complexity of the Metropolis-Hastings (MH) algorithm arises from the
requirement of a likelihood evaluation for the full data set in each iteration.
Payne and Mallick (2015) propose to speed up the algorithm by a delayed
acceptance approach where the acceptance decision proceeds in two stages. In
the first stage, an estimate of the likelihood based on a random subsample
determines if it is likely that the draw will be accepted and, if so, the
second stage uses the full data likelihood to decide upon final acceptance.
Evaluating the full data likelihood is thus avoided for draws that are unlikely
to be accepted. We propose a more precise likelihood estimator which
incorporates auxiliary information about the full data likelihood while only
operating on a sparse set of the data. We prove that the resulting delayed
acceptance MH is more efficient compared to that of Payne and Mallick (2015).
The caveat of this approach is that the full data set needs to be evaluated in
the second stage. We therefore propose to substitute this evaluation by an
estimate and construct a state-dependent approximation thereof to use in the
first stage. This results in an algorithm that (i) can use a smaller subsample
m by leveraging recent advances in Pseudo-Marginal MH (PMMH) and (ii) is
provably within O(m^{-2}) of the true posterior.
Comment: Accepted for publication in Journal of Computational and Graphical Statistics
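The two-stage accept/reject logic described above can be sketched as follows, assuming a symmetric proposal and a generic cheap surrogate log-posterior rather than the paper's specific subsample estimators:

```python
import numpy as np

def delayed_acceptance_mh(log_post_cheap, log_post_full, x0, propose,
                          n_iter, rng):
    """Two-stage Metropolis-Hastings with a symmetric proposal.
    Stage 1 screens with a cheap surrogate log-posterior; only proposals
    surviving the screen pay for a full-data evaluation in stage 2.
    The stage-2 ratio divides out the surrogate ratio, so the chain
    still targets the full posterior."""
    x = x0
    chain = []
    for _ in range(n_iter):
        y = propose(x, rng)
        d_cheap = log_post_cheap(y) - log_post_cheap(x)
        # Stage 1: cheap screen.
        if np.log(rng.uniform()) < min(0.0, d_cheap):
            # Stage 2: full evaluation, corrected for the stage-1 screen.
            d_full = log_post_full(y) - log_post_full(x)
            if np.log(rng.uniform()) < min(0.0, d_full - d_cheap):
                x = y
        chain.append(x)
    return np.array(chain)
```

Draws that fail the cheap screen never trigger a full-data evaluation, which is where the savings come from; a practical implementation would also cache the log-posterior values at the current state rather than recompute them.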
Using parallel computation to improve Independent Metropolis--Hastings based estimation
In this paper, we consider the implications of the fact that raw parallel
computing power can be exploited by a generic Metropolis--Hastings algorithm if
the proposed values are independent. In particular, we present improvements to
the independent Metropolis--Hastings algorithm that significantly decrease the
variance of any estimator derived from the MCMC output, at no additional
computing cost since those improvements are based on a fixed number of target
density evaluations. Furthermore, the techniques developed in this paper do not
jeopardize the Markovian convergence properties of the algorithm, since they
are based on the Rao--Blackwell principles of Gelfand and Smith (1990), already
exploited in Casella and Robert (1996), Atchade and Perron (2005) and Douc and
Robert (2010). We illustrate those improvements both on a toy normal example
and on a classical probit regression model, but stress the fact that they are
applicable in any case where the independent Metropolis--Hastings algorithm is applicable.
Comment: 19 pages, 8 figures, to appear in Journal of Computational and Graphical Statistics
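A minimal illustration of the Rao-Blackwell conditioning idea for the independent sampler: alongside the usual ergodic average, integrate out the accept/reject uniform at each step, replacing h(X_{t+1}) by its conditional expectation given the current state and the proposal. This is only a simple instance of the principle; the cited schemes refine it considerably. All names here are illustrative:

```python
import numpy as np

def imh_rao_blackwell(log_target, log_prop, sample_prop, n_iter, rng, h):
    """Independent Metropolis-Hastings returning both the plain ergodic
    average of h and a simple Rao-Blackwellized estimate in which the
    accept/reject uniform is integrated out at each step:
    E[h(X_{t+1}) | x_t, y] = a*h(y) + (1-a)*h(x_t)."""
    x = sample_prop(rng)
    lw_x = log_target(x) - log_prop(x)        # log importance weight of x
    plain, rb = [], []
    for _ in range(n_iter):
        y = sample_prop(rng)                  # independent proposal
        lw_y = log_target(y) - log_prop(y)
        a = min(1.0, np.exp(lw_y - lw_x))     # acceptance probability
        rb.append(a * h(y) + (1.0 - a) * h(x))
        if rng.uniform() < a:
            x, lw_x = y, lw_y
        plain.append(h(x))
    return np.mean(plain), np.mean(rb)
```

Because the proposals are independent of the chain state, they can all be drawn and their target-density evaluations computed in parallel up front, which is the observation the paper builds on.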