Control Variates for Reversible MCMC Samplers
A general methodology is introduced for the construction and effective
application of control variates to estimation problems involving data from
reversible MCMC samplers. We propose the use of a specific class of functions
as control variates, and we introduce a new, consistent estimator for the
values of the coefficients of the optimal linear combination of these
functions. The form and proposed construction of the control variates is
derived from our solution of the Poisson equation associated with a specific
MCMC scenario. The new estimator, which can be applied to the same MCMC sample,
is derived from a novel, finite-dimensional, explicit representation for the
optimal coefficients. The resulting variance-reduction methodology is primarily
applicable when the simulated data are generated by a conjugate random-scan
Gibbs sampler. MCMC examples of Bayesian inference problems demonstrate that
the corresponding reduction in the estimation variance is significant, and that
in some cases it can be quite dramatic. Extensions of this methodology in
several directions are given, including certain families of Metropolis-Hastings
samplers and hybrid Metropolis-within-Gibbs algorithms. Corresponding
simulation examples are presented illustrating the utility of the proposed
methods. All methodological and asymptotic arguments are rigorously justified
under easily verifiable and essentially minimal conditions.

Comment: 44 pages; 6 figures; 5 tables
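The core variance-reduction mechanism described above can be sketched in a few lines. The following is a generic i.i.d. illustration of a control-variate estimator with a sample-estimated optimal coefficient; it is not the paper's Poisson-equation construction for reversible MCMC samplers, and the integrand and control function below are assumptions chosen only for illustration.

```python
import numpy as np

# Generic control-variate sketch (assumed example, not the paper's method):
# estimate E[exp(X)] for X ~ N(0, 1), whose true value is exp(1/2),
# using g(X) = X as a control variate with known mean E[g] = 0.
rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)

f = np.exp(x)  # quantity of interest
g = x          # control variate, known mean 0

# Estimate the optimal linear coefficient from the SAME sample
# (cf. the paper's consistent estimator for the optimal coefficients).
theta = np.cov(f, g)[0, 1] / np.var(g)

crude = f.mean()
cv = (f - theta * g).mean()  # control-variate estimator
```

The adjusted estimator has the same expectation as the crude one but strictly smaller variance whenever f and g are correlated, which is the effect the abstract's methodology exploits at much greater generality.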
Robust adaptive importance sampling for normal random vectors
Adaptive Monte Carlo methods are very efficient techniques designed to tune
simulation estimators on-line. In this work, we present an alternative to
stochastic approximation to tune the optimal change of measure in the context
of importance sampling for normal random vectors. Unlike stochastic
approximation, which requires very fine tuning in practice, we propose to use
sample average approximation and deterministic optimization techniques to
devise a robust and fully automatic variance reduction methodology. The same
samples are used in the sample optimization of the importance sampling
parameter and in the Monte Carlo computation of the expectation of interest
with the optimal measure computed in the previous step. We prove that this
highly dependent Monte Carlo estimator is convergent and satisfies a central
limit theorem with the optimal limiting variance. Numerical experiments confirm
the performance of this estimator: in comparison with the crude Monte Carlo
method, the computation time needed to achieve a given precision is divided by
a factor between 3 and 15.

Comment: Published at http://dx.doi.org/10.1214/09-AAP595 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org)
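The two-step scheme described above (optimize the change of measure by sample average approximation, then reuse the same draws in the importance-sampling estimator) can be sketched as follows. The payoff and sample size are assumptions for illustration; the paper's analysis of the dependent estimator and its central limit theorem is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of sample-average-approximation tuning of a normal mean shift.
# For G ~ N(0, I_d), the second moment of the shifted IS estimator is
#   v(theta) = E[h(G)^2 * exp(-theta.G + |theta|^2 / 2)],
# which is convex in theta, so its empirical version can be minimized
# by deterministic optimization.
rng = np.random.default_rng(0)
d, n = 1, 50_000
G = rng.standard_normal((n, d))

def h(x):
    # Assumed example payoff: indicator of a moderately rare event.
    return (x.sum(axis=1) > 2.0).astype(float)

h2 = h(G) ** 2

def v_hat(theta):
    # Empirical second moment of the IS estimator under drift theta.
    return np.mean(h2 * np.exp(-G @ theta + 0.5 * theta @ theta))

theta_star = minimize(v_hat, np.zeros(d)).x

# Importance-sampling estimator of E[h(G)] reusing the SAME sample,
# shifted by the optimized drift (the "highly dependent" estimator).
weights = np.exp(-G @ theta_star - 0.5 * theta_star @ theta_star)
is_est = np.mean(h(G + theta_star) * weights)
```

Reusing the optimization sample in the final estimate is exactly what makes the estimator dependent; the abstract's point is that consistency and the optimal-variance CLT nevertheless hold.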