Control Variates for Reversible MCMC Samplers
A general methodology is introduced for the construction and effective
application of control variates to estimation problems involving data from
reversible MCMC samplers. We propose the use of a specific class of functions
as control variates, and we introduce a new, consistent estimator for the
values of the coefficients of the optimal linear combination of these
functions. The form and proposed construction of the control variates is
derived from our solution of the Poisson equation associated with a specific
MCMC scenario. The new estimator, which can be applied to the same MCMC sample,
is derived from a novel, finite-dimensional, explicit representation for the
optimal coefficients. The resulting variance-reduction methodology is primarily
applicable when the simulated data are generated by a conjugate random-scan
Gibbs sampler. MCMC examples of Bayesian inference problems demonstrate that
the corresponding reduction in the estimation variance is significant, and that
in some cases it can be quite dramatic. Extensions of this methodology in
several directions are given, including certain families of Metropolis-Hastings
samplers and hybrid Metropolis-within-Gibbs algorithms. Corresponding
simulation examples are presented illustrating the utility of the proposed
methods. All methodological and asymptotic arguments are rigorously justified
under easily verifiable and essentially minimal conditions.
Comment: 44 pages; 6 figures; 5 tables.
Density estimators through Zero Variance Markov Chain Monte Carlo
A Markov Chain Monte Carlo method is proposed for the pointwise evaluation of a density whose normalizing constant is not known. This method was introduced in the physics literature by Assaraf et al. (2007). Conditions for unbiasedness of the estimator are derived. A central limit theorem is also proved under regularity conditions. The new idea is tested on some toy examples.
Keywords: Density estimator; Fundamental solution; MCMC simulation
A Comparative Review of Dimension Reduction Methods in Approximate Bayesian Computation
Approximate Bayesian computation (ABC) methods make use of comparisons
between simulated and observed summary statistics to overcome the problem of
computationally intractable likelihood functions. As the practical
implementation of ABC requires computations based on vectors of summary
statistics, rather than full data sets, a central question is how to derive
low-dimensional summary statistics from the observed data with minimal loss of
information. In this article we provide a comprehensive review and comparison
of the performance of the principal methods of dimension reduction proposed in
the ABC literature. The methods are split into three classes, which are not
mutually exclusive: best subset selection methods, projection techniques and
regularization. In addition, we introduce two new methods of dimension
reduction. The first is a best subset selection method based on Akaike and
Bayesian information criteria, and the second uses ridge regression as a
regularization procedure. We illustrate the performance of these dimension
reduction techniques through the analysis of three challenging models and data
sets.
Comment: Published in Statistical Science (http://www.imstat.org/sts/) at http://dx.doi.org/10.1214/12-STS406 by the Institute of Mathematical Statistics (http://www.imstat.org).
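In the spirit of the second new method above (ridge regression as a regularization step), here is a sketch of projection-based dimension reduction for rejection ABC. The model (y_i ~ N(theta, 1)), the raw summaries (mean and median), the ridge penalty lam = 0.1, and the tolerance 0.1 are all illustrative choices, not the article's: a pilot set of simulations fits a ridge regression of the parameter on the raw summaries, and the fitted linear combination then serves as a one-dimensional summary.

```python
import random
import statistics

random.seed(2)

# Model: y_1..y_m ~ N(theta, 1), prior theta ~ U(-3, 3).
m = 50

def summaries(y):
    return (statistics.fmean(y), statistics.median(y))

def simulate(theta):
    return [random.gauss(theta, 1.0) for _ in range(m)]

# Pilot simulations, used to fit theta ~ b1*s1 + b2*s2 by ridge regression.
pilot = [(th, summaries(simulate(th)))
         for th in (random.uniform(-3, 3) for _ in range(500))]

lam = 0.1  # ridge penalty (hypothetical choice)
th_bar = statistics.fmean(t for t, _ in pilot)
s1_bar = statistics.fmean(s[0] for _, s in pilot)
s2_bar = statistics.fmean(s[1] for _, s in pilot)
# Solve the centered 2x2 ridge system (S'S + lam*I) b = S'theta analytically.
a11 = sum((s[0] - s1_bar) ** 2 for _, s in pilot) + lam
a22 = sum((s[1] - s2_bar) ** 2 for _, s in pilot) + lam
a12 = sum((s[0] - s1_bar) * (s[1] - s2_bar) for _, s in pilot)
c1 = sum((s[0] - s1_bar) * (t - th_bar) for t, s in pilot)
c2 = sum((s[1] - s2_bar) * (t - th_bar) for t, s in pilot)
det = a11 * a22 - a12 * a12
b1 = (a22 * c1 - a12 * c2) / det
b2 = (a11 * c2 - a12 * c1) / det

def proj(s):  # fitted linear combination = one-dimensional summary
    return b1 * (s[0] - s1_bar) + b2 * (s[1] - s2_bar) + th_bar

# Rejection ABC on the projected summary.
y_obs = simulate(1.0)
p_obs = proj(summaries(y_obs))
accepted = []
while len(accepted) < 200:
    th = random.uniform(-3, 3)
    if abs(proj(summaries(simulate(th))) - p_obs) < 0.1:
        accepted.append(th)

print(statistics.fmean(accepted))  # posterior mean, near the true theta = 1
```

Comparing distances on the projected scalar rather than the raw summary vector is what makes the tolerance meaningful in a single dimension.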
Sequential Quasi-Monte Carlo
We derive and study SQMC (Sequential Quasi-Monte Carlo), a class of
algorithms obtained by introducing QMC point sets in particle filtering. SQMC
is related to, and may be seen as an extension of, the array-RQMC algorithm of
L'Ecuyer et al. (2006). The complexity of SQMC is O(N log N), where N is
the number of simulations at each iteration, and its error rate is smaller than
the Monte Carlo rate O_P(N^{-1/2}). The only requirement to implement SQMC is
the ability to write the simulation of particle x_t^n given x_{t-1}^n as a
deterministic function of x_{t-1}^n and a fixed number of uniform variates.
We show that SQMC is amenable to the same extensions as standard SMC, such as
forward smoothing, backward smoothing, unbiased likelihood evaluation, and so
on. In particular, SQMC may replace SMC within a PMCMC (particle Markov chain
Monte Carlo) algorithm. We establish several convergence results. We provide
numerical evidence that SQMC may significantly outperform SMC in practical
scenarios.
Comment: 55 pages, 10 figures (final version).
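The "only requirement" mentioned in the abstract, writing the simulation as a deterministic function of uniform variates, is what lets QMC points replace iid uniforms. The sketch below illustrates just that ingredient, not SQMC itself (which additionally orders particles, e.g. along a Hilbert curve, at the resampling step): a one-dimensional "simulation" via the inverse normal CDF is driven once by a van der Corput point set and once by iid uniforms, to estimate E[X^2] = 1 for X ~ N(0, 1).

```python
import random
from statistics import NormalDist

random.seed(4)

def van_der_corput(i, base=2):
    """i-th point of the base-b van der Corput low-discrepancy sequence."""
    u, denom = 0.0, 1.0
    while i > 0:
        i, rem = divmod(i, base)
        denom *= base
        u += rem / denom
    return u

# The simulation as a deterministic function of a uniform: x = inv_cdf(u).
inv = NormalDist().inv_cdf
n = 4095
qmc = sum(inv(van_der_corput(i)) ** 2 for i in range(1, n + 1)) / n
mc = sum(inv(random.random()) ** 2 for _ in range(n)) / n
print(qmc, mc)  # both near 1; the QMC error is typically far smaller
```

The same substitution is what SQMC performs inside each particle-filter step, where the uniforms driving propagation and resampling are replaced by a (randomized) QMC point set.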
Markov Chain Monte Carlo Technology
In the past fifteen years computational statistics has been enriched by a powerful, somewhat abstract method of generating variates from a target probability distribution that is based on Markov chains whose stationary distribution is the probability distribution of interest. This class of methods, popularly referred to as Markov chain Monte Carlo methods, or simply MCMC methods, has been influential in the modern practice of Bayesian statistics, where these methods are used to summarize the posterior distributions that arise in the context of the Bayesian prior-posterior analysis (Tanner and Wong, 1987; Gelfand and Smith, 1990; Smith and Roberts, 1993; Tierney, 1994; Besag et al., 1995; Chib and Greenberg, 1995, 1996; Gilks et al., 1996; Tanner, 1996; Gamerman, 1997; Robert and Casella, 1999; Carlin and Louis, 2000; Chen et al., 2000; Chib, 2001; Congdon, 2001; Liu, 2001; Robert, 2001; Gelman et al., 2003). MCMC methods have proved useful in practically all aspects of Bayesian inference, for example, in the context of prediction problems and in the computation of quantities, such as the marginal likelihood, that are used for comparing competing Bayesian models.
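A minimal random-walk Metropolis-Hastings sampler makes the idea concrete: the chain below targets a distribution known only up to its normalizing constant, and its stationary distribution is that target, so long-run averages approximate posterior expectations. The target (an unnormalized N(2, 1)) and the tuning choices are illustrative, not tied to any particular reference above.

```python
import math
import random

random.seed(3)

# Unnormalized log-density of the target: proportional to N(2, 1).
def log_target(x):
    return -0.5 * (x - 2.0) ** 2

x = 0.0
chain = []
for _ in range(20_000):
    prop = x + random.gauss(0.0, 1.0)  # symmetric random-walk proposal
    # Accept with probability min(1, target(prop) / target(x));
    # the unknown normalizing constant cancels in the ratio.
    if math.log(random.random()) < log_target(prop) - log_target(x):
        x = prop
    chain.append(x)

burned = chain[5_000:]          # discard burn-in
mean = sum(burned) / len(burned)
print(mean)                     # posterior mean, near 2.0
```

The acceptance ratio is the whole trick: because only a ratio of target densities appears, the sampler never needs the normalizing constant, which is exactly why MCMC suits Bayesian posteriors.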