Efficient shape-constrained inference for the autocovariance sequence from a reversible Markov chain
In this paper, we study the problem of estimating the autocovariance sequence
resulting from a reversible Markov chain. A motivating application for studying
this problem is the estimation of the asymptotic variance in central limit
theorems for Markov chains. We propose a novel shape-constrained estimator of
the autocovariance sequence, which is based on the key observation that the
representability of the autocovariance sequence as a moment sequence imposes
certain shape constraints. We examine the theoretical properties of the
proposed estimator and establish strong consistency guarantees.
In particular, for geometrically ergodic reversible Markov chains, we show that
our estimator is strongly consistent for the true autocovariance sequence with
respect to an L2 distance, and that our estimator leads to strongly
consistent estimates of the asymptotic variance. Finally, we perform empirical
studies to illustrate the theoretical properties of the proposed estimator as
well as to demonstrate the effectiveness of our estimator in comparison with
other current state-of-the-art methods for Markov chain Monte Carlo variance
estimation, including batch means, spectral variance estimators, and the
initial convex sequence estimator.
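The asymptotic-variance target described above, sigma^2 = gamma(0) + 2 * sum_{k>=1} gamma(k), can be illustrated with a much simpler estimator than the paper's shape-constrained one. The sketch below is not the proposed method: it computes empirical autocovariances and applies Geyer's initial positive sequence truncation, a simpler relative of the initial convex sequence estimator mentioned as a baseline. It assumes a scalar, reversible chain.

```python
import numpy as np

def autocovariances(x, max_lag):
    """Empirical autocovariances gamma_hat(0..max_lag) of a scalar chain."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    return np.array([np.dot(x[: n - k], x[k:]) / n for k in range(max_lag + 1)])

def initial_positive_sequence_var(x):
    """Asymptotic variance sigma^2 = gamma(0) + 2 * sum_{k>=1} gamma(k),
    truncated where the paired-lag sums Gamma_m = gamma(2m) + gamma(2m+1)
    first go nonpositive (Geyer's initial positive sequence rule)."""
    # Cap the number of lags purely to keep the computation cheap.
    gamma = autocovariances(x, max_lag=min(len(x) // 2, 2000))
    pair = gamma[0::2][: len(gamma[1::2])] + gamma[1::2]  # Gamma_0, Gamma_1, ...
    keep = []
    for g in pair:
        if g <= 0:
            break
        keep.append(g)
    # sigma^2 = -gamma(0) + 2 * sum of the retained paired sums
    return 2.0 * np.sum(keep) - gamma[0]
```

For an AR(1) chain x_t = phi * x_{t-1} + e_t with standard normal noise, the true asymptotic variance of the sample mean is (1 / (1 - phi^2)) * (1 + phi) / (1 - phi), which the estimate should approach as the chain grows.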
Relative fixed-width stopping rules for Markov chain Monte Carlo simulations
Markov chain Monte Carlo (MCMC) simulations are commonly employed for
estimating features of a target distribution, particularly for Bayesian
inference. A fundamental challenge is determining when these simulations should
stop. We consider a sequential stopping rule that terminates the simulation
when the width of a confidence interval is sufficiently small relative to the
size of the target parameter. Specifically, we propose relative magnitude and
relative standard deviation stopping rules in the context of MCMC. In each
setting, we develop sufficient conditions for asymptotic validity, that is,
conditions to ensure the simulation will terminate with probability one and the
resulting confidence intervals will have the proper coverage probability. Our
results are applicable in a wide variety of MCMC estimation settings, such as
expectation, quantile, or simultaneous multivariate estimation. Finally, we
investigate the finite sample properties through a variety of examples and
provide some recommendations to practitioners.
Comment: 24 pages
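A minimal sketch of a relative standard deviation stopping rule of the kind described above: keep simulating until the confidence-interval half-width for the mean, estimated here with a simple batch means Monte Carlo standard error, falls below a fraction eps of the sample standard deviation. The function draw(k), the batch schedule, and the defaults are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def sample_until_relative_sd(draw, eps=0.05, z=1.96,
                             min_n=1000, step=1000, max_n=10**6):
    """Relative standard deviation stopping rule (sketch).

    Stops once z * MCSE < eps * sample standard deviation, i.e. once the
    Monte Carlo error is small relative to the spread of the target.
    `draw(k)` is assumed to return the next k draws as a 1-D array.
    """
    x = draw(min_n)
    while len(x) < max_n:
        # Batch means MCSE with roughly sqrt(n) batches.
        n_batches = int(np.sqrt(len(x)))
        b = len(x) // n_batches
        means = x[: n_batches * b].reshape(n_batches, b).mean(axis=1)
        mcse = np.sqrt(b * means.var(ddof=1) / (n_batches * b))
        if z * mcse < eps * x.std(ddof=1):  # relative fixed-width criterion
            break
        x = np.concatenate([x, draw(step)])
    return x.mean(), len(x)
```

With i.i.d. standard normal draws the rule should terminate at roughly the sample size where 1.96 / sqrt(n) drops below eps.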
Control Variates for Reversible MCMC Samplers
A general methodology is introduced for the construction and effective
application of control variates to estimation problems involving data from
reversible MCMC samplers. We propose the use of a specific class of functions
as control variates, and we introduce a new, consistent estimator for the
values of the coefficients of the optimal linear combination of these
functions. The form and proposed construction of the control variates is
derived from our solution of the Poisson equation associated with a specific
MCMC scenario. The new estimator, which can be applied to the same MCMC sample,
is derived from a novel, finite-dimensional, explicit representation for the
optimal coefficients. The resulting variance-reduction methodology is primarily
applicable when the simulated data are generated by a conjugate random-scan
Gibbs sampler. MCMC examples of Bayesian inference problems demonstrate that
the corresponding reduction in the estimation variance is significant, and that
in some cases it can be quite dramatic. Extensions of this methodology in
several directions are given, including certain families of Metropolis-Hastings
samplers and hybrid Metropolis-within-Gibbs algorithms. Corresponding
simulation examples are presented illustrating the utility of the proposed
methods. All methodological and asymptotic arguments are rigorously justified
under easily verifiable and essentially minimal conditions.
Comment: 44 pages; 6 figures; 5 tables
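The coefficient estimation step can be illustrated with the textbook single-control-variate case: for a function g with known expectation, the variance-minimising coefficient of the linear combination is beta = Cov(f, g) / Var(g). The sketch below uses this generic construction only; the paper's control variates, derived from the Poisson equation, and its finite-dimensional coefficient estimator are specific to reversible MCMC and are not reproduced here.

```python
import numpy as np

def control_variate_estimate(f_vals, g_vals, g_mean):
    """Generic single control variate estimator (illustration only).

    Returns the variance-reduced estimate of E[f]:
        mean(f) - beta_hat * (mean(g) - g_mean),
    where beta_hat = Cov(f, g) / Var(g) is the plug-in estimate of the
    coefficient minimising the variance of the linear combination, and
    g_mean is the known expectation of the control variate g.
    """
    f = np.asarray(f_vals, dtype=float)
    g = np.asarray(g_vals, dtype=float)
    beta = np.cov(f, g, ddof=1)[0, 1] / g.var(ddof=1)
    return f.mean() - beta * (g.mean() - g_mean)
```

When f is strongly correlated with g, the residual variance after subtraction can be orders of magnitude below that of the plain sample mean, which is the effect the abstract describes.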
Batch means and spectral variance estimators in Markov chain Monte Carlo
Calculating a Monte Carlo standard error (MCSE) is an important step in the
statistical analysis of the simulation output obtained from a Markov chain
Monte Carlo experiment. An MCSE is usually based on an estimate of the variance
of the asymptotic normal distribution. We consider spectral and batch means
methods for estimating this variance. In particular, we establish conditions
which guarantee that these estimators are strongly consistent as the simulation
effort increases. In addition, for the batch means and overlapping batch means
methods we establish conditions ensuring consistency in the mean-square sense
which in turn allows us to calculate the optimal batch size up to a constant of
proportionality. Finally, we examine the empirical finite-sample properties of
spectral variance and batch means estimators and provide recommendations for
practitioners.
Comment: Published at http://dx.doi.org/10.1214/09-AOS735 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org)
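A minimal batch means sketch, assuming a scalar chain and the common batch-size choice b_n = floor(sqrt(n)) (the optimal size is proportional to n^{1/3}, as the abstract notes, but sqrt(n) is a standard default): the asymptotic variance is estimated by b_n times the sample variance of the batch means, and the MCSE is the square root of that estimate divided by n.

```python
import numpy as np

def batch_means_mcse(x, n_batches=None):
    """Batch means estimate of the Monte Carlo standard error of the mean.

    Splits the chain into equally sized batches, estimates the asymptotic
    variance sigma^2 by batch_size * var(batch means), and returns the
    pair (sample mean, MCSE = sqrt(sigma^2 / n)).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if n_batches is None:
        n_batches = int(np.sqrt(n))      # default: about sqrt(n) batches
    batch_size = n // n_batches
    x = x[: n_batches * batch_size]      # drop the remainder
    means = x.reshape(n_batches, batch_size).mean(axis=1)
    sigma2 = batch_size * means.var(ddof=1)
    return x.mean(), np.sqrt(sigma2 / (n_batches * batch_size))
```

For an i.i.d. standard normal sequence of length n the MCSE should be close to 1 / sqrt(n); for a positively correlated chain it will be correspondingly larger.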
Fixed-width output analysis for Markov chain Monte Carlo
Markov chain Monte Carlo is a method of producing a correlated sample in
order to estimate features of a target distribution via ergodic averages. A
fundamental question is: when should sampling stop? That is, when are the
ergodic averages good estimates of the desired quantities? We consider a method
that stops the simulation when the width of a confidence interval based on an
ergodic average is less than a user-specified value. Hence calculating a Monte
Carlo standard error is a critical step in assessing the simulation output. We
consider the regenerative simulation and batch means methods of estimating the
variance of the asymptotic normal distribution. We give sufficient conditions
for the strong consistency of both methods and investigate their finite sample
properties in a variety of examples.
Techniques for the Fast Simulation of Models of Highly Dependable Systems
With the ever-increasing complexity and requirements of highly dependable systems, their evaluation during design and operation is becoming more crucial. Realistic models of such systems are often not amenable to analysis using conventional analytic or numerical methods. Therefore, analysts and designers turn to simulation to evaluate these models. However, accurate estimation of the dependability measures of these models requires that the simulation frequently observes system failures, which are rare events in highly dependable systems. This renders ordinary simulation impractical for evaluating such systems. To overcome this problem, simulation techniques based on importance sampling have been developed, and are very effective in certain settings. When importance sampling works well, simulation run lengths can be reduced by several orders of magnitude when estimating transient as well as steady-state dependability measures. This paper reviews some of the importance-sampling techniques that have been developed in recent years to estimate dependability measures efficiently in Markov and non-Markov models of highly dependable systems.
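The idea can be illustrated on a toy rare event: estimating P(X > c) for X ~ N(0, 1) with large c. The sketch below uses exponential tilting, sampling from the shifted proposal N(c, 1) under which the event is common and reweighting by the likelihood ratio. The dependability models in the survey are far richer, but the mechanics of importance sampling are the same.

```python
import numpy as np

def importance_sample_tail(c, n, rng):
    """Estimate the rare-event probability P(X > c) for X ~ N(0, 1).

    Samples from the tilted proposal N(c, 1), where the event {y > c}
    occurs about half the time, and reweights each draw by the likelihood
    ratio phi(y) / phi(y - c) = exp(c**2 / 2 - c * y).
    """
    y = rng.normal(loc=c, scale=1.0, size=n)
    weights = np.exp(0.5 * c**2 - c * y)  # N(0,1) density / N(c,1) density
    return np.mean(weights * (y > c))
```

For c = 4 the true probability is about 3.17e-5; crude Monte Carlo with the same sample size would typically see zero or one such event, while the tilted estimator achieves a relative error of a few percent, which is the run-length reduction the survey describes.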
Markov Chain Monte Carlo: Can We Trust the Third Significant Figure?
Current reporting of results based on Markov chain Monte Carlo computations
could be improved. In particular, a measure of the accuracy of the resulting
estimates is rarely reported. Thus we have little ability to objectively assess
the quality of the reported estimates. We address this issue by discussing
why Monte Carlo standard errors are important, how they can be easily
calculated in Markov chain Monte Carlo, and how they can be used to decide
when to stop the simulation. We compare their use to a popular alternative
in the
context of two examples.
Comment: Published at http://dx.doi.org/10.1214/08-STS257 in Statistical
Science (http://www.imstat.org/sts/) by the Institute of Mathematical
Statistics (http://www.imstat.org)