
    Markov Chain Monte Carlo confidence intervals

    For a reversible and ergodic Markov chain $\{X_n, n \ge 0\}$ with invariant distribution $\pi$, we show that a valid confidence interval for $\pi(h)$ can be constructed whenever the asymptotic variance $\sigma_P^2(h)$ is finite and positive. We do not impose any additional condition on the convergence rate of the Markov chain. The confidence interval is derived using the so-called fixed-b lag-window estimator of $\sigma_P^2(h)$. We also derive a result that suggests that the proposed confidence interval procedure converges faster than classical confidence interval procedures based on the Gaussian distribution and standard central limit theorems for Markov chains.
    Comment: Published at http://dx.doi.org/10.3150/15-BEJ712 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm)
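    As a rough illustration of the kind of estimator the abstract refers to, the sketch below computes a Bartlett lag-window estimate of the asymptotic variance from a single chain of evaluations h(X_1), ..., h(X_n); the kernel choice and the bandwidth fraction b are illustrative, and under fixed-b asymptotics the interval multiplier comes from a non-standard limiting distribution rather than the usual normal quantile.

```python
import numpy as np

def bartlett_lag_window_variance(h_vals, b=0.1):
    """Lag-window estimate of the asymptotic variance of the sample mean of h(X_t).

    h_vals : 1-D array of h(X_1), ..., h(X_n) from a single chain.
    b      : bandwidth fraction; the window covers the first floor(b * n) lags
             (in fixed-b asymptotics, b stays constant as n grows).
    """
    x = np.asarray(h_vals, dtype=float)
    n = x.size
    xc = x - x.mean()
    m = max(1, int(np.floor(b * n)))  # truncation point of the window
    # Sample autocovariances gamma_hat(k) for k = 0, ..., m
    gammas = np.array([np.dot(xc[: n - k], xc[k:]) / n for k in range(m + 1)])
    # Bartlett (triangular) weights w(k) = 1 - k / m
    weights = 1.0 - np.arange(m + 1) / m
    return gammas[0] + 2.0 * np.sum(weights[1:] * gammas[1:])

# A confidence interval for the mean would be h_bar +/- c * sqrt(estimate / n),
# where c is a fixed-b critical value (not 1.96) when b is held fixed.
```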

    Relative fixed-width stopping rules for Markov chain Monte Carlo simulations

    Markov chain Monte Carlo (MCMC) simulations are commonly employed for estimating features of a target distribution, particularly for Bayesian inference. A fundamental challenge is determining when these simulations should stop. We consider a sequential stopping rule that terminates the simulation when the width of a confidence interval is sufficiently small relative to the size of the target parameter. Specifically, we propose relative magnitude and relative standard deviation stopping rules in the context of MCMC. In each setting, we develop sufficient conditions for asymptotic validity, that is, conditions ensuring the simulation will terminate with probability one and the resulting confidence intervals will have the proper coverage probability. Our results are applicable in a wide variety of MCMC estimation settings, such as expectation, quantile, or simultaneous multivariate estimation. Finally, we investigate the finite-sample properties through a variety of examples and provide some recommendations to practitioners.
    Comment: 24 pages
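    A minimal sketch of a relative-magnitude stopping rule of the kind described: simulation proceeds in blocks and stops once the confidence-interval half-width is a small fraction of the absolute value of the running estimate. The block-drawing callable, the batch-means standard error, the tolerance, and the minimum sample size are all illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def batch_means_se(x):
    """Monte Carlo standard error of the sample mean via non-overlapping batch means."""
    x = np.asarray(x, dtype=float)
    n = x.size
    b = int(np.floor(np.sqrt(n)))          # batch size ~ sqrt(n)
    a = n // b                             # number of full batches
    means = x[: a * b].reshape(a, b).mean(axis=1)
    return np.sqrt(b * means.var(ddof=1) / n)

def run_until_relative_width(draw_block, eps=0.05, n_min=10_000, z=1.96):
    """Relative-magnitude rule: `draw_block(k)` is assumed to return the next k
    (correlated) draws of h(X) from the running chain."""
    x = list(draw_block(n_min))
    while True:
        est = float(np.mean(x))
        half_width = z * batch_means_se(x)
        if half_width <= eps * abs(est):
            return est, half_width, len(x)
        x.extend(draw_block(max(1, len(x) // 10)))  # grow the sample by ~10%
```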

    Markov Chain Monte Carlo: Can We Trust the Third Significant Figure?

    Current reporting of results based on Markov chain Monte Carlo computations could be improved. In particular, a measure of the accuracy of the resulting estimates is rarely reported, so we have little ability to objectively assess the quality of the reported estimates. We address this issue by discussing why Monte Carlo standard errors are important, how they can be easily calculated in Markov chain Monte Carlo, and how they can be used to decide when to stop the simulation. We compare their use to a popular alternative in the context of two examples.
    Comment: Published at http://dx.doi.org/10.1214/08-STS257 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org)
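    One simple way to act on the title's question is to report only the digits that the Monte Carlo standard error supports. The helper below is an illustrative reporting convention, not the paper's prescription; the MCSE itself could come from, for example, the batch-means sketch above.

```python
import math

def report_with_mcse(estimate, mcse):
    """Round the estimate (and its MCSE) to the decimal place of the
    MCSE's leading significant digit, so unreliable digits are not reported."""
    if mcse <= 0:
        return str(estimate)
    digits = -int(math.floor(math.log10(mcse)))  # decimal place of MCSE's first digit
    return f"{round(estimate, digits)} +/- {round(mcse, digits)}"

# report_with_mcse(2.718281, 0.004)  ->  "2.718 +/- 0.004"
```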

    Fixed-width output analysis for Markov chain Monte Carlo

    Markov chain Monte Carlo is a method of producing a correlated sample in order to estimate features of a target distribution via ergodic averages. A fundamental question is when sampling should stop; that is, when are the ergodic averages good estimates of the desired quantities? We consider a method that stops the simulation when the width of a confidence interval based on an ergodic average is less than a user-specified value. Hence calculating a Monte Carlo standard error is a critical step in assessing the simulation output. We consider the regenerative simulation and batch means methods of estimating the variance of the asymptotic normal distribution. We give sufficient conditions for the strong consistency of both methods and investigate their finite-sample properties in a variety of examples.
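    A compact sketch of the regenerative-simulation estimate mentioned above, assuming the regeneration times of the chain have already been identified (for example, returns to an atom); the bookkeeping over tours is illustrative.

```python
import numpy as np

def regenerative_mcse(h_vals, regen_starts):
    """Mean of h and its Monte Carlo standard error from regenerative tours.

    h_vals       : h(X_1), ..., h(X_n) for a chain started at a regeneration.
    regen_starts : sorted indices of regeneration times, with regen_starts[0] == 0.
    Only complete tours (between consecutive regenerations) are used.
    """
    x = np.asarray(h_vals, dtype=float)
    starts = np.asarray(regen_starts)
    tour_sums = np.array([x[s:e].sum() for s, e in zip(starts[:-1], starts[1:])])
    tour_lens = np.diff(starts).astype(float)
    n_used = tour_lens.sum()
    mu_hat = tour_sums.sum() / n_used
    # The tours are i.i.d., so the error of mu_hat is driven by Y_i - mu * tau_i
    mcse = np.sqrt(np.sum((tour_sums - mu_hat * tour_lens) ** 2)) / n_used
    return mu_hat, mcse
```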

    Batch means and spectral variance estimators in Markov chain Monte Carlo

    Calculating a Monte Carlo standard error (MCSE) is an important step in the statistical analysis of the simulation output obtained from a Markov chain Monte Carlo experiment. An MCSE is usually based on an estimate of the variance of the asymptotic normal distribution. We consider spectral and batch means methods for estimating this variance. In particular, we establish conditions which guarantee that these estimators are strongly consistent as the simulation effort increases. In addition, for the batch means and overlapping batch means methods we establish conditions ensuring consistency in the mean-square sense, which in turn allows us to calculate the optimal batch size up to a constant of proportionality. Finally, we examine the empirical finite-sample properties of spectral variance and batch means estimators and provide recommendations for practitioners.
    Comment: Published at http://dx.doi.org/10.1214/09-AOS735 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
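    For concreteness, an overlapping-batch-means estimate of the asymptotic variance; the n^{1/3} batch-size rate mirrors the mean-square-error argument alluded to above, but the proportionality constant used here is only a placeholder.

```python
import numpy as np

def obm_variance(x, batch_size=None):
    """Overlapping batch means estimate of the variance in the CLT for the
    sample mean of a stationary sequence x_1, ..., x_n."""
    x = np.asarray(x, dtype=float)
    n = x.size
    if batch_size is None:
        batch_size = max(2, int(round(n ** (1.0 / 3.0))))  # MSE-optimal rate is O(n^{1/3})
    b = batch_size
    xbar = x.mean()
    # Means of all n - b + 1 overlapping batches of length b, via cumulative sums
    csum = np.concatenate(([0.0], np.cumsum(x)))
    batch_means = (csum[b:] - csum[:-b]) / b
    k = n - b + 1
    return n * b * np.sum((batch_means - xbar) ** 2) / (k * (n - b))
```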

    Hastings-Metropolis algorithm on Markov chains for small-probability estimation

    Shielding studies in neutron transport with Monte Carlo codes yield challenging problems of small-probability estimation. The particularity of these studies is that the small probability to be estimated is formulated in terms of the distribution of a Markov chain, rather than of a random vector as in more classical cases. Thus, it is not straightforward to adapt classical statistical methods for estimating small probabilities involving random vectors to these neutron-transport problems. A recent interacting-particle method for small-probability estimation, relying on the Hastings-Metropolis algorithm, is presented. It is shown how to adapt the Hastings-Metropolis algorithm when dealing with Markov chains, and a convergence result is also shown. The practical implementation of the resulting method for small-probability estimation is then treated in detail for a Monte Carlo shielding study. Finally, it is shown for this study that the proposed interacting-particle method considerably outperforms a simple Monte Carlo method when the probability to be estimated is small.
    Comment: 33 pages
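    For background, a bare-bones random-walk Hastings-Metropolis sampler for a target density known up to a normalizing constant. This is only the classical vector-state algorithm the paper builds on, not its Markov-chain/interacting-particle adaptation; the target in the usage comment and the proposal scale are placeholders.

```python
import numpy as np

def random_walk_metropolis(log_density, x0, n_steps, step_size=1.0, rng=None):
    """Random-walk Metropolis-Hastings: propose x' = x + step_size * N(0, I) and
    accept with probability min(1, pi(x') / pi(x))."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    logp = log_density(x)
    chain = np.empty((n_steps, x.size))
    for t in range(n_steps):
        proposal = x + step_size * rng.standard_normal(x.size)
        logp_prop = log_density(proposal)
        # Symmetric proposal, so the Hastings ratio reduces to pi(x') / pi(x)
        if np.log(rng.random()) < logp_prop - logp:
            x, logp = proposal, logp_prop
        chain[t] = x
    return chain

# Placeholder target: a standard normal in two dimensions.
# draws = random_walk_metropolis(lambda v: -0.5 * np.dot(v, v), np.zeros(2), 5000, step_size=0.8)
```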

    Regenerative Simulation for Queueing Networks with Exponential or Heavier Tail Arrival Distributions

    Multiclass open queueing networks find wide applications in communication, computer and fabrication networks. Often one is interested in steady-state performance measures associated with these networks. Conceptually, under mild conditions, a regenerative structure exists in multiclass networks, making them amenable to regenerative simulation for estimating steady-state performance measures. However, identification of a regenerative structure in these networks is typically difficult. A well-known exception is when all the interarrival times are exponentially distributed, in which case the instants corresponding to customer arrivals to an empty network constitute a regenerative structure. In this paper, we consider networks where the interarrival times are generally distributed but have exponential or heavier tails. We show that these distributions can be decomposed into a mixture of sums of independent random variables such that at least one of the components is exponentially distributed. This allows an easily implementable embedded regenerative structure in the Markov process. We show that, under mild conditions on the network primitives, the regenerative mean and standard deviation estimators are consistent and satisfy a joint central limit theorem useful for constructing asymptotically valid confidence intervals. We also show that, among all such interarrival time decompositions, the one with the largest mean exponential component minimizes the asymptotic variance of the standard deviation estimator.
    Comment: A preliminary version of this paper will appear in Proceedings of the Winter Simulation Conference, Washington, DC, 201
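    To make the regenerative structure mentioned for the exponential-arrival case concrete, here is a toy single-queue (M/M/1) waiting-time simulation via the Lindley recursion, regenerating at customers who arrive to an empty system. This is only an illustration of that idea, not the paper's multiclass-network construction; it reuses the regenerative_mcse sketch given earlier, and the arrival and service rates are arbitrary.

```python
import numpy as np

def mm1_waiting_times(lam=0.5, mu=1.0, n_customers=200_000, rng=None):
    """Lindley recursion W_{k+1} = max(0, W_k + S_k - A_{k+1}) for an M/M/1 queue.
    A customer with W_k == 0 arrives to an empty system, giving a regeneration."""
    rng = np.random.default_rng(1) if rng is None else rng
    service = rng.exponential(1.0 / mu, n_customers)
    interarrival = rng.exponential(1.0 / lam, n_customers)
    w = np.empty(n_customers)
    w[0] = 0.0
    for k in range(n_customers - 1):
        w[k + 1] = max(0.0, w[k] + service[k] - interarrival[k + 1])
    regen_starts = np.flatnonzero(w == 0.0)  # indices of arrivals to an empty system
    return w, regen_starts

# w, starts = mm1_waiting_times()
# est, se = regenerative_mcse(w, starts)  # mean wait in queue; lam/(mu*(mu-lam)) = 1.0 here
```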
