2,442 research outputs found
A Randomized Quasi-Monte Carlo Simulation Method for Markov Chains
We introduce and study a randomized quasi-Monte Carlo method for estimating the state distribution at each step of a Markov chain, under the assumption that the chain has a totally ordered (discrete or continuous) state space. The number of steps in the chain can be random and unbounded. The method simulates n copies of the chain in parallel, using a (d+1)-dimensional low-discrepancy point set of cardinality n, randomized independently at each step, where d is the number of uniform random numbers required at each transition of the Markov chain. This technique is effective in particular for obtaining a low-variance unbiased estimator of the expected total cost up to some random stopping time, when state-dependent costs are paid at each step. We provide numerical illustrations where the variance reduction with respect to standard Monte Carlo is substantial; the variance is reduced by factors of several thousands in some cases. We prove bounds on the convergence rate of the worst-case error and variance for special situations. In line with what is typically observed in RQMC contexts, our empirical results indicate much better convergence than what these bounds guarantee.
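The mechanics described above can be sketched on a toy chain. The sketch below is illustrative only, not the authors' implementation: it advances n copies of a one-dimensional chain X_{k+1} = X_k + U_k (our own choice of chain), reorders the copies by state after each step, and drives each step with a randomly shifted regular grid, one of the simplest one-dimensional RQMC point sets.

```python
import random

def array_rqmc_mean(n=256, steps=10, seed=1):
    """Array-RQMC sketch: n chain copies, sorted by state after each step,
    driven by a freshly shifted regular grid in [0, 1) at every step."""
    rng = random.Random(seed)
    states = [0.0] * n                      # X_0 = 0 for every copy
    for _ in range(steps):
        states.sort()                       # reorder chains by their states
        shift = rng.random()                # independent random shift per step
        # after sorting, chain i is driven by grid point (i/n + shift) mod 1
        states = [x + ((i / n + shift) % 1.0)
                  for i, x in enumerate(states)]
    return sum(states) / n                  # unbiased estimate of E[X_steps]
```

For this additive chain E[X_10] = 5, and because each step's shifted grid has a nearly perfect empirical distribution, the estimate lands within roughly steps/(2n) of the truth; sorting matters for more interesting chains, where the point a copy receives depends on its rank.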
Variance Reduction with Array-RQMC for Tau-Leaping Simulation of Stochastic Biological and Chemical Reaction Networks
We explore the use of Array-RQMC, a randomized quasi-Monte Carlo method designed for the simulation of Markov chains, to reduce the variance when simulating stochastic biological or chemical reaction networks with τ-leaping. The task is to estimate the expectation of a function of molecule copy numbers at a given future time by the sample average over n sample paths, and the goal is to reduce the variance of this sample-average estimator. We find that when the method is properly applied, variance reductions by factors in the thousands can be obtained. These factors are much larger than those observed previously by other authors who tried RQMC methods for the same examples. Array-RQMC simulates an array of realizations of the Markov chain and requires a sorting function to reorder these chains according to their states after each step. The choice of sorting function is a key ingredient for the efficiency of the method, although in our experiments, Array-RQMC was never worse than ordinary Monte Carlo, regardless of the sorting method. The expected number of reactions of each type per step also has an impact on the efficiency gain.
Funding: ERDF, ESF, EXP. 2019/00432, Canada Research Chair, IVADO Research Grant, NSERC Discovery Grant RGPIN-110050
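To make the τ-leaping setting concrete, here is a plain Monte Carlo sketch for the simplest possible network, a pure decay reaction X → ∅ with propensity c·X (our own toy example, not one of the paper's networks, and without the Array-RQMC layer): each leap of length τ fires a Poisson(c·X·τ) number of decay events.

```python
import math
import random

def poisson(rng, lam):
    """Knuth's Poisson sampler; adequate for the small means used here."""
    if lam <= 0.0:
        return 0
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def tau_leap_decay(x0=1000, c=0.1, tau=0.1, steps=10, paths=5000, seed=7):
    """Plain-MC tau-leaping estimate of E[X_T] for the decay reaction
    X -> 0: each leap removes Poisson(c * X * tau) molecules, floored at 0."""
    rng = random.Random(seed)
    total = 0
    for _ in range(paths):
        x = x0
        for _ in range(steps):
            x = max(x - poisson(rng, c * x * tau), 0)
        total += x
    return total / paths
```

Ignoring the flooring, E[X_{k+1}] = (1 − cτ)E[X_k], so the estimate should sit near 1000·0.99¹⁰ ≈ 904.4; Array-RQMC would replace the IID uniforms feeding the Poisson draws across the array of paths with sorted, randomized point sets.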
Variance Reduction Techniques in Monte Carlo Methods
Monte Carlo methods are simulation algorithms for estimating a numerical quantity in a statistical model of a real system; these algorithms are executed by computer programs. Variance reduction techniques (VRT) remain necessary even though computer speed has been increasing dramatically ever since the introduction of computers. This increased computing power has stimulated simulation analysts to develop ever more realistic models, so the net result has not been faster execution of simulation experiments; e.g., some modern simulation models need hours or days for a single 'run' (one replication of one scenario, i.e., one combination of simulation input values). Moreover, some simulation models represent rare events with extremely small probabilities of occurrence, so even a modern computer would take 'forever' (centuries) to execute a single run, were it not that special VRT can reduce these excessively long runtimes to practical magnitudes.
Keywords: common random numbers; antithetic random numbers; importance sampling; control variates; conditioning; stratified sampling; splitting; quasi-Monte Carlo
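As a minimal illustration of one of these techniques, antithetic random numbers pair each uniform U with 1 − U; for a monotone integrand the two halves are negatively correlated, which lowers the variance of the pair average. The toy integrand E[e^U] = e − 1 below is our own choice, not an example from the text.

```python
import math
import random

def estimate_exp(n_pairs=10000, seed=42):
    """Compare plain MC and antithetic variates for E[exp(U)], U ~ U(0,1).
    Returns (antithetic estimate, per-pair variance plain, per-pair variance
    antithetic)."""
    rng = random.Random(seed)
    plain, anti = [], []
    for _ in range(n_pairs):
        u1, u2 = rng.random(), rng.random()
        plain.append((math.exp(u1) + math.exp(u2)) / 2)       # two IID draws
        anti.append((math.exp(u1) + math.exp(1.0 - u1)) / 2)  # antithetic pair
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return sum(anti) / n_pairs, var(plain), var(anti)
```

Both estimators use the same number of exp evaluations per pair, so the drop in per-pair variance translates directly into the speedup the abstract describes.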
Consistency of Markov chain quasi-Monte Carlo on continuous state spaces
The random numbers driving Markov chain Monte Carlo (MCMC) simulation are usually modeled as independent U(0,1) random variables. Tribble [Markov chain Monte Carlo algorithms using completely uniformly distributed driving sequences (2007) Stanford Univ.] reports substantial improvements when those random numbers are replaced by carefully balanced inputs from completely uniformly distributed sequences. The previous theoretical justification for using anything other than i.i.d. U(0,1) points shows consistency for estimated means, but only applies for discrete stationary distributions. We extend those results to some MCMC algorithms for continuous stationary distributions. The main motivation is the search for quasi-Monte Carlo versions of MCMC. As a side benefit, the results also establish consistency for the usual method of using pseudo-random numbers in place of random ones.
Comment: Published at http://dx.doi.org/10.1214/10-AOS831 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
Construction of weakly CUD sequences for MCMC sampling
In Markov chain Monte Carlo (MCMC) sampling, considerable thought goes into constructing random transitions. But those transitions are almost always driven by a simulated IID sequence. Recently it has been shown that replacing an IID sequence by a weakly completely uniformly distributed (WCUD) sequence leads to consistent estimation in finite state spaces. Unfortunately, few WCUD sequences are known. This paper gives general methods for proving that a sequence is WCUD, shows that some specific sequences are WCUD, and shows that certain operations on WCUD sequences yield new WCUD sequences. A numerical example on a 42-dimensional continuous Gibbs sampler found that some WCUD input sequences produced variance reductions ranging from tens to hundreds for posterior means of the parameters, compared to IID inputs.
Comment: Published at http://dx.doi.org/10.1214/07-EJS162 in the Electronic Journal of Statistics (http://www.i-journals.org/ejs/) by the Institute of Mathematical Statistics (http://www.imstat.org).
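To show where the driving sequence enters a Gibbs sampler, here is a toy two-dimensional sampler for a bivariate normal with correlation ρ (our own example, far simpler than the 42-dimensional one above). Every full-conditional update consumes exactly one uniform through an inverse CDF, so the IID driver used below could in principle be swapped for a (W)CUD sequence without touching the sampler itself.

```python
import random
from statistics import NormalDist

def gibbs_means(rho=0.5, iters=50000, seed=3):
    """Gibbs sampler for (X, Y) bivariate normal with zero means, unit
    variances, and correlation rho. Each conditional draw maps one uniform
    from the driving sequence through the standard normal inverse CDF."""
    rng = random.Random(seed)         # the driving sequence: IID U(0,1)
    norm = NormalDist()
    s = (1.0 - rho * rho) ** 0.5      # conditional standard deviation
    x = y = 0.0
    sum_x = sum_y = 0.0
    for _ in range(iters):
        x = rho * y + s * norm.inv_cdf(rng.random())  # draw X | Y = y
        y = rho * x + s * norm.inv_cdf(rng.random())  # draw Y | X = x
        sum_x += x
        sum_y += y
    return sum_x / iters, sum_y / iters               # posterior-mean estimates
```

Routing all randomness through inversion like this is what makes the substitution clean: the sampler is a deterministic function of its stream of uniforms, which is exactly the structure the consistency results above address.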