Consistency of Markov chain quasi-Monte Carlo on continuous state spaces
The random numbers driving Markov chain Monte Carlo (MCMC) simulation are
usually modeled as independent U(0,1) random variables. Tribble [Markov chain
Monte Carlo algorithms using completely uniformly distributed driving sequences
(2007) Stanford Univ.] reports substantial improvements when those random
numbers are replaced by carefully balanced inputs from completely uniformly
distributed sequences. The previous theoretical justification for using
anything other than i.i.d. U(0,1) points shows consistency for estimated means,
but only applies for discrete stationary distributions. We extend those results
to some MCMC algorithms for continuous stationary distributions. The main
motivation is the search for quasi-Monte Carlo versions of MCMC. As a side
benefit, the results also establish consistency for the usual method of using
pseudo-random numbers in place of random ones.
Comment: Published at http://dx.doi.org/10.1214/10-AOS831 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org).
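The driving-sequence viewpoint in the abstract above can be made concrete: an MCMC sampler only consumes a stream of U(0,1) numbers, so the i.i.d. stream can be swapped for a CUD one without touching the transition logic. A minimal sketch, assuming a random-walk Metropolis sampler with an explicit driver stream (the sampler, step size, and target here are illustrative, not from the paper):

```python
import math
import random

def metropolis(logpi, x0, step, uniforms, n):
    """Random-walk Metropolis driven by an explicit stream of U(0,1) draws.

    Each iteration consumes two uniforms: one for the (symmetric uniform)
    proposal and one for the accept/reject decision.  Replacing the i.i.d.
    stream by a (W)CUD sequence changes only the `uniforms` argument.
    """
    it = iter(uniforms)
    xs, x = [], x0
    for _ in range(n):
        u_prop, u_acc = next(it), next(it)
        y = x + step * (2.0 * u_prop - 1.0)            # symmetric proposal
        if math.log(max(u_acc, 1e-300)) < logpi(y) - logpi(x):
            x = y                                       # accept
        xs.append(x)                                    # else keep current x
    return xs

# Illustrative usage: i.i.d. driver, standard normal target.
rng = random.Random(1)
iid = (rng.random() for _ in range(20000))
samples = metropolis(lambda x: -0.5 * x * x, 0.0, 2.4, iid, 10000)
mean = sum(samples) / len(samples)
```

Nothing in `metropolis` knows where the uniforms come from, which is exactly why consistency results for non-i.i.d. drivers are of practical interest.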
Construction of weakly CUD sequences for MCMC sampling
In Markov chain Monte Carlo (MCMC) sampling considerable thought goes into
constructing random transitions. But those transitions are almost always driven
by a simulated IID sequence. Recently it has been shown that replacing an IID
sequence by a weakly completely uniformly distributed (WCUD) sequence leads to
consistent estimation in finite state spaces. Unfortunately, few WCUD sequences
are known. This paper gives general methods for proving that a sequence is
WCUD, shows that some specific sequences are WCUD, and shows that certain
operations on WCUD sequences yield new WCUD sequences. A numerical example on a
42-dimensional continuous Gibbs sampler found that some WCUD input sequences
produced variance reductions ranging from tens to hundreds for posterior means
of the parameters, compared to IID inputs.
Comment: Published at http://dx.doi.org/10.1214/07-EJS162 in the Electronic
Journal of Statistics (http://www.i-journals.org/ejs/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
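One classical route to such driving sequences is the one the next abstract also alludes to: run a small generator over its entire period. A toy sketch (the multiplier a = 2 and prime modulus m = 61 are illustrative; 2 is a primitive root mod 61, so one period visits every nonzero residue):

```python
def full_period_lcg(a=2, m=61):
    """Yield one full period of the multiplicative congruential generator
    x <- a*x mod m, scaled into (0, 1).

    Running a small generator over its *entire* period is a classical way
    to obtain a CUD-like driving sequence.  The parameters are toy values:
    2 is a primitive root mod 61, so all 60 nonzero residues appear once.
    """
    x = 1
    for _ in range(m - 1):
        x = (a * x) % m
        yield x / m

# The full-period output is a balanced point set, not an i.i.d. sample.
seq = list(full_period_lcg())
```

By construction the 60 outputs are distinct and average exactly 1/2, a small instance of the "carefully balanced inputs" the abstracts describe.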
A generalization of short-period Tausworthe generators and its application to Markov chain quasi-Monte Carlo
A one-dimensional sequence u_0, u_1, u_2, ... in [0, 1) is said to be
completely uniformly distributed (CUD) if overlapping s-blocks
(u_i, u_{i+1}, ..., u_{i+s-1}), i = 0, 1, 2, ..., are uniformly distributed
for every dimension s >= 1. This concept naturally arises in Markov chain
quasi-Monte Carlo (QMC). However, the definition of CUD sequences is not
constructive, and thus there remains the problem of how to implement the
Markov chain QMC algorithm in practice. Harase (2021) focused on the
t-value, which is a measure of uniformity widely used in the study of QMC,
and implemented short-period Tausworthe generators (i.e., linear feedback
shift register generators) over the two-element field F_2 that approximate
CUD sequences by running for the entire period. In this paper, we generalize
a search algorithm over F_2 to that over arbitrary finite fields F_b with b
elements and conduct a search for Tausworthe generators over F_b with
t-values zero (i.e., optimal) for dimension s = 3 and small for s >= 4,
especially in the case where b = 3, 4, and 5. We provide a parameter table
of Tausworthe generators over F_b, and report a comparison between our new
generators over F_b and existing generators over F_2 in numerical examples
using Markov chain QMC.
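A Tausworthe generator in the sense above is a linear feedback shift register whose output bits are grouped into numbers in [0, 1). A minimal sketch over F_2 (the primitive polynomial x^4 + x + 1, i.e. taps (0, 1) on a 4-bit state, is illustrative; the paper's generators use carefully searched parameters with good t-values):

```python
def lfsr_stream(taps, seed_bits):
    """Fibonacci LFSR over the two-element field F_2: each new bit is the
    XOR (addition in F_2) of the tapped positions of the current state."""
    state = list(seed_bits)
    while True:
        new = 0
        for t in taps:
            new ^= state[t]
        yield state.pop(0)      # emit the oldest bit
        state.append(new)       # shift in the feedback bit

def tausworthe_uniforms(taps, seed_bits, w, n):
    """Group w successive LFSR output bits into numbers in [0, 1)."""
    bits = lfsr_stream(taps, seed_bits)
    us = []
    for _ in range(n):
        x = 0
        for _ in range(w):
            x = (x << 1) | next(bits)
        us.append(x / 2.0 ** w)
    return us

# x^4 + x + 1 is primitive over F_2, so the bit stream has period 2^4 - 1 = 15.
stream = lfsr_stream((0, 1), [1, 0, 0, 0])
bits = [next(stream) for _ in range(30)]
us = tausworthe_uniforms((0, 1), [1, 0, 0, 0], 4, 15)
```

The period-15 bit stream is an m-sequence (8 ones, 7 zeros per period); a "short-period" generator in the papers' sense is one whose whole period is consumed as the CUD-approximating driver.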
Langevin Quasi-Monte Carlo
Langevin Monte Carlo (LMC) and its stochastic gradient versions are powerful
algorithms for sampling from complex high-dimensional distributions. To sample
from a distribution with density π(θ) ∝ exp(−U(θ)), LMC
iteratively generates the next sample by taking a step in the gradient
direction ∇ log π with added Gaussian perturbations. Expectations w.r.t. the
target distribution π are estimated by averaging over LMC samples. In
ordinary Monte Carlo, it is well known that the estimation error can be
substantially reduced by replacing independent random samples by quasi-random
samples like low-discrepancy sequences. In this work, we show that the
estimation error of LMC can also be reduced by using quasi-random samples.
Specifically, we propose to use completely uniformly distributed (CUD)
sequences with certain low-discrepancy property to generate the Gaussian
perturbations. Under smoothness and convexity conditions, we prove that LMC
with a low-discrepancy CUD sequence achieves smaller error than standard LMC.
The theoretical analysis is supported by compelling numerical experiments,
which demonstrate the effectiveness of our approach.
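The scheme described above hinges on one observation: the unadjusted Langevin update consumes one Gaussian perturbation per step, and that perturbation can be produced from a U(0,1) stream via the inverse normal CDF, so the i.i.d. stream can be replaced by a CUD sequence. A toy sketch (the step size, target, and inverse-CDF route are illustrative assumptions, not the paper's exact construction):

```python
import math
import random
from statistics import NormalDist

def lmc(grad_U, x0, h, uniforms, n):
    """Unadjusted Langevin algorithm: x <- x - h*grad_U(x) + sqrt(2h)*xi.

    The Gaussian perturbation xi is generated from a stream of U(0,1)
    numbers via the inverse normal CDF, so swapping the i.i.d. stream for
    a CUD sequence leaves the update itself unchanged.
    """
    inv = NormalDist().inv_cdf
    it = iter(uniforms)
    xs, x = [], x0
    for _ in range(n):
        u = min(max(next(it), 1e-12), 1.0 - 1e-12)   # keep inv_cdf in range
        x = x - h * grad_U(x) + math.sqrt(2.0 * h) * inv(u)
        xs.append(x)
    return xs

# Illustrative usage: target N(0,1), i.e. U(x) = x^2 / 2, grad_U(x) = x.
rng = random.Random(7)
xs = lmc(lambda x: x, 0.0, 0.1, (rng.random() for _ in range(20000)), 20000)
var = sum(x * x for x in xs) / len(xs)
```

With step size h, this linear chain has stationary variance 1/(1 − h/2) ≈ 1.05, slightly above the target's 1: the discretization bias that the paper's smoothness and convexity conditions control.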
Automated Dynamic Error Analysis Methods for Optimization of Computer Arithmetic Systems
Computer arithmetic is one of the more important topics within computer science and engineering. The earliest computer systems were designed to perform arithmetic operations, and most if not all digital systems are required to perform some sort of arithmetic as part of their normal operation. This reliance on arithmetic means that the accurate representation of real numbers within digital systems is vital, and an understanding of how these systems are implemented, and of their possible drawbacks, is essential in order to design and implement modern high-performance systems.
At present the most widely implemented system for computer arithmetic is IEEE 754 floating point. While this system is deemed the best available implementation, it has several features that can result in serious errors of computation if not handled correctly. Lack of understanding of these errors and their effects has led to real-world disasters on several occasions. Systems for detecting these errors are therefore highly important, and fast, efficient, easy-to-use implementations of such detection systems are a high priority. Detection of floating-point rounding errors normally requires run-time analysis in order to be effective. Several systems have been proposed for the analysis of floating-point arithmetic, including Interval Arithmetic, Affine Arithmetic and Monte Carlo Arithmetic. While these systems have been well studied using theoretical and software-based approaches, implementations that can be applied to real-world situations have been limited by issues with implementation, performance and scalability. The majority of implementations have been software based and have not taken advantage of the performance gains associated with hardware-accelerated computer arithmetic systems.
This is especially problematic when it is considered that systems requiring high accuracy will often also require high performance. The aim of this thesis and the associated research is to increase understanding of error and of error analysis methods through the development of easy-to-use and easy-to-understand implementations of these techniques.
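The run-time detection idea above, in the spirit of Monte Carlo Arithmetic, can be illustrated in a few lines: re-evaluate a computation with its inputs randomly perturbed at the level of the unit roundoff and treat a large relative spread of the results as a sign of an unstable, cancellation-prone computation. This is a toy sketch that probes input sensitivity only; full Monte Carlo Arithmetic also randomizes the rounding of every intermediate operation:

```python
import random
from statistics import pstdev

def mc_eval(f, xs, trials=200, eps=2.0 ** -52, seed=0):
    """Re-evaluate f with each input randomly perturbed by a relative
    amount on the order of the binary64 unit roundoff, and return the
    mean and standard deviation of the results.  A relative spread far
    above roundoff flags a numerically unstable computation."""
    rng = random.Random(seed)
    outs = []
    for _ in range(trials):
        perturbed = [x * (1.0 + eps * rng.uniform(-1.0, 1.0)) for x in xs]
        outs.append(f(*perturbed))
    m = sum(outs) / len(outs)
    return m, pstdev(outs)

# a*a - b*b with a ~ b suffers catastrophic cancellation; a + b does not.
m_bad, s_bad = mc_eval(lambda a, b: a * a - b * b, [1.0000001, 1.0])
m_ok, s_ok = mc_eval(lambda a, b: a + b, [1.0000001, 1.0])
```

The cancellation-prone expression shows a relative spread many orders of magnitude above roundoff, while the stable one stays at the 1e-16 level, which is the kind of signal a run-time analysis system reports.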
- …