The inverse of the star-discrepancy problem and the generation of pseudo-random numbers
The inverse of the star-discrepancy problem asks for point sets $P$ of
size $N$ in the $d$-dimensional unit cube $[0,1]^d$ whose star-discrepancy
$D^*(P)$ satisfies $D^*(P) \le c \sqrt{d/N}$, where
$c$ is a constant independent of $d$ and $N$. The first existence results in this
direction were shown by Heinrich, Novak, Wasilkowski, and Wo\'{z}niakowski in
2001, and a number of improvements have been shown since then. Until now only
proofs that such point sets exist are known. Since such point sets would be
useful in applications, the big open problem is to find explicit constructions
of suitable point sets $P$.
We review the current state of the art on this problem and point out some
connections to pseudo-random number generators.
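To make the quantity in question concrete: the star-discrepancy measures the worst-case gap between the empirical fraction of points in an anchored box $[0,x)$ and its volume. A naive exact computation (exponential in the dimension, so only for tiny instances) can be sketched as follows; the function name and structure are illustrative, not taken from any of the papers above.

```python
import itertools

def star_discrepancy(points):
    """Naive exact star-discrepancy of a point set in [0,1]^d.

    Checks all axis-aligned boxes anchored at the origin whose upper
    corner coordinates come from the points' coordinates (plus 1.0);
    both open and closed point counts are tested at each corner.
    Exponential in d, so usable only for very small instances.
    """
    n = len(points)
    d = len(points[0])
    # Candidate corner coordinates per dimension.
    coords = [sorted({p[k] for p in points} | {1.0}) for k in range(d)]
    disc = 0.0
    for corner in itertools.product(*coords):
        vol = 1.0
        for c in corner:
            vol *= c
        inside_closed = sum(
            all(p[k] <= corner[k] for k in range(d)) for p in points)
        inside_open = sum(
            all(p[k] < corner[k] for k in range(d)) for p in points)
        disc = max(disc,
                   abs(inside_closed / n - vol),
                   abs(inside_open / n - vol))
    return disc
```

For example, the single point $\{0.5\}$ in dimension one has star-discrepancy $1/2$, while the pair $\{0.25, 0.75\}$ achieves the optimal value $1/4$ for $N = 2$.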
Optimal Jittered Sampling for two Points in the Unit Square
Jittered Sampling is a refinement of the classical Monte Carlo sampling
method. Instead of picking $N$ points randomly from $[0,1]^2$, one partitions
the unit square into $N$ regions of equal measure and then chooses a point
randomly from each partition. Currently, no good rules for how to partition the
space are available. In this paper, we present a solution for the special case
of subdividing the unit square by a decreasing function into two regions so as
to minimize the expected squared $\mathcal{L}_2$-discrepancy. The optimal
partitions are given by a \textit{highly} nonlinear integral equation for which
we determine an approximate solution. In particular, there is a break of
symmetry and the optimal partition is not into two sets of equal measure. We
hope this stimulates further interest in the construction of good partitions.
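The classical version of the scheme described above partitions the unit square into an $m \times m$ grid of equal cells and draws one uniform point per cell. A minimal sketch (the grid partition is the textbook special case, not the optimized two-region partition the paper derives):

```python
import random

def jittered_sample(m, rng=random):
    """Jittered sampling in [0,1]^2: partition the unit square into an
    m-by-m grid of equal-measure cells and draw one uniformly random
    point from each cell, giving N = m*m stratified points."""
    pts = []
    for i in range(m):
        for j in range(m):
            # Point (x, y) is uniform on the cell [i/m,(i+1)/m) x [j/m,(j+1)/m).
            x = (i + rng.random()) / m
            y = (j + rng.random()) / m
            pts.append((x, y))
    return pts
```

Because every cell contains exactly one point, large empty boxes are ruled out, which is what lowers the expected discrepancy relative to plain Monte Carlo.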
Consistency of Markov chain quasi-Monte Carlo on continuous state spaces
The random numbers driving Markov chain Monte Carlo (MCMC) simulation are
usually modeled as independent U(0,1) random variables. Tribble [Markov chain
Monte Carlo algorithms using completely uniformly distributed driving sequences
(2007) Stanford Univ.] reports substantial improvements when those random
numbers are replaced by carefully balanced inputs from completely uniformly
distributed sequences. The previous theoretical justification for using
anything other than i.i.d. U(0,1) points shows consistency for estimated means,
but only applies for discrete stationary distributions. We extend those results
to some MCMC algorithms for continuous stationary distributions. The main
motivation is the search for quasi-Monte Carlo versions of MCMC. As a side
benefit, the results also establish consistency for the usual method of using
pseudo-random numbers in place of random ones.
Comment: Published at http://dx.doi.org/10.1214/10-AOS831 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org)
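The mechanism the abstract describes, replacing the i.i.d. U(0,1) driver of an MCMC algorithm by some other balanced sequence, can be isolated by writing the sampler so that it consumes an explicit stream of uniforms. A minimal random-walk Metropolis sketch under that design (the function and its proposal are illustrative, not the paper's algorithm):

```python
import math
import random

def metropolis(logpdf, x0, proposal_scale, uniforms):
    """Random-walk Metropolis driven by an explicit stream of uniforms.

    Each step consumes two U(0,1) values: one to form a symmetric
    uniform proposal step, one for the accept/reject decision. The
    driving stream is a parameter, so i.i.d. draws can be swapped for a
    completely uniformly distributed sequence without touching the
    transition logic.
    """
    chain = [x0]
    x = x0
    it = iter(uniforms)
    try:
        while True:
            u1 = next(it)
            u2 = next(it)
            # Symmetric proposal: uniform step in [-scale, scale].
            y = x + proposal_scale * (2.0 * u1 - 1.0)
            if math.log(u2) < logpdf(y) - logpdf(x):
                x = y
            chain.append(x)
    except StopIteration:
        pass
    return chain
```

Driving it with `(random.random() for _ in range(40000))` and `logpdf = lambda x: -0.5 * x * x` recovers ordinary Metropolis sampling of a standard normal; the consistency results above concern what happens when that generator expression is replaced by a CUD sequence.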
Construction of weakly CUD sequences for MCMC sampling
In Markov chain Monte Carlo (MCMC) sampling considerable thought goes into
constructing random transitions. But those transitions are almost always driven
by a simulated IID sequence. Recently it has been shown that replacing an IID
sequence by a weakly completely uniformly distributed (WCUD) sequence leads to
consistent estimation in finite state spaces. Unfortunately, few WCUD sequences
are known. This paper gives general methods for proving that a sequence is
WCUD, shows that some specific sequences are WCUD, and shows that certain
operations on WCUD sequences yield new WCUD sequences. A numerical example on a
42-dimensional continuous Gibbs sampler found that some WCUD input sequences
produced variance reductions ranging from tens to hundreds for posterior means
of the parameters, compared to IID inputs.
Comment: Published at http://dx.doi.org/10.1214/07-EJS162 in the Electronic
Journal of Statistics (http://www.i-journals.org/ejs/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
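One classical route to such driving sequences, studied in this literature, is to run a small full-period generator over its entire period rather than treating it as a source of pseudo-random draws. A minimal sketch with a tiny linear congruential generator; the specific parameters are illustrative, chosen only to satisfy the Hull-Dobell full-period conditions:

```python
def lcg_full_period(m=8192, a=4097, c=1, seed=0):
    """Emit the full period of a small LCG as values in (0,1).

    With m a power of two, c odd, and a congruent to 1 mod 4, the
    recursion x <- (a*x + c) mod m visits every residue 0..m-1 exactly
    once per period (Hull-Dobell). States are centered via (x+0.5)/m so
    that 0 and 1 are never emitted.
    """
    x = seed
    out = []
    for _ in range(m):
        x = (a * x + c) % m
        out.append((x + 0.5) / m)
    return out
```

Because the full period hits every residue exactly once, the emitted values have sample mean exactly $1/2$, one of the balance properties that distinguishes such streams from a finite i.i.d. sample.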