The Discrepancy and Gain Coefficients of Scrambled Digital Nets
Abstract: Digital sequences and nets are among the most popular kinds of low discrepancy sequences and sets and are often used for quasi-Monte Carlo quadrature rules. Several years ago Owen proposed a method of scrambling digital sequences, and recently Faure and Tezuka have proposed another method. This article considers the discrepancy of digital nets under these scramblings. The first main result of this article is a formula for the discrepancy of a scrambled digital (λ, t, m, s)-net in base b with n = λb^m points that requires only O(n) operations to evaluate. The second main result is exact formulas for the gain coefficients of a digital (t, m, s)-net in terms of its generator matrices. The gain coefficients, as defined by Owen, determine both the worst-case and random-case analyses of quadrature error.
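To make the abstract's setting concrete, here is a hedged sketch (not from the paper) of how the points of a digital net in base 2 are produced from generator matrices. The matrices are a toy, assumed choice: C1 is the identity, giving the van der Corput sequence, and C2 is the anti-diagonal identity, giving i/2^m; together they produce the two-dimensional Hammersley net, a (0, m, 2)-net in base 2.

```python
# Hedged sketch: points of a digital (0, m, 2)-net in base 2 from its
# generator matrices (toy matrices, not the paper's formulas).
import numpy as np

m, s = 4, 2
n = 2 ** m
C = [np.eye(m, dtype=int),               # C1 = I: van der Corput sequence
     np.fliplr(np.eye(m, dtype=int))]    # C2 = anti-diagonal: gives i / 2^m

points = np.empty((n, s))
for i in range(n):
    a = np.array([(i >> k) & 1 for k in range(m)])          # base-2 digits of i
    for j in range(s):
        digits = (C[j] @ a) % 2                             # digital construction
        points[i, j] = np.sum(digits * 2.0 ** -(np.arange(m) + 1))
```

Each coordinate of point i is a digit vector of i multiplied by a generator matrix over GF(2), then reinterpreted as a binary fraction; a scrambling would act on these digit vectors.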
Local antithetic sampling with scrambled nets
We consider the problem of computing an approximation to the integral of a function f over the unit cube [0,1]^d. Monte Carlo (MC) sampling typically attains a root mean squared error (RMSE) of O(n^{-1/2}) from n independent random function evaluations. By contrast, quasi-Monte Carlo (QMC) sampling using carefully equispaced evaluation points can attain the rate O(n^{-1+ε}) for any ε > 0, and randomized QMC (RQMC) can attain the RMSE O(n^{-3/2+ε}), both under mild conditions on f. Classical variance reduction methods for MC can be adapted to QMC. Published results combining QMC with importance sampling and with control variates have found worthwhile improvements, but no change in the error rate. This paper extends the classical variance reduction method of antithetic sampling and combines it with RQMC. One such method is shown to bring a modest improvement in the RMSE rate, attaining O(n^{-3/2-1/d+ε}) for any ε > 0, for smooth enough f.
Comment: Published at http://dx.doi.org/10.1214/07-AOS548 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
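As a concrete illustration of the MC-versus-RQMC comparison in the abstract, the following sketch (not from the paper) estimates the integral of f(x) = x1·x2 over [0,1]^2, whose true value is 1/4, both ways. It uses SciPy's scrambled Sobol' generator, which implements Owen-style scrambling; the integrand and sample size are assumptions for illustration.

```python
# MC vs randomized QMC (scrambled Sobol') on a toy 2-D integral.
import numpy as np
from scipy.stats import qmc

d, n = 2, 1024
f = lambda x: np.prod(x, axis=1)        # true integral over [0,1]^2 is 0.25

rng = np.random.default_rng(0)
mc_est = f(rng.random((n, d))).mean()   # plain Monte Carlo

sob = qmc.Sobol(d=d, scramble=True, seed=0)
rqmc_est = f(sob.random(n)).mean()      # scrambled-net RQMC

# both estimates should be near 0.25; the RQMC error is typically far smaller
```

For a smooth integrand like this one, the RQMC error at n = 1024 is usually orders of magnitude below the MC error, in line with the O(n^{-3/2+ε}) versus O(n^{-1/2}) rates above.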
Higher order scrambled digital nets achieve the optimal rate of the root mean square error for smooth integrands
We study a random sampling technique to approximate integrals ∫_{[0,1]^s} f(x) dx by averaging the function at some sampling points. We focus on cases where the integrand is smooth, which is a problem which occurs in statistics. The convergence rate of the approximation error depends on the smoothness of the function and the sampling technique. For instance, Monte Carlo (MC) sampling yields a convergence of the root mean square error (RMSE) of order n^{-1/2} (where n is the number of samples) for functions with finite variance. Randomized QMC (RQMC), a combination of MC and quasi-Monte Carlo (QMC), achieves a RMSE of order n^{-3/2+ε} under the stronger assumption that the integrand has bounded variation. A combination of RQMC with local antithetic sampling achieves a convergence of the RMSE of order n^{-3/2-1/s+ε} (where s is the dimension) for functions with mixed partial derivatives up to order two. Additional smoothness of the integrand does not improve the rate of convergence of these algorithms in general. This paper introduces a new RQMC algorithm, for which we prove that it achieves a convergence of the root mean square error (RMSE) of order n^{-α-1/2+ε} provided the integrand satisfies the strong assumption that it has square integrable partial mixed derivatives up to order α in each variable. Known lower bounds on the RMSE show that this rate of convergence cannot be improved in general for integrands with this smoothness. We provide numerical examples for which the RMSE converges approximately with order n^{-5/2} and n^{-7/2}, in accordance with the theoretical upper bound.
Comment: Published at http://dx.doi.org/10.1214/11-AOS880 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
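The rate comparisons in the abstract can be probed numerically. The hedged sketch below (an assumed experiment, using SciPy's first-order scrambled Sobol' nets rather than the paper's higher-order nets) estimates the RMSE of scrambled-net RQMC at two sample sizes by replicating the random scramble; the RMSE should shrink much faster than the MC rate n^{-1/2} would allow.

```python
# Empirical RMSE of scrambled-Sobol' RQMC at two sample sizes.
import numpy as np
from scipy.stats import qmc

f = lambda x: np.prod(x, axis=1)   # smooth test integrand; true integral 0.25
true = 0.25

def rqmc_rmse(n, d=2, reps=32):
    """RMSE over independent random scrambles (seeds are the replicates)."""
    errs = [f(qmc.Sobol(d=d, scramble=True, seed=r).random(n)).mean() - true
            for r in range(reps)]
    return float(np.sqrt(np.mean(np.square(errs))))

r_small, r_large = rqmc_rmse(256), rqmc_rmse(1024)
# quadrupling n cuts the MC RMSE only by a factor of 2;
# at the RQMC rate n^{-3/2} the factor is about 8
```

This replicated-scramble design is the standard way to measure RQMC error empirically, since each scramble gives an unbiased estimate of the integral.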
Application of Sequential Quasi-Monte Carlo to Autonomous Positioning
Sequential Monte Carlo algorithms (also known as particle filters) are popular methods to approximate filtering (and related) distributions of state-space models. However, they converge at the slow rate O(N^{-1/2}), where N is the number of particles, which may be an issue in real-time data-intensive scenarios. We give a brief outline of SQMC (Sequential Quasi-Monte Carlo), a variant of SMC based on low-discrepancy point sets proposed by Gerber and Chopin (2015), which converges at a faster rate, and we illustrate the improved performance of SQMC on autonomous positioning problems.
Comment: 5 pages, 4 figures
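The SMC baseline the abstract refers to can be sketched as a minimal bootstrap particle filter. This is plain multinomial-resampling SMC, not the SQMC variant, run on an assumed toy linear-Gaussian state-space model chosen only for illustration.

```python
# Minimal bootstrap particle filter (plain SMC, not SQMC) for the toy model
#   x_t = 0.9 x_{t-1} + N(0, 1),   y_t = x_t + N(0, 0.5^2)
import numpy as np

rng = np.random.default_rng(0)
T, N = 50, 2000
phi, sig_y = 0.9, 0.5

# simulate a trajectory and observations from the assumed model
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = phi * x_true[t - 1] + rng.normal()
y = x_true + rng.normal(0.0, sig_y, T)

particles = rng.normal(0.0, 1.0, N)     # prior draw for x_0
means = []
for t in range(T):
    if t > 0:
        particles = phi * particles + rng.normal(0.0, 1.0, N)  # propagate
    logw = -0.5 * ((y[t] - particles) / sig_y) ** 2            # Gaussian likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    means.append(float(np.sum(w * particles)))                 # filtering mean
    particles = rng.choice(particles, size=N, p=w)             # multinomial resampling
```

SQMC replaces the independent uniforms driving propagation and resampling with a (randomized) low-discrepancy point set, which is what buys the faster-than-O(N^{-1/2}) convergence.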