
    The Discrepancy and Gain Coefficients of Scrambled Digital Nets

    Digital sequences and nets are among the most popular kinds of low-discrepancy sequences and point sets and are often used for quasi-Monte Carlo quadrature rules. Several years ago Owen proposed a method of scrambling digital sequences, and recently Faure and Tezuka have proposed another method. This article considers the discrepancy of digital nets under these scramblings. The first main result of this article is a formula for the discrepancy of a scrambled digital (λ, t, m, s)-net in base b with n = λb^m points that requires only O(n) operations to evaluate. The second main result is exact formulas for the gain coefficients of a digital (t, m, s)-net in terms of its generator matrices. The gain coefficients, as defined by Owen, determine both the worst-case and random-case analyses of quadrature error.
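The scrambled-net construction discussed above can be tried directly in software. A minimal sketch, assuming SciPy's `scipy.stats.qmc` module (whose `Sobol` sampler applies an Owen-type scrambling, not necessarily the exact scramblings analyzed in the paper), generates a scrambled digital net in base 2 and compares its centered L2-discrepancy against plain random points; the dimension, sample size, and discrepancy variant are illustrative choices:

```python
# Sketch: a scrambled digital net and its discrepancy, via scipy.stats.qmc.
import numpy as np
from scipy.stats import qmc

s = 3          # dimension
m = 10         # the net has n = 2^m points (a (t, m, s)-net in base 2)

sampler = qmc.Sobol(d=s, scramble=True, seed=42)  # scrambled digital sequence
net = sampler.random_base2(m=m)                   # 1024 points in [0, 1)^3

# Centered L2-discrepancy: lower means more uniform coverage of [0,1)^s.
d_scrambled = qmc.discrepancy(net, method="CD")
d_random = qmc.discrepancy(
    np.random.default_rng(0).random((2**m, s)), method="CD"
)
print(f"scrambled net: {d_scrambled:.2e}, plain random: {d_random:.2e}")
```

The scrambled net retains the equidistribution of the underlying digital net, so its discrepancy comes out well below that of i.i.d. uniform points at the same sample size.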

    Local antithetic sampling with scrambled nets

    We consider the problem of computing an approximation to the integral I = ∫_{[0,1]^d} f(x) dx. Monte Carlo (MC) sampling typically attains a root mean squared error (RMSE) of O(n^{-1/2}) from n independent random function evaluations. By contrast, quasi-Monte Carlo (QMC) sampling using carefully equispaced evaluation points can attain the rate O(n^{-1+ε}) for any ε > 0, and randomized QMC (RQMC) can attain the RMSE O(n^{-3/2+ε}), both under mild conditions on f. Classical variance reduction methods for MC can be adapted to QMC. Published results combining QMC with importance sampling and with control variates have found worthwhile improvements, but no change in the error rate. This paper extends the classical variance reduction method of antithetic sampling and combines it with RQMC. One such method is shown to bring a modest improvement in the RMSE rate, attaining O(n^{-3/2-1/d+ε}) for any ε > 0, for smooth enough f.
    Comment: Published at http://dx.doi.org/10.1214/07-AOS548 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
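The O(n^{-3/2+ε}) RQMC baseline that this paper improves upon is easy to observe empirically. A minimal sketch, using scipy.stats.qmc's scrambled Sobol' points as the RQMC rule (a plain RQMC baseline, not the paper's local antithetic scheme), estimates the RMSE by averaging the squared error over independent scramblings; the smooth test integrand f, with exact integral 1, is an illustrative choice:

```python
# Sketch: estimating the RMSE of RQMC by replicating over independent scrambles.
import numpy as np
from scipy.stats import qmc

def f(x):
    # Smooth product integrand on [0,1]^d with exact integral 1,
    # since each factor exp(x_j)/(e - 1) integrates to 1.
    return np.prod(np.exp(x) / (np.e - 1), axis=1)

def rqmc_rmse(n_log2, d=2, reps=20):
    """RMSE of the scrambled-Sobol' estimate with n = 2^n_log2 points."""
    errs = []
    for seed in range(reps):
        pts = qmc.Sobol(d=d, scramble=True, seed=seed).random_base2(n_log2)
        errs.append(np.mean(f(pts)) - 1.0)   # error vs the exact integral
    return np.sqrt(np.mean(np.square(errs)))

for m in (6, 8, 10):
    print(f"n = 2^{m}: RMSE ≈ {rqmc_rmse(m):.2e}")
```

For a smooth integrand like this one, doubling m should shrink the RMSE by roughly a factor of 2^{3/2}, consistent with the n^{-3/2+ε} rate cited in the abstract.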

    Higher order scrambled digital nets achieve the optimal rate of the root mean square error for smooth integrands

    We study a random sampling technique to approximate integrals ∫_{[0,1]^s} f(x) dx by averaging the function at some sampling points. We focus on cases where the integrand is smooth, a problem which occurs in statistics. The convergence rate of the approximation error depends on the smoothness of the function f and the sampling technique. For instance, Monte Carlo (MC) sampling yields a convergence of the root mean square error (RMSE) of order N^{-1/2} (where N is the number of samples) for functions f with finite variance. Randomized QMC (RQMC), a combination of MC and quasi-Monte Carlo (QMC), achieves an RMSE of order N^{-3/2+ε} under the stronger assumption that the integrand has bounded variation. A combination of RQMC with local antithetic sampling achieves a convergence of the RMSE of order N^{-3/2-1/s+ε} (where s ≥ 1 is the dimension) for functions with mixed partial derivatives up to order two. Additional smoothness of the integrand does not improve the rate of convergence of these algorithms in general. On the other hand, it is known that without additional smoothness of the integrand it is not possible to improve the convergence rate. This paper introduces a new RQMC algorithm, for which we prove that it achieves an RMSE of order N^{-α-1/2+ε} provided the integrand satisfies the strong assumption that it has square-integrable partial mixed derivatives up to order α > 1 in each variable. Known lower bounds on the RMSE show that this rate of convergence cannot be improved in general for integrands with this smoothness.
    We provide numerical examples for which the RMSE converges approximately with order N^{-5/2} and N^{-7/2}, in accordance with the theoretical upper bound.
    Comment: Published at http://dx.doi.org/10.1214/11-AOS880 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
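The empirical convergence orders reported above can be checked with a log-log slope fit. A minimal sketch, again assuming scipy.stats.qmc (which provides plain Owen-type scrambling, α = 1, but not the higher-order interlaced nets of this paper), fits the slope of log2(RMSE) against m = log2(N); for the ordinary scrambled net the fitted order should sit near 3/2, and a higher-order construction would steepen it toward α + 1/2:

```python
# Sketch: fitting the empirical RMSE convergence order of a scrambled net.
import numpy as np
from scipy.stats import qmc

def f(x):
    # Smooth multilinear test integrand with exact integral 1.
    return np.prod(1.0 + (x - 0.5), axis=1)

def rmse(m, d=2, reps=16):
    """RMSE over independent scrambles with N = 2^m points."""
    errs = [
        np.mean(f(qmc.Sobol(d=d, scramble=True, seed=s).random_base2(m))) - 1.0
        for s in range(reps)
    ]
    return np.sqrt(np.mean(np.square(errs)))

ms = np.array([6, 8, 10, 12])
log_err = np.log2([rmse(m) for m in ms])
slope = np.polyfit(ms, log_err, 1)[0]   # RMSE ~ N^slope
print(f"empirical convergence order ≈ {-slope:.2f}")
```

The same fitting procedure applied to a higher-order scrambled net would be the empirical counterpart of the N^{-5/2} and N^{-7/2} observations in the abstract.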

    Application of Sequential Quasi-Monte Carlo to Autonomous Positioning

    Sequential Monte Carlo algorithms (also known as particle filters) are popular methods to approximate filtering (and related) distributions of state-space models. However, they converge at the slow N^{-1/2} rate, which may be an issue in real-time, data-intensive scenarios. We give a brief outline of SQMC (Sequential Quasi-Monte Carlo), a variant of SMC based on low-discrepancy point sets proposed by Gerber and Chopin (2015), which converges at a faster rate, and we illustrate the superior performance of SQMC on autonomous positioning problems.
    Comment: 5 pages, 4 figures.