
    On Integration Methods Based on Scrambled Nets of Arbitrary Size

    We consider the problem of evaluating $I(\varphi):=\int_{[0,1)^s}\varphi(x)\,dx$ for a function $\varphi \in L^2[0,1)^{s}$. In situations where $I(\varphi)$ can be approximated by an estimate of the form $N^{-1}\sum_{n=0}^{N-1}\varphi(x^n)$, with $\{x^n\}_{n=0}^{N-1}$ a point set in $[0,1)^s$, it is now well known that the $O_P(N^{-1/2})$ Monte Carlo convergence rate can be improved by taking for $\{x^n\}_{n=0}^{N-1}$ the first $N=\lambda b^m$ points, $\lambda\in\{1,\dots,b-1\}$, of a scrambled $(t,s)$-sequence in base $b\geq 2$. In this paper we derive a bound for the variance of scrambled net quadrature rules which is of order $o(N^{-1})$ without any restriction on $N$. As a corollary, this bound provides simple conditions to obtain, for any pattern of $N$, an integration error of size $o_P(N^{-1/2})$ for functions that depend on the quadrature size $N$. Notably, we establish that sequential quasi-Monte Carlo (M. Gerber and N. Chopin, 2015, J. R. Statist. Soc. B, to appear) reaches the $o_P(N^{-1/2})$ convergence rate for any value of $N$. In a numerical study, we show that for scrambled net quadrature rules we can relax the constraint on $N$ without any loss of efficiency when the integrand $\varphi$ is a discontinuous function, while for sequential quasi-Monte Carlo, taking $N=\lambda b^m$ may only provide moderate gains.
    Comment: 27 pages, 2 figures (final version, to appear in The Journal of Complexity)
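
    A minimal sketch of the estimator above, using SciPy's scrambled Sobol' generator as a stand-in for the scrambled $(t,s)$-sequences analysed in the paper (an assumption of this illustration, not the paper's construction); the point is simply that $N$ is not required to be of the form $\lambda b^m$:

```python
# Scrambled-net quadrature with an arbitrary number of points N.
import numpy as np
from scipy.stats import qmc

def scrambled_net_estimate(phi, s, N, seed=0):
    """Estimate I(phi) = int_{[0,1)^s} phi(x) dx as N^{-1} * sum_n phi(x^n)."""
    sampler = qmc.Sobol(d=s, scramble=True, seed=seed)
    x = sampler.random(N)  # SciPy warns when N is not a power of 2, but still returns N points
    return phi(x).mean()

# Example with a discontinuous integrand, the setting of the paper's numerical study;
# N = 1000 is deliberately not of the form lambda * b^m.
phi = lambda x: (x.sum(axis=1) < x.shape[1] / 2.0).astype(float)
print(scrambled_net_estimate(phi, s=4, N=1000))
```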

    Higher order scrambled digital nets achieve the optimal rate of the root mean square error for smooth integrands

    We study a random sampling technique to approximate integrals $\int_{[0,1]^s}f(\mathbf{x})\,\mathrm{d}\mathbf{x}$ by averaging the function at some sampling points. We focus on cases where the integrand is smooth, a setting which occurs in statistics. The convergence rate of the approximation error depends on the smoothness of the function $f$ and the sampling technique. For instance, Monte Carlo (MC) sampling yields a convergence of the root mean square error (RMSE) of order $N^{-1/2}$ (where $N$ is the number of samples) for functions $f$ with finite variance. Randomized QMC (RQMC), a combination of MC and quasi-Monte Carlo (QMC), achieves an RMSE of order $N^{-3/2+\varepsilon}$ under the stronger assumption that the integrand has bounded variation. A combination of RQMC with local antithetic sampling achieves a convergence of the RMSE of order $N^{-3/2-1/s+\varepsilon}$ (where $s\ge1$ is the dimension) for functions with mixed partial derivatives up to order two. Additional smoothness of the integrand does not improve the rate of convergence of these algorithms in general. On the other hand, it is known that without additional smoothness of the integrand it is not possible to improve the convergence rate. This paper introduces a new RQMC algorithm, for which we prove that it achieves a convergence of the root mean square error (RMSE) of order $N^{-\alpha-1/2+\varepsilon}$ provided the integrand satisfies the strong assumption that it has square integrable partial mixed derivatives up to order $\alpha>1$ in each variable. Known lower bounds on the RMSE show that this rate of convergence cannot be improved in general for integrands with this smoothness. We provide numerical examples for which the RMSE converges approximately with order $N^{-5/2}$ and $N^{-7/2}$, in accordance with the theoretical upper bound.
    Comment: Published at http://dx.doi.org/10.1214/11-AOS880 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
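
    The sketch below compares empirical RMSEs of plain MC and scrambled-Sobol' RQMC on a smooth test integrand, illustrating only the baseline rates quoted above; the higher order (interlaced) scrambled digital nets introduced in the paper are not implemented here, so the $N^{-\alpha-1/2+\varepsilon}$ rate is not reproduced:

```python
# Empirical RMSE of MC versus scrambled-Sobol' RQMC over independent replications.
import numpy as np
from scipy.stats import qmc

def rmse(estimates, truth):
    return np.sqrt(np.mean((np.asarray(estimates) - truth) ** 2))

s, reps, truth = 3, 50, 1.0
# Smooth test integrand with known integral 1: prod_j (pi/2) sin(pi x_j).
f = lambda x: np.prod(0.5 * np.pi * np.sin(np.pi * x), axis=1)

for m in (8, 10, 12):
    N = 2 ** m
    mc = [f(np.random.default_rng(r).random((N, s))).mean() for r in range(reps)]
    rqmc = [f(qmc.Sobol(d=s, scramble=True, seed=r).random_base2(m)).mean() for r in range(reps)]
    print(f"N=2^{m}: MC RMSE={rmse(mc, truth):.2e}  RQMC RMSE={rmse(rqmc, truth):.2e}")
```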

    Application of Sequential Quasi-Monte Carlo to Autonomous Positioning

    Sequential Monte Carlo algorithms (also known as particle filters) are popular methods to approximate filtering (and related) distributions of state-space models. However, they converge at the slow $1/\sqrt{N}$ rate, which may be an issue in real-time data-intensive scenarios. We give a brief outline of SQMC (Sequential Quasi-Monte Carlo), a variant of SMC based on low-discrepancy point sets proposed by Gerber and Chopin (2015), which converges at a faster rate, and we illustrate the greater performance of SQMC on autonomous positioning problems.
    Comment: 5 pages, 4 figures
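
    For orientation, the sketch below is a minimal bootstrap particle filter for a one-dimensional linear-Gaussian model: it is only the standard SMC baseline that converges at the $1/\sqrt{N}$ rate, not the SQMC algorithm of Gerber and Chopin (2015), and the model and its parameters are illustrative assumptions:

```python
# Bootstrap particle filter for x_t = rho*x_{t-1} + N(0, sig_x^2), y_t = x_t + N(0, sig_y^2).
import numpy as np

rng = np.random.default_rng(0)
T, N = 50, 512
rho, sig_x, sig_y = 0.9, 1.0, 0.5

# Simulate a state trajectory and noisy observations.
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = rho * x_true[t - 1] + sig_x * rng.normal()
y = x_true + sig_y * rng.normal(size=T)

# Propagate, weight by the Gaussian observation density, resample at every step.
particles = sig_x * rng.normal(size=N)
filtering_means = []
for t in range(T):
    particles = rho * particles + sig_x * rng.normal(size=N)
    logw = -0.5 * ((y[t] - particles) / sig_y) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    filtering_means.append(np.sum(w * particles))
    particles = rng.choice(particles, size=N, p=w, replace=True)  # multinomial resampling

print("filtering mean at time T:", filtering_means[-1], "true state:", x_true[-1])
```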

    Efficient calculation of the worst-case error and (fast) component-by-component construction of higher order polynomial lattice rules

    We show how to obtain a fast component-by-component construction algorithm for higher order polynomial lattice rules. Such rules are useful for multivariate quadrature of high-dimensional smooth functions over the unit cube as they achieve the near optimal order of convergence. The main problem addressed in this paper is to find an efficient way of computing the worst-case error. A general algorithm is presented and explicit expressions for base 2 are given. To obtain an efficient component-by-component construction algorithm we exploit the structure of the underlying cyclic group. We compare our new higher order multivariate quadrature rules to existing quadrature rules based on higher order digital nets by computing their worst-case error. These numerical results show that the higher order polynomial lattice rules improve upon the known constructions of quasi-Monte Carlo rules based on higher order digital nets.
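
    As an illustration of the greedy component-by-component principle (not of the fast algorithm or the higher order polynomial lattice rules of the paper), the following naive sketch constructs a generating vector for an ordinary rank-1 lattice rule in a weighted Korobov space with product weights; its $O(sN^2)$ search cost is exactly what fast CBC constructions reduce:

```python
# Naive CBC construction of a rank-1 lattice rule (weighted Korobov space, smoothness alpha = 1).
import numpy as np

def cbc_lattice(N, s, gamma):
    """Choose the generating vector z one coordinate at a time, minimising the worst-case error."""
    k = np.arange(N)
    # Shift-invariant kernel for alpha = 1: omega(x) = 2*pi^2 * B_2(x), B_2(x) = x^2 - x + 1/6.
    omega = lambda x: 2.0 * np.pi ** 2 * (x ** 2 - x + 1.0 / 6.0)
    prod = np.ones(N)  # running product over the coordinates already chosen
    z = []
    for j in range(s):
        best_z, best_err = None, np.inf
        for cand in range(1, N):
            if np.gcd(cand, N) != 1:
                continue
            frac = (k * cand % N) / N
            err2 = -1.0 + np.mean(prod * (1.0 + gamma[j] * omega(frac)))  # squared worst-case error
            if err2 < best_err:
                best_err, best_z = err2, cand
        z.append(best_z)
        prod = prod * (1.0 + gamma[j] * omega((k * best_z % N) / N))
    return np.array(z)

# Example: N prime, four dimensions, decaying product weights.
print(cbc_lattice(N=127, s=4, gamma=[1.0 / (j + 1) ** 2 for j in range(4)]))
```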

    The Discrepancy and Gain Coefficients of Scrambled Digital Nets

    Digital sequences and nets are among the most popular kinds of low discrepancy sequences and sets and are often used for quasi-Monte Carlo quadrature rules. Several years ago Owen proposed a method of scrambling digital sequences, and recently Faure and Tezuka have proposed another method. This article considers the discrepancy of digital nets under these scramblings. The first main result of this article is a formula for the discrepancy of a scrambled digital $(\lambda, t, m, s)$-net in base $b$ with $n=\lambda b^m$ points that requires only $O(n)$ operations to evaluate. The second main result is exact formulas for the gain coefficients of a digital $(t, m, s)$-net in terms of its generator matrices. The gain coefficients, as defined by Owen, determine both the worst-case and random-case analyses of quadrature error.
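
    The discrepancy and gain-coefficient formulas are the article's contribution and are not reproduced here, but the construction they are expressed in terms of is easy to state: each point of a digital net in base 2 is obtained by applying the generator matrices to the base-2 digits of its index, as in this sketch:

```python
# Points of a digital net in base 2 from its generator matrices.
import numpy as np

def digital_net_base2(gen_matrices):
    """gen_matrices: array of shape (s, m, m) with entries in {0, 1}."""
    C = np.asarray(gen_matrices) % 2
    s, m, _ = C.shape
    n = np.arange(2 ** m)
    digits = (n[:, None] >> np.arange(m)) & 1      # base-2 digits of n, least significant first
    y = np.einsum('sij,nj->nsi', C, digits) % 2    # y^(j) = C_j a(n)  (mod 2)
    return y @ (0.5 ** np.arange(1, m + 1))        # x_n^(j) = sum_i y_i 2^{-(i+1)}

# Example: the identity matrix gives the van der Corput points and the anti-diagonal
# matrix gives n / 2^m, so the pair forms a two-dimensional digital net.
m = 4
C = np.stack([np.eye(m, dtype=int), np.rot90(np.eye(m, dtype=int))])
print(digital_net_base2(C))
```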