
    Application of Sequential Quasi-Monte Carlo to Autonomous Positioning

    Sequential Monte Carlo algorithms (also known as particle filters) are popular methods to approximate filtering (and related) distributions of state-space models. However, they converge at the slow $1/\sqrt{N}$ rate, which may be an issue in real-time, data-intensive scenarios. We give a brief outline of SQMC (Sequential Quasi-Monte Carlo), a variant of SMC based on low-discrepancy point sets proposed by Gerber and Chopin (2015), which converges at a faster rate, and we illustrate the superior performance of SQMC on autonomous positioning problems. Comment: 5 pages, 4 figures.
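
    As a rough illustration of the Monte Carlo baseline that SQMC accelerates, the sketch below runs a plain bootstrap particle filter on a toy one-dimensional linear-Gaussian state-space model (the model, its parameters and the sample size are assumptions made purely for illustration). SQMC itself replaces the i.i.d. uniforms driving propagation and resampling with a scrambled low-discrepancy point set, together with an ordering of the particles, which is not shown here.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 1-D linear-Gaussian model (assumed): x_t = 0.9 x_{t-1} + N(0, 1), y_t = x_t + N(0, 0.25)
    T, N = 50, 1024
    x_true = np.zeros(T)
    x_true[0] = rng.normal()
    for t in range(1, T):
        x_true[t] = 0.9 * x_true[t - 1] + rng.normal()
    y = x_true + 0.5 * rng.normal(size=T)

    # Bootstrap particle filter: the O_P(1/sqrt(N)) Monte Carlo baseline.
    particles = rng.normal(size=N)                             # particles for x_0
    filter_means = np.zeros(T)
    for t in range(T):
        if t > 0:
            particles = 0.9 * particles + rng.normal(size=N)   # propagate through the state equation
        logw = -0.5 * ((y[t] - particles) / 0.5) ** 2          # observation log-weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        filter_means[t] = np.sum(w * particles)                # estimate of the filtering mean
        idx = rng.choice(N, size=N, p=w)                       # multinomial resampling
        particles = particles[idx]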

    Improved bounds on the gain coefficients for digital nets in prime power base

    We study randomized quasi-Monte Carlo integration by scrambled nets. Scrambled net quadrature has long been popular because it is an unbiased estimator of the true integral, allows for practical error estimation, achieves a high-order decay of the variance for smooth functions, and works even for $L^p$-functions with any $p \geq 1$. The variance of the scrambled net quadrature for $L^2$-functions can be evaluated through the set of so-called gain coefficients. In this paper, based on the system of Walsh functions and the concept of dual nets, we provide improved upper bounds on the gain coefficients for digital nets in general prime power base. Our results explain, in a unified way, the known bound by Owen (1997) for Faure sequences, the recently improved bound by Pan and Owen (2021) for digital nets in base 2 (including Sobol' sequences as a special case), and their finding that all the nonzero gain coefficients for digital nets in base 2 must be powers of two. Comment: minor revision, 14 pages.
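
    Two of the properties recalled above, unbiasedness and practical error estimation, can be illustrated with a few lines of Python using SciPy's scrambled Sobol' generator; the test integrand, dimension and replicate counts below are assumptions chosen for illustration only.

    import numpy as np
    from scipy.stats import qmc

    # Smooth test integrand on [0,1)^5 with exact integral 1 (an assumed example).
    d = 5
    f = lambda x: np.prod(1.0 + (x - 0.5), axis=1)

    # Independent scramblings of the same digital net: each replicate is an
    # unbiased estimate of the integral, so their spread yields a practical
    # error estimate -- the quantity the gain coefficients control in theory.
    R, m = 20, 10                       # 20 replicates of a 2^10-point Sobol' net
    estimates = np.empty(R)
    for r in range(R):
        sampler = qmc.Sobol(d=d, scramble=True, seed=r)
        x = sampler.random_base2(m=m)   # 2^m scrambled Sobol' points
        estimates[r] = f(x).mean()
    print("RQMC estimate :", estimates.mean())
    print("standard error:", estimates.std(ddof=1) / np.sqrt(R))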

    The Discrepancy and Gain Coefficients of Scrambled Digital Nets

    Digital sequences and nets are among the most popular kinds of low-discrepancy sequences and point sets and are often used for quasi-Monte Carlo quadrature rules. Several years ago Owen proposed a method of scrambling digital sequences, and more recently Faure and Tezuka proposed another method. This article considers the discrepancy of digital nets under these scramblings. The first main result of this article is a formula for the discrepancy of a scrambled digital $(\lambda, t, m, s)$-net in base $b$ with $n = \lambda b^m$ points that requires only $O(n)$ operations to evaluate. The second main result is a set of exact formulas for the gain coefficients of a digital $(t, m, s)$-net in terms of its generator matrices. The gain coefficients, as defined by Owen, determine both the worst-case and random-case analyses of quadrature error.
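
    For readers unfamiliar with the objects involved, the sketch below shows how the points of a digital net in base 2 are generated from its generator matrices; the matrices used in the example are a deliberately simple, illustrative choice, not the ones analysed in the article.

    import numpy as np

    def digital_net_base2(gen_matrices, m):
        """Points of a digital net in base 2 built from m x m binary generator matrices.

        For n = 0, ..., 2^m - 1 with base-2 digits a (least significant first),
        the base-2 digits of the j-th coordinate of the n-th point are C_j a (mod 2).
        """
        weights = 2.0 ** -np.arange(1, m + 1)                  # 2^{-1}, ..., 2^{-m}
        points = np.zeros((2 ** m, len(gen_matrices)))
        for n in range(2 ** m):
            a = np.array([(n >> k) & 1 for k in range(m)])     # base-2 digits of n
            for j, C in enumerate(gen_matrices):
                points[n, j] = weights.dot(C.dot(a) % 2)       # assemble coordinate j from its digits
        return points

    # Illustrative example: with these two matrices the first coordinate is the
    # base-2 radical inverse of n (van der Corput) and the second is n / 2^m,
    # i.e. a two-dimensional Hammersley-type net.
    m = 4
    C1 = np.eye(m, dtype=int)
    C2 = np.fliplr(np.eye(m, dtype=int))
    print(digital_net_base2([C1, C2], m))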

    On Integration Methods Based on Scrambled Nets of Arbitrary Size

    We consider the problem of evaluating $I(\varphi) := \int_{[0,1)^s} \varphi(x)\,dx$ for a function $\varphi \in L^2[0,1)^s$. In situations where $I(\varphi)$ can be approximated by an estimate of the form $N^{-1}\sum_{n=0}^{N-1}\varphi(x^n)$, with $\{x^n\}_{n=0}^{N-1}$ a point set in $[0,1)^s$, it is now well known that the $O_P(N^{-1/2})$ Monte Carlo convergence rate can be improved by taking for $\{x^n\}_{n=0}^{N-1}$ the first $N = \lambda b^m$ points, $\lambda \in \{1, \dots, b-1\}$, of a scrambled $(t,s)$-sequence in base $b \geq 2$. In this paper we derive a bound for the variance of scrambled net quadrature rules which is of order $o(N^{-1})$ without any restriction on $N$. As a corollary, this bound allows us to provide simple conditions to get, for any pattern of $N$, an integration error of size $o_P(N^{-1/2})$ for functions that depend on the quadrature size $N$. Notably, we establish that sequential quasi-Monte Carlo (M. Gerber and N. Chopin, 2015, J. R. Statist. Soc. B, to appear) reaches the $o_P(N^{-1/2})$ convergence rate for any value of $N$. In a numerical study, we show that for scrambled net quadrature rules we can relax the constraint on $N$ without any loss of efficiency when the integrand $\varphi$ is a discontinuous function, while, for sequential quasi-Monte Carlo, taking $N = \lambda b^m$ may only provide moderate gains. Comment: 27 pages, 2 figures (final version, to appear in the Journal of Complexity).
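
    A quick way to see the point about arbitrary $N$ in practice is to compare plain Monte Carlo with scrambled Sobol' quadrature at a sample size that is not of the form $\lambda b^m$; the integrand, dimension and replicate counts below are assumptions made for illustration (SciPy may emit a warning when $N$ is not a power of two, but the estimate remains valid).

    import numpy as np
    from scipy.stats import qmc

    # Test integrand on [0,1)^4 with exact integral 1 (an assumed example).
    d, N, R = 4, 1000, 50                # N = 1000 is not of the form lambda * 2^m
    f = lambda x: np.prod(1.0 + (x - 0.5), axis=1)

    mc_err, rqmc_err = [], []
    for r in range(R):
        rng = np.random.default_rng(r)
        mc_err.append(f(rng.random((N, d))).mean() - 1.0)
        sob = qmc.Sobol(d=d, scramble=True, seed=r)
        rqmc_err.append(f(sob.random(N)).mean() - 1.0)   # first N scrambled Sobol' points
    print("MC   RMSE:", np.sqrt(np.mean(np.square(mc_err))))
    print("RQMC RMSE:", np.sqrt(np.mean(np.square(rqmc_err))))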

    Local antithetic sampling with scrambled nets

    We consider the problem of computing an approximation to the integral $I = \int_{[0,1]^d} f(x)\,dx$. Monte Carlo (MC) sampling typically attains a root mean squared error (RMSE) of $O(n^{-1/2})$ from $n$ independent random function evaluations. By contrast, quasi-Monte Carlo (QMC) sampling using carefully equispaced evaluation points can attain the rate $O(n^{-1+\varepsilon})$ for any $\varepsilon > 0$, and randomized QMC (RQMC) can attain the RMSE $O(n^{-3/2+\varepsilon})$, both under mild conditions on $f$. Classical variance reduction methods for MC can be adapted to QMC. Published results combining QMC with importance sampling and with control variates have found worthwhile improvements, but no change in the error rate. This paper extends the classical variance reduction method of antithetic sampling and combines it with RQMC. One such method is shown to bring a modest improvement in the RMSE rate, attaining $O(n^{-3/2-1/d+\varepsilon})$ for any $\varepsilon > 0$, for smooth enough $f$. Comment: Published at http://dx.doi.org/10.1214/07-AOS548 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
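
    The paper's local antithetic construction reflects points within small subcubes of $[0,1]^d$; the sketch below only illustrates the simpler, classical (global) antithetic coupling of $f(x)$ with $f(1-x)$ layered on top of scrambled Sobol' points, with an assumed smooth test integrand, and should not be read as an implementation of the local scheme that achieves the $O(n^{-3/2-1/d+\varepsilon})$ rate.

    import numpy as np
    from scipy.stats import qmc

    d, m, R = 3, 10, 20
    f = lambda x: np.exp(np.sum(x, axis=1))   # smooth test integrand (assumed)
    exact = (np.e - 1.0) ** d                 # exact integral of f over [0,1]^3

    plain_err, anti_err = [], []
    for r in range(R):
        x = qmc.Sobol(d=d, scramble=True, seed=r).random_base2(m=m)
        plain_err.append(f(x).mean() - exact)
        # Global antithetic coupling: average f at x and at its reflection 1 - x.
        anti_err.append(0.5 * (f(x) + f(1.0 - x)).mean() - exact)

    print("RQMC RMSE             :", np.sqrt(np.mean(np.square(plain_err))))
    print("RQMC + antithetic RMSE:", np.sqrt(np.mean(np.square(anti_err))))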