
    Computable error bounds for quasi-Monte Carlo using points with non-negative local discrepancy

    Let $f:[0,1]^d\to\mathbb{R}$ be a completely monotone integrand as defined by Aistleitner and Dick (2015) and let points $\boldsymbol{x}_0,\dots,\boldsymbol{x}_{n-1}\in[0,1]^d$ have a non-negative local discrepancy (NNLD) everywhere in $[0,1]^d$. We show how to use these properties to get a non-asymptotic and computable upper bound for the integral of $f$ over $[0,1]^d$. An analogous non-positive local discrepancy (NPLD) property provides a computable lower bound. It has been known since Gabai (1967) that the two-dimensional Hammersley points in any base $b\ge 2$ have non-negative local discrepancy. Using the probabilistic notion of associated random variables, we generalize Gabai's finding to digital nets in any base $b\ge 2$ and any dimension $d\ge 1$ when the generator matrices are permutation matrices. We show that permutation matrices cannot attain the best values of the digital net quality parameter when $d\ge 3$. As a consequence, the computable absolutely sure bounds we provide come with less accurate estimates than the usual digital net estimates do in high dimensions. We are also able to construct high-dimensional rank-one lattice rules that are NNLD. We show that those lattices do not have good discrepancy properties: any lattice rule with the NNLD property in dimension $d\ge 2$ either fails to be projection regular or has all its points on the main diagonal.
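To make the point construction cited above concrete, here is a minimal Python sketch of the two-dimensional base-2 Hammersley point set (the set Gabai, 1967, showed to be NNLD) together with a plain QMC average over it. The integrand, the point count, and the function names are illustrative assumptions rather than material from the paper, and whether this integrand satisfies the paper's complete-monotonicity definition is not verified here; the sketch does not reproduce the paper's computable bound itself.

```python
import numpy as np

def van_der_corput(i, base=2):
    """Radical inverse of the integer i in the given base."""
    x, denom = 0.0, 1.0
    while i > 0:
        i, digit = divmod(i, base)
        denom *= base
        x += digit / denom
    return x

def hammersley_2d(n, base=2):
    """Two-dimensional Hammersley points (i/n, phi_base(i)), i = 0, ..., n-1."""
    return np.array([[i / n, van_der_corput(i, base)] for i in range(n)])

# Illustrative smooth test integrand with a known integral over [0,1]^2
# (not taken from the paper).
f = lambda x: np.exp(-x[:, 0] - x[:, 1])
exact = (1.0 - np.exp(-1.0)) ** 2

pts = hammersley_2d(n=2 ** 10)
print(f(pts).mean(), exact)   # QMC average vs. exact integral
```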

    Randomized Algorithms for High-Dimensional Integration and Approximation

    We prove upper and lower bounds for the error of the randomized Smolyak algorithm and provide a thorough case study of applying the randomized Smolyak algorithm, with quadratures based on scrambled nets as building blocks, to the integration of functions from Haar-wavelet spaces. Moreover, we discuss different notions of negative dependence of randomized point sets, which find applications in discrepancy theory and randomized quasi-Monte Carlo integration.
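For readers unfamiliar with the Smolyak construction mentioned above, the following Python sketch implements the standard Smolyak combination formula, with a randomly shifted equal-weight 1-D grid standing in for the scrambled-net building blocks analysed in the paper. All function names, the test integrand, and the parameter choices are illustrative assumptions; this is not the paper's algorithm.

```python
import itertools
from math import comb

import numpy as np

rng = np.random.default_rng(0)

def shifted_grid_1d(level):
    """Randomized equal-weight 1-D rule with 2**level nodes: a randomly
    shifted grid (a simple stand-in for a scrambled-net quadrature)."""
    m = 2 ** level
    nodes = (np.arange(m) + rng.random()) / m
    weights = np.full(m, 1.0 / m)
    return nodes, weights

def smolyak_quadrature(f, d, q):
    """Smolyak combination technique on [0,1]^d (q >= d):
       A(q,d) = sum_{q-d+1 <= |l| <= q} (-1)^(q-|l|) C(d-1, q-|l|) (U^{l_1} x ... x U^{l_d}),
    reusing the same randomized 1-D rule for every occurrence of a given level."""
    rules = {l: shifted_grid_1d(l) for l in range(1, q + 1)}
    total = 0.0
    for lvl in itertools.product(range(1, q + 1), repeat=d):
        s = sum(lvl)
        if not (q - d + 1 <= s <= q):
            continue
        coeff = (-1) ** (q - s) * comb(d - 1, q - s)
        nodes = [rules[l][0] for l in lvl]
        wts = [rules[l][1] for l in lvl]
        # evaluate the tensor-product rule for this level vector
        grid = np.array(list(itertools.product(*nodes)))               # points
        w = np.prod(np.array(list(itertools.product(*wts))), axis=1)   # weights
        total += coeff * np.sum(w * f(grid))
    return total

# Toy check: product integrand whose exact integral over [0,1]^3 equals 1.
f = lambda x: np.prod(1.0 + 0.3 * (x - 0.5), axis=1)
print(smolyak_quadrature(f, d=3, q=6))
```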

    A universal median quasi-Monte Carlo integration

    We study quasi-Monte Carlo (QMC) integration over the multi-dimensional unit cube in several weighted function spaces with different smoothness classes. We consider approximating the integral of a function by the median of several integral estimates under independent and random choices of the underlying QMC point sets (either linearly scrambled digital nets or infinite-precision polynomial lattice point sets). Even though our approach does not require any information on the smoothness and weights of a target function space as an input, we can prove a probabilistic upper bound on the worst-case error for the respective weighted function space, where the failure probability converges to 0 exponentially fast as the number of estimates increases. Our obtained rates of convergence are nearly optimal for function spaces with finite smoothness, and we can attain a dimension-independent super-polynomial convergence for a class of infinitely differentiable functions. This implies that our median-based QMC rule is universal in the sense that it does not need to be adjusted to the smoothness and the weights of the function spaces and yet exhibits the nearly optimal rate of convergence. Numerical experiments support our theoretical results.
    Comment: Major revision, 32 pages, 4 figures
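A minimal Python sketch of the median-of-estimates idea described above, using SciPy's scrambled Sobol' sampler as a readily available stand-in for the linearly scrambled digital nets or polynomial lattice point sets analysed in the paper; the integrand and the parameters m and r are illustrative assumptions.

```python
import numpy as np
from scipy.stats import qmc

def median_qmc(f, d, m, r, seed=0):
    """Median of r independent QMC estimates, each using 2**m points of an
    independently scrambled Sobol' net (a stand-in for the randomized point
    sets considered in the paper). Choosing r odd makes the median one of
    the computed estimates."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(r):
        sampler = qmc.Sobol(d=d, scramble=True, seed=rng)
        pts = sampler.random_base2(m=m)       # 2**m scrambled-net points
        estimates.append(f(pts).mean())
    return np.median(estimates)

# Toy integrand on [0,1]^4 with exact integral 1.
f = lambda x: np.prod(0.5 * np.pi * np.sin(np.pi * x), axis=1)
print(median_qmc(f, d=4, m=10, r=11))
```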