264 research outputs found

    Local antithetic sampling with scrambled nets

    Full text link
    We consider the problem of computing an approximation to the integral $I=\int_{[0,1]^d} f(x)\,dx$. Monte Carlo (MC) sampling typically attains a root mean squared error (RMSE) of $O(n^{-1/2})$ from $n$ independent random function evaluations. By contrast, quasi-Monte Carlo (QMC) sampling using carefully equispaced evaluation points can attain the rate $O(n^{-1+\varepsilon})$ for any $\varepsilon>0$, and randomized QMC (RQMC) can attain the RMSE $O(n^{-3/2+\varepsilon})$, both under mild conditions on $f$. Classical variance reduction methods for MC can be adapted to QMC. Published results combining QMC with importance sampling and with control variates have found worthwhile improvements, but no change in the error rate. This paper extends the classical variance reduction method of antithetic sampling and combines it with RQMC. One such method is shown to bring a modest improvement in the RMSE rate, attaining $O(n^{-3/2-1/d+\varepsilon})$ for any $\varepsilon>0$, for smooth enough $f$.
    Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org); DOI: http://dx.doi.org/10.1214/07-AOS548
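    A minimal sketch of the general idea (not the paper's local antithetic construction with scrambled nets): combine scrambled Sobol' points (RQMC) with plain antithetic pairing x -> 1 - x, and compare the RMSE against plain Monte Carlo at the same budget. The integrand, dimension, and sample sizes below are illustrative assumptions, and SciPy's qmc module is assumed available.
```python
# Sketch: scrambled Sobol' (RQMC) plus plain antithetic pairing vs. plain MC.
import numpy as np
from scipy.stats import qmc  # assumes SciPy >= 1.7

d, m = 4, 10                 # dimension, 2^m points per randomization
n_rep = 20                   # independent randomizations to estimate RMSE

def f(x):                    # smooth test integrand with known integral 1
    return np.prod(np.exp(x) / (np.e - 1.0), axis=1)

def rmse(estimates, truth=1.0):
    return np.sqrt(np.mean((np.asarray(estimates) - truth) ** 2))

mc_est, rqmc_anti_est = [], []
rng = np.random.default_rng(0)
for _ in range(n_rep):
    # plain MC with the same total budget (2^(m+1) evaluations)
    x_mc = rng.random((2 ** (m + 1), d))
    mc_est.append(f(x_mc).mean())

    # scrambled Sobol' points plus their antithetic reflections
    x = qmc.Sobol(d=d, scramble=True, seed=rng).random_base2(m=m)
    rqmc_anti_est.append(0.5 * (f(x).mean() + f(1.0 - x).mean()))

print("MC RMSE:              ", rmse(mc_est))
print("RQMC + antithetic RMSE:", rmse(rqmc_anti_est))
```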

    The Cost of Numerical Integration in Statistical Decision-theoretic Methods for Robust Design Optimization

    Get PDF
    The Bayes principle from statistical decision theory provides a conceptual framework for quantifying uncertainties that arise in robust design optimization. The difficulty with exploiting this framework is computational, as it leads to objective and constraint functions that must be evaluated by numerical integration. Using a prototypical robust design optimization problem, this study explores the computational cost of multidimensional integration (computing expectation) and its interplay with optimization algorithms. It concludes that straightforward application of standard off-the-shelf optimization software to robust design is prohibitively expensive, necessitating adaptive strategies and the use of surrogates.
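    A hedged illustration of the cost issue the abstract describes: when the robust objective is an expectation over uncertain parameters, every call the optimizer makes triggers a full numerical integration, so the integrand-evaluation count multiplies quickly. The design response, uncertainty model, and sample sizes below are hypothetical, and plain Monte Carlo with an off-the-shelf optimizer stands in for the general setup.
```python
# Sketch: a "robust" objective E_theta[performance(x, theta)] estimated by MC
# inside a standard optimizer; we count integrand evaluations to expose the cost.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_mc = 2000                      # samples per expectation (per objective call)
evals = {"integrand": 0}

def performance(x, theta):       # hypothetical design response under uncertainty
    evals["integrand"] += len(theta)
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2 + x[0] * theta[:, 0] + theta[:, 1] ** 2

def robust_objective(x):         # expectation over theta by plain Monte Carlo
    theta = rng.normal(0.0, 0.3, size=(n_mc, 2))
    return performance(x, theta).mean()

res = minimize(robust_objective, x0=np.zeros(2), method="Nelder-Mead")
print("optimizer calls to the objective:", res.nfev)
print("total integrand evaluations:     ", evals["integrand"])
```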

    Multidimensional integration through Markovian sampling under steered function morphing: a physical guise from statistical mechanics

    Full text link
    We present a computational strategy for the evaluation of multidimensional integrals on hyper-rectangles, based on Markovian stochastic exploration of the integration domain while the integrand is morphed from an appropriate initial profile. Thanks to an abstract reformulation of Jarzynski's equality, used in stochastic thermodynamics to evaluate free-energy profiles along selected reaction coordinates via non-equilibrium transformations, the original integral can be cast as the exponential average of the distribution of the pseudo-work (which we may term "computational work") involved in the function morphing, and this average is straightforward to evaluate. Several tests illustrate the basic implementation of the idea and show its performance in terms of computational time, accuracy and precision. A formulation for integrand functions with zeros and possible sign changes is also presented. We stress that our usage of Jarzynski's equality shares similarities with a practice already known in statistics as Annealed Importance Sampling (AIS), when applied to the computation of normalizing constants of distributions. In a sense, we here dress AIS in its "physical" counterpart borrowed from statistical mechanics.
    Comment: 3 figures; Supplementary Material (pdf file named "JEMDI_SI.pdf")
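    A minimal AIS / Jarzynski-style sketch of the exponential-average idea, not the paper's scheme: the uniform density on the hyper-rectangle is morphed into a non-negative integrand along a geometric path, random-walk Metropolis steps explore each intermediate density, and the integral is recovered as the exponential average of the accumulated "pseudo-work". The integrand, annealing schedule, and step size are illustrative assumptions.
```python
# Sketch: estimate Z = \int_{[0,1]^d} g(x) dx for a positive g by annealing the
# uniform density into g via g^beta and averaging exp(accumulated log-weights).
import numpy as np

rng = np.random.default_rng(2)
d, n_walkers, n_steps, step = 3, 500, 200, 0.15
betas = np.linspace(0.0, 1.0, n_steps + 1)       # morphing schedule 0 -> 1

def log_g(x):                                    # log of the (positive) integrand
    return -np.sum((x - 0.5) ** 2, axis=1) / 0.02

x = rng.random((n_walkers, d))                   # start from the uniform density
log_w = np.zeros(n_walkers)                      # accumulated log pseudo-work

for k in range(n_steps):
    # weight increment when the density is morphed from beta_k to beta_{k+1}
    log_w += (betas[k + 1] - betas[k]) * log_g(x)
    # one random-walk Metropolis move targeting g(x)^{beta_{k+1}} on [0,1]^d
    prop = x + step * rng.normal(size=x.shape)
    inside = np.all((prop >= 0.0) & (prop <= 1.0), axis=1)
    log_acc = betas[k + 1] * (log_g(prop) - log_g(x))
    accept = inside & (np.log(rng.random(n_walkers)) < log_acc)
    x[accept] = prop[accept]

Z_hat = np.exp(log_w).mean()                     # exponential average of pseudo-work
print("AIS estimate of the integral:", Z_hat)
```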

    Non-Intrusive, High-Dimensional Uncertainty Quantification for the Robust Simulation of Fluid Flows

    Get PDF
    Uncertainty Quantification is the field of mathematics that focuses on the propagation and influence of uncertainties in models. Mostly, complex numerical models with uncertain parameters or uncertain model properties are considered. Several methods exist to model the uncertain parameters of numerical models. Stochastic Collocation is a method that samples the random input parameters using a deterministic procedure and then interpolates or integrates the unknown quantity of interest from the samples. Because moments of the distribution of the unknown quantity are essentially integrals of that quantity, the main focus is on calculating integrals. Calculating an integral from samples can be done efficiently using a quadrature or cubature rule. Both sample the integration domain in a deterministic way, and several algorithms to determine the samples exist, each with its own advantages and disadvantages. In the one-dimensional case a quadrature rule is proposed that has all relevant advantages (positive weights, nested points and dependency on the input distribution). The principle of the introduced quadrature rule can also be applied in a multi-dimensional setting. However, if negative weights are allowed in the multi-dimensional case, a cubature rule can be generated that has a very small number of points compared to the methods described in the literature. The new method uses the fact that the tensor product of several quadrature rules has many points with the same weight that can be considered as one.
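    A minimal sketch of the stochastic collocation baseline the abstract builds on (not the reduced cubature rule it proposes): a moment of a quantity of interest with independent uniform inputs is an integral, approximated here by a full tensor product of one-dimensional Gauss-Legendre rules; the printed point count n_1d**d shows why tensor grids become expensive and why reduced or nested rules matter. The quantity of interest and rule sizes are made-up assumptions.
```python
# Sketch: mean of q(X), X ~ U(0,1)^d independent, by a tensor-product cubature.
import numpy as np
from itertools import product

def tensor_gauss_legendre(q, d, n_1d):
    nodes, weights = np.polynomial.legendre.leggauss(n_1d)   # rule on [-1, 1]
    nodes = 0.5 * (nodes + 1.0)                              # map nodes to [0, 1]
    weights = 0.5 * weights                                  # weights now sum to 1
    total, n_points = 0.0, 0
    for idx in product(range(n_1d), repeat=d):               # full tensor grid
        x = nodes[list(idx)]
        w = np.prod(weights[list(idx)])
        total += w * q(x)
        n_points += 1
    return total, n_points

q = lambda x: np.cos(x.sum())            # hypothetical quantity of interest
for d in (2, 4, 6):
    est, n_pts = tensor_gauss_legendre(q, d, n_1d=5)
    print(f"d={d}: E[q] ~ {est:.6f} using {n_pts} collocation points")
```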

    Scalable Environment for Quantification of Uncertainty and Optimization in Industrial Applications (SEQUOIA)

    Full text link
    Peer Reviewed: https://deepblue.lib.umich.edu/bitstream/2027.42/143027/1/6.2017-1327.pd

    Nonlinear approximation in bounded orthonormal product bases

    Full text link
    We present a dimension-incremental algorithm for the nonlinear approximation of high-dimensional functions in an arbitrary bounded orthonormal product basis. Our goal is to detect a suitable truncation of the basis expansion of the function, where the corresponding basis support is assumed to be unknown. Our method is based on point evaluations of the considered function and adaptively builds an index set of a suitable basis support such that the approximately largest basis coefficients are still included. For this purpose, the algorithm only needs a suitable search space that contains the desired index set. Various minor modifications of the algorithm are also discussed throughout the work, which may yield additional benefits in several situations. For the first time, we provide a proof of a detection guarantee for such an index set in the function approximation case, under certain assumptions on the sub-methods used within our algorithm, which can serve as a foundation for similar statements in various other situations. Numerical examples in different settings underline the effectiveness and accuracy of our method.
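    A naive illustration of the underlying task, not the paper's dimension-incremental algorithm: given a candidate search space of frequencies and point evaluations of f, estimate the coefficients in a bounded orthonormal product basis (the Fourier basis on [0,1]^d here) and keep the indices of the approximately largest ones as the detected support. The test function, search space, and sample size are illustrative assumptions.
```python
# Sketch: coefficient estimation and thresholding over a candidate index set in
# the Fourier product basis, which is orthonormal for the uniform measure.
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
d, n_samples, keep = 3, 4000, 5

def f(x):   # sparse trigonometric test function with a few active frequencies
    return (2.0 * np.cos(2 * np.pi * x[:, 0])
            + 1.0 * np.sin(4 * np.pi * x[:, 1])
            + 0.5 * np.cos(2 * np.pi * x.sum(axis=1)))

search_space = list(product(range(-3, 4), repeat=d))     # candidate frequencies
x = rng.random((n_samples, d))
y = f(x)

# c_k ~ (1/n) sum_j f(x_j) * conj(exp(2 pi i <k, x_j>)) by orthonormality
coeffs = {}
for k in search_space:
    phi = np.exp(2j * np.pi * (x @ np.array(k)))
    coeffs[k] = np.mean(y * np.conj(phi))

detected = sorted(coeffs, key=lambda k: abs(coeffs[k]), reverse=True)[:keep]
for k in detected:
    print(k, f"|c_k| ~ {abs(coeffs[k]):.3f}")
```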