Quasi-Monte Carlo methods for high-dimensional integration: the standard (weighted Hilbert space) setting and beyond
This paper is a contemporary review of quasi-Monte Carlo (QMC) methods, that is, equal-weight rules for the approximate evaluation of high-dimensional integrals over the unit cube [0,1]^s. It first introduces the by-now standard setting of weighted Hilbert spaces of functions with square-integrable mixed first derivatives, and then indicates alternative settings, such as non-Hilbert spaces, that can sometimes be more suitable. Original contributions include the extension of the fast component-by-component (CBC) construction of lattice rules that achieve the optimal convergence order (a rate of almost O(1/n), where n is the number of points, independently of dimension) to so-called "product and order dependent" (POD) weights, as seen in some recent applications. Although the paper has a strong focus on lattice rules, the function space settings are applicable to all QMC methods. Furthermore, the error analysis and construction of lattice rules can be adapted to polynomial lattice rules from the family of digital nets.
doi:10.1017/S144618111200007
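For concreteness, a rank-1 lattice rule with generating vector z is the equal-weight average of the integrand at the points {k z / n}, k = 0, ..., n-1. The following is a minimal sketch, not code from the paper; the function name and the assumption of a vectorized integrand are ours:

import numpy as np

def lattice_rule(f, n, z):
    """Equal-weight rank-1 lattice rule for integrals over [0, 1]^d.

    f : vectorized integrand accepting an (n, d) array of points
    n : number of points
    z : integer generating vector of length d (e.g. from a CBC construction)
    """
    k = np.arange(n)[:, None]          # indices 0, ..., n-1
    points = (k * z[None, :] % n) / n  # fractional parts {k z / n}
    return f(points).mean()            # equal-weight average

The equal weights 1/n are what make this a QMC rule; the weighted-space error analysis concerns how well the deterministic points, i.e. the generating vector z, are chosen.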
Construction of lattice rules for multiple integration based on a weighted discrepancy
High-dimensional integrals arise in a variety of areas, including quantum physics, the physics and chemistry of molecules, statistical mechanics and, more recently, financial applications. In order to approximate multidimensional integrals, one may use Monte Carlo methods, in which the quadrature points are generated randomly, or quasi-Monte Carlo methods, in which the points are generated deterministically. One particular class of quasi-Monte Carlo methods for multivariate integration is represented by lattice rules. The lattice rules constructed throughout this thesis allow good approximations to integrals of functions belonging to certain weighted function spaces. These function spaces were proposed as an explanation as to why integrals in many variables appear to be successfully approximated although the standard theory indicates that the number of quadrature points required for reasonable accuracy would be astronomical because of the large number of variables.
The purpose of this thesis is to contribute theoretical results regarding the construction of lattice rules for multiple integration. We consider both lattice rules for integrals over the unit cube and lattice rules suitable for integrals over Euclidean space. The research reported throughout the thesis is devoted to finding the generating vector required to produce lattice rules that have what is termed a low weighted discrepancy. In simple terms, the discrepancy is a measure of the uniformity of the distribution of the quadrature points or, in other settings, a worst-case error. One of the assumptions used in these weighted function spaces is that the variables are arranged in decreasing order of their importance; the assignment of weights in this situation results in so-called product weights. In other applications it is rather the importance of groups of variables that matters. This situation is modelled by using function spaces in which the weights are general. In the weighted settings mentioned above, the quality of the lattice rules is assessed by the weighted discrepancy mentioned earlier. Under appropriate conditions on the weights, the lattice rules constructed here produce a convergence rate of the error that ranges from O(n^{-1/2}) to the (believed) optimal O(n^{-1+δ}) for any δ > 0, with the involved constant independent of the dimension.
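To illustrate the construction problem in the product-weight setting, here is a naive component-by-component (CBC) sketch. It is not taken from the thesis: it greedily picks each component of the generating vector to minimize the standard squared worst-case error of the weighted Korobov space with smoothness α = 2, whose one-dimensional kernel is 2π²B₂(x) with the Bernoulli polynomial B₂(x) = x² − x + 1/6.

import math
import numpy as np

def cbc_lattice(n, d, gamma):
    """Naive O(d n^2) CBC construction of a generating vector z for an
    n-point rank-1 lattice rule with product weights gamma[0..d-1].
    (The fast CBC algorithm reduces the cost to O(d n log n) via FFTs.)"""
    b2 = lambda x: x * x - x + 1.0 / 6.0  # Bernoulli polynomial B_2
    k = np.arange(n)
    prod = np.ones(n)                     # kernel product over chosen dimensions
    z = []
    for j in range(d):
        best, best_err = None, math.inf
        for cand in range(1, n):
            if math.gcd(cand, n) != 1:    # components must be coprime to n
                continue
            x = (k * cand % n) / n
            # squared worst-case error with candidate component appended:
            # e^2 = -1 + (1/n) sum_k prod_j (1 + gamma_j * 2 pi^2 B_2({k z_j / n}))
            err = np.mean(prod * (1.0 + gamma[j] * 2.0 * math.pi**2 * b2(x))) - 1.0
            if err < best_err:
                best, best_err = cand, err
        z.append(best)
        x = (k * best % n) / n
        prod *= 1.0 + gamma[j] * 2.0 * math.pi**2 * b2(x)
    return np.array(z)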
Entropy, Randomization, Derandomization, and Discrepancy
The star discrepancy is a measure of how uniformly distributed a finite point set is in the d-dimensional unit cube. It is related to high-dimensional numerical integration of certain function classes as expressed by the Koksma-Hlawka inequality. A sharp version of this inequality states that the worst-case error of approximating the integral of functions from the unit ball of some Sobolev space by an equal-weight cubature is exactly the star discrepancy of the set of sample points. In many applications, as, e.g., in physics, quantum chemistry or finance, it is essential to approximate high-dimensional integrals. Thus with regard to the Koksma-Hlawka inequality the following three questions are very important: (i) What are good bounds with explicitly given dependence on the dimension d for the smallest possible discrepancy of any n-point set for moderate n? (ii) How can we construct point sets efficiently that satisfy such bounds? (iii) How can we calculate the discrepancy of given point sets efficiently? We want to discuss these questions and survey and explain some approaches to tackle them relying on metric entropy, randomization, and derandomization.
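Question (iii) can at least be made concrete in small dimensions: the supremum defining the star discrepancy is attained over boxes whose upper corners lie on the grid spanned by the point coordinates, which yields the following brute-force sketch (our naming; the cost grows like n^d, illustrating why efficient calculation is a serious question):

import itertools
import numpy as np

def star_discrepancy(points):
    """Brute-force star discrepancy of an (n, d) point set in [0, 1)^d.
    Enumerates all grid corners, so only feasible for tiny n and d."""
    n, d = points.shape
    grids = [np.unique(np.append(points[:, j], 1.0)) for j in range(d)]
    disc = 0.0
    for corner in itertools.product(*grids):
        y = np.asarray(corner)
        vol = np.prod(y)                                # volume of the box [0, y)
        open_frac = np.all(points < y, axis=1).mean()   # fraction in [0, y)
        closed_frac = np.all(points <= y, axis=1).mean()  # fraction in [0, y]
        disc = max(disc, vol - open_frac, closed_frac - vol)
    return disc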
Infinite-Dimensional Integration on Weighted Hilbert Spaces
We study the numerical integration problem for functions with infinitely many variables. The functions we want to integrate are from a reproducing kernel Hilbert space which is endowed with a weighted norm. We study the worst case ε-complexity which is defined as the minimal cost among all algorithms whose worst case error over the Hilbert space unit ball is at most ε. Here we assume that the cost of evaluating a function depends polynomially on the number of active variables. The infinite-dimensional integration problem is (polynomially) tractable if the ε-complexity is bounded by a constant times a power of 1/ε. The smallest such power is called the exponent of tractability. First we study finite-order weights. We provide improved lower bounds for the exponent of tractability for general finite-order weights and improved upper bounds for three newly defined classes of finite-order weights. The constructive upper bounds are obtained by multilevel algorithms that use for each level quasi-Monte Carlo integration points whose projections onto specific sets of coordinates exhibit a small discrepancy. The newly defined finite-intersection weights model the situation where each group of variables interacts with at most ρ other groups of variables, where ρ is some fixed number. For these weights we obtain a sharp upper bound. This is the first class of weights for which the exact exponent of tractability is known for any possible decay of the weights and for any polynomial degree of the cost function. For the other two classes of finite-order weights our upper bounds are sharp if, e.g., the decay of the weights is fast or slow enough. We extend our analysis to the case of arbitrary weights. In particular, from our results for finite-order weights, we conclude a lower bound on the exponent of tractability for arbitrary weights and a constructive upper bound for product weights. Although we confine ourselves for simplicity to explicit upper bounds for four classes of weights, we stress that our multilevel algorithm together with our default choice of quasi-Monte Carlo points is applicable to any class of weights.
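In symbols (standard notation from the tractability literature, spelled out here for convenience): if cost(A) denotes the cost of an algorithm A and e(A) its worst case error over the unit ball, then

comp(ε) = min { cost(A) : e(A) ≤ ε },

and the problem is polynomially tractable if comp(ε) ≤ C ε^{-p} for some constants C, p > 0 and all ε in (0, 1); the exponent of tractability is the infimum of all such p.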
Randomized Algorithms for High-Dimensional Integration and Approximation
We prove upper and lower bounds for the error of the randomized Smolyak algorithm and provide a thorough case study of applying the randomized Smolyak algorithm, with building blocks given by quadratures based on scrambled nets, to the integration of functions from Haar wavelet spaces. Moreover, we discuss different notions of negative dependence of randomized point sets, which find applications in discrepancy theory and randomized quasi-Monte Carlo integration.
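Scrambled nets are one randomization of QMC point sets; the simplest randomization to sketch is random shifting, which already shows the key benefit of randomized quasi-Monte Carlo, namely unbiased estimates with a practical error estimate. A minimal sketch under our own naming, not the thesis's construction:

import numpy as np

def shifted_lattice_rule(f, n, z, n_shifts, rng=None):
    """Randomly shifted rank-1 lattice rule: returns an unbiased estimate of
    the integral of f over [0, 1]^d plus a standard-error estimate obtained
    from the independent random shifts."""
    rng = np.random.default_rng() if rng is None else rng
    k = np.arange(n)[:, None]
    base = (k * z[None, :] % n) / n               # unshifted lattice points
    est = np.array([f((base + rng.random(z.size)) % 1.0).mean()
                    for _ in range(n_shifts)])    # one estimate per shift
    return est.mean(), est.std(ddof=1) / np.sqrt(n_shifts)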
Quadrature methods for elliptic PDEs with random diffusion
In this thesis, we consider elliptic boundary value problems with random diffusion coefficients. Such equations arise in many engineering applications, for example, in the modelling of subsurface flows in porous media, such as rocks. To describe the subsurface flow, it is convenient to use Darcy's law. The key ingredient in this approach is the hydraulic conductivity. In most cases, this hydraulic conductivity is approximated from a discrete number of measurements and, hence, it is common to endow it with uncertainty, i.e. to model it as a random field. This random field is usually characterized by its mean field and its covariance function. Naturally, this randomness propagates through the model, so that the solution is a random field as well.
The thesis at hand is concerned with the effective computation of statistical quantities of this random solution, like the expectation, the variance, and higher-order moments.
In order to compute these quantities, a suitable representation of the random field which describes the hydraulic conductivity needs to be computed from the mean field and the covariance function. This is realized by the Karhunen-Loève expansion, which separates the spatial variable and the stochastic variable. In general, the number of random variables and spatial functions used in this expansion is infinite and needs to be truncated appropriately. The number of random variables required depends on the smoothness of the covariance function and grows with the desired accuracy.
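A minimal Nyström-type discretization of the truncated Karhunen-Loève expansion on a one-dimensional grid may clarify this step; the function names and the equispaced-grid assumption are ours, not the thesis's:

import numpy as np

def truncated_kl(cov, grid, m):
    """Truncated Karhunen-Loeve expansion
        a(x) ~ mean(x) + sum_{j=1}^{m} sqrt(lam_j) * phi_j(x) * xi_j
    with uncorrelated random variables xi_j.

    cov  : covariance function c(x, y), evaluated pointwise
    grid : equispaced points discretizing the spatial domain
    m    : number of retained terms (the truncation dimension)
    Returns the m largest eigenvalues lam and L2-normalized modes phi."""
    h = grid[1] - grid[0]                    # quadrature weight of the grid
    C = cov(grid[:, None], grid[None, :])    # covariance matrix on the grid
    lam, vecs = np.linalg.eigh(h * C)        # discretized eigenproblem
    order = np.argsort(lam)[::-1][:m]        # keep the m largest eigenvalues
    return lam[order], vecs[:, order] / np.sqrt(h)

The decay of the eigenvalues lam_j reflects the smoothness of the covariance function, which is exactly why smoother covariances require fewer random variables for a given accuracy.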
Since the solution also depends on these random variables, each moment of the solution appears as a high-dimensional Bochner integral over the image space of the collection of random variables. This integral has to be approximated by quadrature methods, where each function evaluation corresponds to a PDE solve. In this thesis, Monte Carlo, quasi-Monte Carlo, Gaussian tensor product, and Gaussian sparse grid quadratures are analyzed to deal with this high-dimensional integration problem.
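In symbols (our notation, not the thesis's): writing y = (y_1, y_2, ...) for the random variables from the truncated expansion, u(x, y) for the solution, and Γ for the image space of the random variables, the p-th moment takes the form

E[u^p](x) = ∫_Γ u(x, y)^p dμ(y) ≈ Σ_{i=1}^{n} w_i u(x, y_i)^p,

where every evaluation u(·, y_i) requires one PDE solve; Monte Carlo and quasi-Monte Carlo use equal weights w_i = 1/n, while the Gaussian rules use non-uniform weights.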
In the first part, the necessary regularity requirements on the integrand and its powers are provided in order to guarantee convergence of the different methods. It turns out that all powers of the solution depend, like the solution itself, anisotropically on the different random variables, which means in this case that there is a decaying dependence on the different random variables. This dependence can be used to overcome, at least up to a certain extent, the curse of dimensionality of the quadrature problem. This is reflected in the proofs of the convergence rates of the different quadrature methods, which can be found in the second part of this thesis.
The last part is concerned with multilevel quadrature approaches to keep the computational cost low. As mentioned earlier, we need to solve a partial differential equation for each quadrature point. The common approach is to apply a finite element approximation scheme on a refinement level which corresponds to the desired accuracy. Hence, the total computational cost is given by the number of quadrature points times the cost to compute one finite element solution on a relatively high refinement level. The multilevel idea is to use a telescoping sum decomposition of the quantity of interest with respect to different spatial refinement levels, and to use quadrature methods with different accuracies for each summand. Roughly speaking, the multilevel approach spends a lot of quadrature points on a low spatial refinement level and only a few on the higher refinement levels. This reduces the computational complexity, but requires further regularity of the integrand, which is proven for the problems considered in this thesis.
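A minimal sketch of this telescoping idea, with plain Monte Carlo as the quadrature on each level; solve_fe and sample_y are hypothetical stand-ins, and the thesis also considers quasi-Monte Carlo and sparse grid variants:

def multilevel_estimate(solve_fe, sample_y, n_per_level):
    """Multilevel estimate of E[u] via the telescoping sum
        E[u_L] = E[u_0] + sum_{l=1}^{L} E[u_l - u_{l-1}],
    where u_l = solve_fe(y, l) is the finite element solution on level l.
    n_per_level[l] is large for coarse levels and small for fine ones."""
    estimate = 0.0
    for level, n in enumerate(n_per_level):
        acc = 0.0
        for _ in range(n):
            y = sample_y()                          # one parameter draw
            correction = solve_fe(y, level)
            if level > 0:
                correction -= solve_fe(y, level - 1)  # coupled coarse solve
            acc += correction
        estimate += acc / n                         # quadrature on this level
    return estimate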
- …