Local and Dimension Adaptive Sparse Grid Interpolation and Quadrature
In this paper we present a locally and dimension-adaptive sparse grid method
for interpolation and integration of high-dimensional functions with
discontinuities. The proposed algorithm combines the strengths of the
generalised sparse grid algorithm and hierarchical surplus-guided local
adaptivity. A high-degree basis is used to obtain a high-order method which,
given sufficient smoothness, performs significantly better than its
piecewise-linear counterpart. The underlying generalised sparse grid algorithm
greedily selects the dimensions and variable interactions that contribute most
to the variability of a function. The hierarchical surplus of points within the
sparse grid is used as an error criterion for local refinement with the aim of
concentrating computational effort within rapidly varying or discontinuous
regions. This approach limits the number of points that are invested in
'unimportant' dimensions and regions within the high-dimensional domain. We
show the utility of the proposed method for non-smooth functions with hundreds
of variables.
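As a rough illustration of surplus-guided local refinement, here is a one-dimensional sketch with a piecewise-linear basis (our own example, not the authors' implementation; all names are ours). The hierarchical surplus at a new point is its function value minus the value predicted by the coarser linear interpolant, and only regions where the surplus exceeds a tolerance are refined further:

```python
import numpy as np

def refine(f, a, b, fa, fb, tol, depth, pts):
    """Recursively add midpoints where the hierarchical surplus is large."""
    m = 0.5 * (a + b)
    fm = f(m)
    pts[m] = fm
    # Hierarchical surplus: actual value minus the coarse linear prediction.
    surplus = fm - 0.5 * (fa + fb)
    if abs(surplus) > tol and depth < 20:
        refine(f, a, m, fa, fm, tol, depth + 1, pts)
        refine(f, m, b, fm, fb, tol, depth + 1, pts)

def adaptive_interpolant(f, tol=1e-3):
    pts = {0.0: f(0.0), 1.0: f(1.0)}
    refine(f, 0.0, 1.0, pts[0.0], pts[1.0], tol, 0, pts)
    xs = np.array(sorted(pts))
    ys = np.array([pts[x] for x in xs])
    return xs, ys

f = lambda x: np.abs(x - 0.3)          # non-smooth: kink at x = 0.3
xs, ys = adaptive_interpolant(f)
xt = np.linspace(0.0, 1.0, 1001)
err = float(np.max(np.abs(np.interp(xt, xs, ys) - f(xt))))
```

Points cluster around the kink: wherever f is locally linear the surplus is exactly zero and refinement stops immediately, so the grid stays coarse in the 'unimportant' regions.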
Smoothing the payoff for efficient computation of Basket option prices
We consider the problem of pricing basket options in a multivariate
Black-Scholes or Variance Gamma model. From a numerical point of view, pricing
such options corresponds to moderate- and high-dimensional numerical
integration problems with non-smooth integrands. Due to this lack of
regularity, higher-order numerical integration techniques may not be directly
applicable, requiring the use of methods such as Monte Carlo that are designed
for non-regular problems. We propose to use the inherent smoothing property of the
density of the underlying in the above models to mollify the payoff function by
means of an exact conditional expectation. The resulting conditional
expectation is unbiased and yields a smooth integrand, which is amenable to the
efficient use of adaptive sparse grid cubature. Numerical examples indicate
that the high-order method may perform orders of magnitude faster than
Monte Carlo or quasi-Monte Carlo in dimensions up to 35.
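The smoothing idea can be illustrated on a toy two-factor problem (our own example, not the models or payoffs from the paper): for independent standard normals X and Y, the kinked payoff max(aX + bY − K, 0) admits an exact conditional expectation over X, leaving a smooth one-dimensional integrand in y that high-order quadrature handles easily:

```python
import numpy as np
from math import erf, exp, sqrt, pi

def Phi(x):  # standard normal cdf
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def phi(x):  # standard normal pdf
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

# Toy 2-factor "basket" payoff: max(a*X + b*Y - K, 0), X, Y ~ N(0,1) iid.
a, b, K = 1.0, 1.0, 0.5

def smoothed(y):
    """Exact conditional expectation E[max(a*X + b*y - K, 0) | Y = y].

    For Z ~ N(0,1) and c > 0: E[max(c*Z + d, 0)] = c*phi(d/c) + d*Phi(d/c).
    The kink is integrated out, leaving a smooth function of y.
    """
    d = b * y - K
    return a * phi(d / a) + d * Phi(d / a)

# Outer integral over y via Gauss-Hermite quadrature:
# ∫ h(y) φ(y) dy = (1/√π) Σ w_i h(√2 x_i)
x, w = np.polynomial.hermite.hermgauss(20)
price_gh = float(np.dot(w, [smoothed(sqrt(2.0) * xi) for xi in x]) / sqrt(pi))

# Reference closed form: a*X + b*Y ~ N(0, s^2) with s = sqrt(a^2 + b^2)
s = sqrt(a * a + b * b)
price_exact = s * phi(K / s) - K * Phi(-K / s)

# Plain Monte Carlo on the kinked payoff, for comparison
rng = np.random.default_rng(0)
X, Y = rng.standard_normal((2, 100_000))
price_mc = float(np.maximum(a * X + b * Y - K, 0.0).mean())
```

With 20 quadrature nodes the smoothed integrand is resolved to near machine precision, while plain Monte Carlo on the kinked payoff is limited to the usual slow statistical convergence.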
Spectral tensor-train decomposition
The accurate approximation of high-dimensional functions is an essential task
in uncertainty quantification and many other fields. We propose a new function
approximation scheme based on a spectral extension of the tensor-train (TT)
decomposition. We first define a functional version of the TT decomposition and
analyze its properties. We obtain results on the convergence of the
decomposition, revealing links between the regularity of the function, the
dimension of the input space, and the TT ranks. We also show that the
regularity of the target function is preserved by the univariate functions
(i.e., the "cores") comprising the functional TT decomposition. This result
motivates an approximation scheme employing polynomial approximations of the
cores. For functions with appropriate regularity, the resulting
spectral tensor-train decomposition combines the favorable
dimension-scaling of the TT decomposition with the spectral convergence rate of
polynomial approximations, yielding efficient and accurate surrogates for
high-dimensional functions. To construct these decompositions, we use the
sampling algorithm TT-DMRG-cross to obtain the TT decomposition of
tensors resulting from suitable discretizations of the target function. We
assess the performance of the method on a range of numerical examples: a
modified set of Genz functions with dimension up to , and functions with
mixed Fourier modes or with local features. We observe significant improvements
in performance over an anisotropic adaptive Smolyak approach. The method is
also used to approximate the solution of an elliptic PDE with random input
data. The open source software and examples presented in this work are
available online.

Comment: 33 pages, 19 figures
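In two dimensions the functional TT decomposition reduces to a "functional SVD", which suggests a minimal sketch of the spectral idea (our own illustration; TT-DMRG-cross and the authors' software are not used here): sample the function on a tensor grid, truncate the SVD at the numerical rank, and fit each discrete singular vector with a polynomial:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

f = lambda x, y: np.sin(x + y)   # exactly rank 2: sin(x)cos(y) + cos(x)sin(y)

n = 32
g = np.cos(np.pi * np.arange(n) / (n - 1))   # Chebyshev points on [-1, 1]
A = f(g[:, None], g[None, :])                # samples on the tensor grid

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-12 * s[0]))            # numerical rank

# Fit each discrete singular vector ("core") with a Chebyshev polynomial.
deg = 20
ucoef = [C.chebfit(g, U[:, k], deg) for k in range(r)]
vcoef = [C.chebfit(g, Vt[k, :], deg) for k in range(r)]

def surrogate(x, y):
    """Spectral low-rank surrogate: sum_k s_k u_k(x) v_k(y)."""
    return sum(s[k] * C.chebval(x, ucoef[k]) * C.chebval(y, vcoef[k])
               for k in range(r))

rng = np.random.default_rng(1)
xt, yt = rng.uniform(-1.0, 1.0, (2, 200))    # off-grid test points
err = float(np.max(np.abs(surrogate(xt, yt) - f(xt, yt))))
```

Because the singular vectors of a smooth low-rank function are themselves smooth, the polynomial fits of the cores converge at a spectral rate, which is the mechanism the spectral TT decomposition exploits in higher dimensions.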