Efficient adaptive integration of functions with sharp gradients and cusps in n-dimensional parallelepipeds
In this paper, we study the efficient numerical integration of functions with
sharp gradients and cusps. An adaptive integration algorithm is presented that
systematically improves the accuracy of the integration of a set of functions.
The algorithm is based on a divide and conquer strategy and is independent of
the location of the sharp gradient or cusp. The error analysis establishes the
rate of convergence obtained for a function with a derivative discontinuity at
a point. Two applications of the adaptive integration
scheme are studied. First, we use the adaptive quadratures for the integration
of the regularized Heaviside function---a strongly localized function that is
used for modeling sharp gradients. Then, the adaptive quadratures are employed
in the enriched finite element solution of the all-electron Coulomb problem in
crystalline diamond. The source term and enrichment functions of this problem
have sharp gradients and cusps at the nuclei. We show that the optimal rate of
convergence is obtained with only a marginal increase in the number of
integration points with respect to the pure finite element solution with the
same number of elements. The adaptive integration scheme is simple, robust, and
directly applicable to any generalized finite element method employing
enrichments with sharp local variations or cusps in n-dimensional
parallelepiped elements.
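
As a hedged illustration of the divide-and-conquer idea described in this abstract (a one-dimensional sketch only, not the authors' n-dimensional scheme), a recursive adaptive quadrature in Python might look as follows; the Simpson-based error estimate and tolerance halving are standard choices, not taken from the paper:

# Minimal sketch of divide-and-conquer adaptive quadrature (1-D illustration only;
# the paper's scheme targets n-dimensional parallelepipeds and is more general).
import numpy as np

def adaptive_integrate(f, a, b, tol=1e-8, depth=0, max_depth=40):
    """Recursively bisect [a, b] until a Simpson-vs-composite-Simpson
    error estimate falls below the tolerance."""
    m = 0.5 * (a + b)
    h = b - a
    # Simpson's rule on the whole interval and on its two halves.
    s1 = h / 6.0 * (f(a) + 4.0 * f(m) + f(b))
    lm, rm = 0.5 * (a + m), 0.5 * (m + b)
    s2 = h / 12.0 * (f(a) + 4.0 * f(lm) + 2.0 * f(m) + 4.0 * f(rm) + f(b))
    if abs(s2 - s1) < 15.0 * tol or depth >= max_depth:
        return s2 + (s2 - s1) / 15.0          # Richardson-style correction
    # Otherwise split the interval and recurse, halving the tolerance.
    return (adaptive_integrate(f, a, m, tol / 2.0, depth + 1, max_depth) +
            adaptive_integrate(f, m, b, tol / 2.0, depth + 1, max_depth))

# Example: a cusp at x = 0.3 (its location need not be known in advance).
val = adaptive_integrate(lambda x: np.sqrt(abs(x - 0.3)), 0.0, 1.0)
print(val)
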
Sparse Quadrature for High-Dimensional Integration with Gaussian Measure
In this work we analyze the dimension-independent convergence property of an
abstract sparse quadrature scheme for numerical integration of functions of
high-dimensional parameters with Gaussian measure. Under certain assumptions of
the exactness and the boundedness of univariate quadrature rules as well as the
regularity of the parametric functions with respect to the parameters, we
obtain an algebraic convergence rate with respect to the number of indices in
the quadrature rule, with an exponent that is independent of the number of the
parameter dimensions. Moreover, we
propose both an a-priori and an a-posteriori scheme for the construction of a
practical sparse quadrature rule, and perform numerical experiments to
demonstrate their dimension-independent convergence rates.
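
A minimal sketch of this kind of construction, assuming a total-degree (a-priori) index set and univariate Gauss-Hermite rules normalized for the standard Gaussian measure; the index set, level-to-node mapping, and test function are illustrative choices, not the paper's:

# Hedged sketch of an a-priori sparse quadrature for Gaussian measure, built
# from tensorized differences of univariate Gauss-Hermite rules.
import itertools
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def gh_rule(level):
    """Univariate Gauss-Hermite rule with `level` nodes, weights normalized
    to integrate against the standard normal density."""
    if level == 0:
        return np.array([]), np.array([])
    x, w = hermegauss(level)
    return x, w / np.sqrt(2.0 * np.pi)

def difference_rule(level):
    """Difference Delta_l = Q_l - Q_{l-1} as (nodes, signed weights)."""
    x1, w1 = gh_rule(level)
    x0, w0 = gh_rule(level - 1)
    return np.concatenate([x1, x0]), np.concatenate([w1, -w0])

def sparse_quadrature(f, dim, total_level):
    """Sum of tensorized difference rules over the total-degree index set
    {nu >= 1 componentwise : sum(nu_j - 1) <= total_level - 1}."""
    total = 0.0
    for nu in itertools.product(range(1, total_level + 1), repeat=dim):
        if sum(n - 1 for n in nu) > total_level - 1:
            continue
        rules = [difference_rule(n) for n in nu]
        for pairs in itertools.product(*[zip(x, w) for x, w in rules]):
            point = np.array([p[0] for p in pairs])
            weight = np.prod([p[1] for p in pairs])
            total += weight * f(point)
    return total

# Example: E[exp(0.1 * sum(x))] for x ~ N(0, I_5) equals exp(5 * 0.01 / 2).
approx = sparse_quadrature(lambda x: np.exp(0.1 * x.sum()), dim=5, total_level=4)
print(approx, np.exp(5 * 0.01 / 2))
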
A continuous analogue of the tensor-train decomposition
We develop new approximation algorithms and data structures for representing
and computing with multivariate functions using the functional tensor-train
(FT), a continuous extension of the tensor-train (TT) decomposition. The FT
represents functions using a tensor-train ansatz by replacing the
three-dimensional TT cores with univariate matrix-valued functions. The main
contribution of this paper is a framework to compute the FT that employs
adaptive approximations of univariate fibers, and that is not tied to any
tensorized discretization. The algorithm can be coupled with any univariate
linear or nonlinear approximation procedure. We demonstrate that this approach
can generate multivariate function approximations that are several orders of
magnitude more accurate, for the same cost, than those based on the
conventional approach of compressing the coefficient tensor of a tensor-product
basis. Our approach is in the spirit of other continuous computation packages
such as Chebfun, and yields an algorithm which requires the computation of
"continuous" matrix factorizations such as the LU and QR decompositions of
vector-valued functions. To support these developments, we describe continuous
versions of an approximate maximum-volume cross approximation algorithm and of
a rounding algorithm that re-approximates an FT by one of lower ranks. We
demonstrate that our technique improves accuracy and robustness, compared to TT
and quantics-TT approaches with fixed parameterizations, of high-dimensional
integration, differentiation, and approximation of functions with local
features such as discontinuities and other nonlinearities.
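
To make the FT ansatz concrete, a minimal evaluation sketch follows; the rank-2 cores for sin(x + y) are a hypothetical example, and the construction machinery described in the abstract (cross approximation, rounding) is omitted:

# Minimal sketch of evaluating a functional tensor-train (FT): each core is a
# univariate matrix-valued function, and a point evaluation is a product of the
# core matrices.
import numpy as np

def ft_eval(cores, x):
    """cores[k] maps a scalar x_k to an (r_{k-1} x r_k) matrix, with
    r_0 = r_d = 1; returns the scalar f(x_1, ..., x_d)."""
    result = np.eye(1)
    for core, xk in zip(cores, x):
        result = result @ core(xk)
    return result.item()

# Hypothetical rank-2 FT for f(x, y) = sin(x + y) = sin(x)cos(y) + cos(x)sin(y).
cores = [
    lambda x: np.array([[np.sin(x), np.cos(x)]]),      # 1 x 2 core
    lambda y: np.array([[np.cos(y)], [np.sin(y)]]),    # 2 x 1 core
]
print(ft_eval(cores, [0.3, 0.4]), np.sin(0.7))
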
Adaptive stochastic Galerkin FEM for lognormal coefficients in hierarchical tensor representations
Stochastic Galerkin methods for non-affine coefficient representations are
known to cause major difficulties from theoretical and numerical points of
view. In this work, an adaptive Galerkin FE method for linear parametric PDEs
with lognormal coefficients discretized in Hermite chaos polynomials is
derived. It employs problem-adapted function spaces to ensure solvability of
the variational formulation. The inherently high computational complexity of
the parametric operator is made tractable by using hierarchical tensor
representations. For this, a new tensor train format of the lognormal
coefficient is derived and verified numerically. The central novelty is the
derivation of a reliable residual-based a posteriori error estimator. This can
be regarded as a unique feature of stochastic Galerkin methods. It allows for
an adaptive algorithm to steer the refinements of the physical mesh and the
anisotropic Wiener chaos polynomial degrees. For the evaluation of the error
estimator to become feasible, a numerically efficient tensor format
discretization is developed. Benchmark examples with unbounded lognormal
coefficient fields illustrate the performance of the proposed Galerkin
discretization and the fully adaptive algorithm.
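
As a small, hedged illustration of expanding a lognormal factor in Hermite chaos (a single random variable only, far from the paper's tensor-train setting), the coefficients can be computed by Gauss-Hermite quadrature and checked against the standard generating-function identity:

# Hermite (Wiener) chaos coefficients of exp(sigma * xi), xi ~ N(0, 1),
# computed by Gauss-Hermite quadrature with normalized probabilists' Hermite
# polynomials.  Parameters below are arbitrary illustrative choices.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt

sigma, degree = 0.5, 8
nodes, weights = hermegauss(64)
weights = weights / np.sqrt(2.0 * np.pi)          # normalize to N(0,1) density

coeffs = []
for n in range(degree + 1):
    # Normalized probabilists' Hermite polynomial He_n / sqrt(n!).
    basis = np.zeros(n + 1)
    basis[n] = 1.0
    Hn = hermeval(nodes, basis) / sqrt(factorial(n))
    coeffs.append(np.sum(weights * np.exp(sigma * nodes) * Hn))

# Closed form via the Hermite generating function: exp(sigma^2/2) sigma^n / sqrt(n!).
exact = [np.exp(sigma**2 / 2) * sigma**n / sqrt(factorial(n))
         for n in range(degree + 1)]
print(np.allclose(coeffs, exact))
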
Enabling High-Dimensional Hierarchical Uncertainty Quantification by ANOVA and Tensor-Train Decomposition
Hierarchical uncertainty quantification can reduce the computational cost of
stochastic circuit simulation by employing spectral methods at different
levels. This paper presents an efficient framework to simulate hierarchically
some challenging stochastic circuits/systems that include high-dimensional
subsystems. Due to the high parameter dimensionality, it is challenging to both
extract surrogate models at the low level of the design hierarchy and to handle
them in the high-level simulation. In this paper, we develop an efficient
ANOVA-based stochastic circuit/MEMS simulator to efficiently extract the
surrogate models at the low level. In order to avoid the curse of
dimensionality, we employ tensor-train decomposition at the high level to
construct the basis functions and Gauss quadrature points. As a demonstration,
we verify our algorithm on a stochastic oscillator with four MEMS capacitors
and 184 random parameters. This challenging example is simulated efficiently by
our simulator in only 10 minutes in MATLAB on a regular personal computer.
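
A hedged sketch of the basic anchored-ANOVA (cut-HDMR) idea underlying such surrogates, restricted to first-order terms and a toy test function; the paper's adaptive ANOVA-based extraction is considerably more involved:

# First-order anchored-ANOVA surrogate: the model is approximated by its value
# at an anchor point plus one-dimensional corrections.  In practice the 1-D
# terms would be precomputed and interpolated rather than re-evaluated.
import numpy as np

def build_anova1_surrogate(f, anchor):
    """Return a callable g(x) ~ f(x) built from 1-D slices of f through `anchor`."""
    anchor = np.asarray(anchor, dtype=float)
    f0 = f(anchor)

    def surrogate(x):
        x = np.asarray(x, dtype=float)
        value = f0
        for j in range(anchor.size):
            xj = anchor.copy()
            xj[j] = x[j]                  # vary one coordinate at a time
            value += f(xj) - f0           # first-order correction term
        return value

    return surrogate

# Example on a nearly additive test function in 10 dimensions.
f = lambda x: np.sum(np.sin(x)) + 0.01 * x[0] * x[1]
g = build_anova1_surrogate(f, anchor=np.zeros(10))
x = np.full(10, 0.3)
print(f(x), g(x))
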
Spectral tensor-train decomposition
The accurate approximation of high-dimensional functions is an essential task
in uncertainty quantification and many other fields. We propose a new function
approximation scheme based on a spectral extension of the tensor-train (TT)
decomposition. We first define a functional version of the TT decomposition and
analyze its properties. We obtain results on the convergence of the
decomposition, revealing links between the regularity of the function, the
dimension of the input space, and the TT ranks. We also show that the
regularity of the target function is preserved by the univariate functions
(i.e., the "cores") comprising the functional TT decomposition. This result
motivates an approximation scheme employing polynomial approximations of the
cores. For functions with appropriate regularity, the resulting
\textit{spectral tensor-train decomposition} combines the favorable
dimension-scaling of the TT decomposition with the spectral convergence rate of
polynomial approximations, yielding efficient and accurate surrogates for
high-dimensional functions. To construct these decompositions, we use the
sampling algorithm \texttt{TT-DMRG-cross} to obtain the TT decomposition of
tensors resulting from suitable discretizations of the target function. We
assess the performance of the method on a range of numerical examples: a
modified set of high-dimensional Genz functions, and functions with
mixed Fourier modes or with local features. We observe significant improvements
in performance over an anisotropic adaptive Smolyak approach. The method is
also used to approximate the solution of an elliptic PDE with random input
data. The open source software and examples presented in this work are
available online.
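
A two-dimensional, hedged illustration of the spectral tensor-train idea (a plain SVD stands in for TT-DMRG-cross, and Chebyshev fits approximate the core fibers); the function, rank, and polynomial degree are arbitrary demonstration choices:

# Discretize the function on a grid, factor the matrix (the d = 2 analogue of
# TT-SVD), then approximate each core fiber by a polynomial in the continuous
# variable, giving a "spectral" continuous approximation.
import numpy as np
from numpy.polynomial import chebyshev as C

f = lambda x, y: np.exp(-x * y) * np.cos(x + y)
n, rank, deg = 64, 6, 20
# Chebyshev points avoid Runge problems when fitting the fibers.
x = np.cos(np.pi * np.arange(n) / (n - 1))
F = f(x[:, None], x[None, :])

# Rank-truncated SVD: F ~ U diag(s) V^T, i.e. two TT cores in two dimensions.
U, s, Vt = np.linalg.svd(F)
left, right = U[:, :rank] * s[:rank], Vt[:rank, :]

# Fit each core fiber with a Chebyshev series -> continuous (spectral) cores.
left_coef = [C.chebfit(x, left[:, k], deg) for k in range(rank)]
right_coef = [C.chebfit(x, right[k, :], deg) for k in range(rank)]

def ft_approx(xv, yv):
    return sum(C.chebval(xv, left_coef[k]) * C.chebval(yv, right_coef[k])
               for k in range(rank))

xt, yt = 0.37, -0.58
print(f(xt, yt), ft_approx(xt, yt))
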
Sparse grid quadrature on products of spheres
We examine sparse grid quadrature on weighted tensor products (WTP) of
reproducing kernel Hilbert spaces on products of the unit sphere, in the case
of worst case quadrature error for rules with arbitrary quadrature weights. We
describe a dimension adaptive quadrature algorithm based on an algorithm of
Hegland (2003), and also formulate a version of Wasilkowski and Wozniakowski's
WTP algorithm (1999), here called the WW algorithm. We prove that the dimension
adaptive algorithm is optimal in the sense of Dantzig (1957) and therefore no
greater in cost than the WW algorithm. Both algorithms therefore have the
optimal asymptotic rate of convergence given by Theorem 3 of Wasilkowski and
Wozniakowski (1999). A numerical example shows that, even though the asymptotic
convergence rate is optimal, if the dimension weights decay slowly enough, and
the dimensionality of the problem is large enough, the initial convergence of
the dimension adaptive algorithm can be slow.
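
A hedged sketch of the greedy benefit-to-cost selection that underlies dimension-adaptive quadrature and is optimal for the relaxed knapsack problem in the sense of Dantzig (1957); the candidate increments, error reductions, and costs below are hypothetical placeholders rather than quantities from the paper:

# Candidate index-set increments are ranked by (error reduction) / (added cost)
# and accepted in that order within a cost budget.  In the actual algorithm the
# benefits and costs come from the WTP space weights and univariate rule sizes.
def greedy_selection(candidates, budget):
    """candidates: list of (name, error_reduction, cost); returns the accepted
    names, largest benefit-to-cost ratio first, within the cost budget."""
    ranked = sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)
    chosen, spent = [], 0
    for name, benefit, cost in ranked:
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen, spent

# Hypothetical increments (index, estimated error reduction, number of new nodes).
candidates = [("(2,1)", 0.30, 3), ("(1,2)", 0.25, 3), ("(3,1)", 0.08, 9),
              ("(2,2)", 0.12, 9), ("(1,3)", 0.05, 9)]
print(greedy_selection(candidates, budget=20))
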