7,151 research outputs found
Tensor field interpolation with PDEs
We present a unified framework for interpolation and regularisation of scalar- and tensor-valued images. This framework is based on elliptic partial differential equations (PDEs) and allows rotationally invariant models. Since it does not require a regular grid, it can also be used for tensor-valued scattered data interpolation and for tensor field inpainting. By choosing suitable differential operators, interpolation methods using radial basis functions are covered. Our experiments show that a novel interpolation technique based on anisotropic diffusion with a diffusion tensor should be favoured: it outperforms interpolants with radial basis functions, it allows discontinuity-preserving interpolation with no additional oscillations, and it respects positive semidefiniteness of the input tensor data.
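The abstract's core idea, PDE-based interpolation of image data from scattered known values, can be illustrated with a minimal scalar sketch. This is an assumption-laden simplification: it uses homogeneous (linear) diffusion, i.e. it fills unknown pixels by solving the Laplace equation with Jacobi iterations, whereas the paper favours anisotropic diffusion with a diffusion tensor; all names here are illustrative.

```python
import numpy as np

def diffusion_inpaint(img, known, n_iter=2000):
    """Fill unknown pixels of `img` by steady-state homogeneous diffusion
    (Laplace equation), iterated with a Jacobi scheme.
    `known` is a boolean mask; known pixels act as Dirichlet data."""
    u = img.astype(float).copy()
    u[~known] = u[known].mean()          # neutral initial guess
    for _ in range(n_iter):
        # average of the four axis neighbours (replicated at the border,
        # which imposes zero-flux Neumann conditions there)
        p = np.pad(u, 1, mode="edge")
        avg = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])
        u[~known] = avg[~known]          # update only the unknown pixels
    return u

# interpolate between two fixed columns: values 0 on the left, 8 on the right
img = np.zeros((5, 9))
img[:, -1] = 8.0
known = np.zeros_like(img, dtype=bool)
known[:, 0] = known[:, -1] = True
u = diffusion_inpaint(img, known)       # converges to a linear ramp u[i, j] = j
```

The steady state of homogeneous diffusion between two Dirichlet columns is the linear ramp, which is why diffusion-based inpainting produces smooth, oscillation-free fill-ins; the anisotropic variant in the paper additionally steers the smoothing along structures.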
Polynomial Chaos Expansion of random coefficients and the solution of stochastic partial differential equations in the Tensor Train format
We apply the Tensor Train (TT) decomposition to construct the tensor product
Polynomial Chaos Expansion (PCE) of a random field, to solve the stochastic
elliptic diffusion PDE with the stochastic Galerkin discretization, and to
compute some quantities of interest (mean, variance, exceedance probabilities).
We assume that the random diffusion coefficient is given as a smooth
transformation of a Gaussian random field. In this case, the PCE is delivered
by a complicated formula, which lacks an analytic TT representation. To
construct its TT approximation numerically, we develop the new block TT cross
algorithm, a method that computes the whole TT decomposition from a few
evaluations of the PCE formula. The new method is conceptually similar to the
adaptive cross approximation in the TT format, but is more efficient when
several tensors must be stored in the same TT representation, which is the case
for the PCE. In addition, we demonstrate how to assemble the stochastic Galerkin
matrix and how to compute the solution of the elliptic equation and its
post-processing while staying in the TT format.
We compare our technique with the traditional sparse polynomial chaos and the
Monte Carlo approaches. In the tensor product polynomial chaos, the polynomial
degree is bounded for each random variable independently. This provides higher
accuracy than the sparse polynomial set or the Monte Carlo method, but the
cardinality of the tensor product set grows exponentially with the number of
random variables. However, when the PCE coefficients are implicitly
approximated in the TT format, the computations with the full tensor product
polynomial set become possible. In the numerical experiments, we confirm that
the new methodology is competitive in a wide range of parameters, especially
where high accuracy and high polynomial degrees are required.
Comment: This is a major revision of the manuscript arXiv:1406.2816 with significantly extended numerical experiments. Some unused material is removed.
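The TT format at the heart of this abstract can be made concrete with a minimal sketch. The paper builds the decomposition with a block cross algorithm from a few entry evaluations; as an assumption-labelled stand-in, the sketch below uses the classical TT-SVD, which forms the full tensor and compresses it by successive truncated SVDs.

```python
import numpy as np

def tt_svd(a, eps=1e-10):
    """Decompose a full tensor into TT cores via successive truncated SVDs
    (the classical TT-SVD). The block cross algorithm in the paper builds
    the same format from a few tensor entries instead of the full array."""
    dims = a.shape
    cores, r = [], 1
    m = a.reshape(r * dims[0], -1)
    for k in range(len(dims) - 1):
        u, s, vt = np.linalg.svd(m, full_matrices=False)
        rk = max(1, int(np.sum(s > eps * s[0])))       # truncation rank
        cores.append(u[:, :rk].reshape(r, dims[k], rk))
        m = (s[:rk, None] * vt[:rk]).reshape(rk * dims[k + 1], -1)
        r = rk
    cores.append(m.reshape(r, dims[-1], 1))
    return cores

def tt_full(cores):
    """Contract TT cores back into the full tensor."""
    t = cores[0]
    for c in cores[1:]:
        t = np.tensordot(t, c, axes=(-1, 0))
    return t.reshape(t.shape[1:-1])

# a(i,j,k) = x_i + y_j + z_k is a sum of separable terms, so its TT ranks are (2, 2)
x, y, z = np.arange(4.0), np.arange(5.0), np.arange(6.0)
a = x[:, None, None] + y[None, :, None] + z[None, None, :]
cores = tt_svd(a)
err = np.max(np.abs(tt_full(cores) - a))
```

The point the abstract exploits is the same: a structured tensor with an astronomically large index set can be stored and manipulated through a handful of small three-way cores.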
Adaptive stochastic Galerkin FEM for lognormal coefficients in hierarchical tensor representations
Stochastic Galerkin methods for non-affine coefficient representations are
known to cause major difficulties from theoretical and numerical points of
view. In this work, an adaptive Galerkin FE method for linear parametric PDEs
with lognormal coefficients discretized in Hermite chaos polynomials is
derived. It employs problem-adapted function spaces to ensure solvability of
the variational formulation. The inherently high computational complexity of
the parametric operator is made tractable by using hierarchical tensor
representations. For this, a new tensor train format of the lognormal
coefficient is derived and verified numerically. The central novelty is the
derivation of a reliable residual-based a posteriori error estimator. This can
be regarded as a unique feature of stochastic Galerkin methods. It allows for
an adaptive algorithm to steer the refinements of the physical mesh and the
anisotropic Wiener chaos polynomial degrees. For the evaluation of the error
estimator to become feasible, a numerically efficient tensor format
discretization is developed. Benchmark examples with unbounded lognormal
coefficient fields illustrate the performance of the proposed Galerkin
discretization and the fully adaptive algorithm.
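The lognormal-coefficient Hermite chaos discretization mentioned above has a well-known one-dimensional building block that can be checked numerically. The sketch below is illustrative only (sigma and the truncation K are arbitrary choices, not taken from the paper): it verifies the classical closed-form probabilists' Hermite expansion of exp(sigma*xi), whose multidimensional tensor-product analogue is what the paper compresses in hierarchical tensor formats.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval

# 1-D Hermite (probabilists') chaos expansion of a lognormal variable
# a(xi) = exp(sigma * xi), xi ~ N(0,1); the classical closed form is
#   a = exp(sigma^2 / 2) * sum_k (sigma^k / k!) He_k(xi).
sigma, K = 0.5, 12
fact = np.array([math.factorial(k) for k in range(K)], dtype=float)
coeffs = np.exp(sigma**2 / 2) * sigma ** np.arange(K) / fact

# compare the truncated chaos series with exp(sigma * xi) pointwise
xi = np.linspace(-3.0, 3.0, 7)
err = np.max(np.abs(hermeval(xi, coeffs) - np.exp(sigma * xi)))
```

The coefficients decay factorially, which is what makes Hermite chaos attractive for lognormal fields; the difficulty the abstract addresses is that in many stochastic dimensions the coefficient tensor of such expansions must itself be compressed.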
A Dynamically Adaptive Sparse Grid Method for Quasi-Optimal Interpolation of Multidimensional Analytic Functions
In this work we develop a dynamically adaptive sparse grids (SG) method for
quasi-optimal interpolation of multidimensional analytic functions defined over
a product of one dimensional bounded domains. The goal of such an approach is to
construct an interpolant in space that corresponds to the "best -terms"
based on sharp a priori estimates of the polynomial coefficients. In the past, SG
methods have been successful in achieving this, with a traditional construction
that relies on the solution to a Knapsack problem: only the most profitable
hierarchical surpluses are added to the SG. However, this approach requires
additional sharp estimates related to the size of the analytic region and the
norm of the interpolation operator, i.e., the Lebesgue constant. Instead, we
present an iterative SG procedure that adaptively refines an estimate of the
region and accounts for the effects of the Lebesgue constant. Our approach does
not require any a priori knowledge of the analyticity or operator norm, is
easily generalized to both affine and non-affine analytic functions, and can be
applied to sparse grids built from one dimensional rules with arbitrary growth
of the number of nodes. In several numerical examples, we utilize our
dynamically adaptive SG to interpolate quantities of interest related to the
solutions of parametrized elliptic and hyperbolic PDEs, and compare the
performance of our quasi-optimal interpolant to several alternative SG schemes.
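The surplus-driven refinement that underlies such adaptive sparse-grid constructions is easiest to see in one dimension, where it reduces to adaptive dyadic refinement of a piecewise-linear interpolant. The sketch below is a minimal 1-D analogue under that assumption; it omits the multidimensional combination, the analyticity estimates, and the Lebesgue-constant correction that are the paper's actual contributions.

```python
import numpy as np

def adaptive_hier_interp(f, tol=1e-3, max_level=12):
    """Adaptively refine a piecewise-linear interpolant on [0, 1]:
    an interval is split only while its hierarchical surplus (f at the
    midpoint minus the current linear interpolant there) exceeds tol."""
    nodes = [0.0, 1.0]
    vals = [f(0.0), f(1.0)]
    stack = [(0.0, 1.0)]                 # intervals still to examine
    while stack:
        a, b = stack.pop()
        m = 0.5 * (a + b)
        surplus = f(m) - 0.5 * (f(a) + f(b))   # hierarchical surplus at m
        if abs(surplus) > tol and (b - a) > 2.0 ** -max_level:
            nodes.append(m)
            vals.append(f(m))
            stack += [(a, m), (m, b)]
    order = np.argsort(nodes)
    return np.array(nodes)[order], np.array(vals)[order]

f = lambda x: np.exp(-5.0 * (x - 0.3) ** 2)   # bump: nodes cluster near x = 0.3
xs, ys = adaptive_hier_interp(f)
t = np.linspace(0.0, 1.0, 1001)
err = np.max(np.abs(np.interp(t, xs, ys) - f(t)))
```

Only the most "profitable" intervals, those with large surpluses, get refined, which is the knapsack-style selection the abstract describes; the paper's scheme additionally learns the analyticity region and operator norm on the fly instead of assuming them.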
Approximation of tensor fields on surfaces of arbitrary topology based on local Monge parametrizations
We introduce a new method, the Local Monge Parametrizations (LMP) method, to
approximate tensor fields on general surfaces given by a collection of local
parametrizations, e.g. as in finite element or NURBS surface representations.
Our goal is to use this method to solve numerically tensor-valued partial
differential equations (PDE) on surfaces. Previous methods use scalar
potentials to numerically describe vector fields on surfaces, at the expense of
requiring higher-order derivatives of the approximated fields and being limited
to simply connected surfaces, or represent tangential tensor fields as tensor
fields in 3D subjected to constraints, thus increasing the essential number of
degrees of freedom. In contrast, the LMP method uses an optimal number of
degrees of freedom to represent a tensor, is general with regards to the
topology of the surface, and does not increase the order of the PDEs governing
the tensor fields. The main idea is to construct maps between the element
parametrizations and a local Monge parametrization around each node. We test
the LMP method by approximating in a least-squares sense different vector and
tensor fields on simply connected and genus-1 surfaces. Furthermore, we apply
the LMP method to two physical models on surfaces, involving a tension-driven
flow (vector-valued PDE) and nematic ordering (tensor-valued PDE). The LMP
method thus solves the long-standing problem of the interpolation of tensors on
general surfaces with an optimal number of degrees of freedom.
Comment: 16 pages, 6 figures.
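The elementary step of a Monge parametrization, writing the surface locally as a height function over the tangent plane at a node, can be sketched directly. The code below is a hedged illustration, not the paper's method: it fits a quadratic Monge patch to nearby surface points by least squares, using an arbitrarily constructed tangent frame and sample points on a unit sphere as the test surface.

```python
import numpy as np

def monge_fit(pts, center, normal):
    """Fit a quadratic Monge patch
        h(u, v) = c0 + c1 u + c2 v + c3 u^2 + c4 u v + c5 v^2
    to points near `center`, where (u, v) are coordinates in the tangent
    plane and h is the height along `normal`."""
    n = normal / np.linalg.norm(normal)
    # build an orthonormal tangent basis (t1, t2) completing n
    t1 = np.cross(n, [1.0, 0.0, 0.0])
    if np.linalg.norm(t1) < 1e-8:                 # n was parallel to e_x
        t1 = np.cross(n, [0.0, 1.0, 0.0])
    t1 /= np.linalg.norm(t1)
    t2 = np.cross(n, t1)
    d = pts - center
    u, v, h = d @ t1, d @ t2, d @ n
    A = np.stack([np.ones_like(u), u, v, u**2, u * v, v**2], axis=1)
    c, *_ = np.linalg.lstsq(A, h, rcond=None)
    return c

# sample points on the unit sphere near the north pole
rng = np.random.default_rng(0)
uv = 0.2 * (rng.random((200, 2)) - 0.5)
pts = np.column_stack([uv[:, 0], uv[:, 1],
                       np.sqrt(1.0 - uv[:, 0]**2 - uv[:, 1]**2)])
c = monge_fit(pts, np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]))
# for the unit sphere the quadratic part is -(u^2 + v^2)/2, so c3 + c5 ~ -1
```

Because the patch lives in the tangent plane, a tangential tensor at the node needs exactly its intrinsic number of components, the "optimal number of degrees of freedom" claimed in the abstract; the paper's contribution is gluing such local descriptions consistently across element parametrizations.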
Sparse approximation of multilinear problems with applications to kernel-based methods in UQ
We provide a framework for the sparse approximation of multilinear problems
and show that several problems in uncertainty quantification fit within this
framework. In these problems, the value of a multilinear map has to be
approximated using approximations of different accuracy and computational work
of the arguments of this map. We propose and analyze a generalized version of
Smolyak's algorithm, which provides sparse approximation formulas with
convergence rates that mitigate the curse of dimension that appears in
multilinear approximation problems with a large number of arguments. We apply
the general framework to response surface approximation and optimization under
uncertainty for parametric partial differential equations using kernel-based
approximation. The theoretical results are supplemented by numerical
experiments.
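The idea of a generalized Smolyak algorithm for multilinear maps can be sketched in the simplest bilinear case. Under the illustrative assumption that the map is B(f, g) = (∫f)(∫g) and the argument approximations are trapezoid rules of increasing level, the sparse formula sums products of quadrature *increments* over a simplex of levels rather than forming one expensive high-level product:

```python
import numpy as np

def trap(f, level):
    """Composite trapezoid rule on [0, 1] with 2**level + 1 nodes."""
    x = np.linspace(0.0, 1.0, 2**level + 1)
    y = f(x)
    return (y.sum() - 0.5 * (y[0] + y[-1])) * (x[1] - x[0])

def smolyak_bilinear(f, g, L):
    """Sparse (Smolyak-type) approximation of B(f, g) = (int f)(int g):
    sum products of the increments Delta_i = Q_i - Q_{i-1} over the
    simplex i + j <= L, instead of the single product Q_L[f] * Q_L[g]."""
    dQf = [trap(f, 0)] + [trap(f, i) - trap(f, i - 1) for i in range(1, L + 1)]
    dQg = [trap(g, 0)] + [trap(g, j) - trap(g, j - 1) for j in range(1, L + 1)]
    return sum(dQf[i] * dQg[j] for i in range(L + 1) for j in range(L + 1 - i))

f = lambda x: np.exp(x)
g = lambda x: np.cos(x)
approx = smolyak_bilinear(f, g, 8)
exact = (np.e - 1.0) * np.sin(1.0)
err = abs(approx - exact)
```

The omitted terms have levels i + j > L, i.e. products of two small increments, which is why the sparse sum retains most of the accuracy while the dominant work goes into mixed low/high-level evaluations; this is the mechanism that mitigates the curse of dimension when the map has many arguments.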
Symmetry without Symmetry: Numerical Simulation of Axisymmetric Systems using Cartesian Grids
We present a new technique for the numerical simulation of axisymmetric
systems. This technique avoids the coordinate singularities which often arise
when cylindrical or polar-spherical coordinate finite difference grids are
used, particularly in simulating tensor partial differential equations like
those of 3+1 numerical relativity. For a system axisymmetric about the z axis,
the basic idea is to use a 3-dimensional Cartesian (x,y,z) coordinate grid
which covers (say) the y=0 plane, but is only one
finite-difference-molecule width thick in the y direction. The field variables
in the central y=0 grid plane can be updated using normal (x,y,z)-coordinate
finite differencing, while those in the y ≠ 0 grid planes can be computed
from those in the central plane by using the axisymmetry assumption and
interpolation. We demonstrate the effectiveness of the approach on a set of
fully nonlinear test computations in 3+1 numerical general relativity,
involving both black holes and collapsing gravitational waves.
Comment: 17 pages, 4 figures.
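The filling of the off-plane grid layers described above can be sketched for a scalar field. This is a hedged toy version with assumed names and an assumed test field (a Gaussian): for an axisymmetric field, the value at a point (x, y_off) equals the central-plane value at radius r = sqrt(x² + y_off²), obtained by 1-D interpolation; the actual scheme must also rotate the components of tensor fields, which this scalar sketch skips.

```python
import numpy as np

def fill_off_plane(x, f_central, y_off):
    """Fill a y = y_off grid plane of an axisymmetric scalar field from its
    values on the central y = 0 plane (f_central sampled at the grid
    coordinates x, for one fixed z): the value at (x, y_off) is the
    central-plane value at radius r = sqrt(x^2 + y_off^2)."""
    r = np.sqrt(x**2 + y_off**2)
    # interpolate in |x| on the central plane (the field is even in x)
    return np.interp(r, x[x >= 0], f_central[x >= 0])

x = np.linspace(-2.0, 2.0, 81)
f = np.exp(-x**2)                 # axisymmetric test field, f(r) = exp(-r^2)
dy = x[1] - x[0]                  # slab thickness: one grid spacing in y
f_plus = fill_off_plane(x, f, dy)
exact = np.exp(-(x**2 + dy**2))   # exact values on the y = dy plane
err = np.max(np.abs(f_plus - exact))
```

With the off-plane layers reconstructed this way, ordinary 3-D Cartesian finite differencing can be applied in the central plane, which is exactly how the thin-slab construction sidesteps the axis singularity of cylindrical coordinates.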