A literature survey of low-rank tensor approximation techniques
In recent years, low-rank tensor approximation has been established as a new
tool in scientific computing for addressing large-scale linear and multilinear
algebra problems that would be intractable by classical techniques. This survey
gives an overview of the literature on current developments in this area, with
an emphasis on function-related tensors.
Solving optimal control problems governed by random Navier-Stokes equations using low-rank methods
Many problems in computational science and engineering are simultaneously
characterized by the following challenging issues: uncertainty, nonlinearity,
nonstationarity and high dimensionality. Existing numerical techniques for such
models would typically require considerable computational and storage
resources. This is the case, for instance, for an optimization problem governed
by time-dependent Navier-Stokes equations with uncertain inputs. In particular,
the stochastic Galerkin finite element method often leads to a prohibitively
high dimensional saddle-point system with tensor product structure. In this
paper, we approximate the solution by the low-rank Tensor Train decomposition,
and present a numerically efficient algorithm to solve the optimality equations
directly in the low-rank representation. We show that the solution of the
vorticity minimization problem with a distributed control admits a
representation with ranks that depend modestly on model and discretization
parameters even for high Reynolds numbers. For lower Reynolds numbers this is
also the case for a boundary control. This opens the way for a reduced-order
modeling of the stochastic optimal flow control with a moderate cost at all
stages.
Comment: 29 pages
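The tensor-product structure of such saddle-point systems is exactly what low-rank methods exploit: a Kronecker-structured operator can be applied without ever assembling the full matrix. A minimal NumPy sketch of this standard identity (the function name and shapes are illustrative, not taken from the paper):

```python
import numpy as np

def kron_matvec(A, B, x):
    """Apply (A kron B) to x without forming the Kronecker product.

    With row-major vectorization, (A kron B) vec(X) = vec(A X B^T),
    so one large matvec becomes two small matrix products.
    """
    p, q = A.shape[1], B.shape[1]
    X = x.reshape(p, q)
    return (A @ X @ B.T).reshape(-1)
```

For a d-dimensional operator the same identity applies factor by factor, which is the starting point for applying a tensor-product operator directly to iterates kept in a low-rank format.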
Polynomial Chaos Expansion of random coefficients and the solution of stochastic partial differential equations in the Tensor Train format
We apply the Tensor Train (TT) decomposition to construct the tensor product
Polynomial Chaos Expansion (PCE) of a random field, to solve the stochastic
elliptic diffusion PDE with the stochastic Galerkin discretization, and to
compute some quantities of interest (mean, variance, exceedance probabilities).
We assume that the random diffusion coefficient is given as a smooth
transformation of a Gaussian random field. In this case, the PCE is delivered
by a complicated formula, which lacks an analytic TT representation. To
construct its TT approximation numerically, we develop the new block TT cross
algorithm, a method that computes the whole TT decomposition from a few
evaluations of the PCE formula. The new method is conceptually similar to the
adaptive cross approximation in the TT format, but is more efficient when
several tensors must be stored in the same TT representation, which is the case
for the PCE. In addition, we demonstrate how to assemble the stochastic
Galerkin matrix and how to compute the solution of the elliptic equation and
its post-processing while staying in the TT format.
We compare our technique with the traditional sparse polynomial chaos and the
Monte Carlo approaches. In the tensor product polynomial chaos, the polynomial
degree is bounded for each random variable independently. This provides higher
accuracy than the sparse polynomial set or the Monte Carlo method, but the
cardinality of the tensor product set grows exponentially with the number of
random variables. However, when the PCE coefficients are implicitly
approximated in the TT format, the computations with the full tensor product
polynomial set become possible. In the numerical experiments, we confirm that
the new methodology is competitive in a wide range of parameters, especially
where high accuracy and high polynomial degrees are required.
Comment: This is a major revision of the manuscript arXiv:1406.2816 with
significantly extended numerical experiments. Some unused material is removed.
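For intuition about the format itself, here is a minimal TT-SVD in NumPy: a sweep of truncated SVDs factors a full tensor into a train of three-way cores. This is a sketch of the generic TT-SVD, not the paper's block cross algorithm, which is specifically designed to avoid ever forming the full tensor:

```python
import numpy as np

def tt_svd(tensor, max_rank, tol=1e-12):
    # Sweep of truncated SVDs: each step splits off one three-way core
    # G_k of shape (r_{k-1}, n_k, r_k) and carries the remainder forward.
    shape, d = tensor.shape, tensor.ndim
    cores, r_prev = [], 1
    mat = tensor.reshape(shape[0], -1)
    for k in range(d - 1):
        mat = mat.reshape(r_prev * shape[k], -1)
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = max(1, min(max_rank, int(np.sum(s > tol * s[0]))))
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        mat = s[:r, None] * Vt[:r]   # remainder, to be split further
        r_prev = r
    cores.append(mat.reshape(r_prev, shape[-1], 1))
    return cores

def tt_to_full(cores):
    # Contract the train back into a full tensor (verification only).
    res = cores[0]
    for core in cores[1:]:
        res = np.tensordot(res, core, axes=(res.ndim - 1, 0))
    return res.reshape([c.shape[1] for c in cores])
```

A tensor of low TT rank is then stored as d small cores instead of n^d entries, which is what makes computations with the full tensor product polynomial set feasible.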
Geometric methods on low-rank matrix and tensor manifolds
In this chapter we present numerical methods for low-rank matrix and tensor problems that explicitly make use of the geometry of rank-constrained matrix and tensor spaces. We focus on two types of problems. The first are optimization problems, such as matrix and tensor completion, solving linear systems, and eigenvalue problems. Such problems can be solved by numerical optimization on manifolds, called Riemannian optimization methods. We explain the basic elements of differential geometry needed to apply such methods efficiently to rank-constrained matrix and tensor spaces. The second are ordinary differential equations defined on matrix and tensor spaces. We show how their solution can be approximated by the dynamical low-rank principle, and discuss several numerical integrators that rely in an essential way on geometric properties characteristic of sets of low-rank matrices and tensors.
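The interplay of ambient steps and retractions can be illustrated in the matrix case. The following toy sketch (not the chapter's algorithms) minimizes a quadratic over rank-r matrices using a truncated-SVD retraction, the simplest though not the cheapest choice:

```python
import numpy as np

def truncated_svd(Y, rank):
    # best rank-`rank` approximation (Eckart-Young), used here as a retraction
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

def low_rank_descent(A, rank, steps=10, lr=1.0):
    # Minimize 0.5 * ||X - A||_F^2 over matrices of rank <= `rank`:
    # take a gradient step in the ambient space, then retract back onto
    # the manifold. For this quadratic objective the unit step is exact,
    # so the iteration reaches the best rank-`rank` approximation of A.
    rng = np.random.default_rng(0)
    X = truncated_svd(rng.standard_normal(A.shape), rank)
    for _ in range(steps):
        grad = X - A                            # Euclidean gradient
        X = truncated_svd(X - lr * grad, rank)  # step + retraction
    return X
```

Riemannian methods proper project the gradient onto the tangent space first and use cheaper retractions, but the pattern of "step, then return to the manifold" is the same.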
Low rank surrogates for polymorphic fields with application to fuzzy-stochastic partial differential equations
We consider a general form of fuzzy-stochastic PDEs depending on the interaction of probabilistic
and non-probabilistic ("possibilistic") influences. Such a combined modelling
of aleatoric and epistemic uncertainties can, for instance, be applied
beneficially in an engineering context for real-world applications where
probabilistic modelling and expert knowledge have to be accounted for. We
examine existence and well-definedness of polymorphic PDEs in appropriate
function
spaces. The fuzzy-stochastic dependence is described in a high-dimensional parameter space,
thus easily leading to an exponential complexity in practical computations.
To alleviate this severe obstacle in practice, a compressed low-rank approximation of the problem
formulation and the solution is derived. This is based on the Hierarchical Tucker format which
is constructed with solution samples by a non-intrusive tensor reconstruction algorithm. The performance
of the proposed model order reduction approach is demonstrated with two examples.
One of these is the ubiquitous groundwater flow model with a Karhunen-Loève
coefficient field, which is generalized by a fuzzy correlation length.
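For intuition, the plain Tucker format, of which the Hierarchical Tucker format is a tree-structured refinement, can be computed by the higher-order SVD. A minimal sketch with illustrative names, not the non-intrusive reconstruction algorithm used in the paper:

```python
import numpy as np

def hosvd(T, ranks):
    # Higher-order SVD: the mode-k factor consists of the leading left
    # singular vectors of the mode-k unfolding; the core is T contracted
    # with all factors.
    factors = []
    for k, r in enumerate(ranks):
        unfolding = np.moveaxis(T, k, 0).reshape(T.shape[k], -1)
        U = np.linalg.svd(unfolding, full_matrices=False)[0][:, :r]
        factors.append(U)
    core = T
    for U in factors:
        # contract the current leading mode with U; the new axis goes to
        # the back, so after d contractions the modes are back in order
        core = np.tensordot(core, U, axes=(0, 0))
    return core, factors

def tucker_to_full(core, factors):
    # Expand the compressed representation back to a full tensor.
    full = core
    for U in factors:
        full = np.tensordot(full, U, axes=(0, 1))
    return full
```

The Hierarchical Tucker format then replaces the single (r_1 x ... x r_d) core, whose size still grows exponentially in d, by a binary tree of small transfer tensors.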
Preconditioned low-rank Riemannian optimization for linear systems with tensor product structure
The numerical solution of partial differential equations on high-dimensional
domains gives rise to computationally challenging linear systems. When using
standard discretization techniques, the size of the linear system grows
exponentially with the number of dimensions, making the use of classic
iterative solvers infeasible. During the last few years, low-rank tensor
approaches have been developed that make it possible to mitigate this curse of
dimensionality by exploiting the underlying structure of the linear operator.
In this work, we focus on tensors represented in the Tucker and tensor train
formats. We propose two preconditioned gradient methods on the corresponding
low-rank tensor manifolds: A Riemannian version of the preconditioned
Richardson method as well as an approximate Newton scheme based on the
Riemannian Hessian. For the latter, considerable attention is given to the
efficient solution of the resulting Newton equation. In numerical experiments,
we compare the efficiency of our Riemannian algorithms with other established
tensor-based approaches such as a truncated preconditioned Richardson method
and the alternating linear scheme. The results show that our approximate
Riemannian Newton scheme is significantly faster in cases when the application
of the linear operator is expensive.
Comment: 24 pages, 8 figures
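A truncated Richardson method of the kind used as a baseline here can be illustrated in the matrix case: take a Richardson step, then compress the iterate back to low rank so it never leaves the format. A toy sketch for a Lyapunov-structured system, without the preconditioning the paper focuses on:

```python
import numpy as np

def truncate(X, rank):
    # compress the iterate back to rank <= `rank` (best approximation)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

def truncated_richardson(A, B, rank, omega, iters=600):
    # Richardson iteration for the Lyapunov-structured system
    # A X + X A^T = B (a Kronecker-structured linear system in disguise),
    # truncating the iterate after every step.
    X = np.zeros_like(B)
    for _ in range(iters):
        R = B - A @ X - X @ A.T            # residual in the matrix format
        X = truncate(X + omega * R, rank)
    return X
```

In the tensor-format solvers the abstract compares against, the residual and the truncation are carried out directly on the low-rank factors rather than on full matrices.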
New Discretization Methods for the Numerical Approximation of PDEs
The construction and mathematical analysis of numerical methods for PDEs is a fundamental area of modern applied mathematics. Among the various techniques that have been proposed in the past, some, in particular finite element methods, have been exceptionally successful in a range of applications. A number of important challenges remain, however, including the optimal adaptive finite element approximation of solutions to transport-dominated diffusion problems, the efficient numerical approximation of parametrized families of PDEs, and the efficient numerical approximation of high-dimensional partial differential equations (which arise from stochastic analysis and statistical physics, for example in the form of a backward Kolmogorov equation; unlike its formal adjoint, the forward Kolmogorov equation, it is not in divergence form and therefore not directly amenable to finite element approximation, even when the spatial dimension is low). In recent years several original and conceptually new ideas have emerged to tackle these open problems.
The goal of this workshop was to discuss and compare a number of novel approaches, to study their potential and applicability, and to formulate the strategic goals and directions of research in this field for the next five years.