Low-rank approximate inverse for preconditioning tensor-structured linear systems
In this paper, we propose an algorithm for the construction of low-rank
approximations of the inverse of an operator given in low-rank tensor format.
The construction relies on an updated greedy algorithm for the minimization of
a suitable distance to the inverse operator. It provides a sequence of
approximations that are defined as the projections of the inverse operator in
an increasing sequence of linear subspaces of operators. These subspaces are
obtained by the tensorization of bases of operators that are constructed from
successive rank-one corrections. In order to handle high-order tensors,
approximate projections are computed in low-rank Hierarchical Tucker subsets of
the successive subspaces of operators. Some desired properties such as symmetry
or sparsity can be imposed on the approximate inverse operator during the
correction step, where an optimal rank-one correction is searched as the tensor
product of operators with the desired properties. Numerical examples illustrate
the ability of this algorithm to provide efficient preconditioners for linear
systems in tensor format that improve the convergence of iterative solvers and
also the quality of the resulting low-rank approximations of the solution.
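The greedy rank-one correction idea can be illustrated in plain dense-matrix form (a toy sketch only: the paper works with operators in low-rank tensor format, and the function name and ALS details below are our own assumptions, not the paper's algorithm):

```python
import numpy as np

def greedy_approx_inverse(A, rank, n_als=10, seed=0):
    """Toy dense-matrix analogue of a greedy approximate inverse:
    build P = sum_k u_k v_k^T by successive rank-one corrections,
    each chosen by alternating least squares (ALS) to reduce the
    residual ||I - A P||_F."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    P = np.zeros((n, n))
    for _ in range(rank):
        R = np.eye(n) - A @ P              # current residual I - A P_k
        v = rng.standard_normal(n)         # random ALS initialization
        for _ in range(n_als):
            # fix v: the best u solves A u = R v / (v^T v) in least squares
            u = np.linalg.lstsq(A, R @ v / (v @ v), rcond=None)[0]
            # fix u: closed-form optimal v for the rank-one term
            Au = A @ u
            v = R.T @ Au / (Au @ Au)
        P += np.outer(u, v)                # accept the rank-one correction
    return P
```

With `rank` equal to the matrix dimension this toy version recovers the inverse to machine precision; in the preconditioning setting a small rank is used instead, trading accuracy for cheap application.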
A modeling framework for efficient reduced order simulations of parametrized lithium-ion battery cells
In this contribution we present a new modeling and simulation framework for parametrized lithium-ion battery cells. We first derive a new continuum model for a rather general intercalation battery cell on the basis of non-equilibrium thermodynamics. To evaluate the resulting parametrized nonlinear system of partial differential equations efficiently, the reduced basis method is employed. The reduced basis method is a model order reduction technique built on an incremental hierarchical approximate proper orthogonal decomposition approach and empirical operator interpolation. The modeling framework is particularly well suited to investigate and quantify degradation effects of battery cells. Several numerical experiments are given to demonstrate the scope and efficiency of the modeling framework.
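A minimal sketch of the reduced basis idea on a generic affinely parametrized linear system (plain POD-Galerkin only; the hierarchical approximate POD and empirical operator interpolation needed for the nonlinear PDE systems of the paper are not reproduced here, and the toy system below is our own):

```python
import numpy as np

def pod_basis(snapshots, tol=1e-8):
    """Proper orthogonal decomposition: keep the left singular vectors
    whose singular values exceed tol relative to the largest one."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))        # number of retained modes
    return U[:, :r]

# Toy parametrized system (A0 + mu * A1) u = b with affine dependence on mu.
n = 200
A0 = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian stencil
A1 = np.diag(np.linspace(0.0, 1.0, n))                  # parametrized reaction
b = np.ones(n)

# Offline stage: high-fidelity snapshots at training parameters, then POD.
train = np.linspace(0.1, 2.0, 10)
S = np.column_stack([np.linalg.solve(A0 + mu * A1, b) for mu in train])
V = pod_basis(S)

# Online stage: Galerkin projection onto span(V) at an unseen parameter.
mu = 0.73
u_red = V @ np.linalg.solve(V.T @ (A0 + mu * A1) @ V, V.T @ b)
u_full = np.linalg.solve(A0 + mu * A1, b)
err = np.linalg.norm(u_red - u_full) / np.linalg.norm(u_full)
```

The online solve involves only a system of size `V.shape[1]`, which is where the speedup over the full model comes from.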
Accelerated construction of projection-based reduced-order models via incremental approaches
We present an accelerated greedy strategy for training of projection-based
reduced-order models for parametric steady and unsteady partial differential
equations. Our approach exploits hierarchical approximate proper orthogonal
decomposition to speed up the construction of the empirical test space for
least-squares Petrov-Galerkin formulations, a progressive construction of the
empirical quadrature rule based on a warm start of the non-negative
least-squares algorithm, and a two-fidelity sampling strategy to reduce the
number of expensive greedy iterations. We illustrate the performance of our
method for two test cases: a two-dimensional compressible inviscid flow past a
LS89 blade at moderate Mach number, and a three-dimensional nonlinear mechanics
problem to predict the long-time structural response of the standard section of
a nuclear containment building under external loading.
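The outer greedy loop that these accelerations target can be sketched generically (a weak-greedy skeleton in which the true projection error stands in for the cheap residual-based estimators used in practice; the names and toy system are assumptions, not the paper's code):

```python
import numpy as np

def weak_greedy(solve, error, candidates, max_basis, tol=1e-6):
    """Generic greedy basis construction: at each iteration, enrich the
    reduced basis with the snapshot at the parameter whose estimated
    reduction error is currently largest."""
    n = solve(candidates[0]).shape[0]
    V = np.zeros((n, 0))
    for _ in range(max_basis):
        errs = np.array([error(V, mu) for mu in candidates])
        k = int(np.argmax(errs))
        if errs[k] < tol:
            break                        # all candidates well approximated
        u = solve(candidates[k])         # expensive high-fidelity solve
        u = u - V @ (V.T @ u)            # orthogonalize against current basis
        V = np.column_stack([V, u / np.linalg.norm(u)])
    return V

# Toy steady parametrized problem: (A0 + mu * A1) u = b.
n = 100
A0 = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A1 = np.diag(np.linspace(0.0, 1.0, n))
b = np.ones(n)
solve = lambda mu: np.linalg.solve(A0 + mu * A1, b)

def proj_error(V, mu):
    # True relative projection error -- expensive, used here only to
    # stand in for an inexpensive error estimate.
    u = solve(mu)
    return np.linalg.norm(u - V @ (V.T @ u)) / np.linalg.norm(u)

params = np.linspace(0.1, 2.0, 30)
V = weak_greedy(solve, proj_error, params, max_basis=30)
```

Each greedy iteration costs one high-fidelity solve plus an error sweep over the candidates, which is exactly what the two-fidelity sampling and warm-started constructions of the paper aim to make cheaper.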
Spectral tensor-train decomposition
The accurate approximation of high-dimensional functions is an essential task
in uncertainty quantification and many other fields. We propose a new function
approximation scheme based on a spectral extension of the tensor-train (TT)
decomposition. We first define a functional version of the TT decomposition and
analyze its properties. We obtain results on the convergence of the
decomposition, revealing links between the regularity of the function, the
dimension of the input space, and the TT ranks. We also show that the
regularity of the target function is preserved by the univariate functions
(i.e., the "cores") comprising the functional TT decomposition. This result
motivates an approximation scheme employing polynomial approximations of the
cores. For functions with appropriate regularity, the resulting
\textit{spectral tensor-train decomposition} combines the favorable
dimension-scaling of the TT decomposition with the spectral convergence rate of
polynomial approximations, yielding efficient and accurate surrogates for
high-dimensional functions. To construct these decompositions, we use the
sampling algorithm \texttt{TT-DMRG-cross} to obtain the TT decomposition of
tensors resulting from suitable discretizations of the target function. We
assess the performance of the method on a range of numerical examples: a
modified set of Genz functions with dimension up to , and functions with
mixed Fourier modes or with local features. We observe significant improvements
in performance over an anisotropic adaptive Smolyak approach. The method is
also used to approximate the solution of an elliptic PDE with random input
data. The open source software and examples presented in this work are
available online. Comment: 33 pages, 19 figures.
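For intuition, the discrete TT decomposition itself can be computed by the sequential truncated-SVD algorithm below (TT-SVD; note that it forms the full tensor, which is precisely what the sampling-based `TT-DMRG-cross` algorithm used in the paper avoids):

```python
import numpy as np

def tt_svd(tensor, tol=1e-10):
    """TT-SVD: sweep over the dimensions, at each step splitting off one
    TT core G_k of shape (r_{k-1}, n_k, r_k) via a truncated SVD of the
    current unfolding matrix."""
    shape = tensor.shape
    d = len(shape)
    cores, r = [], 1
    M = tensor.reshape(r * shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        keep = max(1, int(np.sum(s > tol * s[0])))   # truncation rank
        cores.append(U[:, :keep].reshape(r, shape[k], keep))
        r = keep
        M = (s[:keep, None] * Vt[:keep]).reshape(r * shape[k + 1], -1)
    cores.append(M.reshape(r, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the TT cores back into the full tensor (for checking)."""
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=([-1], [0]))
    return T.reshape([c.shape[1] for c in cores])

# Discretize a separable function f(x, y, z) = exp(x + y + z) on a grid;
# separability means every TT rank is 1.
grid = np.linspace(0.0, 1.0, 8)
X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")
T = np.exp(X + Y + Z)
cores = tt_svd(T)
```

The spectral extension of the paper then replaces the discrete index in each core by a polynomial approximation of the corresponding univariate function.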
Comparison of some Reduced Representation Approximations
In the field of numerical approximation, specialists dealing with highly
complex problems have recently proposed various ways to simplify the
underlying problems. Depending on the problem being tackled and the
community at work, different approaches have been developed with some
success and have even gained some maturity; they can now be applied to
information analysis or to the numerical simulation of PDEs. At this point,
a crossed analysis and an effort to understand the similarities and
differences between these approaches, which have their starting points in
different backgrounds, is of interest. It is the purpose of this paper to
contribute to this effort by comparing some constructive reduced
representations of complex functions. We present here in full detail the
Adaptive Cross Approximation (ACA) and the Empirical Interpolation Method
(EIM), together with other approaches that fall into the same category.
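To make the comparison concrete, here is the full-pivoting variant of ACA on a plain matrix (a minimal sketch under our own naming; practical ACA uses partial pivoting so that only a few rows and columns of the matrix are ever evaluated, and EIM selects interpolation points by a closely related greedy criterion):

```python
import numpy as np

def aca_full_pivoting(A, tol=1e-8, max_rank=None):
    """Adaptive Cross Approximation with full pivoting: repeatedly peel
    off a rank-one 'cross' u v^T built from one row and one column of the
    current residual, until its largest entry falls below tol."""
    R = np.array(A, dtype=float)
    us, vs = [], []
    for _ in range(max_rank or min(R.shape)):
        i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)
        if abs(R[i, j]) <= tol:
            break                        # residual small everywhere
        u = R[:, j] / R[i, j]            # pivot-scaled column
        v = R[i, :].copy()               # pivot row
        R -= np.outer(u, v)              # deflate the cross
        us.append(u)
        vs.append(v)
    return np.array(us).T, np.array(vs)

# Low-numerical-rank test matrix (smooth kernel evaluations).
A = 1.0 / (1.0 + np.add.outer(np.arange(50), np.arange(50)))
U, V = aca_full_pivoting(A)
```

After termination `U @ V` approximates `A` entrywise to the tolerance, using a number of crosses close to the numerical rank of the matrix.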