Low-rank approximate inverse for preconditioning tensor-structured linear systems
In this paper, we propose an algorithm for the construction of low-rank
approximations of the inverse of an operator given in low-rank tensor format.
The construction relies on an updated greedy algorithm for the minimization of
a suitable distance to the inverse operator. It provides a sequence of
approximations that are defined as the projections of the inverse operator in
an increasing sequence of linear subspaces of operators. These subspaces are
obtained by the tensorization of bases of operators that are constructed from
successive rank-one corrections. In order to handle high-order tensors,
approximate projections are computed in low-rank Hierarchical Tucker subsets of
the successive subspaces of operators. Some desired properties such as symmetry
or sparsity can be imposed on the approximate inverse operator during the
correction step, where an optimal rank-one correction is sought as the tensor
product of operators with the desired properties. Numerical examples illustrate
the ability of this algorithm to provide efficient preconditioners for linear
systems in tensor format, improving both the convergence of iterative solvers
and the quality of the resulting low-rank approximations of the solution.
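As a toy illustration of the greedy construction (not the paper's Hierarchical Tucker implementation), the NumPy sketch below grows a dense approximate inverse P of an order-2 operator by rank-one Kronecker corrections X ⊗ Y, each computed by alternating least squares on the Frobenius residual ||I - PA||_F. All function names and problem sizes are illustrative assumptions.

```python
import numpy as np

def greedy_kron_inverse(A, m, n, n_terms=5, als_iters=10, seed=0):
    """Greedily build P = sum_j kron(X_j, Y_j) ~= inv(A) by minimizing
    ||I - P A||_F, adding one rank-one (Kronecker) correction at a time.
    Dense toy version for an order-2 operator A of size (m*n) x (m*n)."""
    rng = np.random.default_rng(seed)
    N = m * n
    I = np.eye(N)
    P = np.zeros((N, N))
    for _ in range(n_terms):
        R = I - P @ A                          # residual of current approximation
        X = rng.standard_normal((m, m))        # random initial rank-one guess
        Y = rng.standard_normal((n, n))
        for _ in range(als_iters):
            # Fix Y, solve for vec(X): the map vec(X) -> vec(kron(X, Y) @ A)
            # is linear, so assemble its matrix column by column from unit
            # basis matrices and solve a least-squares problem.
            M = np.column_stack([(np.kron(E.reshape(m, m), Y) @ A).ravel()
                                 for E in np.eye(m * m)])
            X = np.linalg.lstsq(M, R.ravel(), rcond=None)[0].reshape(m, m)
            # Fix X, solve for vec(Y) in the same way.
            M = np.column_stack([(np.kron(X, E.reshape(n, n)) @ A).ravel()
                                 for E in np.eye(n * n)])
            Y = np.linalg.lstsq(M, R.ravel(), rcond=None)[0].reshape(n, n)
        P += np.kron(X, Y)                     # accept the rank-one correction
    return P

# Quick check on a 2D Laplace-like operator: the residual norm drops as
# Kronecker terms are added.
m = n = 4
T = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
A = np.kron(T, np.eye(n)) + np.kron(np.eye(m), T)
P = greedy_kron_inverse(A, m, n, n_terms=4)
print(np.linalg.norm(np.eye(m * n) - P @ A))
```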
Preconditioned low-rank Riemannian optimization for linear systems with tensor product structure
The numerical solution of partial differential equations on high-dimensional
domains gives rise to computationally challenging linear systems. When using
standard discretization techniques, the size of the linear system grows
exponentially with the number of dimensions, making the use of classic
iterative solvers infeasible. During the last few years, low-rank tensor
approaches have been developed that make it possible to mitigate this curse of
dimensionality by exploiting the underlying structure of the linear operator.
In this work, we focus on tensors represented in the Tucker and tensor train
formats. We propose two preconditioned gradient methods on the corresponding
low-rank tensor manifolds: a Riemannian version of the preconditioned
Richardson method as well as an approximate Newton scheme based on the
Riemannian Hessian. For the latter, considerable attention is given to the
efficient solution of the resulting Newton equation. In numerical experiments,
we compare the efficiency of our Riemannian algorithms with other established
tensor-based approaches such as a truncated preconditioned Richardson method
and the alternating linear scheme. The results show that our approximate
Riemannian Newton scheme is significantly faster in cases when the application
of the linear operator is expensive.
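A minimal order-2 (matrix) sketch of the Riemannian Richardson idea, assuming a well-conditioned Sylvester-type operator so that a plain Richardson step already contracts; the function names are illustrative, and the paper's methods operate on Tucker/TT manifolds and additionally apply a preconditioner to the residual before the tangent-space projection.

```python
import numpy as np

def truncate(X, r):
    """Rank-r truncation via SVD: the order-2 analogue of Tucker/TT rounding,
    also used here as the retraction back onto the rank-r manifold."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :r] * s[:r] @ Vt[:r]

def tangent_project(X, Z, r):
    """Orthogonal projection of Z onto the tangent space of the rank-r
    manifold at X:  U U^T Z + Z V V^T - U U^T Z V V^T."""
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    U, V = U[:, :r], Vt[:r].T
    UUtZ = U @ (U.T @ Z)
    return UUtZ + (Z - UUtZ) @ V @ V.T

def riemannian_richardson(apply_A, B, r, omega, iters=100):
    """Riemannian Richardson: step along the residual projected onto the
    tangent space, then retract by truncation. (The truncated variant skips
    the projection and truncates the full step instead.)"""
    X = truncate(B, r)                         # rank-r starting guess
    for _ in range(iters):
        xi = tangent_project(X, B - apply_A(X), r)
        X = truncate(X + omega * xi, r)
    return X

# Example: well-conditioned Sylvester-type operator A(X) = T X + X T^T,
# whose eigenvalues lie in (2, 6), so omega = 0.25 gives a contraction.
n = 60
T = 2 * np.eye(n) - 0.5 * np.eye(n, k=1) - 0.5 * np.eye(n, k=-1)

def apply_A(X):
    return T @ X + X @ T.T

B = np.outer(np.linspace(0, 1, n), np.ones(n))   # rank-one right-hand side
X = riemannian_richardson(apply_A, B, r=8, omega=0.25, iters=100)
print(np.linalg.norm(apply_A(X) - B) / np.linalg.norm(B))
```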
Preconditioning techniques for generalized Sylvester matrix equations
Sylvester matrix equations are ubiquitous in scientific computing. However,
few solution techniques exist for their generalized multiterm version, even
though such equations now arise in an increasingly large number of
applications. In this work, we
consider algebraic parameter-free preconditioning techniques for the iterative
solution of generalized multiterm Sylvester equations. They consist in
constructing low Kronecker rank approximations of either the operator itself or
its inverse. While the former requires solving standard Sylvester equations in
each iteration, the latter only requires matrix-matrix multiplications, which
are highly optimized on modern computer architectures. Moreover, low Kronecker
rank approximate inverses can be easily combined with sparse approximate
inverse techniques, thereby enhancing their performance with little or no
damage to their effectiveness.
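A sketch of the Kronecker-rank truncation underlying such preconditioners, via the classical Van Loan-Pitsianis rearrangement; the helper name and sizes are illustrative assumptions, and the paper's actual construction may differ in its details.

```python
import numpy as np

def kron_rank_truncate(terms, r):
    """Best Kronecker-rank-r approximation of sum_k kron(A_k, B_k) in the
    Frobenius norm (Van Loan-Pitsianis): rearranging kron(A, B) gives the
    rank-one matrix vec(A) vec(B)^T, so the multiterm operator rearranges
    to sum_k vec(A_k) vec(B_k)^T and an ordinary SVD truncation applies."""
    R = sum(np.outer(A.ravel(), B.ravel()) for A, B in terms)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    shape_A, shape_B = terms[0][0].shape, terms[0][1].shape
    return [(np.sqrt(s[i]) * U[:, i].reshape(shape_A),
             np.sqrt(s[i]) * Vt[i].reshape(shape_B)) for i in range(r)]

# Example: compress a 3-term operator to Kronecker rank 2. A rank-1
# preconditioner kron(P1, P2) is applied with two small solves, since
# kron(P1, P2)^{-1} = kron(P1^{-1}, P2^{-1}); a rank-2 approximation leads
# to a standard Sylvester solve per application, as the abstract notes.
rng = np.random.default_rng(0)
n = 20
terms = [(np.eye(n) + 0.1 * rng.standard_normal((n, n)),
          np.eye(n) + 0.1 * rng.standard_normal((n, n))) for _ in range(3)]
M = sum(np.kron(A, B) for A, B in terms)
M2 = sum(np.kron(A, B) for A, B in kron_rank_truncate(terms, 2))
print(np.linalg.norm(M - M2) / np.linalg.norm(M))
```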