Preconditioned low-rank Riemannian optimization for linear systems with tensor product structure
The numerical solution of partial differential equations on high-dimensional
domains gives rise to computationally challenging linear systems. When using
standard discretization techniques, the size of the linear system grows
exponentially with the number of dimensions, making the use of classic
iterative solvers infeasible. During the last few years, low-rank tensor
approaches have been developed that make it possible to mitigate this curse of
dimensionality by exploiting the underlying structure of the linear operator.
In this work, we focus on tensors represented in the Tucker and tensor train
formats. We propose two preconditioned gradient methods on the corresponding
low-rank tensor manifolds: a Riemannian version of the preconditioned
Richardson method and an approximate Newton scheme based on the
Riemannian Hessian. For the latter, considerable attention is given to the
efficient solution of the resulting Newton equation. In numerical experiments,
we compare the efficiency of our Riemannian algorithms with other established
tensor-based approaches such as a truncated preconditioned Richardson method
and the alternating linear scheme. The results show that our approximate
Riemannian Newton scheme is significantly faster in cases when the application
of the linear operator is expensive.
Comment: 24 pages, 8 figures
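To make the role of rank truncation concrete, the following is a minimal Python sketch of a truncated preconditioned Richardson iteration in the simplest low-rank setting (a matrix, i.e. an order-2 tensor). It illustrates the general idea compared against in the abstract, not the paper's Riemannian algorithms; the operator, the SVD-based truncation, and all names (truncate, apply_A, apply_P) are assumptions made for the example.

```python
import numpy as np

def truncate(X, rank):
    """Project X onto the set of matrices of rank <= rank via a truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]

def truncated_prec_richardson(apply_A, apply_P, B, rank, omega=1.0, iters=50):
    """Approximately solve A(X) = B while keeping the iterate at low rank."""
    X = np.zeros_like(B)
    for _ in range(iters):
        R = B - apply_A(X)                           # current residual
        X = truncate(X + omega * apply_P(R), rank)   # step, then rank truncation
    return X

# Toy example: a 2D Laplace-type equation A1 X + X A2^T = B with a spectral
# preconditioner built from the 1D eigendecomposition (exact here, but in
# general only an approximation of the inverse operator).
n, rank = 64, 8
A1 = A2 = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
B = np.outer(np.sin(np.linspace(0, 1, n)), np.cos(np.linspace(0, 1, n)))
apply_A = lambda X: A1 @ X + X @ A2.T
w, V = np.linalg.eigh(A1)
apply_P = lambda R: V @ ((V.T @ R @ V) / (w[:, None] + w[None, :])) @ V.T
X = truncated_prec_richardson(apply_A, apply_P, B, rank)
print(np.linalg.norm(B - apply_A(X)) / np.linalg.norm(B))  # relative residual
```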
Alternating least squares as moving subspace correction
In this note we take a new look at the local convergence of alternating
optimization methods for low-rank matrices and tensors. Our abstract
interpretation as sequential optimization on moving subspaces yields insightful
reformulations of some known convergence conditions that focus on the interplay
between the contractivity of classical multiplicative Schwarz methods with
overlapping subspaces and the curvature of low-rank matrix and tensor
manifolds. While the verification of the abstract conditions in concrete
scenarios remains open in most cases, we are able to provide an alternative and
conceptually simple derivation of the asymptotic convergence rate of the
two-sided block power method of numerical linear algebra for computing the dominant
singular subspaces of a rectangular matrix. This method is equivalent to an
alternating least squares method applied to a distance function. The
theoretical results are illustrated and validated by numerical experiments.
Comment: 20 pages, 4 figures
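As a concrete reference point, here is a minimal sketch of the two-sided block power method mentioned above, which alternates between the left and right subspaces of a rectangular matrix; the variable names and the comparison against a full SVD are illustrative choices, not taken from the paper.

```python
import numpy as np

def two_sided_block_power(A, k, iters=100):
    """Estimate the dominant k-dimensional left/right singular subspaces of A
    by alternating orthogonal (block power) iterations on the two sides."""
    m, n = A.shape
    rng = np.random.default_rng(0)
    V, _ = np.linalg.qr(rng.standard_normal((n, k)))   # initial right subspace
    for _ in range(iters):
        U, _ = np.linalg.qr(A @ V)       # update left subspace for fixed V
        V, _ = np.linalg.qr(A.T @ U)     # update right subspace for fixed U
    return U, V

# Usage: compare against the leading singular subspace from a full SVD.
A = np.random.default_rng(1).standard_normal((200, 120))
U, V = two_sided_block_power(A, k=5)
Us = np.linalg.svd(A, full_matrices=False)[0]
# Distance between the projectors onto the two subspaces; should be small.
print(np.linalg.norm(U @ U.T - Us[:, :5] @ Us[:, :5].T))
```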
A literature survey of low-rank tensor approximation techniques
In recent years, low-rank tensor approximation has been established as
a new tool in scientific computing to address large-scale linear and
multilinear algebra problems, which would be intractable by classical
techniques. This survey attempts to give a literature overview of current
developments in this area, with an emphasis on function-related tensors.
Low-rank approximate inverse for preconditioning tensor-structured linear systems
In this paper, we propose an algorithm for the construction of low-rank
approximations of the inverse of an operator given in low-rank tensor format.
The construction relies on an updated greedy algorithm for the minimization of
a suitable distance to the inverse operator. It provides a sequence of
approximations that are defined as the projections of the inverse operator in
an increasing sequence of linear subspaces of operators. These subspaces are
obtained by the tensorization of bases of operators that are constructed from
successive rank-one corrections. In order to handle high-order tensors,
approximate projections are computed in low-rank Hierarchical Tucker subsets of
the successive subspaces of operators. Some desired properties such as symmetry
or sparsity can be imposed on the approximate inverse operator during the
correction step, where an optimal rank-one correction is searched as the tensor
product of operators with the desired properties. Numerical examples illustrate
the ability of this algorithm to provide efficient preconditioners for linear
systems in tensor format that improve the convergence of iterative solvers and
also the quality of the resulting low-rank approximations of the solution.
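The greedy correction principle described above can be sketched in a plain dense-matrix setting: the approximate inverse is built as a sum of rank-one terms, each obtained by a few alternating least-squares sweeps on the Frobenius distance between the identity and M A. This is only an illustration of the idea under simplifying assumptions (dense matrices, explicit Gram matrix); the paper's algorithm works in low-rank Hierarchical Tucker format and avoids such dense computations.

```python
import numpy as np

def greedy_rank_one_inverse(A, n_terms=20, inner=5):
    """Greedily build M ~ A^{-1} as a sum of rank-one corrections u v^T, each
    found by alternating least squares on f(u, v) = ||I - (M + u v^T) A||_F^2."""
    n = A.shape[0]
    M = np.zeros((n, n))
    G = A @ A.T                      # Gram matrix; a tensor-format code avoids forming this
    for _ in range(n_terms):
        R = np.eye(n) - M @ A        # current residual I - M A
        u = np.random.default_rng(0).standard_normal(n)
        for _ in range(inner):
            v = np.linalg.solve(G, A @ R.T @ u) / (u @ u)  # optimal v for fixed u
            w = A.T @ v
            u = R @ w / (w @ w)                            # optimal u for fixed v
        M += np.outer(u, v)
    return M

# Usage: the residual norm ||I - M A||_F decreases as rank-one terms are added,
# so M can serve as a (low-rank) preconditioner for iterative solvers.
A = np.diag(np.linspace(1.0, 10.0, 50)) + 0.05 * np.random.default_rng(1).standard_normal((50, 50))
M = greedy_rank_one_inverse(A)
print(np.linalg.norm(np.eye(50) - M @ A))
```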
Tensor Numerical Methods in Quantum Chemistry: from Hartree-Fock Energy to Excited States
We review the recent successes of the grid-based tensor numerical methods and
discuss their prospects in real-space electronic structure calculations. These
methods, based on the low-rank representation of the multidimensional functions
and integral operators, led to an entirely grid-based, tensor-structured 3D
Hartree-Fock eigenvalue solver. It benefits from the tensor calculation of the core
Hamiltonian and the two-electron integrals (TEI) at low complexity, using
rank-structured approximations of basis functions, electron densities, and
convolution integral operators, all represented on 3D
Cartesian grids. The algorithm for calculating the TEI tensor in the form of a
Cholesky decomposition is based on multiple factorizations using an algebraic 1D
"density fitting" scheme. The basis functions are not restricted to separable
Gaussians, since the analytical integration is substituted by high-precision
tensor-structured numerical quadratures. The tensor approaches to
post-Hartree-Fock calculations for the MP2 energy correction and for the
Bethe-Salpeter excited states, based on low-rank factorizations and the
reduced basis method, have recently been introduced. Another direction is related to
the recent attempts to develop a tensor-based Hartree-Fock numerical scheme for
finite lattice-structured systems, where one of the numerical challenges is the
summation of electrostatic potentials of a large number of nuclei. The 3D
grid-based tensor method for calculating the potential sum on a lattice requires
computational work that scales only linearly in the lattice size, instead of the
usual, less favorable scaling of Ewald-type approaches.
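One ingredient that is easy to illustrate is the rank-structured representation of basis functions on 3D Cartesian grids: a Gaussian separates into a product of 1D factors, so on a tensor grid it is a rank-one canonical tensor stored as three vectors instead of n^3 grid values. The following toy sketch (grid size and parameters are arbitrary) shows only this storage reduction, not the Hartree-Fock machinery.

```python
import numpy as np

n = 129                                   # 1D grid size; the full grid has n**3 points
x = np.linspace(-8.0, 8.0, n)

# A Gaussian exp(-a * |r - c|^2) separates into a product of three 1D factors,
# so on a tensor grid it is a rank-1 canonical tensor: only 3*n values are stored.
a, c = 0.5, np.array([0.3, -0.2, 0.1])
factors = [np.exp(-a * (x - ci) ** 2) for ci in c]

# Reconstruct the full n^3 tensor only to check; a tensor code never forms it.
full = np.einsum('i,j,k->ijk', *factors)
print(full.shape, 3 * n, n ** 3)          # storage: 3n values vs n^3 grid values
```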
Tensor-based multiscale method for diffusion problems in quasi-periodic heterogeneous media
This paper proposes to address the issue of complexity reduction for the
numerical simulation of multiscale media in a quasi-periodic setting. We
consider a stationary elliptic diffusion equation defined on a domain $D$ such
that $D$ is the union of a collection of cells indexed by a set $I$, and we
introduce a two-scale representation by identifying any function $v$ defined
on $D$ with a bi-variate function $v(i,y)$, where $i$ relates to the
index of the cell containing the point and $y$ relates to a local
coordinate in a reference cell $Y$. We introduce a weak formulation of the
problem in a broken Sobolev space $V(D)$ using a discontinuous Galerkin
framework. The problem is then interpreted as a tensor-structured equation by
identifying $V(D)$ with a tensor product space of
functions defined over the product set $I \times Y$. Tensor numerical methods
are then used in order to exploit approximability properties of quasi-periodic
solutions by low-rank tensors.
Comment: Changed the choice of test spaces V(D) and X (with regard to
regularity) and the argumentation thereof. Corrected proof of proposition 3.
Corrected wrong multiplicative factor in proposition 4 and its proof (was 2
instead of 1). Added remark 6 at the end of section 2. Extended remark 7.
Added references. Some minor improvements (typos, typesetting).
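The two-scale identification described above can be illustrated in one dimension: a function sampled on a grid covering a domain made of I cells with m points per cell is reshaped into an I-by-m array u(i, y), and quasi-periodicity shows up as rapid singular value decay of that array, which is what low-rank tensor methods exploit. The sketch below uses an invented coefficient and grid sizes purely for illustration.

```python
import numpy as np

I, m = 40, 64                       # number of cells, points per reference cell
x = np.linspace(0.0, 1.0, I * m, endpoint=False)

# A quasi-periodic coefficient: a cell-periodic oscillation slowly modulated
# across the domain.
u = (1.0 + 0.5 * np.sin(2 * np.pi * I * x)) * (2.0 + np.cos(2 * np.pi * x))

# Two-scale identification u(x) -> u(i, y): cell index i, local coordinate y.
U = u.reshape(I, m)

# Quasi-periodicity yields rapid singular value decay, i.e. a low-rank array.
s = np.linalg.svd(U, compute_uv=False)
print(np.round(s[:6] / s[0], 6))    # only a few significant singular values
```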
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions.
Comment: 232 pages
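To make the super-compressed representation concrete, here is a minimal TT-SVD sketch in the spirit of the tensor train format emphasized in the monograph; the truncation threshold, the test tensor, and the function name tt_svd are illustrative assumptions.

```python
import numpy as np

def tt_svd(T, eps=1e-10):
    """Decompose a d-way array into tensor-train (TT) cores via sequential SVDs."""
    dims = T.shape
    cores, r = [], 1
    M = T.reshape(r * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rank = max(1, int(np.sum(s > eps * s[0])))       # truncate small singular values
        cores.append(U[:, :rank].reshape(r, dims[k], rank))
        M = (np.diag(s[:rank]) @ Vt[:rank]).reshape(rank * dims[k + 1], -1)
        r = rank
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

# Usage: a separable 4-way tensor has all TT ranks equal to 1, so four tiny
# cores replace the n**4 entries of the full array.
n = 10
v = [np.random.default_rng(k).standard_normal(n) for k in range(4)]
T = np.einsum('i,j,k,l->ijkl', *v)
print([c.shape for c in tt_svd(T)])
```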