Tensor Numerical Methods in Quantum Chemistry: from Hartree-Fock Energy to Excited States
We review the recent successes of grid-based tensor numerical methods and
discuss their prospects in real-space electronic structure calculations. These
methods, based on low-rank representations of multidimensional functions
and integral operators, have led to an entirely grid-based, tensor-structured 3D
Hartree-Fock eigenvalue solver. It benefits from tensor calculation of the core
Hamiltonian and the two-electron integrals (TEI) in O(n log n) complexity, using
rank-structured approximations of the basis functions, electron densities and
convolution integral operators, all represented on 3D n x n x n Cartesian grids.
The algorithm for computing the TEI tensor in the form of a Cholesky
decomposition is based on multiple factorizations using an algebraic 1D
"density fitting" scheme. The basis functions are not restricted to separable
Gaussians, since analytical integration is replaced by high-precision
tensor-structured numerical quadratures. The tensor approaches to
post-Hartree-Fock calculations for the MP2 energy correction and for the
Bethe-Salpeter excited states, based on low-rank factorizations and the
reduced basis method, were recently introduced. Another direction is related to
the recent attempts to develop a tensor-based Hartree-Fock numerical scheme for
finite lattice-structured systems, where one of the numerical challenges is the
summation of electrostatic potentials of a large number of nuclei. The 3D
grid-based tensor method for calculating a potential sum on an L x L x L lattice
manifests linear-in-L computational work, O(L), instead of the usual
O(L^3 log L) scaling of Ewald-type approaches.
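
As a rough illustration of the rank-structured representations such grid-based
methods rely on, here is a minimal NumPy sketch (not code from the paper; the
single rank-1 Gaussian and the grid are illustrative assumptions) of how a
separable function on an n x n x n Cartesian grid can be stored and manipulated
through 1D factors only:

    # Hypothetical sketch: a separable 3D Gaussian on an n x n x n Cartesian grid
    # stored as three identical 1D factors, i.e. a canonical rank-1 tensor.
    # Storage is 3n values instead of n^3.
    import numpy as np

    n = 129                                  # grid points per dimension
    x = np.linspace(-5.0, 5.0, n)            # 1D Cartesian grid
    gx = np.exp(-x**2)                       # 1D Gaussian factor
    # Full tensor would be G[i,j,k] = gx[i]*gx[j]*gx[k]; it is never formed.

    # Inner product <G, G> evaluated factor-by-factor, O(n) work per dimension:
    ip_lowrank = (gx @ gx) ** 3

    # Reference value from the explicit n^3 tensor (feasible only for small n):
    G = np.einsum('i,j,k->ijk', gx, gx, gx)
    ip_full = np.sum(G * G)
    print(abs(ip_lowrank - ip_full))         # agrees up to round-off

This kind of factored storage is the structure that tensor-structured
convolutions and TEI factorizations of the sort mentioned above operate on.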
A literature survey of low-rank tensor approximation techniques
In recent years, low-rank tensor approximation has become established as
a new tool in scientific computing to address large-scale linear and
multilinear algebra problems, which would be intractable by classical
techniques. This survey attempts to give a literature overview of current
developments in this area, with an emphasis on function-related tensors.
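
To make the term "function-related tensors" concrete, the following is a small
illustrative sketch (plain NumPy; the example tensor 1/(x_i + x_j + x_k) and the
rank cutoff are my own choices, not taken from the survey) of a truncated
higher-order SVD, one of the basic low-rank formats such surveys discuss:

    # Illustrative sketch: truncated HOSVD/Tucker approximation of the
    # function-related tensor A[i,j,k] = 1/(x_i + x_j + x_k), a standard example
    # with rapidly decaying multilinear ranks.
    import numpy as np

    n, r = 40, 8
    x = np.linspace(1.0, 2.0, n)
    A = 1.0 / (x[:, None, None] + x[None, :, None] + x[None, None, :])

    # Factor matrices: leading r left singular vectors of each mode unfolding.
    U = []
    for mode in range(3):
        unfolding = np.moveaxis(A, mode, 0).reshape(n, -1)
        u, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        U.append(u[:, :r])

    # Core tensor and low-rank reconstruction.
    core = np.einsum('ijk,ia,jb,kc->abc', A, U[0], U[1], U[2])
    A_r = np.einsum('abc,ia,jb,kc->ijk', core, U[0], U[1], U[2])
    print(np.linalg.norm(A - A_r) / np.linalg.norm(A))   # small relative error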
Tensor Networks for Big Data Analytics and Large-Scale Optimization Problems
In this paper we review basic and emerging models and associated algorithms
for large-scale tensor networks, especially Tensor Train (TT) decompositions
using novel mathematical and graphical representations. We discuss the concept
of tensorization (i.e., creating very high-order tensors from lower-order
original data) and super compression of data achieved via quantized tensor
train (QTT) networks. The purpose of tensorization and quantization is to
achieve, via low-rank tensor approximations, "super" compression and a
meaningful, compact representation of structured data. The main objective of
this paper is to show how tensor networks can be used to solve a wide class of
big data optimization problems (that are far from tractable by classical
numerical methods) by applying tensorization and performing all operations
using relatively small-size matrices and tensors and applying iteratively
optimized and approximate tensor contractions.
Keywords: Tensor networks, tensor train (TT) decompositions, matrix product
states (MPS), matrix product operators (MPO), basic tensor operations,
tensorization, distributed representation of data, optimization problems for
very large-scale problems: generalized eigenvalue decomposition (GEVD),
PCA/SVD, canonical correlation analysis (CCA).
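
A minimal sketch of the tensorization-plus-compression idea described above
(NumPy only; the exponential test vector and the truncation tolerance are
illustrative assumptions, not the authors' examples): a length-2^d vector is
reshaped into a 2 x 2 x ... x 2 tensor and compressed by the standard TT-SVD
sweep of truncated SVDs.

    # Hedged sketch of QTT compression via TT-SVD (not the authors' toolbox).
    # Smooth vectors such as samples of exp(x) compress to tiny TT ranks.
    import numpy as np

    d = 12
    v = np.exp(np.linspace(0.0, 1.0, 2**d))      # vector to be tensorized
    C = v.reshape([2] * d)                       # quantization: 2 x 2 x ... x 2

    cores, eps, r = [], 1e-10, 1
    M = C.reshape(r * 2, -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rk = max(1, int(np.sum(s > eps * s[0])))   # truncated TT rank
        cores.append(U[:, :rk].reshape(r, 2, rk))
        M = (s[:rk, None] * Vt[:rk]).reshape(rk * 2, -1)
        r = rk
    cores.append(M.reshape(r, 2, 1))

    print([c.shape for c in cores])              # small TT ranks despite 2^d entries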
A Riemannian Trust Region Method for the Canonical Tensor Rank Approximation Problem
The canonical tensor rank approximation problem (TAP) consists of
approximating a real-valued tensor by one of low canonical rank, which is a
challenging non-linear, non-convex, constrained optimization problem, where the
constraint set forms a non-smooth semi-algebraic set. We introduce a Riemannian
Gauss-Newton method with trust region for solving small-scale, dense TAPs. The
novelty of our approach is threefold. First, we parametrize the constraint set
as the Cartesian product of Segre manifolds, thereby formulating the TAP as a
Riemannian optimization problem, and we argue why this parametrization is among
the theoretically best possible. Second, an original ST-HOSVD-based retraction
operator is proposed. Third, we introduce a hot restart mechanism that
efficiently detects when the optimization process is tending to an
ill-conditioned tensor rank decomposition and which often yields a quick escape
path from such spurious decompositions. Numerical experiments show improvements
of up to three orders of magnitude in terms of the expected time to compute a
successful solution over existing state-of-the-art methods.
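
For orientation only, here is a plain alternating least squares (ALS) baseline
for the same canonical tensor rank approximation problem; it is not the
Riemannian Gauss-Newton trust-region method of the paper, and the random rank-3
test tensor is an assumption made for this NumPy sketch.

    # ALS sketch for fitting a canonical rank-r model to a third-order tensor.
    import numpy as np

    rng = np.random.default_rng(0)
    n, r = 10, 3
    # Build a tensor of exact canonical rank r, so a perfect fit exists.
    A0, B0, C0 = (rng.standard_normal((n, r)) for _ in range(3))
    T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)

    A, B, C = (rng.standard_normal((n, r)) for _ in range(3))
    for _ in range(200):
        # Each step is a linear least-squares solve for one factor matrix.
        A = np.linalg.lstsq(np.einsum('jr,kr->jkr', B, C).reshape(-1, r),
                            T.transpose(1, 2, 0).reshape(-1, n), rcond=None)[0].T
        B = np.linalg.lstsq(np.einsum('ir,kr->ikr', A, C).reshape(-1, r),
                            T.transpose(0, 2, 1).reshape(-1, n), rcond=None)[0].T
        C = np.linalg.lstsq(np.einsum('ir,jr->ijr', A, B).reshape(-1, r),
                            T.reshape(-1, n), rcond=None)[0].T

    T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
    print(np.linalg.norm(T - T_hat) / np.linalg.norm(T))  # near zero on success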
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions.
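
As a back-of-the-envelope illustration of the curse-of-dimensionality relief
referred to above (the order, mode size, and TT ranks below are arbitrary
choices, not figures from the monograph):

    # Parameter count: full 10th-order tensor vs. TT format with all ranks = 5.
    d, n, r = 10, 4, 5
    full_entries = n ** d                           # 4^10 = 1,048,576
    tt_entries = 2 * n * r + (d - 2) * n * r * r    # boundary + interior cores
    print(full_entries, tt_entries)                 # 1048576 vs 840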