36 research outputs found

    A literature survey of low-rank tensor approximation techniques

    In recent years, low-rank tensor approximation has been established as a new tool in scientific computing for addressing large-scale linear and multilinear algebra problems that would be intractable by classical techniques. This survey gives a literature overview of current developments in this area, with an emphasis on function-related tensors.
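    As a minimal, generic illustration of low-rank tensor approximation (a NumPy sketch, not a method taken from the survey; the function name hosvd, the chosen ranks, and the random test tensor are assumptions for the example), the snippet below truncates a 3-way tensor to low multilinear rank with a plain higher-order SVD:

        # Sketch: truncated higher-order SVD (HOSVD) as one concrete
        # instance of low-rank tensor approximation.
        import numpy as np

        def hosvd(T, ranks):
            """Truncate a 3-way tensor T to multilinear ranks (r1, r2, r3)."""
            factors = []
            for mode, r in enumerate(ranks):
                # Mode-k unfolding; keep the leading r left singular vectors.
                unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
                U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
                factors.append(U[:, :r])
            # Core tensor: contract T with the transposed factor matrices.
            core = np.einsum('ijk,ia,jb,kc->abc', T, *factors)
            approx = np.einsum('abc,ia,jb,kc->ijk', core, *factors)
            return core, factors, approx

        T = np.random.rand(20, 20, 20)
        core, factors, T_approx = hosvd(T, (5, 5, 5))
        print(np.linalg.norm(T - T_approx) / np.linalg.norm(T))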

    Tensor Networks for Big Data Analytics and Large-Scale Optimization Problems

    In this paper we review basic and emerging models and associated algorithms for large-scale tensor networks, especially tensor train (TT) decompositions, using novel mathematical and graphical representations. We discuss the concept of tensorization (i.e., creating very high-order tensors from lower-order original data) and the super compression of data achieved via quantized tensor train (QTT) networks. The purpose of tensorization and quantization is to achieve, via low-rank tensor approximations, "super" compression and a meaningful, compact representation of structured data. The main objective of this paper is to show how tensor networks can be used to solve a wide class of big data optimization problems (that are far from tractable by classical numerical methods) by applying tensorization, performing all operations on relatively small matrices and tensors, and iteratively applying optimized, approximate tensor contractions. Keywords: tensor networks, tensor train (TT) decompositions, matrix product states (MPS), matrix product operators (MPO), basic tensor operations, tensorization, distributed representation of data, optimization problems for very large-scale problems: generalized eigenvalue decomposition (GEVD), PCA/SVD, canonical correlation analysis (CCA). Comment: arXiv admin note: text overlap with arXiv:1403.204
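    To make the tensorization and QTT ideas concrete, here is a hedged sketch (generic NumPy, not the authors' code; tt_svd, the tolerance eps, and the sine test signal are assumptions for the example): a length-2^d vector is reshaped into a d-th order tensor with mode sizes 2 and then compressed by a plain TT-SVD sweep; for smooth data the resulting TT ranks stay small, which is the "super" compression referred to above.

        import numpy as np

        def tt_svd(vec, d, eps=1e-10):
            """Tensorize vec (length 2**d) and return its TT/QTT cores."""
            cores, rank = [], 1
            C = vec.reshape([2] * d)            # tensorization step
            for k in range(d - 1):
                C = C.reshape(rank * 2, -1)     # unfold: (r_{k-1}*2) x rest
                U, S, Vt = np.linalg.svd(C, full_matrices=False)
                new_rank = max(1, int(np.sum(S > eps * S[0])))
                cores.append(U[:, :new_rank].reshape(rank, 2, new_rank))
                C = S[:new_rank, None] * Vt[:new_rank]   # carry the remainder
                rank = new_rank
            cores.append(C.reshape(rank, 2, 1))
            return cores

        d = 16
        x = np.sin(np.linspace(0, 1, 2**d))     # smooth signal, 65536 samples
        print([c.shape for c in tt_svd(x, d)])  # TT ranks remain very small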

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and in the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions. Comment: 232 pages
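    As a small illustration of the "distributed computations on small cores" point (a generic sketch under assumed core shapes, not code from the monograph): once a tensor is stored as TT cores G_k of shape (r_{k-1}, n_k, r_k), any single entry is obtained by multiplying one small matrix slice per core, so the full tensor never has to be formed.

        import numpy as np

        def tt_entry(cores, index):
            """Entry T[i_1, ..., i_d] of a tensor given by its TT cores."""
            v = np.ones((1, 1))
            for G, i in zip(cores, index):
                v = v @ G[:, i, :]      # (1 x r_{k-1}) times (r_{k-1} x r_k)
            return v[0, 0]

        # Random TT tensor, mode sizes (4, 5, 6, 7), TT ranks (1, 3, 3, 3, 1).
        shapes = [(1, 4, 3), (3, 5, 3), (3, 6, 3), (3, 7, 1)]
        cores = [np.random.rand(*s) for s in shapes]
        print(tt_entry(cores, (0, 1, 2, 3)))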

    Block Circulant and Toeplitz Structures in the Linearized Hartree–Fock Equation on Finite Lattices: Tensor Approach

    This paper introduces and analyses a new grid-based tensor approach to the approximate solution of the elliptic eigenvalue problem for 3D lattice-structured systems. We consider the linearized Hartree-Fock equation over a spatial L1×L2×L3 lattice for both periodic and non-periodic problem settings, discretized in a localized Gaussian-type orbital basis. In the periodic case, the Galerkin system matrix obeys a three-level block-circulant structure that allows FFT-based diagonalization, while for finite extended systems in a box (Dirichlet boundary conditions) we arrive at a perturbed block-Toeplitz representation providing fast matrix-vector multiplication and low storage size. The proposed grid-based tensor techniques offer twofold benefits: (a) the entries of the Fock matrix are computed by 1D operations using low-rank tensors represented on a 3D grid, and (b) in the periodic case the low-rank tensor structure in the diagonal blocks of the Fock matrix in Fourier space reduces the conventional 3D FFT to a product of 1D FFTs. Lattice-type systems in a box with Dirichlet boundary conditions are treated numerically by our previous tensor solver for single molecules, which makes calculations on rather large L1×L2×L3 lattices possible due to the reduced numerical cost for 3D problems. The numerical simulations for both box-type and periodic L×1×1 lattice chains in a 3D rectangular "tube", with L up to several hundred, confirm the theoretical complexity bounds for the block-structured eigenvalue solvers in the limit of large L. Comment: 30 pages, 12 figures. arXiv admin note: substantial text overlap with arXiv:1408.383
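    The FFT-based diagonalization can be illustrated on the simplest one-level case (a generic NumPy sketch, not the paper's solver): a circulant matrix is diagonalized by the discrete Fourier transform, so a circulant matrix-vector product reduces to two FFTs and an elementwise product; the three-level block-circulant Fock matrix over the L1×L2×L3 lattice is block-diagonalized by the same mechanism, mode by mode.

        import numpy as np

        n = 8
        c = np.random.rand(n)                   # first column of the circulant
        x = np.random.rand(n)

        # Dense circulant matrix: C[i, j] = c[(i - j) mod n].
        C = np.stack([np.roll(c, j) for j in range(n)], axis=1)

        direct = C @ x
        via_fft = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real

        print(np.allclose(direct, via_fft))     # True: matvec = circular convolution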