4 research outputs found

    Efficient computation of highly oscillatory integrals by using QTT tensor approximation

    We propose a new method for the efficient approximation of a class of highly oscillatory weighted integrals in which the oscillatory function depends on a frequency parameter $\omega \ge 0$, typically varying in a large interval. Our approach is based, for a fixed but arbitrary oscillator, on the pre-computation and low-parametric approximation of certain $\omega$-dependent prototype functions whose evaluation recovers the target integral in a straightforward way. The difficulty is that these prototype functions are themselves oscillatory integrals, which makes them hard both to evaluate and to approximate. Here we use the quantized tensor train (QTT) approximation of functional $m$-vectors, which has logarithmic complexity in $m$, in combination with a cross-approximation scheme for TT tensors. This allows the accurate approximation and efficient storage of these functions over a wide range of grid and frequency parameters. Numerical experiments in one and several spatial dimensions illustrate the efficiency of the QTT-based integration scheme.
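    As an illustration of the quantization idea behind QTT, here is a minimal NumPy sketch (not the authors' implementation): a function sampled on a uniform grid of $2^d$ points is reshaped into a $d$-way tensor with mode sizes 2 and compressed by sequential truncated SVDs (a basic TT-SVD), so an oscillatory sample such as $\cos(\omega x)$ is stored in $O(d r^2)$ parameters instead of $2^d$. The grid size, frequency, and tolerance below are arbitrary choices for the example.

```python
import numpy as np

def qtt_decompose(v, eps=1e-8):
    """Compress a length-2^d vector into QTT cores via sequential truncated SVDs."""
    d = int(round(np.log2(v.size)))
    assert v.size == 2 ** d, "vector length must be a power of two"
    cores, r_prev = [], 1
    mat = v.reshape(r_prev * 2, -1)                      # first unfolding
    for _ in range(d - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))          # truncated rank
        cores.append(u[:, :r].reshape(r_prev, 2, r))     # TT core of shape (r_prev, 2, r)
        mat = (np.diag(s[:r]) @ vt[:r]).reshape(r * 2, -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, 2, 1))              # last core
    return cores

def qtt_reconstruct(cores):
    """Contract the QTT cores back into the full vector (only for accuracy checks)."""
    out = cores[0]
    for c in cores[1:]:
        out = np.tensordot(out, c, axes=([-1], [0]))
    return out.reshape(-1)

# Oscillatory sample cos(omega * x) on 2^16 grid points; its QTT ranks stay small.
omega, d = 200.0, 16
x = np.linspace(0.0, 1.0, 2 ** d)
v = np.cos(omega * x)
cores = qtt_decompose(v)
print("QTT ranks:", [c.shape[2] for c in cores[:-1]])
print("relative error:", np.linalg.norm(qtt_reconstruct(cores) - v) / np.linalg.norm(v))
```

    The paper's scheme relies on cross approximation to build such QTT representations without forming the full vector; the SVD-based sketch above is only the dense counterpart, shown for illustration.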

    Computing the density of states for optical spectra by low-rank and QTT tensor approximation

    In this paper, we introduce a new interpolation scheme to approximate the density of states (DOS) for a class of rank-structured matrices, with application to the Tamm-Dancoff approximation (TDA) of the Bethe-Salpeter equation (BSE). The presented approach for approximating the DOS is based on two main techniques. First, we propose an economical method for calculating the traces of parametric matrix resolvents at interpolation points by taking advantage of the block-diagonal plus low-rank matrix structure described in [6, 3] for the BSE/TDA problem. Second, we show that a regularized or smoothed DOS discretized on a fine grid of size $N$ can be accurately represented by a low-rank quantized tensor train (QTT) tensor determined through a least-squares fitting procedure. The latter provides good approximation properties for strictly oscillating DOS functions with multiple gaps and requires asymptotically far fewer function calls, $O(\log N)$, than the full grid size $N$. This approach allows us to overcome the computational difficulties of traditional schemes by avoiding both stochastic sampling and interpolation by problem-independent functions such as polynomials. Numerical tests indicate that the QTT approach yields an accurate recovery of the DOS for problems with relatively large spectral gaps. The QTT tensor rank depends only weakly on the size of the molecular system, which paves the way for treating large-scale spectral problems.
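    For context on the regularization step, here is a minimal sketch under the standard Lorentzian-broadening assumption (not the paper's rank-structured trace computation): the smoothed DOS is evaluated from resolvent traces as $\phi_\eta(t) = -\tfrac{1}{n\pi}\,\mathrm{Im}\,\mathrm{Tr}\,[(t+\mathrm{i}\eta)I - H]^{-1}$ at a set of sample points. The dense matrix inverse used below is exactly the cost that the block-diagonal plus low-rank technique is designed to avoid.

```python
import numpy as np

def regularized_dos(H, t_points, eta=0.1):
    """Lorentzian-broadened density of states evaluated from resolvent traces."""
    n = H.shape[0]
    eye = np.eye(n)
    dos = np.empty(len(t_points))
    for k, t in enumerate(t_points):
        # Resolvent at the shifted point t + i*eta (dense inverse, for simplicity only).
        R = np.linalg.inv((t + 1j * eta) * eye - H)
        dos[k] = -np.trace(R).imag / (n * np.pi)
    return dos

# Small symmetric test matrix; the broadened DOS should integrate to roughly one.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
H = (A + A.T) / 2.0
t = np.linspace(-30.0, 30.0, 400)
phi = regularized_dos(H, t, eta=0.2)
print("approx. integral of the DOS:", np.sum(phi) * (t[1] - t[0]))
```

    In the paper's setting the sampled values would then be fitted by a low-rank QTT tensor; here they are simply returned on the full grid of points.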

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, and outlines their applications in machine learning and data analytics. Particular emphasis is placed on the tensor train (TT) and hierarchical Tucker (HT) decompositions and their physically meaningful interpretations, which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks can perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher-order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone texts or as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions.
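    To make the TT format and the core contractions mentioned above concrete, here is a minimal NumPy sketch (illustrative only, not code from the monograph) that computes the inner product of two tensors given in TT format by a core-by-core sweep, never forming either full tensor; the random cores, mode sizes, and ranks are arbitrary choices for the example.

```python
import numpy as np

def random_tt(dims, rank, rng):
    """Random TT cores G_k of shape (r_{k-1}, n_k, r_k) with boundary ranks 1."""
    ranks = [1] + [rank] * (len(dims) - 1) + [1]
    return [rng.standard_normal((ranks[k], n, ranks[k + 1]))
            for k, n in enumerate(dims)]

def tt_inner(cores_a, cores_b):
    """Inner product <A, B> computed by contracting the two TT chains core by core."""
    M = np.ones((1, 1))
    for A, B in zip(cores_a, cores_b):
        # M: (ra, rb), A: (ra, n, ra'), B: (rb, n, rb')  ->  new M: (ra', rb')
        M = np.einsum('ab,anc,bnd->cd', M, A, B)
    return M[0, 0]

def tt_full(cores):
    """Reconstruct the full tensor (only feasible for small sizes; used for checking)."""
    out = cores[0]
    for c in cores[1:]:
        out = np.tensordot(out, c, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])

rng = np.random.default_rng(0)
dims = (4, 5, 6, 7, 8)          # small mode sizes so the dense check stays cheap
A = random_tt(dims, rank=3, rng=rng)
B = random_tt(dims, rank=2, rng=rng)
print("TT contraction:", tt_inner(A, B))
print("dense check:   ", np.sum(tt_full(A) * tt_full(B)))
```

    The dense check is only feasible for the small mode sizes chosen here; the point of the core-by-core contraction is that its cost grows with the number of cores and their ranks rather than with the full tensor size.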
