31 research outputs found

    Tensor Numerical Methods in Quantum Chemistry: from Hartree-Fock Energy to Excited States

    We review the recent successes of grid-based tensor numerical methods and discuss their prospects in real-space electronic structure calculations. These methods, based on low-rank representations of multidimensional functions and integral operators, have led to an entirely grid-based, tensor-structured 3D Hartree-Fock eigenvalue solver. The solver benefits from tensor calculation of the core Hamiltonian and of the two-electron integrals (TEI) in $O(n\log n)$ complexity, using rank-structured approximations of the basis functions, electron densities, and convolution integral operators, all represented on 3D $n\times n\times n$ Cartesian grids. The algorithm for calculating the TEI tensor in the form of a Cholesky decomposition is based on multiple factorizations using an algebraic 1D "density fitting" scheme. The basis functions are not restricted to separable Gaussians, since analytical integration is replaced by high-precision tensor-structured numerical quadratures. Tensor approaches to post-Hartree-Fock calculations, for the MP2 energy correction and for the Bethe-Salpeter excited states, based on low-rank factorizations and the reduced basis method, were recently introduced. Another direction concerns recent attempts to develop a tensor-based Hartree-Fock numerical scheme for finite lattice-structured systems, where one of the numerical challenges is the summation of the electrostatic potentials of a large number of nuclei. The 3D grid-based tensor method for calculating the potential sum on an $L\times L\times L$ lattice exhibits computational work linear in $L$, i.e. $O(L)$, instead of the usual $O(L^3 \log L)$ scaling of Ewald-type approaches.
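
A minimal sketch of the grid idea behind such solvers (not the paper's implementation; grid size and exponent are illustrative): a Gaussian basis function evaluated on an $n\times n\times n$ Cartesian grid separates exactly into three 1D factor vectors, so storage drops from $O(n^3)$ to $O(3n)$.

```python
import numpy as np

n = 64
x = np.linspace(-5.0, 5.0, n)
alpha = 0.7                                  # hypothetical Gaussian exponent

g1d = np.exp(-alpha * x**2)                  # 1D factor, shape (n,)
full = np.exp(-alpha * (x[:, None, None]**2  # direct 3D grid evaluation
                        + x[None, :, None]**2
                        + x[None, None, :]**2))
rank1 = np.einsum('i,j,k->ijk', g1d, g1d, g1d)   # separable reconstruction

# relative error of the rank-1 (separable) representation
err = np.linalg.norm(full - rank1) / np.linalg.norm(full)
print(err)
```

For a Gaussian the rank-1 representation is exact up to floating point; general (non-separable) basis functions require higher but typically small ranks, which is what the rank-structured quadratures exploit.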

    A literature survey of low-rank tensor approximation techniques

    In recent years, low-rank tensor approximation has been established as a new tool in scientific computing for addressing large-scale linear and multilinear algebra problems that would be intractable by classical techniques. This survey attempts to give a literature overview of current developments in this area, with an emphasis on function-related tensors.
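
One of the classical techniques covered by such surveys is the truncated higher-order SVD (HOSVD) for the Tucker format. A minimal sketch (tensor sizes and ranks are illustrative):

```python
import numpy as np

def hosvd(T, ranks):
    """Truncated HOSVD: factor matrices from mode unfoldings, then a core."""
    factors = []
    for mode, r in enumerate(ranks):
        # unfold T along `mode` and keep the leading left singular vectors
        M = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(M, full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for U in factors:
        # contracting mode 0 each time cycles the modes, so after all
        # len(ranks) contractions the original mode order is restored
        core = np.tensordot(core, U.conj(), axes=([0], [0]))
    return core, factors

rng = np.random.default_rng(0)
# build a tensor with exact Tucker ranks (2, 2, 2)
G = rng.standard_normal((2, 2, 2))
A, B, C = (rng.standard_normal((10, 2)) for _ in range(3))
T = np.einsum('abc,ia,jb,kc->ijk', G, A, B, C)

core, (U1, U2, U3) = hosvd(T, (2, 2, 2))
That = np.einsum('abc,ia,jb,kc->ijk', core, U1, U2, U3)
print(np.allclose(T, That))   # exact recovery at the true ranks
```

Storage falls from $10^3$ entries to a $2\times 2\times 2$ core plus three $10\times 2$ factors; for inexact ranks the same construction yields a quasi-optimal approximation.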

    Efficient computation of highly oscillatory integrals by using QTT tensor approximation

    We propose a new method for the efficient approximation of a class of highly oscillatory weighted integrals where the oscillatory function depends on the frequency parameter $\omega \geq 0$, typically varying in a large interval. Our approach is based, for a fixed but arbitrary oscillator, on the pre-computation and low-parametric approximation of certain $\omega$-dependent prototype functions whose evaluation allows the target integral to be recovered in a straightforward way. The difficulty that arises is that these prototype functions consist of oscillatory integrals and are themselves oscillatory, which makes them difficult both to evaluate and to approximate. Here we use the quantized tensor train (QTT) approximation method for functional $m$-vectors, with logarithmic complexity in $m$, in combination with a cross-approximation scheme for TT tensors. This allows the accurate approximation and efficient storage of these functions over a wide range of grid and frequency parameters. Numerical examples illustrate the efficiency of the QTT-based numerical integration scheme in one and several spatial dimensions.
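
The core QTT idea can be sketched as follows (illustrative function and tolerance, not the paper's scheme): samples of a function on a grid of $m = 2^d$ points are reshaped into a $d$-dimensional $2\times 2\times\cdots\times 2$ tensor and compressed by TT-SVD, so the storage is governed by the QTT ranks rather than by $m$.

```python
import numpy as np

def tt_svd(T, tol=1e-10):
    """Plain TT-SVD: sequential truncated SVDs, returns 3D TT cores."""
    d = T.ndim
    cores, r = [], 1
    M = T.reshape(r * T.shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rank = max(1, int(np.sum(s > tol * s[0])))
        cores.append(U[:, :rank].reshape(r, T.shape[k], rank))
        M = (s[:rank, None] * Vt[:rank]).reshape(rank * T.shape[k + 1], -1)
        r = rank
    cores.append(M.reshape(r, T.shape[-1], 1))
    return cores

d = 12                                   # m = 2**12 = 4096 grid points
m = 2 ** d
t = np.linspace(0.0, 1.0, m)
v = np.exp(1j * 40.0 * np.pi * t)        # oscillatory sample vector
cores = tt_svd(v.reshape((2,) * d))      # quantize: one mode per bit

params = sum(c.size for c in cores)
print(params, m)                         # far fewer parameters than m
```

A pure complex exponential is exactly QTT rank-1, since the phase is linear in the index and factorizes over the bits; this is why oscillatory prototype functions are such natural candidates for QTT storage.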

    On identification of self-similar characteristics using the Tensor Train decomposition method with application to channel turbulence flow

    A study on the application of the Tensor Train decomposition method to 3D direct numerical simulation data of channel turbulence flow is presented. The approach is validated with respect to compression rate and storage requirement. In tests with synthetic data, it is found that grid-aligned self-similar patterns are well captured, and the application to non-grid-aligned self-similarity also yields satisfying results. It is observed that the shape of the input tensor significantly affects the compression rate. Applied to channel turbulence flow data, the Tensor Train format allows for surprisingly high compression rates whilst ensuring low relative errors.
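
The two quantities validated in such studies, compression rate and relative error, can be illustrated on a synthetic smooth 3D field (illustrative data and tolerance, not the study's pipeline):

```python
import numpy as np

def tt_compress(T, tol):
    """TT-SVD with relative singular-value truncation tol."""
    cores, r = [], 1
    M = T.reshape(T.shape[0], -1)
    for k in range(T.ndim - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rank = max(1, int(np.sum(s > tol * s[0])))
        cores.append(U[:, :rank].reshape(r, T.shape[k], rank))
        M = (s[:rank, None] * Vt[:rank]).reshape(rank * T.shape[k + 1], -1)
        r = rank
    cores.append(M.reshape(r, T.shape[-1], 1))
    return cores

def tt_full(cores):
    """Contract TT cores back into the full tensor (for error checking)."""
    T = cores[0]
    for c in cores[1:]:
        T = np.tensordot(T, c, axes=([-1], [0]))
    return T.squeeze(axis=(0, -1))

n = 32
x = np.linspace(0, 2 * np.pi, n)
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
field = np.sin(X) * np.cos(Y) + 0.5 * np.sin(Y + Z)   # smooth 3D field

cores = tt_compress(field, tol=1e-8)
rate = field.size / sum(c.size for c in cores)        # compression rate
rel_err = (np.linalg.norm(tt_full(cores) - field)
           / np.linalg.norm(field))                   # relative error
print(rate, rel_err)
```

For turbulence data the ranks are of course larger, and, as the study notes, reshaping the same data into tensors of different shape can change the achievable compression rate substantially.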

    TT-NF: Tensor Train Neural Fields

    Learning neural fields has been an active topic in deep learning research, focusing, among other issues, on finding more compact and easy-to-fit representations. In this paper, we introduce a novel low-rank representation termed Tensor Train Neural Fields (TT-NF) for learning neural fields on dense regular grids, together with efficient methods for sampling from them. Our representation is a TT parameterization of the neural field, trained with backpropagation to minimize a non-convex objective. We analyze the effect of low-rank compression on the downstream task quality metrics in two settings. First, we demonstrate the efficiency of our method in a sandbox task of tensor denoising, which admits comparison with SVD-based schemes designed to minimize reconstruction error. Furthermore, we apply the proposed approach to Neural Radiance Fields, where the low-rank structure of the field corresponding to the best quality can be discovered only through learning.
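
The primitive that makes sampling from a TT-parameterized field efficient can be sketched as follows (shapes and ranks are illustrative, not the paper's architecture): a single grid entry is evaluated by multiplying one matrix slice per TT core, in $O(d\,r^2)$ operations, without ever materializing the full tensor.

```python
import numpy as np

rng = np.random.default_rng(1)
shape, ranks = (8, 8, 8, 8), (1, 3, 3, 3, 1)
# random TT cores standing in for learned parameters
cores = [rng.standard_normal((ranks[k], shape[k], ranks[k + 1]))
         for k in range(len(shape))]

def tt_entry(cores, idx):
    """Field value at multi-index idx: product of per-core slices."""
    v = np.ones((1,))
    for c, i in zip(cores, idx):
        v = v @ c[:, i, :]            # (r_k,) @ (r_k, r_{k+1})
    return float(v[0])

def tt_full(cores):
    """Reference: contract all cores into the dense tensor."""
    T = cores[0]
    for c in cores[1:]:
        T = np.tensordot(T, c, axes=([-1], [0]))
    return T.squeeze(axis=(0, -1))

idx = (2, 5, 1, 7)
print(np.isclose(tt_entry(cores, idx), tt_full(cores)[idx]))
```

In a learning setting the cores are the trainable parameters and the same slice products are differentiable, so backpropagation through sampled entries fits directly into standard autodiff frameworks.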