17 research outputs found

    Fast truncation of mode ranks for bilinear tensor operations

    We propose a fast algorithm for mode rank truncation of the result of a bilinear operation on 3-tensors given in the Tucker or canonical form. If the arguments and the result have mode sizes n and mode ranks r, the computation costs O(nr^3 + r^4). The algorithm is based on the cross approximation of Gram matrices, and the accuracy of the resulting Tucker approximation is limited by the square root of machine precision.
    Comment: 9 pages, 2 tables. Submitted to Numerical Linear Algebra with Applications, special edition for the ICSMT conference, Hong Kong, January 201
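    A note on the building blocks: the Gram-matrix idea can be illustrated with a plain HOSVD-style mode-rank truncation of a Tucker 3-tensor, where the dominant eigenvectors of the small Gram matrices of the core unfoldings define the truncated bases. The NumPy sketch below is only this generic variant under assumed shapes (core G of size r1 x r2 x r3, factors U1, U2, U3), not the paper's fast cross-approximation algorithm; all function and variable names are hypothetical.

        import numpy as np

        def tucker_truncate(G, U, new_ranks):
            """Truncate the mode ranks of a Tucker tensor: core G (r1, r2, r3), factors U = [U1, U2, U3]."""
            Vs = []
            for k in range(3):
                # Unfold the core along mode k and form its small (r_k x r_k) Gram matrix.
                Gk = np.moveaxis(G, k, 0).reshape(G.shape[k], -1)
                gram = Gk @ Gk.T
                # Dominant eigenvectors of the Gram matrix span the truncated mode-k subspace.
                w, V = np.linalg.eigh(gram)                      # eigenvalues in ascending order
                Vs.append(V[:, ::-1][:, :new_ranks[k]])          # keep the top new_ranks[k] directions
            # Project the core onto the new bases and absorb the change of basis into the factors.
            G_new = np.einsum('abc,ai,bj,ck->ijk', G, Vs[0], Vs[1], Vs[2])
            U_new = [U[k] @ Vs[k] for k in range(3)]
            return G_new, U_new

    Forming each Gram matrix costs O(r^4) and updating a factor costs O(nr^2), which matches the flavour of the quoted O(nr^3 + r^4) bound; the cross approximation in the paper is what avoids forming the full result before truncation.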

    Bridging the gap between quantum Monte Carlo and F12-methods

    Tensor product approximation of pair-correlation functions opens a new route from quantum Monte Carlo (QMC) to explicitly correlated F12 methods. Thereby one benefits from stochastic optimization techniques used in QMC to obtain optimal pair-correlation functions, which typically recover more than 85% of the total correlation energy. Our approach incorporates, in particular, core and core-valence correlation, which are poorly described by the homogeneous and isotropic ansatz functions usually applied in F12 calculations. We demonstrate the performance of the tensor product approximation by applications to atoms and small molecules. It turns out that the canonical tensor format is especially suitable for the efficient computation of two- and three-electron integrals required by explicitly correlated methods. The algorithm uses a decomposition of three-electron integrals, originally introduced by Boys and Handy and further elaborated by Ten-no in his 3d numerical quadrature scheme, which enables efficient computations in the tensor format. Furthermore, our method includes the adaptive wavelet approximation of tensor components, where convergence rates are given in the framework of best N-term approximation theory.
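    As an aside on why the canonical format pays off for multi-electron integrals: a rank-R separable representation f(x, y, z) ≈ sum_r a_r(x) b_r(y) c_r(z) turns a 3D quadrature into products of 1D quadratures. The sketch below is purely illustrative, with hypothetical factors and grids that are not taken from the paper.

        import numpy as np

        R, n = 4, 64                                   # canonical rank and 1D grid size
        x, w = np.polynomial.legendre.leggauss(n)      # Gauss-Legendre nodes/weights on [-1, 1]

        # Hypothetical canonical factors sampled on the 1D grid, each of shape (R, n).
        A = np.exp(-np.arange(1, R + 1)[:, None] * x[None, :] ** 2)
        B = np.cos(np.arange(1, R + 1)[:, None] * x[None, :])
        C = np.exp(-np.abs(x)[None, :]) * np.ones((R, 1))

        # Integral of sum_r A_r(x) B_r(y) C_r(z) over the cube: one 1D quadrature per mode,
        # so the cost is O(R * n) instead of O(n^3) for a full tensor-grid quadrature.
        integral = np.sum((A @ w) * (B @ w) * (C @ w))
        print(integral)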

    Efficient Analysis of High Dimensional Data in Tensor Formats

    In this article we introduce new methods for the analysis of high dimensional data in tensor formats, where the underlying data come from a stochastic elliptic boundary value problem. After discretisation of the deterministic operator as well as the random fields via KLE and PCE, the obtained high dimensional operator can be approximated by sums of elementary tensors. This tensor representation can be used effectively for computing different quantities of interest, such as the maximum norm, level sets and the cumulative distribution function. The basic concepts of data analysis in high dimensions are discussed for tensors represented in the canonical format; however, the approach can easily be used in other tensor formats. As an intermediate step we describe efficient iterative algorithms for computing the characteristic and sign functions as well as the pointwise inverse in the canonical tensor format. Since the representation rank grows during most algebraic operations and iteration steps, we use low-rank approximation and inexact recursive iteration schemes.
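    To make the "inexact recursive iteration" idea concrete, a minimal sketch of a Newton-type scheme for the pointwise (Hadamard) inverse is given below, with a truncation hook after each step. Here the tensor is kept dense and the truncation is a stub; in the canonical format every multiplication would increase the representation rank and be followed by a low-rank recompression. All names are hypothetical and the scheme is a generic illustration, not the article's exact algorithm.

        import numpy as np

        def truncate(t):
            # Stand-in for low-rank recompression in the canonical tensor format.
            return t

        def pointwise_inverse(a, iters=30):
            # Newton iteration x_{k+1} = x_k * (2 - a * x_k), converging elementwise to 1/a.
            x = np.full_like(a, 1.0 / np.max(np.abs(a)))     # crude starting guess (assumes positive entries of a)
            for _ in range(iters):
                x = truncate(x * (2.0 - a * x))              # rank would grow here without truncation
            return x

        a = np.random.rand(4, 4, 4) + 0.5
        print(np.max(np.abs(pointwise_inverse(a) - 1.0 / a)))   # should be near machine precision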

    Use of tensor formats in elliptic eigenvalue problems

    We investigate approximations of eigenfunctions of a certain class of elliptic operators in higher dimensions by finite sums of products of functions with separated variables, and especially the conditions providing an exponential decrease of the error with respect to the number of terms. The consistent use of tensor formats can be regarded as the basis for a new class of rank-truncated iterative eigensolvers with almost linear complexity in the univariate problem size, which improves dramatically on traditional methods that scale linearly in the volume size. Tensor methods can be applied to large-scale spectral problems in computational quantum chemistry, for example the Schrödinger, Hartree-Fock and Kohn-Sham equations in electronic structure calculations. The results of numerical experiments clearly indicate the linear-logarithmic scaling of the low-rank tensor method in the univariate problem size. The algorithms work equally well for the computation of both minimal and maximal eigenvalues of the discrete elliptic operators.
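    The principle behind such rank-truncated iterations can be illustrated in 2D: for the discrete Laplacian A = L ⊗ I + I ⊗ L, a power iteration for the maximal eigenvalue can keep its iterate as an n x n matrix that is recompressed to low rank after every matrix-vector product. The NumPy sketch below is only this generic illustration with made-up parameters; it keeps everything dense for readability, whereas a real solver would store the iterate in factored form to reach the almost linear cost in n mentioned above.

        import numpy as np

        n, rank, steps = 256, 8, 200
        L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)     # 1D Laplacian stencil [-1, 2, -1]

        def truncate(X, r):
            # Recompress to rank r via a truncated SVD.
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            return (U[:, :r] * s[:r]) @ Vt[:r]

        X = truncate(np.random.rand(n, n), rank)                 # low-rank starting iterate
        for _ in range(steps):
            Y = L @ X + X @ L.T                                  # action of L (x) I + I (x) L on vec(X)
            lam = np.linalg.norm(Y)                              # norm-based eigenvalue estimate
            X = truncate(Y / lam, rank)                          # renormalise and truncate the rank

        print(lam)   # slowly approaches the maximal eigenvalue (close to 8 for this stencil)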