
    Fast multidimensional convolution in low-rank formats via cross approximation

    We propose a new cross-conv algorithm for the approximate computation of convolutions in different low-rank tensor formats (tensor train, Tucker, Hierarchical Tucker). It has better complexity with respect to the tensor rank than previous approaches, and a high potential impact in a range of applications. The key idea is to apply cross approximation in the "frequency domain", where convolution becomes a simple elementwise product. We illustrate the efficiency of our algorithm by computing the three-dimensional Newton potential and by presenting preliminary results for the solution of the Hartree-Fock equation on tensor-product grids.
    Comment: 14 pages, 2 figures
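
    The frequency-domain principle the abstract relies on can be checked on dense arrays (the paper's contribution is doing this within low-rank formats, which this sketch does not attempt):

```python
import numpy as np

def circular_conv3d(f, g):
    """Circular convolution of two 3D arrays via the FFT:
    convolution becomes a simple elementwise product in the frequency domain."""
    return np.real(np.fft.ifftn(np.fft.fftn(f) * np.fft.fftn(g)))

# Sanity check against a direct triple sum on a tiny grid.
rng = np.random.default_rng(0)
n = 4
f = rng.standard_normal((n, n, n))
g = rng.standard_normal((n, n, n))

direct = np.zeros((n, n, n))
for i in range(n):
    for j in range(n):
        for k in range(n):
            for p in range(n):
                for q in range(n):
                    for r in range(n):
                        direct[i, j, k] += f[p, q, r] * g[(i - p) % n, (j - q) % n, (k - r) % n]

assert np.allclose(circular_conv3d(f, g), direct)
```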

    FFT-based Kronecker product approximation to micromagnetic long-range interactions

    We derive a Kronecker product approximation for the micromagnetic long-range interactions in a collocation framework by means of separable sinc quadrature. Evaluation of this operator for structured tensors (canonical format, Tucker format, tensor trains) scales sublinearly in the volume size. Based on efficient use of the FFT for structured tensors, we are able to accelerate computations to quasi-linear complexity in the number of collocation points used in one dimension. Quadratic convergence of the underlying collocation scheme, as well as exponential convergence in the separation rank of the approximations, is proved. Numerical experiments on accuracy and complexity confirm the theoretical results.
    Comment: 4 figures
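
    The efficiency of Kronecker-product operators rests on a standard identity: the product is never formed explicitly. A minimal sketch (this shows only the basic identity, not the paper's sinc-quadrature construction):

```python
import numpy as np

def apply_kron(A, B, x):
    """Apply (A kron B) to a vector x without forming the Kronecker product.
    With row-major flattening, (A kron B) vec(X) = vec(A X B^T)."""
    p, q = A.shape[1], B.shape[1]
    X = x.reshape(p, q)
    return (A @ X @ B.T).ravel()

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((5, 6))
x = rng.standard_normal(4 * 6)

# Matches the explicit (15 x 24) Kronecker matrix at a fraction of the cost.
assert np.allclose(apply_kron(A, B, x), np.kron(A, B) @ x)
```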

    Vico-Greengard-Ferrando quadratures in the tensor solver for integral equations

    Convolution with the Green's function of a differential operator appears in many applications, e.g. the Lippmann-Schwinger integral equation. Algorithms for computing such convolutions are usually non-trivial and require a non-uniform mesh. Recently, however, Vico, Greengard and Ferrando developed a method for computing convolutions with smooth, compactly supported functions to spectral accuracy, requiring nothing more than the Fast Fourier Transform (FFT). Their approach is very well suited to the low-rank tensor implementation which we develop using the Quantized Tensor Train (QTT) decomposition.
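
    The FFT building block the method needs is aperiodic (zero-padded) convolution; the Vico-Greengard-Ferrando contribution is replacing the kernel's Fourier transform with that of a truncated Green's function, which this sketch omits:

```python
import numpy as np

def linear_conv_fft(f, g):
    """Aperiodic (linear) convolution via zero-padded FFTs:
    pad both factors to the full output length so the circular
    wrap-around of the FFT does not contaminate the result."""
    n = len(f) + len(g) - 1
    return np.real(np.fft.ifft(np.fft.fft(f, n) * np.fft.fft(g, n)))

f = np.array([1.0, 2.0, 3.0])
g = np.array([0.5, -1.0])
assert np.allclose(linear_conv_fft(f, g), np.convolve(f, g))
```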

    Memory footprint reduction for the FFT-based volume integral equation method via tensor decompositions

    We present a method of memory footprint reduction for FFT-based, electromagnetic (EM) volume integral equation (VIE) formulations. The arising Green's function tensors have low multilinear rank, which allows Tucker decomposition to be employed for their compression, thereby greatly reducing the memory required for numerical simulations. Consequently, the compressed components fit inside a graphics processing unit (GPU), on which highly parallelized computations can vastly accelerate the iterative solution of the arising linear system. Since the element-wise products throughout the iterative solver's process require additional flops, we provide a variety of novel and efficient methods that maintain the linear complexity of the classic element-wise product up to a small multiplicative constant. We demonstrate the utility of our approach via its application to VIE simulations for the Magnetic Resonance Imaging (MRI) of a human head. For these simulations we report an order of magnitude acceleration over standard techniques.
    Comment: 11 pages, 10 figures, 5 tables, 2 algorithms, journal
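
    The compression step for a tensor of low multilinear rank can be sketched with a truncated higher-order SVD (one common way to compute a Tucker decomposition; the paper's specific pipeline may differ):

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding of a tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated higher-order SVD: Tucker factors from the leading left
    singular vectors of each unfolding, core by contracting them out."""
    factors = [np.linalg.svd(unfold(T, k), full_matrices=False)[0][:, :r]
               for k, r in enumerate(ranks)]
    core = T
    for k, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, k, 0), axes=1), 0, k)
    return core, factors

def tucker_reconstruct(core, factors):
    """Expand a Tucker representation back to a full tensor."""
    T = core
    for k, U in enumerate(factors):
        T = np.moveaxis(np.tensordot(U, np.moveaxis(T, k, 0), axes=1), 0, k)
    return T

# A tensor of multilinear rank (2, 2, 2) is recovered exactly,
# stored as 2^3 + 3 * 8 * 2 = 56 numbers instead of 8^3 = 512.
rng = np.random.default_rng(2)
G = rng.standard_normal((2, 2, 2))
Us = [rng.standard_normal((8, 2)) for _ in range(3)]
T = tucker_reconstruct(G, Us)
core, factors = hosvd(T, (2, 2, 2))
assert np.allclose(tucker_reconstruct(core, factors), T)
```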

    Range-separated tensor formats for numerical modeling of many-particle interaction potentials

    We introduce and analyze the new range-separated (RS) canonical/Tucker tensor format, which aims at the numerical modeling of 3D long-range interaction potentials in multi-particle systems. The main idea of the RS tensor format is the independent grid-based low-rank representation of the localized and global parts of the target tensor, which allows the efficient numerical approximation of N-particle interaction potentials. The single-particle reference potential, such as 1/&#x2016;x&#x2016;, is split into a sum of localized and long-range low-rank canonical tensors represented on a fine 3D n&#xD7;n&#xD7;n Cartesian grid. The smoothed long-range contribution to the total potential sum is represented on the 3D grid in O(n) storage via the low-rank canonical/Tucker tensor. We prove that the Tucker rank parameters depend only logarithmically on the number of particles N and the grid size n. Agglomeration of the short-range part of the sum is reduced to an independent treatment of N localized terms with almost disjoint effective supports, calculated in O(N) operations. Thus, the cumulated sum of short-range clusters is parametrized by a single low-rank canonical reference tensor with local support, accompanied by a list of particle coordinates and their charges. The RS canonical/Tucker tensor representations reduce the cost of multi-linear algebraic operations on the 3D potential sums arising in the modeling of multi-dimensional data by radial basis functions, say, in the computation of the electrostatic potential of a protein, in 3D integration and convolution transforms, in the computation of gradients, forces and the interaction energy of many-particle systems, and in the low-parametric fitting of multi-dimensional scattered data, by reducing all of them to 1D calculations.
    Comment: 39 pages, 27 figures
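
    The localized/long-range splitting can be illustrated on the scalar Newton kernel with an Ewald-type erf/erfc split (an illustrative choice; the paper performs the split on low-rank canonical tensors of the kernel, not on the scalar function):

```python
import math

def short_range(r, alpha=1.0):
    """Localized part of 1/r: decays like a Gaussian tail."""
    return math.erfc(alpha * r) / r

def long_range(r, alpha=1.0):
    """Smooth, globally supported part of 1/r: bounded as r -> 0."""
    return math.erf(alpha * r) / r

# The two parts sum exactly to the Newton kernel 1/r.
for r in (0.1, 1.0, 5.0):
    assert math.isclose(short_range(r) + long_range(r), 1.0 / r)
```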

    Kriging in Tensor Train data format

    The combination of low-rank tensor techniques and Fast Fourier Transform (FFT) based methods has turned out to be prominent in accelerating various statistical operations, such as Kriging, computing conditional covariance, and geostatistical optimal design. However, the approximation of a full tensor by its low-rank format can be computationally formidable. In this work, we incorporate the robust Tensor Train (TT) approximation of covariance matrices and the efficient TT-Cross algorithm into FFT-based Kriging. We show that the computational complexity of Kriging is reduced to O(d r^3 n), where n is the mode size of the estimation grid, d is the number of variables (the dimension), and r is the rank of the TT approximation of the covariance matrix. For many popular covariance functions the TT rank r remains stable as n and d increase. The advantages of this approach over plain FFT-based methods are demonstrated in synthetic and real data examples.
    Comment: 19 pages, 4 figures, 1 table; UNCECOMP 2019, 3rd International Conference on Uncertainty Quantification in Computational Sciences and Engineering, 24-26 June 2019, Crete, Greece, https://2019.uncecomp.org
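
    The TT format itself can be sketched with TT-SVD, which decomposes a full tensor by sequential reshape-and-SVD; note the paper uses TT-Cross precisely to avoid forming the full tensor, which this illustration does not attempt:

```python
import numpy as np

def tt_svd(T, tol=1e-12):
    """Decompose a full tensor into tensor-train (TT) cores by
    sequential reshaping and rank-truncated SVD."""
    shape = T.shape
    cores, r = [], 1
    C = T.reshape(r * shape[0], -1)
    for k in range(len(shape) - 1):
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        rank = max(1, int(np.sum(s > tol * s[0])))
        cores.append(U[:, :rank].reshape(r, shape[k], rank))
        r = rank
        C = (s[:rank, None] * Vt[:rank]).reshape(r * shape[k + 1], -1)
    cores.append(C.reshape(r, shape[-1], 1))
    return cores

def tt_full(cores):
    """Contract TT cores back into a full tensor."""
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=([-1], [0]))
    return T[0, ..., 0]

# Round trip: with no truncation, TT-SVD is exact.
rng = np.random.default_rng(3)
T = rng.standard_normal((4, 4, 4))
assert np.allclose(tt_full(tt_svd(T)), T)
```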

    Tensor Numerical Methods for High-dimensional PDEs: Basic Theory and Initial Applications

    We present a brief survey of modern tensor numerical methods for multidimensional stationary and time-dependent partial differential equations (PDEs). The guiding principle of the tensor approach is the rank-structured separable approximation of multivariate functions and operators represented on a grid. Recently, the traditional Tucker, canonical, and matrix product states (tensor train) tensor models have been applied to grid-based electronic structure calculations, to parametric PDEs, and to dynamical equations arising in scientific computing. The essential progress is based on the quantics tensor approximation method, which has proved capable of representing (approximating) function-related d-dimensional data arrays of size N^d with log-volume complexity O(d log N). Combined with traditional numerical schemes, these novel tools establish a new and promising approach for solving multidimensional integral and differential equations using low-parametric rank-structured tensor formats. As the main example, we describe the grid-based tensor numerical approach for solving the 3D nonlinear Hartree-Fock eigenvalue problem, which was the starting point for the development of tensor-structured numerical methods for large-scale computations in solving real-life multidimensional problems. We also address new results on the tensor approximation of the dynamical Fokker-Planck and master equations in many dimensions, up to d = 20. Numerical tests demonstrate the benefits of the rank-structured tensor approximation on the aforementioned examples of multidimensional PDEs. In particular, the use of grid-based tensor representations in the reduced basis of atomic orbitals yields an accurate solution of the Hartree-Fock equation on large N&#xD7;N&#xD7;N grids with a grid size of up to N = 10^5.
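
    The quantics idea can be demonstrated directly: reshape 2^d samples of a function into a tensor with d binary dimensions. For an exponential the result is (numerically) rank 1 in every unfolding, which is why such functions compress to a handful of parameters:

```python
import numpy as np

d = 8                          # 2**d = 256 grid points
x = np.arange(2 ** d)
values = 0.9 ** x              # samples of an exponential q**x
Q = values.reshape((2,) * d)   # quantics tensor: d dimensions of size 2

# q**x factorizes over the binary digits of x, so every
# unfolding of Q is an outer product and has rank 1.
for k in range(1, d):
    M = Q.reshape(2 ** k, 2 ** (d - k))
    assert np.linalg.matrix_rank(M) == 1
```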

    Tucker Tensor analysis of Matérn functions in spatial statistics

    In this work, we describe advanced numerical tools for working with multivariate functions and for the analysis of large data sets. These tools drastically reduce the required computing time and storage cost, and therefore allow us to consider much larger data sets or finer meshes. Covariance matrices are crucial in spatio-temporal statistical tasks, but are often very expensive to compute and store, especially in 3D. Therefore, we approximate covariance functions by cheap surrogates in a low-rank tensor format. We apply the Tucker and canonical tensor decompositions to a family of Matérn- and Slater-type functions with varying parameters and demonstrate numerically that their approximations exhibit exponentially fast convergence. We prove the exponential convergence of the Tucker and canonical approximations in the tensor rank parameters. Several statistical operations are performed in this low-rank tensor format, including evaluating the conditional covariance matrix, spatially averaged estimation variance, computing a quadratic form, the determinant, trace, log-likelihood, inverse, and Cholesky decomposition of a large covariance matrix. Low-rank tensor approximations substantially reduce the computing and storage costs. For example, the storage cost is reduced from an exponential O(n^d) to a linear scaling O(drn), where d is the spatial dimension, n is the number of mesh points in one direction, and r is the tensor rank. Prerequisites for the applicability of the proposed techniques are the assumptions that the data locations and measurements lie on a tensor (axes-parallel) grid and that the covariance function depends on a distance, &#x2016;x-y&#x2016;.
    Comment: 23 pages, 2 diagrams, 2 tables, 9 figures
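
    Why a low-rank surrogate is plausible can be seen numerically: the singular values of a smooth covariance matrix decay exponentially. A minimal 1D sketch using a Gaussian (squared-exponential) kernel as a stand-in for the Matérn family's smooth limit:

```python
import numpy as np

# Covariance matrix of a Gaussian kernel on a 1D grid of 200 points.
n, ell = 200, 0.5
x = np.linspace(0.0, 1.0, n)
C = np.exp(-((x[:, None] - x[None, :]) ** 2) / ell ** 2)

# Singular values drop exponentially: a small numerical rank
# captures the matrix to 8 digits.
s = np.linalg.svd(C, compute_uv=False)
num_rank = int(np.sum(s > 1e-8 * s[0]))
assert num_rank < 30   # far below the full size n = 200
```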

    Low-Rank Tucker Approximation of a Tensor From Streaming Data

    This paper describes a new algorithm for computing a low-Tucker-rank approximation of a tensor. The method applies a randomized linear map to the tensor to obtain a sketch that captures the important directions within each mode, as well as the interactions among the modes. The sketch can be extracted from streaming or distributed data or with a single pass over the tensor, and it uses storage proportional to the degrees of freedom in the output Tucker approximation. The algorithm does not require a second pass over the tensor, although it can exploit another view to compute a superior approximation. The paper provides a rigorous theoretical guarantee on the approximation error. Extensive numerical experiments show that the algorithm produces useful results that improve on the state of the art for streaming Tucker decomposition.
    Comment: 34 pages, 14 figures
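
    The sketching idea can be illustrated with a simplified two-pass variant: random sketches of each unfolding yield the factor ranges, and a second pass forms the core. (The paper's algorithm also sketches the core so a single pass suffices; this is only the underlying idea, not the authors' method.)

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def sketchy_tucker(T, ranks, oversample=5, seed=0):
    """Randomized Tucker sketch: multiply each unfolding by a random
    test matrix, take leading left singular vectors as the factor,
    then contract the factors out of T to obtain the core."""
    rng = np.random.default_rng(seed)
    factors = []
    for k, r in enumerate(ranks):
        Mk = unfold(T, k)
        Y = Mk @ rng.standard_normal((Mk.shape[1], r + oversample))
        factors.append(np.linalg.svd(Y, full_matrices=False)[0][:, :r])
    core = T
    for k, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, k, 0), axes=1), 0, k)
    return core, factors

def tucker_to_full(core, factors):
    T = core
    for k, U in enumerate(factors):
        T = np.moveaxis(np.tensordot(U, np.moveaxis(T, k, 0), axes=1), 0, k)
    return T

# A tensor of exact multilinear rank (3, 3, 3) is recovered from the sketch.
rng = np.random.default_rng(1)
G = rng.standard_normal((3, 3, 3))
Us = [np.linalg.qr(rng.standard_normal((10, 3)))[0] for _ in range(3)]
T = tucker_to_full(G, Us)
core, factors = sketchy_tucker(T, (3, 3, 3))
assert np.allclose(tucker_to_full(core, factors), T)
```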

    Regularized Computation of Approximate Pseudoinverse of Large Matrices Using Low-Rank Tensor Train Decompositions

    We propose a new method for the low-rank approximation of Moore-Penrose pseudoinverses (MPPs) of large-scale matrices using tensor networks. The computed pseudoinverses can be useful for solving or preconditioning large-scale overdetermined or underdetermined systems of linear equations. The computation is performed efficiently and stably based on the modified alternating least squares (MALS) scheme, using low-rank tensor train (TT) decompositions and tensor network contractions. The formulated large-scale optimization problem is reduced to sequential smaller-scale problems to which any standard, stable algorithm can be applied. A regularization technique is incorporated in order to alleviate ill-posedness and obtain robust low-rank approximations. Numerical simulation results illustrate that the regularized pseudoinverses of a wide class of non-square or nonsymmetric matrices admit good approximate low-rank TT representations. Moreover, we demonstrate that the computational cost of the proposed method is only logarithmic in the matrix size, given that the TT-ranks of a data matrix and its approximate pseudoinverse are bounded. Finally, we illustrate that a strongly nonsymmetric convection-diffusion problem can be efficiently solved using preconditioners computed by the proposed method.
    Comment: 28 pages
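
    What "regularized pseudoinverse" means can be sketched in the dense setting (a stand-in for the object the paper computes in TT format via MALS, not the paper's algorithm): Tikhonov filter factors damp the small singular values that make the exact pseudoinverse ill-posed.

```python
import numpy as np

def regularized_pinv(A, lam):
    """Tikhonov-regularized pseudoinverse via the SVD: the filter
    factors s / (s**2 + lam) replace 1/s, damping small singular
    values that would otherwise amplify noise."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ np.diag(s / (s ** 2 + lam)) @ U.T

rng = np.random.default_rng(4)
A = rng.standard_normal((8, 5))

# As lam -> 0 the regularized inverse approaches the Moore-Penrose one,
# and lam > 0 strictly shrinks it.
assert np.allclose(regularized_pinv(A, 0.0), np.linalg.pinv(A))
assert np.linalg.norm(regularized_pinv(A, 1.0)) < np.linalg.norm(np.linalg.pinv(A))
```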