
    Spectral tensor-train decomposition

    The accurate approximation of high-dimensional functions is an essential task in uncertainty quantification and many other fields. We propose a new function approximation scheme based on a spectral extension of the tensor-train (TT) decomposition. We first define a functional version of the TT decomposition and analyze its properties. We obtain results on the convergence of the decomposition, revealing links between the regularity of the function, the dimension of the input space, and the TT ranks. We also show that the regularity of the target function is preserved by the univariate functions (i.e., the "cores") comprising the functional TT decomposition. This result motivates an approximation scheme employing polynomial approximations of the cores. For functions with appropriate regularity, the resulting \textit{spectral tensor-train decomposition} combines the favorable dimension-scaling of the TT decomposition with the spectral convergence rate of polynomial approximations, yielding efficient and accurate surrogates for high-dimensional functions. To construct these decompositions, we use the sampling algorithm \texttt{TT-DMRG-cross} to obtain the TT decomposition of tensors resulting from suitable discretizations of the target function. We assess the performance of the method on a range of numerical examples: a modified set of Genz functions with dimension up to 100, and functions with mixed Fourier modes or with local features. We observe significant improvements in performance over an anisotropic adaptive Smolyak approach. The method is also used to approximate the solution of an elliptic PDE with random input data. The open source software and examples presented in this work are available online. Comment: 33 pages, 19 figures
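    A minimal numerical sketch of the TT format discussed above, for illustration only: it builds the decomposition by plain TT-SVD on a full tensor of function samples rather than by the TT-DMRG-cross sampling algorithm used in the paper, and the test function, grid size, and truncation tolerance are arbitrary choices.

        import numpy as np

        def tt_svd(tensor, tol=1e-10):
            """Decompose a d-way array into TT cores G_k of shape (r_{k-1}, n_k, r_k)."""
            d, shape = tensor.ndim, tensor.shape
            cores, r_prev, C = [], 1, tensor
            for k in range(d - 1):
                C = C.reshape(r_prev * shape[k], -1)
                U, s, Vt = np.linalg.svd(C, full_matrices=False)
                r = max(1, int(np.sum(s > tol * s[0])))   # truncated TT rank r_k
                cores.append(U[:, :r].reshape(r_prev, shape[k], r))
                C = s[:r, None] * Vt[:r]                  # carry the remainder to the next step
                r_prev = r
            cores.append(C.reshape(r_prev, shape[-1], 1))
            return cores

        # Samples of a smooth, non-separable 4-dimensional function on a tensor grid.
        n, d = 17, 4
        x = np.linspace(0.0, 1.0, n)
        grids = np.meshgrid(*([x] * d), indexing="ij")
        F = np.sin(sum(grids))
        cores = tt_svd(F)
        print("TT ranks:", [G.shape[2] for G in cores[:-1]])   # small ranks for a smooth function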

    Tensor Product Approximation (DMRG) and Coupled Cluster method in Quantum Chemistry

    We present the Coupled Cluster (CC) method and the Density Matrix Renormalization Group (DMRG) method in a unified way, from the perspective of recent developments in tensor product approximation. We give an introduction to recently developed hierarchical tensor representations, in particular tensor trains, which are matrix product states in physics language. The discrete equations of the full CI approximation applied to the electronic Schr\"odinger equation are cast into a tensorial framework in the form of second quantization. A further approximation is then performed by tensor approximation within a hierarchical format or, equivalently, a tree tensor network. We establish the (differential) geometry of low-rank hierarchical tensors and apply the Dirac-Frenkel principle to reduce the original high-dimensional problem to low dimensions. The DMRG algorithm is established as an optimization method in this format with alternating directional search. We briefly introduce the CC method and refer to our theoretical results. We compare this approach in the present discrete formulation with the CC method and its underlying exponential parametrization. Comment: 15 pages, 3 figures
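    To make the phrase "matrix product states in physics language" concrete, here is a small sketch with random cores and an illustrative bond dimension, not taken from the paper: a coefficient tensor over d binary occupation indices is stored as d small cores, and a single amplitude is recovered as a product of matrices.

        import numpy as np

        d, n, r = 8, 2, 3                 # 8 sites, a binary index per site, bond dimension 3
        rng = np.random.default_rng(0)
        ranks = [1] + [r] * (d - 1) + [1]
        cores = [rng.standard_normal((ranks[k], n, ranks[k + 1])) for k in range(d)]

        def amplitude(cores, idx):
            """C[i1,...,id] = A_1[i1] A_2[i2] ... A_d[id], a product of small matrices."""
            v = np.ones((1, 1))
            for A, i in zip(cores, idx):
                v = v @ A[:, i, :]
            return v.item()

        print("amplitude:", amplitude(cores, (0, 1, 1, 0, 1, 0, 0, 1)))
        # O(d*n*r**2) stored parameters instead of the n**d entries of the full coefficient tensor.
        print("MPS parameters:", sum(A.size for A in cores), "vs full tensor:", n**d)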

    Low-rank approximation of continuous functions in Sobolev spaces with dominating mixed smoothness

    Let $\Omega_i\subset\mathbb{R}^{n_i}$, $i=1,\ldots,m$, be given domains. In this article, we study the low-rank approximation with respect to $L^2(\Omega_1\times\dots\times\Omega_m)$ of functions from Sobolev spaces with dominating mixed smoothness. To this end, we first estimate the rank of a bivariate approximation, i.e., the rank of the continuous singular value decomposition. In comparison to the case of functions from Sobolev spaces with isotropic smoothness, compare \cite{GH14,GH19}, we obtain improved results due to the additional mixed smoothness. This convergence result is then used to study the tensor train decomposition as a method to construct multivariate low-rank approximations of functions from Sobolev spaces with dominating mixed smoothness. We show that this approach is able to beat the curse of dimension.
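    As a rough illustration of the bivariate statement, the sketch below uses a discrete proxy for the continuous singular value decomposition: it samples a smooth bivariate function on a grid (the function and grid size are arbitrary, not taken from the paper) and prints the singular value decay, which governs the rank needed for a given accuracy.

        import numpy as np

        n = 200
        x = np.linspace(0.0, 1.0, n)
        X, Y = np.meshgrid(x, x, indexing="ij")
        F = 1.0 / (1.0 + X + Y)              # smooth on the unit square

        s = np.linalg.svd(F, compute_uv=False)
        s /= s[0]
        for r in (1, 2, 4, 8):
            # sigma_{r+1} bounds the best rank-r approximation error in the spectral norm.
            print(f"rank {r}: relative sigma_{r + 1} ~ {s[r]:.2e}")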

    Analysis of tensor approximation schemes for continuous functions

    In this article, we analyze tensor approximation schemes for continuous functions. We assume that the function to be approximated lies in an isotropic Sobolev space and discuss the cost of approximating this function in the continuous analogue of the Tucker tensor format or of the tensor train format. In particular, we show that the cost of both approximations is dimension-robust when the Sobolev space under consideration provides appropriate dimension weights.
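    The sketch below contrasts the two formats on a small discrete example, using a truncated higher-order SVD to obtain a Tucker approximation; its storage (a core of size r^d plus d factor matrices) can be compared with the d three-way cores of a tensor train. The test function, grid, and tolerance are illustrative assumptions only.

        import numpy as np

        def hosvd(T, tol=1e-10):
            """Truncated higher-order SVD: factor matrices U_k and the Tucker core."""
            factors = []
            for k in range(T.ndim):
                unfold = np.moveaxis(T, k, 0).reshape(T.shape[k], -1)
                U, s, _ = np.linalg.svd(unfold, full_matrices=False)
                r = max(1, int(np.sum(s > tol * s[0])))
                factors.append(U[:, :r])
            core = T
            for U in factors:
                # contract the current leading mode with U; the reduced index moves to the back
                core = np.tensordot(core, U, axes=([0], [0]))
            return core, factors

        n, d = 17, 4
        x = np.linspace(0.0, 1.0, n)
        grids = np.meshgrid(*([x] * d), indexing="ij")
        T = np.sin(sum(grids))
        core, factors = hosvd(T)
        print("Tucker ranks:", core.shape)
        print("Tucker storage:", core.size + sum(U.size for U in factors), "vs full:", T.size)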

    A literature survey of low-rank tensor approximation techniques

    In recent years, low-rank tensor approximation has been established as a new tool in scientific computing to address large-scale linear and multilinear algebra problems, which would be intractable by classical techniques. This survey attempts to give a literature overview of current developments in this area, with an emphasis on function-related tensors.

    Approximation with Tensor Networks. Part II: Approximation Rates for Smoothness Classes

    We study the approximation by tensor networks (TNs) of functions from classical smoothness classes. The considered approximation tool combines a tensorization of functions in $L^p([0,1))$, which allows one to identify a univariate function with a multivariate function (or tensor), and the use of tree tensor networks (the tensor train format) for exploiting low-rank structures of multivariate functions. The resulting tool can be interpreted as a feed-forward neural network, with first layers implementing the tensorization, interpreted as a particular featuring step, followed by a sum-product network with sparse architecture. In part I of this work, we presented several approximation classes associated with different measures of complexity of tensor networks and studied their properties. In this work (part II), we show how classical approximation tools, such as polynomials or splines (with fixed or free knots), can be encoded as a tensor network with controlled complexity. We use this to derive direct (Jackson) inequalities for the approximation spaces of tensor networks. This is then utilized to show that Besov spaces are continuously embedded into these approximation spaces. In other words, we show that arbitrary Besov functions can be approximated with optimal or near-optimal rate. We also show that an arbitrary function in the approximation class possesses no Besov smoothness, unless one limits the depth of the tensor network. Comment: For part I see arXiv:2007.00118, for part III see arXiv:2101.1193
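    A minimal sketch of the tensorization step described above, under illustrative choices of the function, the number of levels, and the tolerance: samples on a dyadic grid of size 2^L are reshaped into an L-way tensor indexed by the binary digits of the grid index, whose tensor-train ranks can then be inspected. The paper treats tree tensor networks and approximation rates in far greater generality; this only shows the reshaping and a rank computation.

        import numpy as np

        L = 12
        t = np.arange(2**L) / 2**L                # dyadic grid on [0, 1)
        f = np.exp(t) * np.sin(8 * np.pi * t)     # a smooth univariate example
        T = f.reshape((2,) * L)                   # tensorization: grid index -> binary digits

        def tt_ranks(T, tol=1e-12):
            """Run TT-SVD and return only the ranks needed to meet the tolerance."""
            ranks, r_prev, C = [], 1, T
            for k in range(T.ndim - 1):
                C = C.reshape(r_prev * T.shape[k], -1)
                U, s, Vt = np.linalg.svd(C, full_matrices=False)
                r = max(1, int(np.sum(s > tol * s[0])))
                ranks.append(r)
                C = s[:r, None] * Vt[:r]
                r_prev = r
            return ranks

        print("grid size:", 2**L, " TT ranks of the tensorized samples:", tt_ranks(T))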

    Adaptive Near-Optimal Rank Tensor Approximation for High-Dimensional Operator Equations

    We consider a framework for the construction of iterative schemes for operator equations that combine low-rank approximation in tensor formats and adaptive approximation in a basis. Under fairly general assumptions, we obtain a rigorous convergence analysis, where all parameters required for the execution of the methods depend only on the underlying infinite-dimensional problem, but not on a concrete discretization. Under certain assumptions on the rates for the involved low-rank approximations and basis expansions, we can also give bounds on the computational complexity of the iteration as a function of the prescribed target error. Our theoretical findings are illustrated and supported by computational experiments. These demonstrate that problems in very high dimensions can be treated with controlled solution accuracy. Comment: 51 pages
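    As a simplified finite-dimensional caricature of such schemes, the sketch below runs a Richardson iteration for an operator equation A X + X A^T = B and re-truncates the iterate to low rank after every step. The operator (a well-conditioned matrix), step size, right-hand side, and tolerances are illustrative assumptions, not the paper's adaptive infinite-dimensional method with rigorous rank and basis control.

        import numpy as np

        n = 200
        stencil = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D difference stencil
        A = np.eye(n) + 0.1 * stencil        # well-conditioned SPD operator, spectrum in (1, 1.4)
        x = np.linspace(0.0, 1.0, n)
        B = np.outer(np.sin(np.pi * x), x * (1 - x))                 # rank-1 right-hand side

        def truncate(X, tol):
            """Recompress X to the smallest rank meeting a relative SVD tolerance."""
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            r = max(1, int(np.sum(s > tol * s[0])))
            return (U[:, :r] * s[:r]) @ Vt[:r]

        omega = 2.0 / (2.0 + 2.8)            # ~2 / (lambda_min + lambda_max) of A⊗I + I⊗A
        X = np.zeros((n, n))
        for _ in range(50):
            residual = A @ X + X @ A.T - B
            X = truncate(X - omega * residual, tol=1e-10)

        print("rank of the iterate:", np.linalg.matrix_rank(X, tol=1e-8))
        print("relative residual:", np.linalg.norm(A @ X + X @ A.T - B) / np.linalg.norm(B))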