    A tensor approximation method based on ideal minimal residual formulations for the solution of high-dimensional problems

    In this paper, we propose a method for the approximation of the solution of high-dimensional weakly coercive problems formulated in tensor spaces using low-rank approximation formats. The method can be seen as a perturbation of a minimal residual method with a residual norm corresponding to the error in a specified solution norm. We introduce and analyze an iterative algorithm that provides a controlled approximation of the best approximation of the solution in a given low-rank subset, without any a priori information on this solution. We also introduce a weak greedy algorithm which uses this perturbed minimal residual method for the computation of successive greedy corrections in small tensor subsets. We prove its convergence under some conditions on the parameters of the algorithm. The residual norm can be designed such that the resulting low-rank approximations are quasi-optimal with respect to particular norms of interest, thus yielding goal-oriented order reduction strategies for the approximation of high-dimensional problems. The proposed numerical method is applied to the solution of a stochastic partial differential equation discretized using standard Galerkin methods in tensor product spaces.
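    A minimal numpy sketch of the greedy low-rank correction idea, restricted to a Kronecker-structured system A1 X A2^T = B and using plain alternating least squares for each rank-one correction instead of the paper's perturbed ideal minimal residual formulation; all function and parameter names below are illustrative, not taken from the paper.

    import numpy as np

    def greedy_rank_one_solver(A1, A2, B, n_terms=10, als_sweeps=20):
        # Approximate the solution X of A1 @ X @ A2.T = B as a sum of rank-one terms,
        # each obtained by alternating least squares on the current residual.
        # Assumes A1 and A2 are square and invertible.
        X = np.zeros_like(B)
        for _ in range(n_terms):
            R = B - A1 @ X @ A2.T                          # residual of the current iterate
            u = np.random.randn(A1.shape[1])
            v = np.random.randn(A2.shape[1])
            for _ in range(als_sweeps):                    # alternate between the two factors
                w = A2 @ v
                u = np.linalg.solve(A1, R @ w / (w @ w))
                z = A1 @ u
                v = np.linalg.solve(A2, R.T @ z / (z @ z))
            X = X + np.outer(u, v)                         # greedy rank-one correction
        return X

    In the actual method, the residual norm is chosen so that the corrections are quasi-optimal with respect to the solution norm of interest, which this toy version does not attempt.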

    A literature survey of low-rank tensor approximation techniques

    In recent years, low-rank tensor approximation has been established as a new tool in scientific computing for addressing large-scale linear and multilinear algebra problems that would be intractable by classical techniques. This survey gives an overview of the literature on current developments in this area, with an emphasis on function-related tensors.

    Higher-order principal component analysis for the approximation of tensors in tree-based low-rank formats

    This paper is concerned with the approximation of tensors using tree-based tensor formats, which are tensor networks whose graphs are dimension partition trees. We consider Hilbert tensor spaces of multivariate functions defined on a product set equipped with a probability measure. This includes the case of multidimensional arrays corresponding to finite product sets. We propose and analyse an algorithm for the construction of an approximation using only point evaluations of a multivariate function, or evaluations of some entries of a multidimensional array. The algorithm is a variant of higher-order singular value decomposition which constructs a hierarchy of subspaces associated with the different nodes of the tree and a corresponding hierarchy of interpolation operators. Optimal subspaces are estimated using empirical principal component analysis of interpolations of partial random evaluations of the function. The algorithm is able to provide an approximation in any tree-based format with either a prescribed rank or a prescribed relative error, with a number of evaluations of the order of the storage complexity of the approximation format. Under some assumptions on the estimation of principal components, we prove that the algorithm provides either a quasi-optimal approximation with a given rank, or an approximation satisfying the prescribed relative error, up to constants depending on the tree and the properties of the interpolation operators. The analysis takes into account the discretization errors for the approximation of infinite-dimensional tensors. Several numerical examples illustrate the main results and the behavior of the algorithm for the approximation of high-dimensional functions using hierarchical Tucker or tensor train formats, and the approximation of univariate functions using tensorization.
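    To fix ideas, a toy version of the underlying truncated higher-order SVD can be written as follows; unlike the paper's algorithm, it assumes access to the full array rather than partial random evaluations, and uses flat Tucker ranks rather than tree-based ones. All names are illustrative.

    import numpy as np

    def truncated_hosvd(F, ranks):
        # Tucker approximation of a full d-way array F with prescribed multilinear ranks.
        factors = []
        for mode, r in enumerate(ranks):
            unfolding = np.moveaxis(F, mode, 0).reshape(F.shape[mode], -1)
            U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
            factors.append(U[:, :r])                       # principal subspace of this mode
        core = F
        for U in factors:                                  # project each mode onto its subspace
            core = np.tensordot(core, U.conj(), axes=([0], [0]))
        return core, factors

    The paper replaces the exact SVD of each unfolding by an empirical principal component analysis of interpolated partial evaluations, organised along the nodes of a dimension partition tree.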

    Tensor Product Approximation (DMRG) and Coupled Cluster method in Quantum Chemistry

    We present the Coupled Cluster (CC) method and the Density Matrix Renormalization Group (DMRG) method in a unified way, from the perspective of recent developments in tensor product approximation. We give an introduction to recently developed hierarchical tensor representations, in particular tensor trains, which are matrix product states in the physics language. The discrete equations of the full CI approximation applied to the electronic Schrödinger equation are cast into a tensorial framework in the form of second quantization. A further approximation is then performed by tensor approximation within a hierarchical format, or equivalently a tree tensor network. We establish the (differential) geometry of low-rank hierarchical tensors and apply the Dirac-Frenkel principle to reduce the original high-dimensional problem to low dimensions. The DMRG algorithm is established as an optimization method in this format with alternating directional search. We briefly introduce the CC method and refer to our theoretical results. We compare this approach, in the present discrete formulation, with the CC method and its underlying exponential parametrization.
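    As a point of reference, the tensor-train (matrix product state) format mentioned above can be computed for an explicitly given array by the standard TT-SVD sweep sketched below; the fixed maximum rank is a simplification of the adaptive truncation used in practice, and the names are illustrative.

    import numpy as np

    def tt_svd(F, max_rank=16):
        # Factor a d-way array F into tensor-train cores of shape (r_{k-1}, n_k, r_k).
        dims = F.shape
        cores, r_prev = [], 1
        C = F.reshape(1, -1)
        for k in range(len(dims) - 1):
            C = C.reshape(r_prev * dims[k], -1)            # unfold the remaining modes
            U, S, Vt = np.linalg.svd(C, full_matrices=False)
            r = min(max_rank, len(S))                      # truncate to the rank bound
            cores.append(U[:, :r].reshape(r_prev, dims[k], r))
            C = S[:r, None] * Vt[:r]                       # carry the remainder to the next mode
            r_prev = r
        cores.append(C.reshape(r_prev, dims[-1], 1))
        return cores

    DMRG does not construct the cores from a full array in this way; it optimizes them in place by alternating sweeps, which is the viewpoint taken in the paper.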

    On convergence of the maximum block improvement method

    The MBI (maximum block improvement) method is a greedy approach to solving optimization problems where the decision variables can be grouped into a finite number of blocks. Assuming that optimizing over one block of variables while fixing all others is relatively easy, the MBI method updates, at each iteration, the block of variables corresponding to the maximally improving block, which is arguably the most natural and simple way to tackle block-structured problems, with great potential for engineering applications. In this paper we establish global and local linear convergence results for this method. The global convergence is established under the Łojasiewicz inequality assumption, while the local analysis invokes second-order assumptions. We study in particular the tensor optimization model with spherical constraints. Conditions for linear convergence of the famous power method for computing the maximum eigenvalue of a matrix follow in this framework as a special case. The condition is interpreted in various other forms for the rank-one tensor optimization model under spherical constraints. Numerical experiments are shown to support the convergence property of the MBI method.
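    A hedged sketch of one MBI iteration for the rank-one tensor problem max <T, x1 ⊗ ... ⊗ xd> over unit vectors: every block is given its exact single-block update, and only the block with the largest resulting objective value is actually updated. Variable names are illustrative; the matrix case corresponds to the power-method setting analysed in the paper.

    import numpy as np

    def mbi_rank_one_step(T, xs):
        # One maximum block improvement step for max <T, x1 (x) ... (x) xd>, ||x_i|| = 1.
        best = None
        for i in range(T.ndim):
            G = T
            for j in range(T.ndim - 1, -1, -1):            # contract every mode except i
                if j != i:
                    G = np.tensordot(G, xs[j], axes=([j], [0]))
            val = np.linalg.norm(G)                        # objective after an exact update of block i
            if best is None or val > best[0]:
                best = (val, i, G / val)
        val, i, x_new = best
        xs[i] = x_new                                      # update only the best-improving block
        return xs, val

    Iterating this step and monitoring val gives the greedy scheme whose global and local convergence is analysed in the paper.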

    Adaptive learning of tensor network structures

    Tensor networks (TN) offer a powerful framework to efficiently represent very high-dimensional objects. TNs have recently shown their potential for machine learning applications and offer a unifying view of common tensor decomposition models such as Tucker, tensor train (TT) and tensor ring (TR). However, identifying the best tensor network structure from data for a given task is challenging. In this thesis, we leverage the TN formalism to develop a generic and efficient adaptive algorithm to jointly learn the structure and the parameters of a TN from data. Our method is based on a simple greedy approach, starting from a rank-one tensor and successively identifying the most promising tensor network edges for small rank increments. Our algorithm can adaptively identify TN structures with a small number of parameters that effectively optimize any differentiable objective function. Experiments on tensor decomposition, tensor completion and model compression tasks demonstrate the effectiveness of the proposed algorithm. In particular, our method outperforms the state-of-the-art evolutionary topology search introduced in [26] for tensor decomposition of images (while being orders of magnitude faster) and finds efficient structures to compress neural networks, outperforming popular TT-based approaches [30].
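    The greedy structure search can be caricatured in a few lines if general tensor networks are replaced by the much simpler Tucker format: starting from a rank-one model, each step tries incrementing one rank and keeps the increment that most reduces the error. This is only a toy analogue of the thesis' edge-selection algorithm, and all names below are assumptions.

    import numpy as np

    def hosvd_error(F, ranks):
        # Relative error of a truncated HOSVD (Tucker) approximation with the given ranks.
        factors = []
        for mode, r in enumerate(ranks):
            unfolding = np.moveaxis(F, mode, 0).reshape(F.shape[mode], -1)
            factors.append(np.linalg.svd(unfolding, full_matrices=False)[0][:, :r])
        core = F
        for U in factors:                                  # project onto the mode subspaces
            core = np.tensordot(core, U, axes=([0], [0]))
        approx = core
        for U in factors:                                  # expand back to full size
            approx = np.tensordot(approx, U, axes=([0], [1]))
        return np.linalg.norm(F - approx) / np.linalg.norm(F)

    def greedy_rank_search(F, target_error=1e-3):
        ranks = [1] * F.ndim                               # start from a rank-one model
        while hosvd_error(F, ranks) > target_error:
            trials = []
            for i in range(F.ndim):                        # try a small increment on each rank
                if ranks[i] < F.shape[i]:
                    t = list(ranks)
                    t[i] += 1
                    trials.append((hosvd_error(F, t), t))
            if not trials:
                break
            ranks = min(trials)[1]                         # keep the most promising increment
        return ranks

    The thesis applies the same greedy principle to the edges of an arbitrary tensor network and optimizes any differentiable objective, rather than relying on SVD-based Tucker errors as this toy does.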

    Preconditioned low-rank Riemannian optimization for linear systems with tensor product structure

    The numerical solution of partial differential equations on high-dimensional domains gives rise to computationally challenging linear systems. When using standard discretization techniques, the size of the linear system grows exponentially with the number of dimensions, making the use of classic iterative solvers infeasible. During the last few years, low-rank tensor approaches have been developed that make it possible to mitigate this curse of dimensionality by exploiting the underlying structure of the linear operator. In this work, we focus on tensors represented in the Tucker and tensor train formats. We propose two preconditioned gradient methods on the corresponding low-rank tensor manifolds: a Riemannian version of the preconditioned Richardson method, as well as an approximate Newton scheme based on the Riemannian Hessian. For the latter, considerable attention is given to the efficient solution of the resulting Newton equation. In numerical experiments, we compare the efficiency of our Riemannian algorithms with other established tensor-based approaches such as a truncated preconditioned Richardson method and the alternating linear scheme. The results show that our approximate Riemannian Newton scheme is significantly faster in cases when the application of the linear operator is expensive.
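    Of the methods compared above, the truncated preconditioned Richardson iteration is the easiest to sketch; the version below is written for the matrix (two-dimensional) case, where low-rank truncation is just a truncated SVD, and apply_A, apply_prec and the rank bound are illustrative placeholders.

    import numpy as np

    def truncate(X, rank):
        # Best rank-`rank` approximation of a matrix via the truncated SVD.
        U, S, Vt = np.linalg.svd(X, full_matrices=False)
        return (U[:, :rank] * S[:rank]) @ Vt[:rank]

    def truncated_richardson(apply_A, apply_prec, B, rank=10, n_iter=50):
        # Preconditioned Richardson iteration with rank truncation after every step.
        X = np.zeros_like(B)
        for _ in range(n_iter):
            R = B - apply_A(X)                             # current residual
            X = truncate(X + apply_prec(R), rank)          # preconditioned step, then low-rank truncation
        return X

    The Riemannian methods proposed in the paper instead work with search directions in the tangent space of the low-rank manifold, combined with preconditioning or an approximate Newton step.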