
    Tensor and Matrix Inversions with Applications

    Higher-order tensor inversion is possible for tensors of even order. We show that a tensor group endowed with the Einstein (contracted) product is isomorphic to the general linear group of degree n. Within this isomorphic group structure, we derive new tensor decompositions and show that they are related to the well-known canonical polyadic decomposition and the multilinear SVD. Moreover, within this framework, multilinear systems arise, specifically in solving high-dimensional PDEs and large discrete quantum models. We also address, in the least-squares sense, multilinear systems that do not fit the framework, that is, when the tensor has an odd number of modes or distinct dimensions in each mode. With the notion of tensor inversion, multilinear systems become solvable. Numerically, we solve multilinear systems using iterative techniques, namely the biconjugate gradient and Jacobi methods in tensor format.
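
    A minimal NumPy sketch of the idea (our illustration, not the authors' code): solve the multilinear system A *_2 X = B, where *_2 denotes the Einstein product contracting two modes, with a Jacobi iteration in tensor format. The function names and the diagonal-dominance assumption for convergence are ours.

        import numpy as np

        def einstein_product(A, X):
            # Einstein (contracted) product over two modes:
            # (A *_2 X)[i, j] = sum_{s, t} A[i, j, s, t] * X[s, t]
            return np.einsum('ijst,st->ij', A, X)

        def jacobi_tensor_solve(A, B, iters=500):
            # Jacobi iteration for A *_2 X = B directly in tensor format.
            # Converges when the unfolded operator is diagonally dominant.
            D = np.einsum('ijij->ij', A)   # "diagonal" entries A[i, j, i, j]
            X = np.zeros_like(B)
            for _ in range(iters):
                X = X + (B - einstein_product(A, X)) / D
            return X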

    Symmetric Tensor Decomposition by an Iterative Eigendecomposition Algorithm

    We present an iterative algorithm, called the symmetric tensor eigen-rank-one iterative decomposition (STEROID), for decomposing a symmetric tensor into a real linear combination of symmetric rank-1 unit-norm outer factors using only eigendecompositions and least-squares fitting. Originally designed for symmetric tensors whose order is a power of two, STEROID is shown to be applicable to any order through an innovative tensor embedding technique. Numerical examples demonstrate the high efficiency and accuracy of the proposed scheme, even for large-scale problems. Furthermore, we show how STEROID readily solves problems in nonlinear block-structured system identification and nonlinear state-space identification.
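
    The sketch below shows how we read the two-level eigendecomposition idea for an order-4 symmetric tensor. It is a hypothetical reconstruction from the abstract, not the published STEROID algorithm, which also handles the embedding needed for arbitrary orders.

        import numpy as np

        def steroid_like(T, tol=1e-10):
            # Hypothetical sketch for a symmetric order-4 tensor T (n,n,n,n).
            n = T.shape[0]
            # Level 1: eigendecompose the symmetric n^2 x n^2 unfolding.
            w1, V1 = np.linalg.eigh(T.reshape(n * n, n * n))
            vecs = []
            for i in np.flatnonzero(np.abs(w1) > tol):
                # Each significant eigenvector reshapes to a symmetric matrix.
                S = V1[:, i].reshape(n, n)
                # Level 2: eigendecompose again to extract candidate vectors.
                w2, V2 = np.linalg.eigh(0.5 * (S + S.T))
                vecs += [V2[:, j] for j in np.flatnonzero(np.abs(w2) > tol)]
            # Least-squares fit of the weights for the rank-1 terms v^(x4).
            terms = np.stack([np.einsum('i,j,k,l->ijkl', v, v, v, v).ravel()
                              for v in vecs], axis=1)
            lam, *_ = np.linalg.lstsq(terms, T.ravel(), rcond=None)
            return lam, vecs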

    Preconditioned low-rank Riemannian optimization for linear systems with tensor product structure

    The numerical solution of partial differential equations on high-dimensional domains gives rise to computationally challenging linear systems. With standard discretization techniques, the size of the linear system grows exponentially with the number of dimensions, making classic iterative solvers infeasible. During the last few years, low-rank tensor approaches have been developed that mitigate this curse of dimensionality by exploiting the underlying structure of the linear operator. In this work, we focus on tensors represented in the Tucker and tensor train formats. We propose two preconditioned gradient methods on the corresponding low-rank tensor manifolds: a Riemannian version of the preconditioned Richardson method and an approximate Newton scheme based on the Riemannian Hessian. For the latter, considerable attention is given to the efficient solution of the resulting Newton equation. In numerical experiments, we compare the efficiency of our Riemannian algorithms with other established tensor-based approaches, such as a truncated preconditioned Richardson method and the alternating linear scheme. The results show that our approximate Riemannian Newton scheme is significantly faster in cases where the application of the linear operator is expensive.
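
    As a toy illustration of one of the baselines, here is a truncated preconditioned Richardson iteration in the matrix (order-2) case, with SVD truncation standing in for Tucker/TT rounding. The Sylvester example and the crude preconditioner are our own assumptions, not the paper's setup.

        import numpy as np

        def truncate(X, r):
            # Rank-r truncation via SVD; stands in for Tucker/TT rounding.
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            return (U[:, :r] * s[:r]) @ Vt[:r]

        def truncated_richardson(apply_A, apply_Pinv, B, r, omega=0.5, iters=500):
            # Preconditioned Richardson with rank truncation after every step:
            # X <- truncate(X + omega * P^{-1}(B - A(X)), r)
            X = np.zeros_like(B)
            for _ in range(iters):
                X = truncate(X + omega * apply_Pinv(B - apply_A(X)), r)
            return X

        # Example: a Laplace-like Sylvester equation L X + X L = B.
        n = 50
        L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        B = np.outer(np.ones(n), np.ones(n))       # rank-1 right-hand side
        X = truncated_richardson(lambda X: L @ X + X @ L,
                                 lambda R: R / 4.0,  # crude diagonal scaling
                                 B, r=10)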

    Approximate matrix and tensor diagonalization by unitary transformations: convergence of Jacobi-type algorithms

    We propose a gradient-based Jacobi algorithm for a class of maximization problems on the unitary group, with a focus on approximate diagonalization of complex matrices and tensors by unitary transformations. We provide weak convergence results and prove local linear convergence of this algorithm. The convergence results also apply to the case of real-valued tensors.
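
    For reference, the classical Jacobi iteration for a real symmetric matrix, which such unitary Jacobi-type algorithms generalize, looks as follows. This is a textbook sketch, not the paper's gradient-based variant.

        import numpy as np

        def jacobi_diagonalize(A, sweeps=10):
            # Classical Jacobi: zero each off-diagonal entry in turn with a
            # Givens rotation; Q accumulates the orthogonal transformation.
            A = A.copy()
            n = A.shape[0]
            Q = np.eye(n)
            for _ in range(sweeps):
                for p in range(n - 1):
                    for q in range(p + 1, n):
                        theta = 0.5 * np.arctan2(2 * A[p, q], A[p, p] - A[q, q])
                        c, s = np.cos(theta), np.sin(theta)
                        J = np.eye(n)
                        J[p, p] = J[q, q] = c
                        J[p, q], J[q, p] = -s, s
                        A = J.T @ A @ J   # zeros A[p, q] (and A[q, p])
                        Q = Q @ J
            return A, Q   # A is (nearly) diagonal; Q.T @ A_original @ Q = A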

    Jacobi-type algorithm for low rank orthogonal approximation of symmetric tensors and its convergence analysis

    In this paper, we propose a Jacobi-type algorithm to solve the low-rank orthogonal approximation problem for symmetric tensors. This algorithm includes as a special case the well-known Jacobi CoM2 algorithm for the approximate orthogonal diagonalization of symmetric tensors. We first prove the weak convergence of this algorithm, i.e., that any accumulation point is a stationary point. We then study its global convergence under a gradient-based ordering for a special case, the best rank-2 orthogonal approximation of third-order symmetric tensors, and prove that an accumulation point is the unique limit point under certain conditions. Numerical experiments demonstrate the efficiency of this algorithm.
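
    To make the objective concrete, the sketch below evaluates the cost being maximized in a rank-r orthogonal approximation of a third-order symmetric tensor; a Jacobi-type algorithm updates Q by plane rotations (as in the previous sketch) to increase this value. This is our illustration of the problem setup, not the paper's code.

        import numpy as np

        def orth_approx_objective(T, Q, r):
            # Rotate the symmetric tensor T by the orthogonal matrix Q in
            # every mode and sum the squares of the first r super-diagonal
            # entries W[i, i, i]; the best rank-r orthogonal approximation
            # sum_i W[i,i,i] * q_i (x) q_i (x) q_i maximizes this value.
            W = np.einsum('abc,ai,bj,ck->ijk', T, Q, Q, Q)
            return sum(W[i, i, i] ** 2 for i in range(r))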

    A literature survey of low-rank tensor approximation techniques

    In recent years, low-rank tensor approximation has been established as a new tool in scientific computing for addressing large-scale linear and multilinear algebra problems that would be intractable with classical techniques. This survey gives a literature overview of current developments in this area, with an emphasis on function-related tensors.