    A constructive arbitrary-degree Kronecker product decomposition of tensors

    We propose the tensor Kronecker product singular value decomposition (TKPSVD) that decomposes a real $k$-way tensor $\mathcal{A}$ into a linear combination of tensor Kronecker products with an arbitrary number of $d$ factors, $\mathcal{A} = \sum_{j=1}^R \sigma_j\, \mathcal{A}^{(d)}_j \otimes \cdots \otimes \mathcal{A}^{(1)}_j$. We generalize the matrix Kronecker product to tensors such that each factor $\mathcal{A}^{(i)}_j$ in the TKPSVD is a $k$-way tensor. The algorithm relies on reshaping and permuting the original tensor into a $d$-way tensor, after which a polyadic decomposition with orthogonal rank-1 terms is computed. We prove that for many different structured tensors, the Kronecker product factors $\mathcal{A}^{(1)}_j, \ldots, \mathcal{A}^{(d)}_j$ are guaranteed to inherit this structure. In addition, we introduce the new notion of general symmetric tensors, which includes many different structures such as symmetric, persymmetric, centrosymmetric, Toeplitz and Hankel tensors.
    Comment: Rewrote the paper completely and generalized everything to tensor
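    To make the reshape-then-decompose idea concrete, here is a minimal sketch of the classical matrix case (the Van Loan–Pitsianis nearest Kronecker product), not the TKPSVD itself: rearranging the matrix turns Kronecker structure into low matrix rank, after which an SVD supplies the factors. The function name and shapes below are illustrative assumptions.

    ```python
    import numpy as np

    def nearest_kronecker_sum(A, b_shape, c_shape, rank=1):
        """Approximate A by sum_j sigma_j * kron(B_j, C_j) via the rearrangement trick."""
        m1, n1 = b_shape
        m2, n2 = c_shape
        # Rearrange A so each row holds the vectorisation of one (m2 x n2) block;
        # Kronecker structure in A then shows up as low matrix rank in R_A.
        R_A = A.reshape(m1, m2, n1, n2).transpose(0, 2, 1, 3).reshape(m1 * n1, m2 * n2)
        U, s, Vt = np.linalg.svd(R_A, full_matrices=False)
        return [(s[j], U[:, j].reshape(m1, n1), Vt[j, :].reshape(m2, n2)) for j in range(rank)]

    # Example: a 6x6 matrix with exact 2x2 (x) 3x3 Kronecker structure is recovered by one term.
    A = np.kron(np.random.rand(2, 2), np.random.rand(3, 3))
    sigma, B, C = nearest_kronecker_sum(A, (2, 2), (3, 3), rank=1)[0]
    print(np.allclose(A, sigma * np.kron(B, C)))  # True up to floating-point error
    ```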

    Symmetric Tensor Decomposition by an Iterative Eigendecomposition Algorithm

    We present an iterative algorithm, called the symmetric tensor eigen-rank-one iterative decomposition (STEROID), for decomposing a symmetric tensor into a real linear combination of symmetric rank-1 unit-norm outer factors using only eigendecompositions and least-squares fitting. Originally designed for symmetric tensors whose order is a power of two, STEROID is shown to be applicable to any order through an innovative tensor embedding technique. Numerical examples demonstrate the high efficiency and accuracy of the proposed scheme even for large-scale problems. Furthermore, we show how STEROID readily solves a problem in nonlinear block-structured system identification and nonlinear state-space identification.
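    The eigendecomposition-plus-least-squares recipe can be sketched for a symmetric order-4 tensor (order $2^2$) in the dense setting. This is a toy illustration, not the paper's algorithm: the function name is hypothetical, and the exactness of the final fit in this example relies on the generating directions being orthogonal, an assumption of the sketch rather than a claim of the paper.

    ```python
    import numpy as np

    def steroid_like_order4(T, tol=1e-10):
        """Sketch: write a symmetric order-4 tensor T (shape n,n,n,n) as
        sum_j d_j * v_j (x) v_j (x) v_j (x) v_j, using only eigendecompositions
        to collect candidate directions and one least-squares fit for the weights."""
        n = T.shape[0]
        # Step 1: reshape T to an (n^2 x n^2) symmetric matrix and eigendecompose it.
        lam, W = np.linalg.eigh(T.reshape(n * n, n * n))
        vecs = []
        # Step 2: each significant eigenvector, reshaped to n x n, is (near-)symmetric;
        # its own eigenvectors serve as candidate rank-1 directions.
        for i in range(n * n):
            if abs(lam[i]) < tol:
                continue
            Wi = W[:, i].reshape(n, n)
            Wi = 0.5 * (Wi + Wi.T)          # symmetrise against round-off
            mu, V = np.linalg.eigh(Wi)
            vecs.extend(V[:, k] for k in range(n) if abs(mu[k]) > tol)
        # Step 3: least-squares fit of the weights over the collected candidate terms.
        A = np.stack([np.einsum('a,b,c,d->abcd', v, v, v, v).ravel() for v in vecs], axis=1)
        d, *_ = np.linalg.lstsq(A, T.ravel(), rcond=None)
        return d, vecs

    # Example with two orthogonal generating directions (which makes the fit exact here).
    n = 3
    x, y = np.random.randn(n), np.random.randn(n)
    y -= (y @ x) / (x @ x) * x
    T = np.einsum('a,b,c,d->abcd', x, x, x, x) + 2.0 * np.einsum('a,b,c,d->abcd', y, y, y, y)
    d, vecs = steroid_like_order4(T)
    approx = sum(di * np.einsum('a,b,c,d->abcd', v, v, v, v) for di, v in zip(d, vecs))
    print(np.allclose(T, approx))  # True for this orthogonal toy example
    ```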

    Training Input-Output Recurrent Neural Networks through Spectral Methods

    We consider the problem of training input-output recurrent neural networks (RNNs) for sequence labeling tasks. We propose a novel spectral approach for learning the network parameters, based on the decomposition of the cross-moment tensor between the output and a non-linear transformation of the input given by score functions. We guarantee consistent learning with polynomial sample and computational complexity under transparent conditions such as non-degeneracy of the model parameters, polynomial activations for the neurons, and a Markovian evolution of the input sequence. We also extend our results to bidirectional RNNs, which use both past and future information to output the label at each time point and are employed in many NLP tasks such as POS tagging.
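    The core spectral ingredient, a cross-moment between the output and a score-function transform of the input, can be sketched in a drastically simplified setting: a single neuron with scalar output and i.i.d. Gaussian inputs rather than the paper's RNN. Stein's identity then makes the empirical cross-moment approximately rank-one, so its top eigenvector reveals the weight vector. The softplus activation and all names are assumptions of the sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d, N = 5, 200_000

    # Hypothetical ground truth: a single neuron y = softplus(w . x) with unit-norm w.
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)

    X = rng.standard_normal((N, d))          # i.i.d. standard Gaussian inputs
    y = np.log1p(np.exp(X @ w))              # scalar outputs, softplus activation

    # Second-order Gaussian score function: S2(x) = x x^T - I.  Stein's identity gives
    # E[y * S2(x)] = E[softplus''(w . x)] * w w^T, so the top eigenvector of the
    # empirical cross-moment matrix recovers w up to sign.
    M = (X * y[:, None]).T @ X / N - y.mean() * np.eye(d)

    vals, vecs = np.linalg.eigh(M)
    w_hat = vecs[:, np.argmax(np.abs(vals))]
    print(min(np.linalg.norm(w - w_hat), np.linalg.norm(w + w_hat)))  # small; shrinks as N grows
    ```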

    Low-rank approximate inverse for preconditioning tensor-structured linear systems

    In this paper, we propose an algorithm for the construction of low-rank approximations of the inverse of an operator given in low-rank tensor format. The construction relies on an updated greedy algorithm for the minimization of a suitable distance to the inverse operator. It provides a sequence of approximations that are defined as the projections of the inverse operator in an increasing sequence of linear subspaces of operators. These subspaces are obtained by the tensorization of bases of operators that are constructed from successive rank-one corrections. In order to handle high-order tensors, approximate projections are computed in low-rank Hierarchical Tucker subsets of the successive subspaces of operators. Some desired properties such as symmetry or sparsity can be imposed on the approximate inverse operator during the correction step, where an optimal rank-one correction is searched as the tensor product of operators with the desired properties. Numerical examples illustrate the ability of this algorithm to provide efficient preconditioners for linear systems in tensor format that improve the convergence of iterative solvers and also the quality of the resulting low-rank approximations of the solution.
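    A dense-matrix stand-in for the greedy construction (not the paper's tensor-format algorithm): starting from an initial guess, rank-one corrections are added one at a time, each chosen to shrink the Frobenius residual ||I - M P||_F. The Hierarchical Tucker machinery, structure constraints, and the specific distance minimized in the paper are omitted, and all names are illustrative.

    ```python
    import numpy as np

    def greedy_rank_one_inverse(M, P0, n_corrections):
        """Sketch: improve an approximate inverse P of M by successive rank-one
        corrections u v^T, each chosen greedily to shrink the residual ||I - M P||_F."""
        n = M.shape[0]
        P = P0.copy()
        for _ in range(n_corrections):
            R = np.eye(n) - M @ P                          # current residual I - M P
            _, _, Vt = np.linalg.svd(R)
            v = Vt[0]                                      # dominant right singular direction of R
            u = np.linalg.lstsq(M, R @ v, rcond=None)[0]   # optimal u for the correction u v^T
            P += np.outer(u, v)
        return P

    # Example: M differs from the identity by a rank-k term, so k corrections suffice.
    rng = np.random.default_rng(1)
    n, k = 50, 4
    U = rng.standard_normal((n, k)) / np.sqrt(n)
    V = rng.standard_normal((n, k)) / np.sqrt(n)
    M = np.eye(n) + U @ V.T
    P = greedy_rank_one_inverse(M, P0=np.eye(n), n_corrections=k)
    print(np.linalg.norm(np.eye(n) - M @ P))   # ~0 after k greedy corrections
    print(np.allclose(P, np.linalg.inv(M)))    # True
    ```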