
    Symmetric tensor decomposition

    We present an algorithm for decomposing a symmetric tensor of dimension n and order d as a sum of rank-1 symmetric tensors, extending the algorithm of Sylvester devised in 1886 for binary forms. We recall the correspondence between the decomposition of a homogeneous polynomial in n variables of total degree d as a sum of powers of linear forms (Waring's problem), incidence properties on secant varieties of the Veronese variety, and the representation of linear forms as linear combinations of evaluations at distinct points. We then reformulate Sylvester's approach from the dual point of view. Exploiting this duality, we propose necessary and sufficient conditions for the existence of such a decomposition of a given rank, using the properties of Hankel (and quasi-Hankel) matrices derived from multivariate polynomials and normal form computations. This leads to the resolution of polynomial equations of small degree in non-generic cases. We propose a new algorithm for symmetric tensor decomposition, based on this characterization and on linear algebra computations with these Hankel matrices. The impact of this contribution is two-fold. First, it permits an efficient computation of the decomposition of any tensor of sub-generic rank, as opposed to widely used iterative algorithms with unproven global convergence (e.g. alternating least squares or gradient descent). Second, it gives tools for understanding uniqueness conditions and for detecting the rank.
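
    As a rough illustration of the binary case (n = 2), the sketch below applies the classical Hankel-kernel construction that Sylvester's algorithm rests on: assuming the coefficient sequence a_k satisfies a_k = Σ_i λ_i α_i^k (distinct nodes, no root at infinity), a kernel vector of the Hankel matrix yields the nodes and a Vandermonde solve recovers the weights. The function name and conventions are ours, and the non-generic cases treated in the paper are ignored.

```python
import numpy as np

def binary_waring_decomposition(a, r):
    """Sketch of the Hankel-based (Sylvester/Prony) decomposition of a binary form.

    `a` holds a_0, ..., a_d, the coefficients of p(x, y) = sum_k C(d, k) a_k x^k y^(d-k).
    Under the generic model a_k = sum_i lam_i * alpha_i**k (distinct alpha_i, no root
    at infinity), p = sum_i lam_i * (alpha_i * x + y)**d.  Returns (alphas, lams).
    """
    a = np.asarray(a, dtype=complex)
    d = len(a) - 1
    # (d - r + 1) x (r + 1) Hankel (catalecticant) matrix with entries a_{i+j}.
    H = np.array([[a[i + j] for j in range(r + 1)] for i in range(d - r + 1)])
    # A kernel vector q encodes the polynomial q(t) = sum_j q_j t^j whose roots
    # are the nodes alpha_i (smallest right singular vector ~ numerical kernel).
    q = np.linalg.svd(H)[2][-1].conj()
    alphas = np.roots(q[::-1])                       # np.roots wants highest degree first
    # Weights from the Vandermonde system a_k = sum_i lam_i * alpha_i**k.
    V = np.vander(alphas, N=d + 1, increasing=True).T
    lams = np.linalg.lstsq(V, a, rcond=None)[0]
    return alphas, lams

# Sanity check: p = 2*(3x + y)^4 + 1*(x + y)^4, i.e. a_k = 2*3**k + 1.
print(binary_waring_decomposition([2 * 3**k + 1 for k in range(5)], r=2))
```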

    Provable Sparse Tensor Decomposition

    We propose a novel sparse tensor decomposition method, the Tensor Truncated Power (TTP) method, which incorporates variable selection into the estimation of decomposition components. Sparsity is achieved via an efficient truncation step embedded in the tensor power iteration. Our method applies to a broad family of high-dimensional latent variable models, including high-dimensional Gaussian mixtures and mixtures of sparse regressions. A thorough theoretical investigation is further conducted. In particular, we show that the final decomposition estimator is guaranteed to achieve a local statistical rate, and we strengthen it to a global statistical rate by introducing a proper initialization procedure. In high-dimensional regimes, the obtained statistical rate significantly improves on those of existing non-sparse decomposition methods. The empirical advantages of TTP are confirmed in extensive simulations and in two real applications: click-through rate prediction and high-dimensional gene clustering. Comment: To appear in JRSS-
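
    The abstract does not spell out the iteration, but a minimal sketch of the underlying idea, a tensor power step followed by hard truncation to the k largest entries, might look as follows for a symmetric order-3 tensor; the function name, the plain random initialization, and the top-k truncation rule are our simplifications, not the authors' exact procedure.

```python
import numpy as np

def truncated_tensor_power(T, k, iters=100, seed=0):
    """Sketch of a sparse tensor power method for a symmetric order-3 tensor T.

    Each iteration contracts T against the current vector twice, keeps only the
    k largest-magnitude entries, and renormalizes.
    """
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(T.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(iters):
        v = np.einsum('ijk,j,k->i', T, u, u)        # power step: T(I, u, u)
        keep = np.argsort(np.abs(v))[-k:]           # indices of the k largest entries
        u = np.zeros_like(v)
        u[keep] = v[keep]
        u /= np.linalg.norm(u)                      # truncate, then renormalize
    lam = np.einsum('ijk,i,j,k->', T, u, u, u)      # estimated component weight
    return lam, u

# Smoke test: planted sparse rank-1 component plus (unsymmetrized) noise.
rng = np.random.default_rng(1)
p, k = 50, 5
u_star = np.zeros(p)
u_star[:k] = 1 / np.sqrt(k)
T = np.einsum('i,j,k->ijk', u_star, u_star, u_star) + 0.01 * rng.standard_normal((p, p, p))
lam, u_hat = truncated_tensor_power(T, k)
print(round(lam, 3), round(abs(u_hat @ u_star), 3))
```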

    Convex Tensor Decomposition via Structured Schatten Norm Regularization

    We discuss structured Schatten norms for tensor decomposition, which include two recently proposed norms ("overlapped" and "latent") for convex-optimization-based tensor decomposition, and connect tensor decomposition with the wider literature on structured sparsity. Based on the properties of structured Schatten norms, we mathematically analyze the performance of the "latent" approach to tensor decomposition, which was empirically found to perform better than the "overlapped" approach in some settings, and we show theoretically that this is indeed the case. In particular, when the unknown true tensor is low-rank in a specific mode, this approach performs as well as knowing the mode with the smallest rank. Along the way, we prove a novel duality result for structured Schatten norms, establish consistency, and discuss the identifiability of this approach. We confirm through numerical simulations that our theoretical predictions precisely capture the scaling behavior of the mean squared error. Comment: 12 pages, 3 figures
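
    As a small illustration of the norms involved (under the usual definitions, which we assume here rather than quote from the paper), the "overlapped" Schatten 1-norm is the sum over modes of the nuclear norms of the mode unfoldings; the "latent" norm instead infimizes that sum over all additive splittings of the tensor and needs an inner optimization, so only the overlapped norm is computed below.

```python
import numpy as np

def unfold(T, mode):
    """Mode-`mode` matricization: rows indexed by `mode`, columns by the remaining modes."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def overlapped_schatten_1(T):
    """Overlapped Schatten 1-norm: sum over modes of the nuclear norm of each unfolding."""
    return sum(np.linalg.norm(unfold(T, k), 'nuc') for k in range(T.ndim))

# Example: a tensor with low multilinear (Tucker) rank in every mode vs. a random one.
rng = np.random.default_rng(0)
core = rng.standard_normal((2, 2, 2))
U, V, W = (rng.standard_normal((10, 2)) for _ in range(3))
T = np.einsum('abc,ia,jb,kc->ijk', core, U, V, W)
print(overlapped_schatten_1(T), overlapped_schatten_1(rng.standard_normal((10, 10, 10))))
```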

    Tensor decomposition with generalized lasso penalties

    We present an approach for penalized tensor decomposition (PTD) that estimates smoothly varying latent factors in multi-way data. This generalizes existing work on sparse tensor decomposition and penalized matrix decompositions, in a manner parallel to the generalized lasso for regression and smoothing problems. Our approach presents many nontrivial challenges at the intersection of modeling and computation, which are studied in detail. An efficient coordinate-wise optimization algorithm for PTD is presented, and its convergence properties are characterized. The method is applied both to simulated data and to real data on flu hospitalizations in Texas. These results show that our penalized tensor decomposition can offer major improvements over existing methods for analyzing multi-way data that exhibit smooth spatial or temporal features.
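
    The abstract leaves the algorithm to the paper; as a heavily simplified sketch of the alternating, coordinate-wise flavor of such methods, the block below fits a rank-1 penalized model to a 3-way array, cycling through the factors and applying a soft-thresholding proximal step after each least-squares update. A genuine generalized-lasso penalty would replace the plain l1 prox with the prox of ||D u||_1 for a difference or graph matrix D; every name here is our own.

```python
import numpy as np

def soft_threshold(x, t):
    """Prox of t * ||x||_1; a generalized-lasso version would prox ||D @ x||_1 instead."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def rank1_penalized_decomposition(Y, lam=0.05, iters=50, seed=0):
    """Simplified rank-1 penalized decomposition of a 3-way array Y ~ scale * u o v o w."""
    rng = np.random.default_rng(seed)
    u, v, w = [rng.standard_normal(n) for n in Y.shape]
    u, v, w = [x / np.linalg.norm(x) for x in (u, v, w)]
    for _ in range(iters):
        u = soft_threshold(np.einsum('ijk,j,k->i', Y, v, w), lam)
        u /= np.linalg.norm(u)                  # assumes lam is small enough that u != 0
        v = soft_threshold(np.einsum('ijk,i,k->j', Y, u, w), lam)
        v /= np.linalg.norm(v)
        w = soft_threshold(np.einsum('ijk,i,j->k', Y, u, v), lam)
        w /= np.linalg.norm(w)
    scale = np.einsum('ijk,i,j,k->', Y, u, v, w)
    return scale, u, v, w

# Example: noisy rank-1 tensor with a piecewise-constant first factor.
rng = np.random.default_rng(2)
u0 = np.concatenate([np.zeros(10), np.ones(10)])
u0 /= np.linalg.norm(u0)
v0, w0 = rng.standard_normal(15), rng.standard_normal(12)
Y = np.einsum('i,j,k->ijk', u0, v0, w0) + 0.05 * rng.standard_normal((20, 15, 12))
print(rank1_penalized_decomposition(Y)[0])
```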

    Tensor decomposition and homotopy continuation

    A computationally challenging classical elimination theory problem is to compute polynomials which vanish on the set of tensors of a given rank. By moving from computing polynomials via elimination theory to computing pseudowitness sets via numerical elimination theory, we develop computational methods for computing ranks and border ranks of tensors along with decompositions. More generally, we present our approach using joins of any collection of irreducible and nondegenerate projective varieties $X_1,\ldots,X_k\subset\mathbb{P}^N$ defined over $\mathbb{C}$. After computing ranks over $\mathbb{C}$, we also explore computing real ranks. Various examples are included to demonstrate this numerical algebraic geometric approach. Comment: We have added two examples: a Coppersmith-Winograd tensor and matrix multiplication with zeros. (26 pages, 1 figure)
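
    The heavy lifting in this line of work is done by numerical algebraic geometry software; purely as a generic illustration of the path tracking at the core of homotopy continuation (not the paper's pseudowitness-set machinery), the sketch below follows the roots of a univariate polynomial f from the known roots of the start system g(x) = x^d - 1 along H(x, t) = (1 - t)·γ·g(x) + t·f(x), using an Euler predictor and a Newton corrector.

```python
import numpy as np

def track_roots(f_coeffs, steps=200, newton_iters=5, gamma=0.6 + 0.8j):
    """Toy predictor-corrector path tracker for the roots of a univariate polynomial.

    Start system g(x) = x^d - 1 (roots: the d-th roots of unity); homotopy
    H(x, t) = (1 - t) * gamma * g(x) + t * f(x), tracked from t = 0 to t = 1
    with an Euler predictor and a few Newton corrector steps per increment.
    """
    f = np.asarray(f_coeffs, dtype=complex)                 # highest degree first
    d = len(f) - 1
    g = np.zeros(d + 1, dtype=complex)
    g[0], g[-1] = 1.0, -1.0                                 # g(x) = x^d - 1
    df, dg = np.polyder(f), np.polyder(g)

    H = lambda x, t: (1 - t) * gamma * np.polyval(g, x) + t * np.polyval(f, x)
    Hx = lambda x, t: (1 - t) * gamma * np.polyval(dg, x) + t * np.polyval(df, x)
    Ht = lambda x: np.polyval(f, x) - gamma * np.polyval(g, x)

    xs = np.exp(2j * np.pi * np.arange(d) / d)              # start solutions
    ts = np.linspace(0.0, 1.0, steps + 1)
    for t0, t1 in zip(ts[:-1], ts[1:]):
        xs = xs - (t1 - t0) * Ht(xs) / Hx(xs, t0)           # Euler predictor
        for _ in range(newton_iters):
            xs = xs - H(xs, t1) / Hx(xs, t1)                # Newton corrector
    return xs

# Example: roots of x^3 - 2x + 1, which are 1 and (-1 ± sqrt(5)) / 2.
print(np.sort_complex(track_roots([1, 0, -2, 1])))
```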

    Clustering Patients with Tensor Decomposition

    In this paper we present a method for the unsupervised clustering of high-dimensional binary data, with a special focus on electronic healthcare records. We present a robust and efficient heuristic for this problem based on tensor decomposition, and explain why this approach is preferable to more commonly used distance-based methods for tasks such as clustering patient records. We run the algorithm on two datasets of healthcare records, obtaining clinically meaningful results. Comment: Presented at the 2017 Machine Learning for Healthcare Conference (MLHC 2017), Boston, MA
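
    As a rough sketch of the general recipe (not the paper's specific heuristic), the block below fits a CP decomposition of a synthetic binary patients × codes × time-windows tensor with a minimal alternating-least-squares loop and then assigns each patient to the component with the largest entry in its row of the patient factor; all array names, the ALS choice, and the synthetic data are our assumptions.

```python
import numpy as np

def unfold(T, mode):
    """Mode-`mode` matricization of a 3-way tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product of two factor matrices."""
    return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

def cp_als(T, rank, iters=50, seed=0):
    """Minimal CP decomposition of a 3-way tensor via alternating least squares."""
    rng = np.random.default_rng(seed)
    A, B, C = [rng.standard_normal((n, rank)) for n in T.shape]
    for _ in range(iters):
        A = np.linalg.lstsq(khatri_rao(B, C), unfold(T, 0).T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C), unfold(T, 1).T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B), unfold(T, 2).T, rcond=None)[0].T
    return A, B, C

# Synthetic binary "patients x diagnosis codes x time windows" tensor with planted clusters.
rng = np.random.default_rng(1)
n_patients, n_codes, n_windows, k = 60, 30, 8, 3
membership = rng.integers(0, k, size=n_patients)
prototypes = rng.random((k, n_codes, n_windows)) < 0.15      # cluster-specific code patterns
T = (prototypes[membership] & (rng.random((n_patients, n_codes, n_windows)) < 0.9)).astype(float)

A, _, _ = cp_als(T, rank=k)
clusters = np.argmax(np.abs(A), axis=1)      # each patient -> its dominant component
# Note: recovered labels may be a permutation of the planted memberships.
print(np.c_[clusters[:10], membership[:10]])
```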