
    Rank-1 Tensor Approximation Methods and Application to Deflation

    Because of the attractiveness of the canonical polyadic (CP) tensor decomposition in various applications, several algorithms have been designed to compute it, but efficient ones are still lacking. Iterative deflation algorithms based on successive rank-1 approximations can be used to perform this task, since rank-1 approximations are comparatively easy to compute. We first present an algebraic rank-1 approximation method that performs better than the standard higher-order singular value decomposition (HOSVD) for three-way tensors. Second, we propose a new iterative rank-1 approximation algorithm that improves upon the estimate produced by any other rank-1 approximation method. Third, we describe a probabilistic framework for studying the convergence of deflation CP decomposition (DCPD) algorithms based on successive rank-1 approximations. A set of computer experiments then validates the theoretical results and demonstrates the efficiency of DCPD algorithms relative to competing methods.
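
    To make the deflation idea concrete, here is a minimal NumPy sketch: each rank-1 term is computed by a plain alternating (higher-order power) iteration and subtracted from the residual. This is a generic illustration of DCPD under default assumptions (dense real 3-way tensor, fixed iteration budget), not the paper's algebraic method or its improved iterative algorithm.

        import numpy as np

        def rank1_als(T, n_iter=100, rng=None):
            """Rank-1 approximation of a 3-way tensor via alternating power updates."""
            rng = np.random.default_rng(rng)
            a, b, c = (rng.standard_normal(n) for n in T.shape)
            for _ in range(n_iter):
                a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
                b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
                c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
            lam = np.einsum('ijk,i,j,k->', T, a, b, c)   # weight of the rank-1 term
            return lam, a, b, c

        def deflation_cpd(T, rank):
            """Greedy DCPD: peel off `rank` successive rank-1 approximations."""
            residual, terms = T.astype(float).copy(), []
            for _ in range(rank):
                lam, a, b, c = rank1_als(residual)
                terms.append((lam, a, b, c))
                residual = residual - lam * np.einsum('i,j,k->ijk', a, b, c)
            return terms

    Note that such greedy deflation need not recover an exact CP decomposition in general; the paper's probabilistic framework is precisely about when and how fast this succession of rank-1 steps converges.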

    Online and Differentially-Private Tensor Decomposition

    In this paper, we resolve many of the key algorithmic questions regarding robustness, memory efficiency, and differential privacy of tensor decomposition. We propose simple variants of the tensor power method that enjoy these strong properties. We present the first guarantees for an online tensor power method, which has a linear memory requirement. Moreover, we present a noise-calibrated tensor power method with efficient privacy guarantees. At the heart of all these guarantees lies a careful perturbation analysis derived in this paper, which improves upon existing results significantly. (Comment: 19 pages, 9 figures. To appear at the 30th Annual Conference on Advances in Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. Fixes small typos in proofs of Lemmas C.5 and C.)
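
    A hedged sketch of the noisy power-iteration idea, assuming a symmetric 3-way tensor stored as a NumPy array. The noise scale `sigma` below is a placeholder; calibrating it to the privacy budget via the perturbation analysis is the paper's contribution and is not reproduced here.

        import numpy as np

        def noisy_tensor_power(T, sigma, n_iter=50, rng=None):
            """One eigenpair estimate for a symmetric 3-way tensor, with added noise."""
            rng = np.random.default_rng(rng)
            v = rng.standard_normal(T.shape[0])
            v /= np.linalg.norm(v)
            for _ in range(n_iter):
                # power update T(I, v, v), perturbed by Gaussian noise; in the
                # differentially private setting the noise scale would be
                # calibrated to the sensitivity and the privacy parameters
                u = np.einsum('ijk,j,k->i', T, v, v) + sigma * rng.standard_normal(len(v))
                v = u / np.linalg.norm(u)
            lam = np.einsum('ijk,i,j,k->', T, v, v, v)
            return lam, v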

    Uniqueness of Nonnegative Tensor Approximations

    We show that for a nonnegative tensor, a best nonnegative rank-r approximation is almost always unique, that its best rank-one approximation may always be chosen to be a best nonnegative rank-one approximation, and that the set of nonnegative tensors with non-unique best rank-one approximations forms an algebraic hypersurface. We show that the last part holds more generally for real tensors, and thereby determine a polynomial equation such that a real or nonnegative tensor which does not satisfy this equation is guaranteed to have a unique best rank-one approximation. We also establish an analogue for real or nonnegative symmetric tensors. In addition, we prove a singular-vector variant of the Perron--Frobenius theorem for positive tensors and apply it to show that a best nonnegative rank-r approximation of a positive tensor can never be obtained by deflation. As an aside, we verify that the Euclidean distance (ED) discriminants of the Segre variety and the Veronese variety are hypersurfaces and give defining equations of these ED discriminants.
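
    The singular-vector Perron--Frobenius statement can be illustrated numerically: for an entrywise positive tensor, an alternating power iteration started from positive vectors stays entrywise positive, consistent with a positive singular-vector triple. The snippet below is an informal check under these assumptions, not a construction from the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        T = rng.uniform(0.5, 1.5, size=(4, 4, 4))        # entrywise positive tensor
        a = np.ones(4); b = np.ones(4); c = np.ones(4)   # positive starting vectors
        for _ in range(200):
            a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
            b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
            c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
        # every update contracts a positive tensor against positive vectors,
        # so the iterates remain entrywise positive throughout
        print(min(a.min(), b.min(), c.min()) > 0)        # True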

    Tensor decompositions for learning latent variable models

    This work considers a computationally and statistically efficient parameter estimation method for a wide class of latent variable models---including Gaussian mixture models, hidden Markov models, and latent Dirichlet allocation---which exploits a certain tensor structure in their low-order observable moments (typically, of second and third order). Specifically, parameter estimation is reduced to the problem of extracting a certain (orthogonal) decomposition of a symmetric tensor derived from the moments; this decomposition can be viewed as a natural generalization of the singular value decomposition for matrices. Although tensor decompositions are generally intractable to compute, the decomposition of these specially structured tensors can be efficiently obtained by a variety of approaches, including power iterations and maximization approaches (similar to the case of matrices). A detailed analysis of a robust tensor power method is provided, establishing an analogue of Wedin's perturbation theorem for the singular vectors of matrices. This implies a robust and computationally tractable estimation approach for several popular latent variable models.
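
    A compact sketch of the pipeline the abstract describes, assuming an (approximately) orthogonally decomposable symmetric 3-way tensor as a NumPy array: restarted tensor power iterations recover one component at a time, with deflation in between. The restart and iteration counts below are arbitrary placeholders, not the schedule analyzed in the paper.

        import numpy as np

        def power_iteration(T, n_iter=100, n_restarts=10, rng=None):
            """Dominant eigenpair of a symmetric 3-way tensor, via random restarts."""
            rng = np.random.default_rng(rng)
            best_lam, best_v = -np.inf, None
            for _ in range(n_restarts):
                v = rng.standard_normal(T.shape[0]); v /= np.linalg.norm(v)
                for _ in range(n_iter):
                    v = np.einsum('ijk,j,k->i', T, v, v)   # power update T(I, v, v)
                    v /= np.linalg.norm(v)
                lam = np.einsum('ijk,i,j,k->', T, v, v, v)
                if lam > best_lam:
                    best_lam, best_v = lam, v
            return best_lam, best_v

        def orthogonal_decomposition(T, rank):
            """Recover `rank` components by power iteration plus deflation."""
            components = []
            for _ in range(rank):
                lam, v = power_iteration(T)
                components.append((lam, v))
                T = T - lam * np.einsum('i,j,k->ijk', v, v, v)   # deflate
            return components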

    On isolation of singular zeros of multivariate analytic systems

    We give a separation bound for an isolated multiple root x of a square multivariate analytic system f for which the operator obtained by adding Df(x) to a projection of D^2 f(x) along a direction in the kernel of Df(x) is invertible. We prove that the deflation process applied to f at such roots terminates after only one iteration. When x is given only approximately, we give a numerical criterion for isolating a cluster of zeros of f near x. We also propose a lower bound on the number of roots in the cluster. (Comment: 17 pages.)
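
    For intuition, the sketch below runs Newton's method (in least-squares form) on a classical deflated system f(z) = 0, Df(z) lam = 0, b . lam = 1 for a corank-one double root. It is an illustrative variant consistent with the abstract's setting; the example system, the normalization vector b, and the finite-difference Jacobian are all assumptions of this sketch, not the paper's exact operator or criterion.

        import numpy as np

        def f(z):
            x, y = z
            return np.array([x + y**2, y**2])            # double root at the origin

        def Df(z):
            _, y = z
            return np.array([[1.0, 2*y], [0.0, 2*y]])    # rank-deficient at the root

        def deflated(w, b):
            z, lam = w[:2], w[2:]
            # augmented system: f(z) = 0, Df(z) lam = 0, b . lam = 1
            return np.concatenate([f(z), Df(z) @ lam, [b @ lam - 1.0]])

        def gauss_newton(F, w, n_iter=25, h=1e-7):
            """Newton in least-squares form, with a finite-difference Jacobian."""
            for _ in range(n_iter):
                r = F(w)
                J = np.column_stack([(F(w + h * e) - r) / h for e in np.eye(len(w))])
                w = w + np.linalg.lstsq(J, -r, rcond=None)[0]
            return w

        b = np.array([0.3, 0.7])                          # normalization vector
        w0 = np.array([0.1, -0.05, 1.0, 1.0])             # approximate root, kernel guess
        w = gauss_newton(lambda w: deflated(w, b), w0)
        print(w[:2])                                      # -> the singular root (0, 0)

    Here a single deflation step already yields an isolated, regular zero of the augmented system, matching the one-iteration termination the abstract asserts for this class of roots.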