
    Numerical Optimization for Symmetric Tensor Decomposition

    We consider the problem of decomposing a real-valued symmetric tensor as the sum of outer products of real-valued vectors. Algebraic methods exist for computing complex-valued decompositions of symmetric tensors, but here we focus on real-valued decompositions, both unconstrained and nonnegative, for problems with low-rank structure. We discuss when solutions exist and how to formulate the mathematical program. Numerical results show the properties of the proposed formulations (including one that ignores symmetry) on a set of test problems and illustrate that these straightforward formulations can be effective even though the problem is nonconvex.
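    As a concrete illustration of such a formulation, the sketch below fits a symmetric third-order tensor T as a sum of weighted symmetric outer products by minimizing the squared residual with an off-the-shelf solver; the rank, the L-BFGS-B solver, and the initialization are assumptions for the sketch, not the paper's exact setup.

```python
# Minimal sketch of an unconstrained symmetric-CP formulation: fit
# T ~ sum_r lam_r * a_r (x) a_r (x) a_r by minimizing the squared Frobenius
# residual.  T is assumed symmetric; rank r, solver, and random start are
# illustrative choices, not the paper's exact algorithm.
import numpy as np
from scipy.optimize import minimize

def sym_cp_fit(T, r, seed=0):
    n = T.shape[0]
    rng = np.random.default_rng(seed)

    def unpack(x):
        A = x[:n * r].reshape(n, r)   # columns a_1, ..., a_r
        lam = x[n * r:]               # weights lam_1, ..., lam_r
        return A, lam

    def obj_grad(x):
        A, lam = unpack(x)
        R = np.einsum('r,ir,jr,kr->ijk', lam, A, A, A) - T   # residual tensor
        f = np.sum(R ** 2)
        # gradients of the squared residual (R is symmetric since T is)
        gA = 6.0 * np.einsum('ijk,jr,kr,r->ir', R, A, A, lam)
        glam = 2.0 * np.einsum('ijk,ir,jr,kr->r', R, A, A, A)
        return f, np.concatenate([gA.ravel(), glam])

    x0 = np.concatenate([0.1 * rng.standard_normal(n * r), np.ones(r)])
    res = minimize(obj_grad, x0, jac=True, method='L-BFGS-B')
    return unpack(res.x), res.fun
```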

    Provable Sparse Tensor Decomposition

    We propose a novel sparse tensor decomposition method, the Tensor Truncated Power (TTP) method, which incorporates variable selection into the estimation of the decomposition components. Sparsity is achieved via an efficient truncation step embedded in the tensor power iteration. Our method applies to a broad family of high-dimensional latent variable models, including high-dimensional Gaussian mixtures and mixtures of sparse regressions. A thorough theoretical investigation is also conducted: we show that the final decomposition estimator is guaranteed to achieve a local statistical rate, and we strengthen it to a global statistical rate by introducing a proper initialization procedure. In high-dimensional regimes, the obtained statistical rate significantly improves on those of existing non-sparse decomposition methods. The empirical advantages of TTP are confirmed in extensive simulations and in two real applications: click-through rate prediction and high-dimensional gene clustering.
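    The truncation idea can be sketched as follows for a single component of a symmetric third-order tensor; this is only an illustration of a truncated power step under assumed inputs (sparsity level k, random start, simple stopping rule), not the paper's full TTP estimator with its initialization procedure, deflation, and guarantees.

```python
# Minimal sketch of a truncated tensor power iteration: one rank-1 component
# of a symmetric third-order tensor is estimated by alternating a tensor
# power step with hard truncation to the k largest-magnitude coordinates.
import numpy as np

def truncated_tensor_power(T, k, n_iter=200, tol=1e-8, seed=0):
    n = T.shape[0]
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        u = np.einsum('ijk,j,k->i', T, v, v)   # power step: u = T(I, v, v)
        u[np.argsort(np.abs(u))[:-k]] = 0.0    # keep only the k largest entries
        u /= np.linalg.norm(u)
        if np.linalg.norm(u - v) < tol:
            v = u
            break
        v = u
    lam = np.einsum('ijk,i,j,k->', T, v, v, v)  # estimated component weight
    return lam, v
```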

    CP decomposition and low-rank approximation of antisymmetric tensors

    For antisymmetric tensors, the paper examines a low-rank approximation represented via only three vectors. We describe a suitable low-rank format and propose an alternating least squares, structure-preserving algorithm for finding such an approximation. The case of partial antisymmetry is also discussed. The algorithms are implemented in the Julia programming language and their numerical performance is discussed.
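    A sketch of what a three-vector antisymmetric format can look like is given below: the tensor is taken to be the antisymmetrized outer product of three vectors, which is a standard construction used here only as an illustration; the paper's specific format and structure-preserving ALS algorithm are not reproduced.

```python
# Minimal sketch: an antisymmetric third-order tensor stored via three vectors,
# built as the antisymmetric part of the outer product x (x) y (x) z.
import itertools
import numpy as np

def antisym_outer(x, y, z):
    """Antisymmetric part of the outer product x (x) y (x) z."""
    base = np.einsum('i,j,k->ijk', x, y, z)
    T = np.zeros_like(base)
    for perm in itertools.permutations(range(3)):
        sign = np.linalg.det(np.eye(3)[list(perm)])   # +1 / -1 sign of the permutation
        T += sign * np.transpose(base, perm)
    return T / 6.0

x, y, z = np.random.default_rng(0).standard_normal((3, 5))
A = antisym_outer(x, y, z)
# the result is antisymmetric in every pair of indices
assert np.allclose(A, -np.transpose(A, (1, 0, 2)))
assert np.allclose(A, -np.transpose(A, (0, 2, 1)))
```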

    Tensor decompositions for Face Recognition

    Automatic face recognition has become increasingly important in the past few years due to its many applications in daily life, such as social media platforms and security services. Numerical linear algebra tools such as the SVD (Singular Value Decomposition) have been used extensively to allow machines to automatically process images in recognition and classification contexts. On the other hand, several factors such as expression, view angle, and illumination can significantly affect the image, making the processing more complex. To cope with these additional features, multilinear algebra tools, such as higher-order tensors, are being explored. In this thesis we first analyze tensor calculus and tensor approximation via several different decompositions that have been recently proposed, including the HOSVD (Higher-Order Singular Value Decomposition) and Tensor-Train formats. A new algorithm is proposed to perform data recognition in the latter format.
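    As a small illustration of the HOSVD mentioned here, the sketch below computes factor matrices from the mode unfoldings of a third-order data tensor and projects onto them to obtain the core; the mode interpretation (e.g. pixels × illumination × subjects) and the target ranks are assumptions, and the thesis's Tensor-Train recognition algorithm is not shown.

```python
# Minimal sketch of a truncated HOSVD of a third-order data tensor.
import numpy as np

def unfold(T, mode):
    """Mode-`mode` unfolding: rows indexed by the chosen mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Factor matrices from SVDs of the unfoldings, plus the projected core."""
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
         for m, r in enumerate(ranks)]
    core = np.einsum('ijk,ia,jb,kc->abc', T, U[0], U[1], U[2])
    return core, U

def reconstruct(core, U):
    """T is approximated by core x_1 U[0] x_2 U[1] x_3 U[2]."""
    return np.einsum('abc,ia,jb,kc->ijk', core, U[0], U[1], U[2])
```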

    Algorithmic Regularization in Tensor Optimization: Towards a Lifted Approach in Matrix Sensing

    Gradient descent (GD) is crucial for generalization in machine learning models, as it induces implicit regularization, promoting compact representations. In this work, we examine the role of GD in inducing implicit regularization for tensor optimization, particularly within the context of the lifted matrix sensing framework. This framework has recently been proposed to address the non-convex matrix sensing problem by transforming spurious solutions into strict saddles when optimizing over symmetric, rank-1 tensors. We show that, with a sufficiently small initialization scale, GD applied to this lifted problem results in approximately rank-1 tensors and critical points with escape directions. Our findings underscore the significance of the tensor parametrization of matrix sensing, in combination with first-order methods, in achieving global optimality in such problems.
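    To illustrate the role of a small initialization scale, the sketch below runs plain GD on the ordinary factored matrix-sensing objective f(U) = (1/m) Σ_i (⟨A_i, UU^T⟩ − y_i)² from a small random start; this is a simplified stand-in with a synthetic measurement model, not the paper's lifted tensor formulation over symmetric rank-1 tensors.

```python
# Minimal sketch: GD with small initialization on factored matrix sensing.
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 10, 1, 150
X_star = rng.standard_normal((n, r))
M_star = X_star @ X_star.T                       # ground-truth low-rank PSD matrix
A = rng.standard_normal((m, n, n))
A = (A + A.transpose(0, 2, 1)) / 2               # symmetric sensing matrices
y = np.einsum('mij,ij->m', A, M_star)            # noiseless linear measurements

def grad(U):
    res = np.einsum('mij,ij->m', A, U @ U.T) - y
    return 4.0 / m * np.einsum('m,mij,jk->ik', res, A, U)

alpha = 1e-3                                     # small initialization scale
U = alpha * rng.standard_normal((n, n))          # over-parametrized factor
lr, steps = 5e-3, 5000
for _ in range(steps):
    U -= lr * grad(U)

# with small alpha, U @ U.T is expected to stay close to low rank
print(np.linalg.norm(U @ U.T - M_star) / np.linalg.norm(M_star))
```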