
    A Nonconvex Splitting Method for Symmetric Nonnegative Matrix Factorization: Convergence Analysis and Optimality

    Symmetric nonnegative matrix factorization (SymNMF) has important applications in data analytics problems such as document clustering, community detection and image segmentation. In this paper, we propose a novel nonconvex variable splitting method for solving SymNMF. The proposed algorithm is guaranteed to converge to the set of Karush-Kuhn-Tucker (KKT) points of the nonconvex SymNMF problem. Furthermore, it achieves a global sublinear convergence rate. We also show that the algorithm can be efficiently implemented in parallel. Further, sufficient conditions are provided which guarantee the global and local optimality of the obtained solutions. Extensive numerical results on both synthetic and real data sets suggest that the proposed algorithm converges quickly to a local minimum solution. Comment: IEEE Transactions on Signal Processing (to appear)
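    The abstract gives no code, so the following is only a minimal sketch of the variable-splitting idea it describes, assuming the common formulation min_{X >= 0, Z} 0.5*||A - X Z^T||_F^2 + 0.5*rho*||X - Z||_F^2. The function name `symnmf_split`, the penalty weight `rho`, and the single projected-gradient step for the X-block are illustrative choices, not the paper's exact updates.

```python
import numpy as np

def symnmf_split(A, r, rho=1.0, iters=200, seed=0):
    """Illustrative variable splitting for SymNMF:
    min_{X >= 0, Z} 0.5*||A - X Z^T||_F^2 + 0.5*rho*||X - Z||_F^2.
    Splitting the symmetric factor decouples the problem into an
    unconstrained least-squares block (Z) and a nonnegative block (X)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    X = rng.random((n, r))
    Z = X.copy()
    I_r = np.eye(r)
    for _ in range(iters):
        # Z-block: closed-form least squares (A symmetric, so A.T @ X == A @ X)
        Z = (A @ X + rho * X) @ np.linalg.inv(X.T @ X + rho * I_r)
        # X-block: one projected-gradient step on the nonnegative subproblem
        G = X @ (Z.T @ Z) - A @ Z + rho * (X - Z)   # gradient w.r.t. X
        L = np.linalg.norm(Z.T @ Z, 2) + rho        # step size via Lipschitz bound
        X = np.maximum(X - G / L, 0.0)
    return X
```

    The projection max(., 0) keeps the X-block feasible, while the closed-form Z-step is what the splitting buys over minimizing ||A - X X^T||_F^2 in X directly.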

    Convergence of block coordinate descent with diminishing radius for nonconvex optimization

    Block coordinate descent (BCD), also known as nonlinear Gauss-Seidel, is a simple iterative algorithm for nonconvex optimization that sequentially minimizes the objective function in each block coordinate while the other coordinates are held fixed. We propose a version of BCD that is guaranteed to converge to the stationary points of block-wise convex and differentiable objective functions under constraints. Furthermore, we obtain a best-case rate of convergence of order $\log n/\sqrt{n}$, where $n$ denotes the number of iterations. A key idea is to restrict the parameter search within a diminishing radius to promote stability of iterates, and then to show that such auxiliary constraints vanish in the limit. As an application, we provide a modified alternating least squares algorithm for nonnegative CP tensor factorization that converges to the stationary points of the reconstruction error with the same bound on the best-case rate of convergence. We also experimentally validate our results on both synthetic and real-world data. Comment: 12 pages, 2 figures. Rate of convergence added. arXiv admin note: text overlap with arXiv:2009.0761
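    As illustration only, here is a toy instance of the diminishing-radius idea on a block-wise convex problem (nonnegative matrix factorization), assuming a radius schedule c/sqrt(n). Clipping each block's displacement to the radius is a hypothetical stand-in for the paper's auxiliary constraint, not its exact subproblem.

```python
import numpy as np

def _clipped_step(W, G, L, radius):
    """Projected-gradient step for one block, with the move clipped to
    `radius`. The clipped point stays feasible, being a convex
    combination of two nonnegative matrices."""
    D = np.maximum(W - G / L, 0.0) - W
    norm = np.linalg.norm(D)
    if norm > radius:
        D *= radius / norm
    return W + D

def bcd_diminishing_radius(M, r, iters=300, c=1.0, seed=0):
    """Toy BCD with diminishing radius on min ||M - U V^T||_F^2 with
    U, V >= 0, a block-wise convex problem; the search radius c/sqrt(n)
    vanishes as the iteration count n grows."""
    rng = np.random.default_rng(seed)
    U = rng.random((M.shape[0], r))
    V = rng.random((M.shape[1], r))
    for n in range(1, iters + 1):
        radius = c / np.sqrt(n)   # diminishing trust-region radius
        # U-block: gradient of 0.5*||M - U V^T||_F^2 w.r.t. U
        U = _clipped_step(U, U @ (V.T @ V) - M @ V,
                          np.linalg.norm(V.T @ V, 2) + 1e-12, radius)
        # V-block: same objective with U held fixed
        V = _clipped_step(V, V @ (U.T @ U) - M.T @ U,
                          np.linalg.norm(U.T @ U, 2) + 1e-12, radius)
    return U, V
```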

    Extrapolated Alternating Algorithms for Approximate Canonical Polyadic Decomposition

    Get PDF
    Tensor decompositions have become a central tool in machine learning to extract interpretable patterns from multiway arrays of data. However, computing the approximate Canonical Polyadic Decomposition (aCPD), one of the most important tensor decomposition models, remains a challenge. In this work, we propose several algorithms based on extrapolation that improve over existing alternating methods for aCPD. We show on several simulated and real data sets that carefully designed extrapolation can significantly improve the convergence speed and hence reduce the computational time, especially in difficult scenarios.
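    A sketch of extrapolated alternating least squares for a third-order CP model, assuming a fixed momentum weight `beta`; the paper's algorithms choose and restart the extrapolation weight more carefully, so this only shows where the extrapolated factors enter the alternating updates.

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Khatri-Rao product; row index varies as j*K + k (C order)."""
    r = B.shape[1]
    return (B[:, None, :] * C[None, :, :]).reshape(-1, r)

def cp_als_extrapolated(T, r, iters=100, beta=0.5, seed=0):
    """Illustrative ALS for a 3-way CP decomposition with simple factor
    extrapolation: each block solves its least-squares subproblem against
    the extrapolated copies of the other factors."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A, B, C = (rng.random((d, r)) for d in (I, J, K))
    Ae, Be, Ce = A.copy(), B.copy(), C.copy()   # extrapolated copies
    # Mode unfoldings consistent with khatri_rao's C-order row indexing
    T1 = T.reshape(I, J * K)
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)
    for _ in range(iters):
        A_new = T1 @ khatri_rao(Be, Ce) @ np.linalg.pinv((Be.T @ Be) * (Ce.T @ Ce))
        Ae = A_new + beta * (A_new - A)   # extrapolate before the next block
        A = A_new
        B_new = T2 @ khatri_rao(Ae, Ce) @ np.linalg.pinv((Ae.T @ Ae) * (Ce.T @ Ce))
        Be = B_new + beta * (B_new - B)
        B = B_new
        C_new = T3 @ khatri_rao(Ae, Be) @ np.linalg.pinv((Ae.T @ Ae) * (Be.T @ Be))
        Ce = C_new + beta * (C_new - C)
        C = C_new
    return A, B, C
```

    The Hadamard products such as (Be.T @ Be) * (Ce.T @ Ce) are the standard Gram matrices of a Khatri-Rao factor, so each block update is an ordinary least-squares solve; extrapolation only changes which iterates the other blocks see.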