
    Algorithms for Positive Semidefinite Factorization

    This paper considers the problem of positive semidefinite factorization (PSD factorization), a generalization of exact nonnegative matrix factorization. Given an $m$-by-$n$ nonnegative matrix $X$ and an integer $k$, the PSD factorization problem consists in finding, if possible, symmetric $k$-by-$k$ positive semidefinite matrices $\{A^1,\dots,A^m\}$ and $\{B^1,\dots,B^n\}$ such that $X_{i,j}=\operatorname{trace}(A^iB^j)$ for $i=1,\dots,m$ and $j=1,\dots,n$. PSD factorization is NP-hard. In this work, we introduce several local optimization schemes to tackle this problem: a fast projected gradient method and two algorithms based on the coordinate descent framework. The main application of PSD factorization is the computation of semidefinite extensions, that is, representations of polyhedra as projections of spectrahedra, for which the matrix to be factorized is the slack matrix of the polyhedron. We compare the performance of our algorithms on this class of problems. In particular, we compute PSD extensions of size $k=1+\lceil\log_2(n)\rceil$ for the regular $n$-gons with $n=5$, $8$ and $10$. We also show how to generalize our algorithms to compute the square root rank (the size of the factors in a PSD factorization where all factor matrices $A^i$ and $B^j$ have rank one) and completely PSD factorizations (the special case where the input matrix is symmetric and the equality $A^i=B^i$ is required for all $i$). Comment: 21 pages, 3 figures, 3 tables
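
    The objective $\sum_{i,j}\big(X_{i,j}-\operatorname{trace}(A^iB^j)\big)^2$ is easy to prototype. Below is a minimal Python/NumPy sketch of a plain projected gradient scheme, projecting each factor onto the PSD cone by eigenvalue clipping; the paper's fast projected gradient method and coordinate descent schemes are more refined, and the function names, step size, and iteration count here are illustrative assumptions.

    import numpy as np

    def psd_project(M):
        # Project a symmetric matrix onto the PSD cone by clipping
        # negative eigenvalues.
        S = (M + M.T) / 2
        w, V = np.linalg.eigh(S)
        return (V * np.maximum(w, 0)) @ V.T

    def psd_factorization_pgd(X, k, steps=500, lr=1e-3, seed=0):
        # Plain projected gradient for min sum_ij (X_ij - tr(A^i B^j))^2.
        rng = np.random.default_rng(seed)
        m, n = X.shape
        A = np.array([psd_project(rng.standard_normal((k, k))) for _ in range(m)])
        B = np.array([psd_project(rng.standard_normal((k, k))) for _ in range(n)])
        for _ in range(steps):
            # Residual R_ij = trace(A^i B^j) - X_ij
            R = np.einsum('ikl,jlk->ij', A, B) - X
            # Gradients: dF/dA^i = 2 sum_j R_ij B^j, and symmetrically for B^j
            gA = 2 * np.einsum('ij,jkl->ikl', R, B)
            gB = 2 * np.einsum('ij,ikl->jkl', R, A)
            A = np.array([psd_project(Ai - lr * g) for Ai, g in zip(A, gA)])
            B = np.array([psd_project(Bj - lr * g) for Bj, g in zip(B, gB)])
        return A, B

    A fixed step size keeps the sketch short; in practice the curvature of the objective changes as the factors move, so an adaptive or accelerated step rule, as in the paper's fast method, converges far more reliably.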

    Greedy Algorithms for Cone Constrained Optimization with Convergence Guarantees

    Greedy optimization methods such as Matching Pursuit (MP) and Frank-Wolfe (FW) algorithms have regained popularity in recent years due to their simplicity, effectiveness and theoretical guarantees. MP and FW address optimization over the linear span and the convex hull of a set of atoms, respectively. In this paper, we consider the intermediate case of optimization over the convex cone, parametrized as the conic hull of a generic atom set, leading to the first principled definitions of non-negative MP algorithms for which we give explicit convergence rates and demonstrate excellent empirical performance. In particular, we derive sublinear ($\mathcal{O}(1/t)$) convergence on general smooth and convex objectives, and linear ($\mathcal{O}(e^{-t})$) convergence on strongly convex objectives, in both cases for general sets of atoms. Furthermore, we establish a clear correspondence of our algorithms to known algorithms from the MP and FW literature. Our novel algorithms and analyses target general atom sets and general objective functions, and hence are directly applicable to a large variety of learning settings. Comment: NIPS 2017
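
    The non-negative MP template the abstract describes is simple: greedily pick the atom best aligned with the negative gradient, then take a non-negative step. Here is a minimal Python/NumPy sketch for a least-squares objective over the conic hull of a finite set of unit-norm atoms; the function name and the plain (non-corrective) variant are assumptions, while the paper treats general smooth objectives and refined variants achieving the stated rates.

    import numpy as np

    def nonneg_mp(y, atoms, steps=100):
        # Non-negative Matching Pursuit for min_x 0.5*||x - y||^2 over the
        # conic hull of the columns of `atoms` (assumed unit norm).
        x = np.zeros_like(y)
        coeffs = np.zeros(atoms.shape[1])
        for _ in range(steps):
            grad = x - y                 # gradient of the quadratic
            scores = -grad @ atoms       # alignment with each atom
            j = int(np.argmax(scores))
            if scores[j] <= 0:           # no atom gives a descent direction
                break
            gamma = scores[j]            # exact line search for unit-norm atoms
            x = x + gamma * atoms[:, j]
            coeffs[j] += gamma
        return x, coeffs

    Because a step is taken only when the best alignment score is positive, every coefficient stays non-negative and the iterate remains in the conic hull by construction.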

    Descent methods for Nonnegative Matrix Factorization

    In this paper, we present several descent methods that can be applied to nonnegative matrix factorization, and we analyze a recently developed fast block coordinate method called Rank-one Residue Iteration (RRI). We also compare these different methods and show that the new block coordinate method has better properties in terms of approximation error and complexity. By interpreting this method as a rank-one approximation of the residue matrix, we prove that it converges, extend it to nonnegative tensor factorization, and introduce variants of the method that impose additional controllable constraints such as sparsity, discreteness and smoothness. Comment: 47 pages. New convergence proof using damped version of RRI. To appear in Numerical Linear Algebra in Signals, Systems and Control (accepted). Illustrating Matlab code is included in the source bundle
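
    To make the RRI update concrete, here is a minimal Python/NumPy sketch: each inner step solves the rank-one subproblem on the residue matrix in closed form under a non-negativity constraint. The damped variant used in the convergence proof, the tensor extension, and the extra constraints (sparsity, discreteness, smoothness) are omitted, and the function name and initialization are illustrative assumptions.

    import numpy as np

    def rri_nmf(X, r, iters=200, seed=0):
        # Rank-one Residue Iteration for X ~ sum_t outer(U[:,t], V[:,t])
        # with U, V >= 0; each update is the closed-form nonnegative
        # minimizer of the rank-one subproblem on the residue.
        rng = np.random.default_rng(seed)
        m, n = X.shape
        U = rng.random((m, r))
        V = rng.random((n, r))
        eps = 1e-12                      # guards against division by zero
        for _ in range(iters):
            for t in range(r):
                # Residue with the t-th rank-one term removed
                R = X - U @ V.T + np.outer(U[:, t], V[:, t])
                U[:, t] = np.maximum(R @ V[:, t], 0) / (V[:, t] @ V[:, t] + eps)
                V[:, t] = np.maximum(R.T @ U[:, t], 0) / (U[:, t] @ U[:, t] + eps)
        return U, V

    Cycling through the rank-one terms and solving each subproblem exactly is what gives the block coordinate method its favorable per-iteration cost and approximation error relative to full gradient updates.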