
    Matrix Approximation under Local Low-Rank Assumption

    Matrix approximation is a common tool in machine learning for building accurate prediction models for recommendation systems, text mining, and computer vision. A prevalent assumption in constructing matrix approximations is that the partially observed matrix is of low-rank. We propose a new matrix approximation model where we assume instead that the matrix is only locally of low-rank, leading to a representation of the observed matrix as a weighted sum of low-rank matrices. We analyze the accuracy of the proposed local low-rank modeling. Our experiments show improvements in prediction accuracy in recommendation tasks.
    Comment: 3 pages, 2 figures, Workshop submission to the First International Conference on Learning Representations (ICLR)
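    The core idea above, representing the observed matrix as a weighted sum of low-rank matrices, can be sketched as follows. This is a minimal illustration, not the authors' exact algorithm: the function name, the Gaussian row-similarity kernel, and the anchor-selection scheme are all illustrative assumptions.

    ```python
    import numpy as np

    def local_low_rank(X, anchors, rank, width=1.0):
        """Sketch of a locally low-rank estimate: a kernel-weighted sum of
        per-anchor low-rank matrices (illustrative, not the paper's method)."""
        n, m = X.shape
        approx = np.zeros_like(X, dtype=float)
        weight_total = np.zeros((n, 1))
        for a in anchors:
            # Row-similarity kernel centered at the anchor row (an assumed choice).
            d = np.linalg.norm(X - X[a], axis=1)
            w = np.exp(-(d / width) ** 2)[:, None]  # per-row weights in (0, 1]
            # Truncated SVD of the kernel-weighted matrix gives a local rank-r model.
            U, s, Vt = np.linalg.svd(w * X, full_matrices=False)
            local = (U[:, :rank] * s[:rank]) @ Vt[:rank]
            # Each weighted term diag(w) @ local still has rank <= r, so the
            # estimate is literally a weighted sum of low-rank matrices.
            approx += w * local
            weight_total += w
        return approx / np.maximum(weight_total, 1e-12)
    ```

    With a single anchor and a very wide kernel the weights become uniform, and the estimate degenerates to the ordinary global truncated-SVD approximation, which is the sanity check one would expect from a localization scheme.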

    Fast Rank Reduction for Non-negative Matrices via Mean Field Theory

    We propose an efficient matrix rank reduction method for non-negative matrices, whose time complexity is quadratic in the number of rows or columns of a matrix. Our key insight is to formulate rank reduction as a mean-field approximation by modeling matrices via a log-linear model on structured sample space, which allows us to solve the rank reduction as convex optimization. The highlight of this formulation is that the optimal solution that minimizes the KL divergence from a given matrix can be analytically computed in a closed form. We empirically show that our rank reduction method is faster than NMF and its popular variant, lraNMF, while achieving competitive low rank approximation error on synthetic and real-world datasets.
    Comment: 10 pages, 4 figures
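    The "closed form" claim above can be illustrated in the simplest case. For a rank-1 target, the non-negative matrix minimizing the generalized KL divergence from a given non-negative matrix is the outer product of its row and column sums scaled by the total mass (the independence model), a well-known result; the paper's general mean-field construction is more involved, and the function name here is illustrative.

    ```python
    import numpy as np

    def rank1_kl(X):
        """Closed-form rank-1 non-negative approximation minimizing the
        generalized KL divergence from X: outer product of the row and
        column marginals, divided by the total sum (independence model)."""
        r = X.sum(axis=1)          # row sums
        c = X.sum(axis=0)          # column sums
        return np.outer(r, c) / X.sum()
    ```

    Note that the approximation preserves the row and column sums of the input exactly, which is one way to see why it is the KL-optimal rank-1 fit.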