
    PU Learning for Matrix Completion

    In this paper, we consider the matrix completion problem when the observations are one-bit measurements of some underlying matrix M, and in particular the observed samples consist only of ones and no zeros. This problem is motivated by modern applications such as recommender systems and social networks where only "likes" or "friendships" are observed. The problem of learning from only positive and unlabeled examples, called PU (positive-unlabeled) learning, has been studied in the context of binary classification. We consider the PU matrix completion problem, where an underlying real-valued matrix M is first quantized to generate one-bit observations and then a subset of positive entries is revealed. Under the assumption that M has bounded nuclear norm, we provide recovery guarantees for two different observation models: 1) M parameterizes a distribution that generates a binary matrix, 2) M is thresholded to obtain a binary matrix. For the first case, we propose a "shifted matrix completion" method that recovers M using only a subset of indices corresponding to ones, while for the second case, we propose a "biased matrix completion" method that recovers the (thresholded) binary matrix. Both methods yield strong error bounds: if M is n by n, the Frobenius error is bounded as O(1/((1-rho)n)), where 1-rho denotes the fraction of ones observed. This implies a sample complexity of O(n log n) ones to achieve a small error, when M is dense and n is large. We extend our methods and guarantees to the inductive matrix completion problem, where rows and columns of M have associated features. We provide efficient and scalable optimization procedures for both methods and demonstrate their effectiveness for link prediction (on real-world networks consisting of over 2 million nodes and 90 million links) and semi-supervised clustering tasks.
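    The biased-completion idea described above can be sketched as a weighted low-rank fitting loop: revealed ones get a large weight pulling the estimate toward 1, while unobserved entries are treated as soft zeros with a small weight. This is a minimal illustrative sketch, not the paper's actual algorithm; the weighting scheme, the hard rank-r SVD projection, and all parameter names (`alpha`, `rank`, `n_iters`) are assumptions for illustration.

    ```python
    import numpy as np

    def biased_matrix_completion(A, alpha=0.9, rank=3, n_iters=100):
        """Illustrative sketch of biased one-bit matrix completion.

        A: binary observation matrix (1 = revealed positive, 0 = unobserved).
        Revealed ones get weight `alpha`; unobserved entries are treated as
        soft zeros with weight 1 - alpha (hypothetical weighting scheme).
        """
        W = np.where(A == 1, alpha, 1.0 - alpha)   # per-entry confidence weights
        X = np.zeros_like(A, dtype=float)          # current low-rank estimate
        for _ in range(n_iters):
            # Blend the weighted observations with the current estimate,
            # then project back onto rank-r matrices via truncated SVD.
            Z = W * A + (1.0 - W) * X
            U, s, Vt = np.linalg.svd(Z, full_matrices=False)
            X = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        return X
    ```

    A completed entry's magnitude can then be read as a link-prediction score, with true (but unrevealed) positives expected to score higher on average than true zeros.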

    Gaussian-Gamma collaborative filtering: a hierarchical Bayesian model for recommender systems

    Traditional collaborative filtering (CF) suffers from two key challenges: the normality assumption is not robust, and it is difficult to set the penalty terms of the latent features in advance. We therefore propose a hierarchical Bayesian model-based CF and the related inference algorithm. Specifically, we impose a Gaussian-Gamma prior on the ratings and the latent features. We show that the model is more robust and that the penalty terms can be adapted automatically during inference. We use a Gibbs sampler for inference and provide a statistical explanation. We verify the performance using both synthetic and real datasets.
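    The "penalty terms adapted automatically" point rests on Gaussian-Gamma conjugacy: with a Gaussian likelihood and a Gamma prior on a precision parameter, the Gibbs conditional for that precision is again a Gamma, so the effective regularization strength is resampled from the data at each sweep. A minimal sketch of that single conditional update (the function name and hyperparameters `a0`, `b0` are illustrative assumptions, not the paper's notation):

    ```python
    import numpy as np

    def sample_precision(residuals, a0=1.0, b0=1.0, rng=None):
        """One Gibbs step for a noise precision tau.

        Prior: tau ~ Gamma(a0, rate=b0); likelihood: residuals ~ N(0, 1/tau).
        By conjugacy the conditional posterior is
        Gamma(a0 + n/2, rate = b0 + sum(residuals**2)/2).
        (Illustrative sketch of the conjugate update, not the paper's code.)
        """
        rng = rng or np.random.default_rng()
        n = residuals.size
        a_post = a0 + n / 2.0
        b_post = b0 + 0.5 * np.sum(residuals ** 2)
        # NumPy's gamma sampler is parameterized by shape and *scale* = 1/rate.
        return rng.gamma(shape=a_post, scale=1.0 / b_post)
    ```

    In a full sampler, analogous conditional draws would update the precisions of the latent user and item features, so no penalty weights need to be hand-tuned.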