
    1-Bit Matrix Completion under Exact Low-Rank Constraint

    We consider the problem of noisy 1-bit matrix completion under an exact rank constraint on the true underlying matrix $M^*$. Instead of observing a subset of the noisy continuous-valued entries of $M^*$, we observe a subset of noisy 1-bit (or binary) measurements generated according to a probabilistic model. We consider constrained maximum likelihood estimation of $M^*$, under a constraint on the entry-wise infinity norm of $M^*$ and an exact rank constraint. This is in contrast to previous work, which has used convex relaxations of the rank. We provide an upper bound on the matrix estimation error under this model. Compared to existing results, our bound has a faster convergence rate in the matrix dimensions when the fraction of revealed 1-bit observations is fixed, independent of the matrix dimensions. We also propose an iterative algorithm for solving our nonconvex optimization problem, with a certificate of global optimality of the limiting point. The algorithm is based on a low-rank factorization of $M^*$. We validate the method on synthetic and real data, with improved performance over existing methods. Comment: 6 pages, 3 figures, to appear in CISS 201
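
    The abstract does not spell out the iterative algorithm or its optimality certificate, so the following is only a rough, hedged sketch of the general idea under added assumptions: a logistic link for the 1-bit observation model, projected gradient ascent on a factored log-likelihood, and a crude rescaling step standing in for the entry-wise infinity-norm constraint. Function and parameter names are illustrative, not the paper's.

        import numpy as np

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        def one_bit_mc(Y, mask, rank, alpha=1.0, lr=0.1, iters=500, seed=0):
            """Sketch: factored gradient ascent for 1-bit matrix completion.

            Y     : (m, n) array with entries in {-1, +1}; values outside `mask` are ignored
            mask  : (m, n) boolean array marking the observed entries
            rank  : exact rank constraint, enforced structurally via M = U @ V.T
            alpha : bound on the entry-wise infinity norm of M (assumed known)
            """
            rng = np.random.default_rng(seed)
            m, n = Y.shape
            U = rng.normal(scale=0.1, size=(m, rank))
            V = rng.normal(scale=0.1, size=(n, rank))
            for _ in range(iters):
                M = U @ V.T
                # gradient of sum over observed (i, j) of log sigma(Y_ij * M_ij) w.r.t. M
                G = mask * Y * sigmoid(-Y * M)
                U = U + lr * (G @ V)
                V = V + lr * (G.T @ U)
                # crude surrogate for the infinity-norm constraint: shrink both factors if violated
                peak = np.abs(U @ V.T).max()
                if peak > alpha:
                    U *= np.sqrt(alpha / peak)
                    V *= np.sqrt(alpha / peak)
            return U @ V.T

    The exact rank constraint is enforced by construction, since U @ V.T can have rank at most `rank`; the paper's certificate of global optimality is a separate analysis and is not reproduced here.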

    Chebyshev Polynomial Approximation for Distributed Signal Processing

    Unions of graph Fourier multipliers are an important class of linear operators for processing signals defined on graphs. We present a novel method to efficiently distribute the application of these operators to the high-dimensional signals collected by sensor networks. The proposed method features approximations of the graph Fourier multipliers by shifted Chebyshev polynomials, whose recurrence relations make them readily amenable to distributed computation. We demonstrate how the proposed method can be used in a distributed denoising task, and show that the communication requirements of the method scale gracefully with the size of the network. Comment: 8 pages, 5 figures, to appear in the Proceedings of the IEEE International Conference on Distributed Computing in Sensor Systems (DCOSS), June 2011, Barcelona, Spain
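
    As a rough, centralized illustration of the core mechanism (not the paper's distributed implementation), the sketch below applies a spectral filter to a graph signal using only the Chebyshev three-term recurrence, i.e. repeated multiplications by a shifted and scaled Laplacian. In a sensor network, each such multiplication requires only exchanges between neighboring nodes, which is what makes the approximation attractive for distributed processing. The coefficient computation and the message-passing protocol are omitted, and all names are illustrative.

        import numpy as np

        def chebyshev_filter(L, x, coeffs, lmax):
            """Sketch: apply a graph spectral filter via the Chebyshev recurrence.

            L      : (N, N) graph Laplacian (dense here for clarity; sparse in practice)
            x      : (N,) graph signal
            coeffs : coefficients c_0, ..., c_K (at least two) of a Chebyshev expansion of
                     the desired graph Fourier multiplier on the spectral interval [0, lmax]
            lmax   : (an upper bound on) the largest Laplacian eigenvalue
            """
            N = L.shape[0]
            L_tilde = (2.0 / lmax) * L - np.eye(N)   # map the spectrum from [0, lmax] to [-1, 1]
            t_prev = x                               # T_0(L_tilde) x
            t_curr = L_tilde @ x                     # T_1(L_tilde) x
            y = coeffs[0] * t_prev + coeffs[1] * t_curr
            for k in range(2, len(coeffs)):
                # three-term recurrence: T_k = 2 L_tilde T_{k-1} - T_{k-2}
                t_next = 2.0 * (L_tilde @ t_curr) - t_prev
                y = y + coeffs[k] * t_next
                t_prev, t_curr = t_curr, t_next
            return y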

    A Channel Coding Perspective of Collaborative Filtering

    We consider the problem of collaborative filtering from a channel coding perspective. We model the underlying rating matrix as a finite-alphabet matrix with block-constant structure. The observations are obtained from this underlying matrix through a discrete memoryless channel with a noisy part representing noisy user behavior and an erasure part representing missing data. Moreover, the clusters over which the underlying matrix is constant are unknown. We establish a sharp threshold result for this model: if the largest cluster size is smaller than $C_1 \log(mn)$ (where the rating matrix is of size $m \times n$), then the underlying matrix cannot be recovered by any estimator, but if the smallest cluster size is larger than $C_2 \log(mn)$, then we exhibit a polynomial-time estimator with diminishing probability of error. In the case of uniform cluster size, not only the order of the threshold but also the constant is identified. Comment: 32 pages, 1 figure, Submitted to IEEE Transactions on Information Theory
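
    As a hedged illustration of the observation model only (the abstract does not pin down the exact channel, so the symmetric-noise choice and all names below are assumptions), the sketch draws a block-constant rating matrix over random, unknown row and column clusters and passes it through a noisy channel followed by independent erasures.

        import numpy as np

        def sample_observation(m, n, k_rows, k_cols, alphabet_size, noise_p, erasure_p, seed=0):
            """Sketch: block-constant rating matrix observed through a noisy-erasure channel.

            The true matrix X is constant on each (row-cluster, column-cluster) block. Each
            entry is flipped to a uniformly random different symbol with probability noise_p
            (a simple symmetric DMC) and then erased (marked -1) with probability erasure_p.
            """
            rng = np.random.default_rng(seed)
            row_clusters = rng.integers(k_rows, size=m)      # cluster labels, unknown to the estimator
            col_clusters = rng.integers(k_cols, size=n)
            block_values = rng.integers(alphabet_size, size=(k_rows, k_cols))
            X = block_values[row_clusters][:, col_clusters]  # underlying block-constant matrix
            # noisy part of the channel: replace an entry with a uniformly random other symbol
            flip = rng.random((m, n)) < noise_p
            noise = (X + rng.integers(1, alphabet_size, size=(m, n))) % alphabet_size
            Y = np.where(flip, noise, X)
            # erasure part of the channel: -1 marks a missing rating
            Y = np.where(rng.random((m, n)) < erasure_p, -1, Y)
            return X, Y, row_clusters, col_clusters

    Under this model, the abstract's threshold says recovery of X is impossible when the largest cluster is smaller than $C_1 \log(mn)$ and achievable in polynomial time when the smallest cluster exceeds $C_2 \log(mn)$.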