    New Guarantees for Blind Compressed Sensing

    Blind Compressed Sensing (BCS) is an extension of Compressed Sensing (CS) in which the optimal sparsifying dictionary is assumed to be unknown and must be estimated along with the CS sparse coefficients. Since the emergence of BCS, dictionary learning, a.k.a. sparse coding, has been studied as a matrix factorization problem whose sample complexity, uniqueness, and identifiability have been addressed thoroughly. However, despite the strong connections between BCS and sparse coding, recent results from the sparse coding literature have not been exploited in the context of BCS. In particular, prior BCS efforts have focused on learning constrained and complete dictionaries, which limits their scope and utility. In this paper, we develop new theoretical bounds for perfect recovery for the general unconstrained BCS problem. These unconstrained BCS bounds cover the case of overcomplete dictionaries and hence go well beyond the existing BCS theory. Our perfect recovery results integrate the combinatorial theories of sparse coding with some of the recent results from low-rank matrix recovery. In particular, we propose an efficient CS measurement scheme that results in practical recovery bounds for BCS. Moreover, we discuss the performance of BCS under polynomial-time sparse coding algorithms.
    Comment: To appear in the 53rd Annual Allerton Conference on Communication, Control and Computing, University of Illinois at Urbana-Champaign, IL, USA, 2015
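
    To make the BCS setup concrete, here is a minimal alternating-minimization sketch in Python/NumPy, not the recovery scheme analyzed in the paper: it assumes compressed measurements y_i = Phi_i D a_i with an unknown (possibly overcomplete) dictionary D of size n x K and s-sparse codes a_i, and alternates OMP-based sparse coding with a stacked least-squares dictionary update. All names, dimensions, and the update rule are illustrative assumptions.

        import numpy as np

        def omp(B, y, s):
            # Orthogonal matching pursuit: approximately solve
            # min ||y - B a||_2 subject to ||a||_0 <= s.
            support, residual = [], y.astype(float).copy()
            a = np.zeros(B.shape[1])
            for _ in range(s):
                j = int(np.argmax(np.abs(B.T @ residual)))
                if j not in support:
                    support.append(j)
                coef, *_ = np.linalg.lstsq(B[:, support], y, rcond=None)
                residual = y - B[:, support] @ coef
            a[support] = coef
            return a

        def bcs_alternating(Phis, ys, n, K, s, iters=10, seed=0):
            # Toy blind CS: y_i = Phi_i @ D @ a_i with D and the codes a_i unknown.
            rng = np.random.default_rng(seed)
            D = rng.standard_normal((n, K))
            D /= np.linalg.norm(D, axis=0)
            q = len(ys)
            A = np.zeros((K, q))
            for _ in range(iters):
                # Sparse coding: code each column against its own compressed
                # dictionary Phi_i @ D.
                for i in range(q):
                    A[:, i] = omp(Phis[i] @ D, ys[i], s)
                # Dictionary update: y_i = (a_i^T kron Phi_i) vec(D), so stack
                # these operators and solve one least-squares problem for vec(D).
                M = np.vstack([np.kron(A[:, i].reshape(1, -1), Phis[i]) for i in range(q)])
                d, *_ = np.linalg.lstsq(M, np.concatenate(ys), rcond=None)
                D = d.reshape((n, K), order="F")
                D /= np.linalg.norm(D, axis=0) + 1e-12  # remove scaling ambiguity
            return D, A

    The column normalization at the end of each iteration is one common way to remove the inherent scaling ambiguity between D and the codes; which constraints on D make the factorization identifiable is exactly the kind of question the BCS theory above addresses.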

    Fast and Sample-Efficient Federated Low Rank Matrix Recovery from Column-wise Linear and Quadratic Projections

    This work studies the following problem and its magnitude-only extension: develop a federated solution to recover an $n \times q$ rank-$r$ matrix, $X^* = [x^*_1, x^*_2, \dots, x^*_q]$, from $m$ independent linear projections of each of its columns, i.e., from $y_k := A_k x^*_k$, $k \in [q]$, where $y_k$ is an $m$-length vector. Even though low-rank recovery problems have been extensively studied in the last decade, this particular problem has received surprisingly little attention. There exist only two provable solutions with a reasonable sample complexity, both of which are slow, have sub-optimal sample complexity, and cannot be federated efficiently. We introduce a novel gradient descent (GD) based solution called GD-min that needs only $\Omega((n+q) r^2 \log(1/\epsilon))$ samples and $O(mqnr \log(1/\epsilon))$ time to obtain an $\epsilon$-accurate estimate. Based on comparison with other well-studied problems, this is the best achievable sample-complexity guarantee for a non-convex solution to the above problem. The time complexity is nearly linear and cannot be improved significantly either. Finally, in a federated setting, our solution has low communication cost and maintains privacy of the nodes' data and of the corresponding column estimates.
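
    The following sketch conveys the flavor of a GD-min-style solver under simplifying assumptions (random rather than spectral initialization, a heuristic step size); it is not the paper's exact algorithm. Each iteration recovers every coefficient vector b_k in closed form, then takes one gradient step on the shared factor U.

        import numpy as np

        def gd_min_sketch(As, ys, n, q, r, iters=200, eta=None, seed=0):
            # Recover X = U @ B (U: n x r, B: r x q) from y_k = A_k @ x_k.
            # Simplifications vs. the paper: random init instead of spectral
            # init, and a heuristic step size.
            rng = np.random.default_rng(seed)
            U, _ = np.linalg.qr(rng.standard_normal((n, r)))
            B = np.zeros((r, q))
            m = ys[0].shape[0]
            eta = eta if eta is not None else 0.5 / (m * q)  # heuristic step size
            for _ in range(iters):
                # "min" step: each b_k has a closed-form least-squares solution.
                for k in range(q):
                    B[:, k], *_ = np.linalg.lstsq(As[k] @ U, ys[k], rcond=None)
                # "GD" step: one gradient step on U for
                # f(U) = 0.5 * sum_k ||A_k @ U @ b_k - y_k||^2.
                grad = np.zeros((n, r))
                for k in range(q):
                    resid = As[k] @ (U @ B[:, k]) - ys[k]
                    grad += np.outer(As[k].T @ resid, B[:, k])
                U, _ = np.linalg.qr(U - eta * grad)  # re-orthonormalize U
            return U @ B

    Read in a federated way, node k would hold $(A_k, y_k)$, solve for its own $b_k$ locally, and transmit only its $n \times r$ gradient contribution, which is consistent with the low communication cost and the privacy of the column estimates claimed above.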

    Subspace Learning from Extremely Compressed Measurements

    We consider learning the principal subspace of a large set of vectors from an extremely small number of compressive measurements of each vector. Our theoretical results show that even a constant number of measurements per column suffices to approximate the principal subspace to arbitrary precision, provided that the number of vectors is large. This result is achieved by a simple algorithm that computes the eigenvectors of an estimate of the covariance matrix. The main insight is to exploit an averaging effect that arises from applying a different random projection to each vector. We provide a number of simulations confirming our theoretical results.
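
    Since the abstract spells out the algorithm (eigenvectors of a covariance estimate that averages over per-vector random projections), a compact sketch is possible; the scaling and de-biasing details here are simplified assumptions rather than the paper's exact estimator, and Gaussian measurement matrices are assumed.

        import numpy as np

        def subspace_from_compressed(As, ys, n, k):
            # Covariance estimate from back-projected measurements z_i = A_i^T y_i.
            # For Gaussian A_i, E[z_i z_i^T] is a scaled multiple of x_i x_i^T
            # plus a multiple of the identity; averaging over many columns with
            # independent A_i therefore preserves the top eigenvectors.
            C_hat = np.zeros((n, n))
            for A, y in zip(As, ys):
                z = A.T @ y
                C_hat += np.outer(z, z)
            C_hat /= len(ys)
            vals, vecs = np.linalg.eigh(C_hat)  # eigenvalues in ascending order
            return vecs[:, -k:]                 # top-k eigenvectors span the estimate

        # Example: a 3-dim subspace in R^50, seen through only m = 4 random
        # measurements per column; accuracy improves as the number of vectors grows.
        rng = np.random.default_rng(1)
        n, k, N, m = 50, 3, 5000, 4
        basis, _ = np.linalg.qr(rng.standard_normal((n, k)))
        X = basis @ rng.standard_normal((k, N))
        As = [rng.standard_normal((m, n)) for _ in range(N)]
        ys = [A @ x for A, x in zip(As, X.T)]
        U_hat = subspace_from_compressed(As, ys, n, k)

    Note that m stays constant (here 4) while only N grows, which is exactly the regime the abstract describes: the identity-shaped bias does not move the eigenvectors, and the averaging over independent projections drives the estimation error down.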
