
    New Guarantees for Blind Compressed Sensing

    Blind Compressed Sensing (BCS) is an extension of Compressed Sensing (CS) in which the optimal sparsifying dictionary is assumed to be unknown and must be estimated along with the CS sparse coefficients. Since the emergence of BCS, dictionary learning, a.k.a. sparse coding, has been studied as a matrix factorization problem whose sample complexity, uniqueness, and identifiability have been addressed thoroughly. However, despite the strong connections between BCS and sparse coding, recent results from the sparse coding literature have not been exploited within the context of BCS. In particular, prior BCS efforts have focused on learning constrained and complete dictionaries, which limits the scope and utility of those efforts. In this paper, we develop new theoretical bounds for perfect recovery in the general unconstrained BCS problem. These unconstrained BCS bounds cover the case of overcomplete dictionaries and hence go well beyond the existing BCS theory. Our perfect recovery results integrate the combinatorial theory of sparse coding with recent results from low-rank matrix recovery. In particular, we propose an efficient CS measurement scheme that yields practical recovery bounds for BCS. Moreover, we discuss the performance of BCS under polynomial-time sparse coding algorithms.
    Comment: To appear in the 53rd Annual Allerton Conference on Communication, Control and Computing, University of Illinois at Urbana-Champaign, IL, USA, 2015
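
    The unconstrained BCS problem above amounts to fitting Y ≈ A·D·S, where A is the known CS measurement matrix and both the dictionary D and the column-wise k-sparse codes S are unknown. As a point of reference only, the sketch below shows a common generic heuristic for this bilinear problem, alternating gradient steps with hard thresholding; it is not the recovery scheme or measurement design of the paper, and all function names, step sizes, and initializations are our own assumptions.

```python
import numpy as np

def hard_threshold_cols(S, k):
    """Keep the k largest-magnitude entries in each column of S, zero the rest."""
    out = np.zeros_like(S)
    idx = np.argpartition(np.abs(S), -k, axis=0)[-k:]
    np.put_along_axis(out, idx, np.take_along_axis(S, idx, axis=0), axis=0)
    return out

def bcs_alternating(Y, A, n_atoms, k, n_iter=500, step=1e-2, seed=0):
    """Toy alternating minimization for blind CS: fit Y ~ A @ D @ S with
    unit-norm dictionary atoms in D and k-sparse coefficient columns in S."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((A.shape[1], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    S = np.zeros((n_atoms, Y.shape[1]))
    for _ in range(n_iter):
        # Sparse-coding step: gradient step on S, then project onto k-sparsity.
        R = A @ D @ S - Y
        S = hard_threshold_cols(S - step * (A @ D).T @ R, k)
        # Dictionary step: gradient step on D, then renormalize the atoms.
        R = A @ D @ S - Y
        D -= step * A.T @ R @ S.T
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D, S
```

    In the overcomplete regime emphasized by the paper (n_atoms exceeding the signal dimension), such heuristics come with no guarantees on their own, which is precisely the gap the paper's recovery bounds address.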

    Subspace Methods for Joint Sparse Recovery

    We propose robust and efficient algorithms for the joint sparse recovery problem in compressed sensing, which simultaneously recover the supports of jointly sparse signals from their multiple measurement vectors obtained through a common sensing matrix. In the favorable situation where the unknown matrix formed by the jointly sparse signals has linearly independent nonzero rows, the MUSIC (MUltiple SIgnal Classification) algorithm, originally proposed by Schmidt for the direction-of-arrival problem in sensor array processing and later adapted and analyzed for joint sparse recovery by Feng and Bresler, provides a guarantee with the minimum number of measurements. We focus instead on the unfavorable but practically significant case of rank defect or ill-conditioning, which arises with a limited number of measurement vectors or with highly correlated signal components. In this case MUSIC fails, and in practice none of the existing methods can consistently approach the fundamental limit. We propose subspace-augmented MUSIC (SA-MUSIC), which improves on MUSIC so that the support is reliably recovered under such unfavorable conditions. Combined with the subspace-based greedy algorithms also proposed and analyzed in this paper, SA-MUSIC provides a computationally efficient algorithm with a performance guarantee. The performance guarantees are given in terms of a version of the restricted isometry property. In particular, we also present a non-asymptotic perturbation analysis of the signal subspace estimation, which has been missing from previous studies of MUSIC.
    Comment: Submitted to IEEE Transactions on Information Theory, revised version
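
    For concreteness, the basic MUSIC criterion for joint sparse recovery (the method SA-MUSIC improves upon) can be sketched as below. This is our own minimal rendering of the favorable full-row-rank case described above, not code from the paper; the function and variable names are assumptions.

```python
import numpy as np

def music_support(Y, A, k):
    """Estimate the common k-sparse support of X in the MMV model
    Y = A @ X, assuming the nonzero rows of X are linearly independent."""
    U, _, _ = np.linalg.svd(Y, full_matrices=False)
    P = U[:, :k] @ U[:, :k].conj().T           # projector onto the signal subspace
    cols = A / np.linalg.norm(A, axis=0)       # unit-norm columns of A
    score = np.linalg.norm(P @ cols, axis=0)   # alignment of each atom with the subspace
    return np.sort(np.argsort(score)[-k:])     # the k best-aligned atoms
```

    When rank(Y) < k, the projector P no longer captures the full signal subspace and this criterion breaks down; SA-MUSIC instead augments the estimated subspace using a partial support obtained by a greedy stage, which is what restores reliable recovery in the rank-defective case.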

    Structured random measurements in signal processing

    Compressed sensing and its extensions have recently triggered interest in randomized signal acquisition. A key finding is that random measurements provide sparse signal reconstruction guarantees for efficient and stable algorithms with a minimal number of samples. While this was first shown for (unstructured) Gaussian random measurement matrices, applications require certain structure in the measurements, leading to structured random measurement matrices. Near-optimal recovery guarantees for such structured measurements have been developed over the past years in a variety of contexts. This article surveys the theory in three scenarios: compressed sensing (sparse recovery), low-rank matrix recovery, and phaseless estimation. The random measurement matrices considered include random partial Fourier matrices, partial random circulant matrices (subsampled convolutions), matrix completion, and phase estimation from magnitudes of Fourier-type measurements. The article concludes with a brief discussion of the mathematical techniques used in the analysis of such structured random measurements.
    Comment: 22 pages, 2 figures
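
    To make one of these ensembles concrete, a partial random circulant matrix measures a signal by circularly convolving it with a random generator and keeping a random subset of the output. The FFT-based sketch below is our own minimal example, not code from the survey; it also shows why such structured measurements can be applied in O(n log n) time rather than the O(mn) cost of a dense Gaussian matrix.

```python
import numpy as np

def partial_circulant_measure(x, c, idx):
    """Subsampled convolution: circularly convolve x with the random
    generator c via the FFT, then keep only the entries indexed by idx."""
    full = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real
    return full[idx]

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5
c = rng.choice([-1.0, 1.0], size=n)             # random sign generator
idx = rng.choice(n, size=m, replace=False)      # random subsampling pattern
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)  # k-sparse signal
y = partial_circulant_measure(x, c, idx)        # m structured measurements of x
```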

    Improving compressed sensing with the diamond norm

    In low-rank matrix recovery, one aims to reconstruct a low-rank matrix from a minimal number of linear measurements. Within the paradigm of compressed sensing, this is made computationally efficient by minimizing the nuclear norm as a convex surrogate for rank. In this work, we identify an improved regularizer based on the so-called diamond norm, a concept imported from quantum information theory. We show that, for a class of matrices saturating a certain norm inequality, the descent cone of the diamond norm is contained in that of the nuclear norm. This suggests superior reconstruction properties for these matrices, and we explicitly characterize this set of matrices. Moreover, we demonstrate numerically that the diamond norm indeed outperforms the nuclear norm in a number of relevant applications: these include signal analysis tasks such as blind matrix deconvolution and the retrieval of certain unitary basis changes, as well as the quantum information problem of process tomography with random measurements. The diamond norm is defined for matrices that can be interpreted as order-4 tensors, and it turns out that the above condition depends crucially on that tensorial structure. In this sense, this work touches on an aspect of the notoriously difficult tensor completion problem.
    Comment: 25 pages + appendix, 7 figures, published version
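
    For orientation, the nuclear-norm baseline that the diamond norm is compared against is a small convex program. The cvxpy sketch below runs a generic nuclear-norm minimization on a toy Gaussian measurement model of our own choosing; it is not the paper's diamond-norm program, which is a semidefinite program exploiting the order-4 tensor structure mentioned above.

```python
import numpy as np
import cvxpy as cp

# Toy low-rank recovery: reconstruct a rank-1 matrix X0 from m random
# linear measurements y[i] = <As[i], X0> by nuclear-norm minimization.
rng = np.random.default_rng(1)
n, m = 8, 40
X0 = np.outer(rng.standard_normal(n), rng.standard_normal(n))  # rank-1 ground truth
As = rng.standard_normal((m, n, n))                            # Gaussian measurement maps
y = np.einsum('kij,ij->k', As, X0)

X = cp.Variable((n, n))
constraints = [cp.sum(cp.multiply(As[i], X)) == y[i] for i in range(m)]
cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve()
print(np.linalg.norm(X.value - X0, 'fro'))  # near zero when recovery succeeds
```

    Replacing cp.normNuc with the diamond norm requires its semidefinite-programming characterization from quantum information theory; the paper's point is that, for the class of matrices it characterizes, the diamond norm's descent cone sits inside the nuclear norm's, so recovery can only improve.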