
    Efficient Matrix Sensing Using Rank-1 Gaussian Measurements

Abstract. In this paper, we study the problem of low-rank matrix sensing, where the goal is to reconstruct a matrix exactly from a small number of linear measurements. Existing methods for the problem either rely on measurement operators such as random element-wise sampling, which cannot recover arbitrary low-rank matrices, or require the measurement operator to satisfy the Restricted Isometry Property (RIP). However, RIP-based linear operators are generally full rank and incur large computation/storage costs for both measurement (encoding) and reconstruction (decoding). In this paper, we propose simple rank-one Gaussian measurement operators for matrix sensing that are significantly less expensive in terms of memory and computation for both encoding and decoding. Moreover, we show that the matrix can be reconstructed exactly using either a simple alternating minimization method or a nuclear-norm minimization method. Finally, we demonstrate the effectiveness of the measurement scheme vis-à-vis existing RIP-based methods.
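
A minimal numpy sketch of the measurement model described above, under assumed sizes (a 50x40 matrix of rank 2 and 600 measurements, all hypothetical): each measurement is y_i = u_i^T X v_i with Gaussian u_i and v_i, so the operator needs only O(m(n1+n2)) storage, and a bare alternating least-squares loop with a standard spectral initialization recovers the factors. This illustrates the general scheme, not the paper's exact algorithm or guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, r, m = 50, 40, 2, 600          # hypothetical problem sizes

# Ground-truth rank-r matrix X
X = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))

# Rank-one Gaussian measurements: y_i = u_i^T X v_i.
# Storing (u_i, v_i) costs O(m(n1+n2)), versus O(m*n1*n2) for a dense operator.
U = rng.standard_normal((m, n1))
V = rng.standard_normal((m, n2))
y = np.einsum('ij,jk,ik->i', U, X, V)  # encoding: m bilinear forms

# Spectral initialization: top-r right factor of (1/m) * sum_i y_i u_i v_i^T
X0 = (U.T * y) @ V / m
_, s, Vt = np.linalg.svd(X0)
B_hat = Vt[:r].T * np.sqrt(s[:r])

# Bare alternating least squares: each half-step is linear in one factor
for _ in range(25):
    Zb = V @ B_hat                                      # rows: B_hat^T v_i
    M = np.einsum('ij,il->ijl', U, Zb).reshape(m, -1)   # y linear in vec(A_hat)
    A_hat = np.linalg.lstsq(M, y, rcond=None)[0].reshape(n1, r)
    Za = U @ A_hat                                      # rows: A_hat^T u_i
    M = np.einsum('ik,il->ikl', V, Za).reshape(m, -1)   # y linear in vec(B_hat)
    B_hat = np.linalg.lstsq(M, y, rcond=None)[0].reshape(n2, r)

print("relative error:", np.linalg.norm(A_hat @ B_hat.T - X) / np.linalg.norm(X))
```

Note that both encoding and each decoding half-step touch only the m pairs (u_i, v_i), which is the memory/computation saving the abstract emphasizes over dense RIP operators.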

    Info-Greedy sequential adaptive compressed sensing

We present an information-theoretic framework for sequential adaptive compressed sensing, Info-Greedy Sensing, where each measurement is chosen to maximize the information extracted conditioned on the previous measurements. We show that the widely used bisection approach is Info-Greedy for a family of k-sparse signals by connecting compressed sensing to the black-box complexity of sequential query algorithms, and we present Info-Greedy algorithms for Gaussian and Gaussian Mixture Model (GMM) signals, as well as ways to design sparse Info-Greedy measurements. Numerical examples demonstrate the good performance of the proposed algorithms on simulated and real data: Info-Greedy Sensing shows significant improvement over random projection for signals with sparse and low-rank covariance matrices, and adaptivity brings robustness when there is a mismatch between the assumed and the true distributions.
Comment: Preliminary results presented at the Allerton Conference 2014. To appear in IEEE Journal of Selected Topics in Signal Processing.
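
For Gaussian signals the Info-Greedy rule has a simple closed form: measure along the leading eigenvector of the current posterior covariance, then apply a rank-one Gaussian (Kalman-type) update. A small numpy sketch under assumed parameters (dimension 20, five measurements, noise standard deviation 0.1, all hypothetical), illustrating the rule rather than the paper's full algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, sigma = 20, 5, 0.1                 # hypothetical dimension, budget, noise std

# Gaussian signal with an approximately low-rank covariance
Q = rng.standard_normal((n, 3))
Sigma = Q @ Q.T + 1e-3 * np.eye(n)
x = rng.multivariate_normal(np.zeros(n), Sigma)

mu, P = np.zeros(n), Sigma.copy()
for _ in range(k):
    # Info-Greedy step for Gaussian signals: the direction maximizing the
    # conditional mutual information is the leading posterior eigenvector.
    _, V = np.linalg.eigh(P)
    a = V[:, -1]                          # unit-norm measurement direction
    y = a @ x + sigma * rng.standard_normal()
    # Rank-one Gaussian (Kalman) posterior update
    Pa = P @ a
    gain = Pa / (a @ Pa + sigma**2)
    mu = mu + gain * (y - a @ mu)
    P = P - np.outer(gain, Pa)

print("relative error:", np.linalg.norm(mu - x) / np.linalg.norm(x))
```

With a low-rank covariance, most of the signal energy lies in a few posterior eigendirections, which is why a handful of adaptive measurements already yields a small error here.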

    Structured random measurements in signal processing

Compressed sensing and its extensions have recently triggered interest in randomized signal acquisition. A key finding is that random measurements provide sparse signal reconstruction guarantees for efficient and stable algorithms with a minimal number of samples. While this was first shown for (unstructured) Gaussian random measurement matrices, applications require certain structure in the measurements, leading to structured random measurement matrices. Near-optimal recovery guarantees for such structured measurements have been developed over the past years in a variety of contexts. This article surveys the theory in three scenarios: compressed sensing (sparse recovery), low-rank matrix recovery, and phaseless estimation. The measurement setups considered include random partial Fourier matrices, partial random circulant matrices (subsampled convolutions), matrix completion, and phase estimation from magnitudes of Fourier-type measurements. The article concludes with a brief discussion of the mathematical techniques for the analysis of such structured random measurements.
Comment: 22 pages, 2 figures.
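
As a concrete instance of such structure, a partial random circulant matrix (subsampled convolution) can be applied in O(n log n) time via the FFT, versus O(mn) for an unstructured Gaussian matrix. A minimal numpy sketch with assumed sizes (n = 128, m = 32, Rademacher generator, all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 128, 32                                # hypothetical dimensions

# Partial random circulant matrix = subsampled convolution with a random
# (here Rademacher) generator: Phi x = (g * x)[idx].
g = rng.choice([-1.0, 1.0], size=n)
idx = rng.choice(n, size=m, replace=False)    # random subset of rows

def measure(x):
    # Circular convolution via FFT, then subsampling: O(n log n) per application
    conv = np.fft.ifft(np.fft.fft(g) * np.fft.fft(x)).real
    return conv[idx]

x = np.zeros(n)
x[rng.choice(n, size=5, replace=False)] = rng.standard_normal(5)  # 5-sparse signal
y = measure(x)                                # m compressive samples
```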

    Sparsity Order Estimation from a Single Compressed Observation Vector

We investigate the problem of estimating the unknown degree of sparsity from compressive measurements without the need to carry out a sparse recovery step. While the sparsity order can be directly inferred from the effective rank of the observation matrix in the multiple-snapshot case, this appears to be impossible in the more challenging single-snapshot case. We show that specially designed measurement matrices allow the measurement vector to be rearranged into a matrix whose effective rank coincides with the effective sparsity order. In fact, we prove that matrices composed of a Khatri-Rao product of smaller matrices generate measurements that allow the sparsity order to be inferred, as illustrated in the sketch below. Moreover, if some samples are used more than once, one of the matrices needs to be Vandermonde. These structural constraints reduce the degrees of freedom in choosing the measurement matrix, which may incur a degradation in the achievable coherence. We therefore also address suitable choices of the measurement matrices. In particular, we analyze Khatri-Rao and Vandermonde matrices in terms of their coherence and provide a new design for Vandermonde matrices that achieves a low coherence.
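
The rearrangement can be made concrete: if the measurement matrix is the column-wise Khatri-Rao product of A (m1 x n) and B (m2 x n), then reshaping y = Phi x into an m1 x m2 matrix gives Y = A diag(x) B^T, whose rank generically equals the number of nonzeros in x. A small numpy sketch with hypothetical sizes (the thresholding rule used for the effective rank is an assumption):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m1, m2, s = 40, 8, 8, 3   # hypothetical: dimension, factor sizes, true sparsity

# Column-wise Khatri-Rao product: column i of Phi is kron(A[:, i], B[:, i])
A = rng.standard_normal((m1, n))
B = rng.standard_normal((m2, n))
Phi = np.einsum('ji,ki->jki', A, B).reshape(m1 * m2, n)

x = np.zeros(n)
x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
y = Phi @ x                  # single compressed snapshot of length m1*m2

# Rearranged measurement vector: Y = A diag(x) B^T, so rank(Y) = sparsity order
Y = y.reshape(m1, m2)
sv = np.linalg.svd(Y, compute_uv=False)
print("estimated sparsity order:", int(np.sum(sv > 1e-8 * sv[0])))
```

No sparse recovery is performed; the order estimate comes entirely from the singular values of the rearranged measurement vector.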

    Consistent Basis Pursuit for Signal and Matrix Estimates in Quantized Compressed Sensing

This paper focuses on the estimation of low-complexity signals when they are observed through M uniformly quantized compressive observations. Among such signals, we consider 1-D sparse vectors, low-rank matrices, or compressible signals that are well approximated by one of these two models. In this context, we prove the estimation efficiency of a variant of Basis Pursuit Denoise, called Consistent Basis Pursuit (CoBP), which enforces consistency between the observations and the re-observed estimate while promoting the estimate's low-complexity nature. We show that the reconstruction error of CoBP decays like M^{-1/4} when all parameters but M are fixed. Our proof is connected to recent bounds on the proximity of vectors or matrices when (i) they belong to a set of small intrinsic "dimension", as measured by the Gaussian mean width, and (ii) they share the same quantized (dithered) random projections. By solving CoBP with a proximal algorithm, we provide extensive numerical observations that confirm the theoretical bound as M is increased, displaying even faster error decay than predicted. The same phenomenon is observed in the special, yet important, case of 1-bit CS.
Comment: Keywords: quantized compressed sensing, quantization, consistency, error decay, low-rank, sparsity. 10 pages, 3 figures. Note about this version: title change, typo corrections, clarification of the context, adding a comparison with BPDN.
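
A minimal sketch of the consistency idea for the sparse-vector case, using cvxpy and hypothetical parameters: Gaussian measurements are dithered and uniformly quantized, and the estimate is obtained by l1 minimization constrained to lie in the observed quantization cells. This illustrates only the consistency constraint; the paper's CoBP formulation and its proximal solver may differ in detail.

```python
import numpy as np
import cvxpy as cp   # any LP-capable convex solver works

rng = np.random.default_rng(4)
n, m, s, delta = 64, 48, 4, 0.5   # hypothetical: dim, measurements, sparsity, bin

Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)

# Dithered uniform quantization: q = delta * floor((Phi x + u) / delta),
# so Phi x + u lies in the cell [q, q + delta).
u = rng.uniform(0, delta, size=m)
q = delta * np.floor((Phi @ x + u) / delta)

# Consistency: the re-observed estimate must fall in the same quantization
# cells, while the l1 objective promotes the low-complexity (sparse) model.
z = cp.Variable(n)
constraints = [Phi @ z + u >= q, Phi @ z + u <= q + delta]
cp.Problem(cp.Minimize(cp.norm1(z)), constraints).solve()

print("relative error:", np.linalg.norm(z.value - x) / np.linalg.norm(x))
```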