2,715 research outputs found

    Compressive Sensing of Analog Signals Using Discrete Prolate Spheroidal Sequences

    Compressive sensing (CS) has recently emerged as a framework for efficiently capturing signals that are sparse or compressible in an appropriate basis. While CS is often motivated as an alternative to Nyquist-rate sampling, there remains a gap between the discrete, finite-dimensional CS framework and the problem of acquiring a continuous-time signal. In this paper, we attempt to bridge this gap by exploiting the Discrete Prolate Spheroidal Sequences (DPSS's), a collection of functions that trace back to the seminal work by Slepian, Landau, and Pollak on the effects of time-limiting and bandlimiting operations. DPSS's form a highly efficient basis for sampled bandlimited functions; by modulating and merging DPSS bases, we obtain a dictionary that offers high-quality sparse approximations for most sampled multiband signals. This multiband modulated DPSS dictionary can be readily incorporated into the CS framework. We provide theoretical guarantees and practical insight into the use of this dictionary for recovery of sampled multiband signals from compressive measurements.
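
    Below is a minimal sketch, in Python, of how such a modulated DPSS dictionary might be assembled: DPSS vectors from scipy.signal.windows.dpss are modulated to a set of band centers and stacked into a single dictionary. The function name modulated_dpss_dictionary and the example band centers, half-bandwidth, and number of DPSS vectors per band are illustrative assumptions, not values taken from the paper.

        # Sketch: build a multiband modulated DPSS dictionary (illustrative parameters).
        import numpy as np
        from scipy.signal.windows import dpss

        def modulated_dpss_dictionary(N, band_centers, half_bw, num_dpss):
            """Concatenate DPSS bases modulated to each band center (normalized frequency)."""
            NW = N * half_bw                      # time-half-bandwidth product
            base = dpss(N, NW, Kmax=num_dpss)     # shape (num_dpss, N): baseband DPSS vectors
            n = np.arange(N)
            blocks = []
            for fc in band_centers:
                carrier = np.exp(2j * np.pi * fc * n)   # shift DPSS energy to band center fc
                blocks.append((base * carrier).T)       # shape (N, num_dpss)
            return np.hstack(blocks)                    # dictionary, shape (N, len(band_centers) * num_dpss)

        # Example: two bands centered at +/-0.2 cycles/sample, each of half-bandwidth 0.05.
        Psi = modulated_dpss_dictionary(N=256, band_centers=[-0.2, 0.2],
                                        half_bw=0.05, num_dpss=25)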

    User-Friendly Covariance Estimation for Heavy-Tailed Distributions

    We offer a survey of recent results on covariance estimation for heavy-tailed distributions. By unifying ideas scattered in the literature, we propose user-friendly methods that facilitate practical implementation. Specifically, we introduce element-wise and spectrum-wise truncation operators, as well as their M-estimator counterparts, to robustify the sample covariance matrix. Different from the classical notion of robustness that is characterized by the breakdown property, we focus on tail robustness, which is evidenced by the connection between nonasymptotic deviation and confidence level. The key observation is that the estimators need to adapt to the sample size, the dimensionality of the data, and the noise level to achieve an optimal tradeoff between bias and robustness. Furthermore, to facilitate their practical use, we propose data-driven procedures that automatically calibrate the tuning parameters. We demonstrate their applications to a series of structured models in high dimensions, including bandable and low-rank covariance matrices and sparse precision matrices. Numerical studies lend strong support to the proposed methods.
    Comment: 56 pages, 2 figures
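
    As a concrete illustration of the element-wise truncation operator, the Python sketch below averages truncated products sign(x_ij x_ik) * min(|x_ij x_ik|, tau) to robustify the sample covariance. The helper name truncated_covariance and the crude scaling used for tau are hypothetical; the paper's data-driven calibration of the tuning parameter is not reproduced here.

        # Sketch: element-wise truncated covariance estimator for heavy-tailed data.
        import numpy as np

        def truncated_covariance(X, tau):
            """Element-wise truncated covariance of centered data X with shape (n, d)."""
            # Products x_ij * x_ik for every sample i and index pair (j, k): shape (n, d, d).
            prods = X[:, :, None] * X[:, None, :]
            truncated = np.sign(prods) * np.minimum(np.abs(prods), tau)
            return truncated.mean(axis=0)

        # Illustrative usage with a crude, hypothetical tuning choice for tau.
        rng = np.random.default_rng(0)
        X = rng.standard_t(df=3, size=(500, 20))        # heavy-tailed samples
        X = X - X.mean(axis=0)                          # center the data
        tau = np.sqrt(X.shape[0] / np.log(X.shape[1]))  # grows with n, shrinks with log d
        Sigma_hat = truncated_covariance(X, tau)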

    Approximate Message Passing with a Colored Aliasing Model for Variable Density Fourier Sampled Images

    The Approximate Message Passing (AMP) algorithm efficiently reconstructs signals which have been sampled with large i.i.d. sub-Gaussian sensing matrices. Central to AMP is its "state evolution", which guarantees that the difference between the current estimate and the ground truth (the "aliasing") at every iteration obeys a Gaussian distribution that can be fully characterized by a scalar. However, when Fourier coefficients of a signal with non-uniform spectral density are sampled, such as in Magnetic Resonance Imaging (MRI), the aliasing is intrinsically colored, so AMP's scalar state evolution is no longer accurate and the algorithm encounters convergence problems. In response, we propose the Variable Density Approximate Message Passing (VDAMP) algorithm, which uses the wavelet domain to model the colored aliasing. We present empirical evidence that VDAMP obeys a "colored state evolution", where the aliasing obeys a Gaussian distribution that can be fully characterized with one scalar per wavelet subband. A benefit of state evolution is that Stein's Unbiased Risk Estimate (SURE) can be effectively implemented, yielding an algorithm with subband-dependent thresholding that has no free parameters. We empirically compare VDAMP against three variants of Fast Iterative Shrinkage-Thresholding (FISTA) and find that it converges in around 10 times fewer iterations on average than the next-fastest method, at a comparable mean-squared error.
    Comment: 13 pages, 7 figures, 3 tables. arXiv admin note: text overlap with arXiv:1911.0123
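
    The sketch below shows how SURE can select a soft threshold for a single wavelet subband with a known noise level, in the spirit of the subband-dependent, parameter-free thresholding described above. The function names and the candidate-threshold grid are assumptions for illustration; VDAMP's full message-passing structure and its per-subband noise tracking are not shown.

        # Sketch: SURE-optimal soft threshold for one wavelet subband (known noise std).
        import numpy as np

        def sure_soft(coeffs, thresh, sigma):
            """Stein's Unbiased Risk Estimate of soft thresholding at level `thresh`."""
            n = coeffs.size
            clipped = np.minimum(np.abs(coeffs), thresh)
            return (n * sigma**2
                    - 2 * sigma**2 * np.count_nonzero(np.abs(coeffs) <= thresh)
                    + np.sum(clipped**2))

        def sure_threshold(coeffs, sigma):
            """Pick the threshold minimizing SURE over the candidate levels |coeffs|."""
            candidates = np.abs(coeffs).ravel()
            risks = [sure_soft(coeffs, t, sigma) for t in candidates]
            return candidates[int(np.argmin(risks))]

        def soft(coeffs, thresh):
            return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thresh, 0.0)

        # Per-subband usage: threshold each subband with its own SURE-optimal level.
        rng = np.random.default_rng(1)
        subband = rng.normal(scale=0.1, size=1024) + (rng.random(1024) < 0.05) * 3.0
        t_opt = sure_threshold(subband, sigma=0.1)
        denoised = soft(subband, t_opt)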

    Robust Principal Component Analysis?

    This paper is about a curious phenomenon. Suppose we have a data matrix which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and the L1 norm. This suggests the possibility of a principled approach to robust principal component analysis, since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.
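
    A minimal sketch of Principal Component Pursuit via a standard augmented Lagrangian scheme is given below, alternating singular value thresholding for the low-rank component with soft thresholding for the sparse component. The weight 1/sqrt(max(n, m)) matches the weighted combination discussed above, while the penalty parameter, stopping rule, and function names are common illustrative choices rather than the paper's exact algorithm.

        # Sketch: Principal Component Pursuit, min ||L||_* + lam * ||S||_1  s.t.  L + S = M.
        import numpy as np

        def soft(x, tau):
            return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

        def svt(X, tau):
            """Singular value thresholding: shrink the singular values of X by tau."""
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            return U @ np.diag(soft(s, tau)) @ Vt

        def pcp(M, mu=None, max_iter=500, tol=1e-7):
            n, m = M.shape
            lam = 1.0 / np.sqrt(max(n, m))
            if mu is None:
                mu = n * m / (4.0 * np.abs(M).sum())    # common heuristic for the penalty
            S = np.zeros_like(M)
            Y = np.zeros_like(M)
            for _ in range(max_iter):
                L = svt(M - S + Y / mu, 1.0 / mu)       # nuclear-norm proximal step
                S = soft(M - L + Y / mu, lam / mu)      # L1 proximal step
                R = M - L - S                           # constraint residual
                Y = Y + mu * R                          # dual ascent update
                if np.linalg.norm(R) <= tol * np.linalg.norm(M):
                    break
            return L, S

        # Example: recover a rank-2 matrix corrupted by sparse gross errors.
        rng = np.random.default_rng(2)
        L0 = rng.normal(size=(80, 2)) @ rng.normal(size=(2, 80))
        S0 = (rng.random((80, 80)) < 0.05) * rng.normal(scale=10.0, size=(80, 80))
        L_hat, S_hat = pcp(L0 + S0)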