    On the stable recovery of the sparsest overcomplete representations in presence of noise

    Let x be a signal to be sparsely decomposed over a redundant dictionary A, i.e., a sparse coefficient vector s has to be found such that x = As. It is known that this problem is inherently unstable against noise, and to overcome this instability, the authors of [Stable Recovery; Donoho et al., 2006] have proposed to use an "approximate" decomposition, that is, a decomposition satisfying ||x - As|| < delta, rather than satisfying the exact equality x = As. They then showed that if there is a decomposition with ||s||_0 < (1+M^{-1})/2, where M denotes the coherence of the dictionary, this decomposition is stable against noise. On the other hand, it is known that a sparse decomposition with ||s||_0 < spark(A)/2 is unique. In other words, although a decomposition with ||s||_0 < spark(A)/2 is unique, its stability against noise had been proved only for the much more restrictive decompositions satisfying ||s||_0 < (1+M^{-1})/2, because usually (1+M^{-1})/2 << spark(A)/2. This limitation may not have been very important before, because ||s||_0 < (1+M^{-1})/2 is also the bound which guarantees that the sparse decomposition can be found via minimizing the L1 norm, a classic approach for sparse decomposition. However, with the availability of new algorithms for sparse decomposition, namely SL0 and Robust-SL0, it is important to know whether or not unique sparse decompositions with (1+M^{-1})/2 < ||s||_0 < spark(A)/2 are stable. In this paper, we show that such decompositions are indeed stable. In other words, we extend the stability bound from ||s||_0 < (1+M^{-1})/2 to the whole uniqueness range ||s||_0 < spark(A)/2. In summary, we show that "all unique sparse decompositions are stably recoverable". Moreover, we see that sparser decompositions are "more stable".
    Comment: Accepted in IEEE Transactions on Signal Processing, 4 May 2010. (c) 2010 IEEE.
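    To make the gap between the two bounds concrete, the sketch below (Python; the random dictionary and all names are illustrative, not from the paper) computes the mutual coherence M of a dictionary and the classic stability bound (1+M^{-1})/2, and contrasts it with the uniqueness bound spark(A)/2, which for a generic random m x n matrix equals (m+1)/2 with probability one.

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct, normalized columns of A."""
    A = A / np.linalg.norm(A, axis=0)   # normalize columns
    G = np.abs(A.T @ A)                 # Gram matrix of normalized columns
    np.fill_diagonal(G, 0.0)            # ignore self-correlations
    return G.max()

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))       # hypothetical redundant 20 x 50 dictionary
M = mutual_coherence(A)
print("coherence M              :", M)
print("coherence bound (1+1/M)/2:", (1 + 1 / M) / 2)
# For a generic random A, spark(A) = m + 1 = 21 with probability one, so the
# uniqueness (and, per this paper, stability) range extends to ||s||_0 < 10.5.
```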

    Identification of Matrices Having a Sparse Representation

    We consider the problem of recovering a matrix from its action on a known vector, in the setting where the matrix can be represented efficiently in a known matrix dictionary. Connections with sparse signal recovery allow for the use of efficient reconstruction techniques such as Basis Pursuit (BP). Of particular interest is the dictionary of time-frequency shift matrices and its role in channel estimation and identification in communications engineering. We present recovery results for BP with the time-frequency shift dictionary and various dictionaries of random matrices.
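    As a rough illustration of how this identification problem reduces to standard sparse recovery, the Python sketch below builds the action of the time-frequency shift dictionary on a known probe vector; the probe x and the dimensions are hypothetical stand-ins, and the Basis Pursuit step itself is only indicated in a comment.

```python
import numpy as np

def tf_shift_dictionary_action(x):
    """Columns are (M^l T^k) x over all n^2 time-frequency shifts of C^n:
    T = cyclic shift, M = modulation by diag(exp(2*pi*i*m/n))."""
    n = len(x)
    cols = []
    for k in range(n):                  # time shift by k
        xk = np.roll(x, k)
        for l in range(n):              # frequency shift (modulation) by l
            mod = np.exp(2j * np.pi * l * np.arange(n) / n)
            cols.append(mod * xk)
    return np.stack(cols, axis=1)       # n x n^2 matrix Phi

n = 8
rng = np.random.default_rng(5)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # known probe vector
Phi = tf_shift_dictionary_action(x)
# If the unknown matrix is H = sum_j s_j (M^l T^k)_j with sparse s, then the
# observation y = H x equals Phi s, and s can be sought with Basis Pursuit
# (min ||s||_1 subject to Phi s = y).
print(Phi.shape)                        # (8, 64): n measurements, n^2 unknowns
```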

    Compressive and Noncompressive Power Spectral Density Estimation from Periodic Nonuniform Samples

    This paper presents a novel power spectral density (PSD) estimation technique for band-limited, wide-sense stationary signals from sub-Nyquist sampled data. The technique employs multi-coset sampling and incorporates the advantages of compressed sensing (CS) when the power spectrum is sparse, but applies to sparse and nonsparse power spectra alike. The estimates are consistent piecewise-constant approximations whose resolution (the width of the piecewise-constant segments) is controlled by the periodicity of the multi-coset sampling. We show that compressive estimates exhibit better tradeoffs among the estimator's resolution, system complexity, and average sampling rate than their noncompressive counterparts. For suitable sampling patterns, noncompressive estimates are obtained as least-squares solutions. Because of the non-negativity of power spectra, compressive estimates can be computed by seeking non-negative least-squares solutions (provided appropriate sampling patterns exist) instead of using standard CS recovery algorithms. This flexibility suggests a reduction in computational overhead for systems estimating both sparse and nonsparse power spectra, because one algorithm can be used to compute both compressive and noncompressive estimates.
    Comment: 26 pages, single spaced, 9 figures.
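    The non-negativity trick described above can be sketched with SciPy's NNLS solver. The measurement matrix Phi below is a purely hypothetical stand-in for the actual multi-coset correlation operator derived in the paper; only the recovery-by-NNLS step is what the abstract describes.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical stand-in: Phi maps the piecewise-constant PSD segments p >= 0
# to correlation measurements r obtained from the multi-coset samples.
rng = np.random.default_rng(4)
n_meas, n_seg = 30, 80
Phi = rng.standard_normal((n_meas, n_seg))
p_true = np.zeros(n_seg)
p_true[[5, 23, 60]] = [2.0, 1.0, 3.5]   # sparse, non-negative power spectrum
r = Phi @ p_true

p_hat, residual = nnls(Phi, r)          # non-negative least squares
print("support found:", np.flatnonzero(p_hat > 1e-8), "residual:", residual)
```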

    Decoding by Linear Programming

    This paper considers the classical error correcting problem which is frequently discussed in coding theory. We wish to recover an input vector $f \in \mathbb{R}^n$ from corrupted measurements $y = Af + e$. Here, $A$ is an $m$ by $n$ (coding) matrix and $e$ is an arbitrary and unknown vector of errors. Is it possible to recover $f$ exactly from the data $y$? We prove that under suitable conditions on the coding matrix $A$, the input $f$ is the unique solution to the $\ell_1$-minimization problem ($\|x\|_{\ell_1} := \sum_i |x_i|$) $\min_{g \in \mathbb{R}^n} \|y - Ag\|_{\ell_1}$ provided that the support of the vector of errors is not too large, $\|e\|_{\ell_0} := |\{i : e_i \neq 0\}| \le \rho \cdot m$ for some $\rho > 0$. In short, $f$ can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; $f$ is recovered exactly even in situations where a significant fraction of the output is corrupted.
    Comment: 22 pages, 4 figures, submitted.
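    A minimal sketch of the LP recast mentioned in the abstract (not the authors' code): minimizing $\|y - Ag\|_{\ell_1}$ becomes a linear program after introducing slack variables t with t >= |y - Ag|. The test instance below is illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def l1_decode(A, y):
    """min_g ||y - A g||_1, recast as an LP in (g, t) with t >= |y - A g|."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])   # minimize sum(t)
    A_ub = np.block([[A, -np.eye(m)],               #  A g - t <=  y
                     [-A, -np.eye(m)]])             # -A g - t <= -y
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n + [(0, None)] * m)
    return res.x[:n]

rng = np.random.default_rng(2)
m, n = 60, 20
A = rng.standard_normal((m, n))
f = rng.standard_normal(n)
e = np.zeros(m)
e[rng.choice(m, 6, replace=False)] = rng.standard_normal(6)  # sparse error vector
f_hat = l1_decode(A, A @ f + e)
print("max recovery error:", np.abs(f_hat - f).max())
```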

    Randomized Extended Kaczmarz for Solving Least-Squares

    We present a randomized iterative algorithm that converges exponentially in expectation to the minimum Euclidean-norm least-squares solution of a given linear system of equations. The expected number of arithmetic operations required to obtain an estimate of given accuracy is proportional to the squared condition number of the system multiplied by the number of non-zero entries of the input matrix. The proposed algorithm is an extension of the randomized Kaczmarz method that was analyzed by Strohmer and Vershynin.
    Comment: 19 pages, 5 figures; code is available at https://github.com/zouzias/RE
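    A compact sketch of the randomized extended Kaczmarz iteration as usually stated (rows and columns sampled with probabilities proportional to their squared norms); the iteration count and the test system below are arbitrary choices, and the authors' own implementation is at the repository linked above.

```python
import numpy as np

def randomized_extended_kaczmarz(A, b, iters=20000, seed=0):
    """Sketch of REK: column steps drive z to the component of b outside
    range(A); row steps run Kaczmarz on A x = b - z."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_sq = np.sum(A**2, axis=1)
    col_sq = np.sum(A**2, axis=0)
    p_row, p_col = row_sq / row_sq.sum(), col_sq / col_sq.sum()
    x, z = np.zeros(n), b.astype(float).copy()
    for _ in range(iters):
        j = rng.choice(n, p=p_col)                          # column step
        z -= (A[:, j] @ z / col_sq[j]) * A[:, j]
        i = rng.choice(m, p=p_row)                          # row step
        x += ((b[i] - z[i] - A[i] @ x) / row_sq[i]) * A[i]
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)                 # generally inconsistent system
x_rek = randomized_extended_kaczmarz(A, b)
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print("distance to lstsq solution:", np.linalg.norm(x_rek - x_ls))
```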

    RSP-Based Analysis for Sparsest and Least $\ell_1$-Norm Solutions to Underdetermined Linear Systems

    Recently, worst-case analysis, probabilistic analysis, and empirical justification have been employed to address the fundamental question: when does $\ell_1$-minimization find the sparsest solution to an underdetermined linear system? In this paper, a deterministic analysis, rooted in classic linear programming theory, is carried out to further address this question. We first identify a necessary and sufficient condition for the uniqueness of least $\ell_1$-norm solutions to linear systems. From this condition, we deduce that a sparsest solution coincides with the unique least $\ell_1$-norm solution to a linear system if and only if the so-called "range space property" (RSP) holds at this solution. This yields a broad understanding of the relationship between the $\ell_0$- and $\ell_1$-minimization problems. Our analysis indicates that the RSP truly lies at the heart of the relationship between these two problems. Through RSP-based analysis, several important questions in this field can be largely addressed: for instance, how to interpret, by a deterministic analysis, the gap between the current theory and the actual numerical performance of $\ell_1$-minimization; and, if a linear system has multiple sparsest solutions, when is $\ell_1$-minimization guaranteed to find one of them? Moreover, new matrix properties (such as the "RSP of order $K$" and the "Weak-RSP of order $K$") are introduced in this paper, and a new theory for sparse signal recovery based on the RSP of order $K$ is established.
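    For experimenting with the question above, the least $\ell_1$-norm solution can be computed exactly as a linear program. This is a generic equality-constrained Basis Pursuit sketch on illustrative data, not the paper's RSP machinery; whether the recovered support matches the planted one is exactly the kind of coincidence the RSP characterizes.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, x):
    """min ||s||_1 subject to A s = x, recast as an LP in (s, t)."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])   # minimize sum(t)
    A_ub = np.block([[np.eye(n), -np.eye(n)],       #  s - t <= 0
                     [-np.eye(n), -np.eye(n)]])     # -s - t <= 0
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=x,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]

rng = np.random.default_rng(1)
A = rng.standard_normal((15, 40))
s_true = np.zeros(40)
s_true[[3, 17, 29]] = [1.5, -2.0, 0.7]              # planted 3-sparse solution
s_hat = basis_pursuit(A, A @ s_true)
print("recovered support:", np.flatnonzero(np.abs(s_hat) > 1e-6))
```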