Linear Convergence of Adaptively Iterative Thresholding Algorithms for Compressed Sensing
This paper studies the convergence of the adaptively iterative thresholding
(AIT) algorithm for compressed sensing. We first introduce a generalized
restricted isometry property (gRIP). Then we prove that the AIT algorithm
converges to the original sparse solution at a linear rate under a certain gRIP
condition in the noise-free case. In the noisy case, the convergence rate
remains linear until a certain error bound is attained. Moreover, as
by-products, we provide sufficient conditions for the convergence of the AIT
algorithm based on two well-known properties, namely the coherence property
and the restricted isometry property (RIP), both of which are special cases of
gRIP. The resulting improvements on the known theoretical results are
demonstrated through comparison. Finally, we provide a series of simulations to verify the correctness
of the theoretical assertions as well as the effectiveness of the AIT
algorithm.
Comment: 15 pages, 5 figures
Limits on Sparse Data Acquisition: RIC Analysis of Finite Gaussian Matrices
One of the key issues in the acquisition of sparse data by means of
compressed sensing (CS) is the design of the measurement matrix. Gaussian
matrices have been proven to be information-theoretically optimal in terms of
minimizing the required number of measurements for sparse recovery. In this
paper we provide a new approach for the analysis of the restricted isometry
constant (RIC) of finite dimensional Gaussian measurement matrices. The
proposed method relies on the exact distributions of the extreme eigenvalues
for Wishart matrices. First, we derive the probability that the restricted
isometry property is satisfied for a given sufficient recovery condition on the
RIC, and propose a probabilistic framework to study both the symmetric and
asymmetric RICs. Then, we analyze the recovery of compressible signals in noise
through the statistical characterization of stability and robustness. The
presented framework determines limits on various sparse recovery algorithms for
finite size problems. In particular, it provides a tight lower bound on the
maximum sparsity order of the acquired data allowing signal recovery with a
given target probability. Also, we derive simple approximations for the RICs
based on the Tracy-Widom distribution.
Comment: 11 pages, 6 figures, accepted for publication in IEEE Transactions on Information Theory
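The eigenvalue viewpoint behind this analysis can be sketched empirically: for random size-s supports, the Gram matrices of the corresponding column submatrices are Wishart-type, and their extreme eigenvalues bound how far a Gaussian matrix deviates from an isometry on sparse vectors. This Monte Carlo sketch is an assumption-laden illustration, not the paper's exact finite-dimensional distributional analysis.

```python
import numpy as np

def empirical_rics(A, s, n_supports=500, seed=1):
    """Monte Carlo estimate of the asymmetric restricted isometry constants
    of A at sparsity s, via extreme eigenvalues of the s-by-s Gram matrices
    A_S^T A_S over random supports S (a lower bound on the true RICs, since
    only sampled supports are checked)."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    lam_min, lam_max = np.inf, -np.inf
    for _ in range(n_supports):
        S = rng.choice(n, size=s, replace=False)
        G = A[:, S].T @ A[:, S]
        w = np.linalg.eigvalsh(G)            # eigenvalues in ascending order
        lam_min = min(lam_min, w[0])
        lam_max = max(lam_max, w[-1])
    delta_lower = 1.0 - lam_min              # lower (asymmetric) RIC
    delta_upper = lam_max - 1.0              # upper (asymmetric) RIC
    return delta_lower, delta_upper, max(delta_lower, delta_upper)

rng = np.random.default_rng(0)
m, n, s = 80, 200, 4                         # illustrative problem sizes
A = rng.standard_normal((m, n)) / np.sqrt(m) # variance-1/m Gaussian ensemble
dl, du, d_sym = empirical_rics(A, s)
```

The symmetric RIC is the larger of the two one-sided deviations, which is why the paper's separate treatment of the lower and upper constants yields sharper recovery conditions.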
Compressed Sensing: How Sharp Is the Restricted Isometry Property?
Compressed sensing (CS) seeks to recover an unknown vector with N entries by making far fewer than N measurements; it posits that the number of CS measurements should be comparable to the information content of the vector, not simply N. CS combines directly the important task of compression with the measurement task. Since its introduction in 2004, there have been hundreds of papers on CS, a large fraction of which develop algorithms to recover a signal from its compressed measurements. Because of the paradoxical nature of CS (exact reconstruction from seemingly undersampled measurements), it is crucial for acceptance of an algorithm that rigorous analyses verify the degree of undersampling the algorithm permits. The restricted isometry property (RIP) has become the dominant tool used for the analysis in such cases. We present here an asymmetric form of RIP that gives tighter bounds than the usual symmetric one. We give the best known bounds on the RIP constants for matrices from the Gaussian ensemble. Our derivations illustrate the way in which the combinatorial nature of CS is controlled. Our quantitative bounds on the RIP allow precise statements as to how aggressively a signal can be undersampled, the essential question for practitioners. We also document the extent to which RIP gives precise information about the true performance limits of CS, by comparison with approaches from high-dimensional geometry. © 2011 Society for Industrial and Applied Mathematics
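A quick heuristic shows why the asymmetric form is tighter. For an m-by-s Gaussian submatrix with variance 1/m, the extreme eigenvalues of the Gram matrix concentrate near the Marchenko-Pastur bulk edges (1 ± sqrt(c))^2 with c = s/m, so the deviation above 1 exceeds the deviation below 1 by exactly 2c, and a single symmetric constant must absorb the larger upper deviation. The ratio c below is illustrative, not a value from the paper.

```python
import numpy as np

# Bulk-edge heuristic for a Gaussian submatrix: eigenvalues of the Gram
# matrix concentrate near (1 +/- sqrt(c))^2, c = s/m.
c = 0.1                                   # illustrative sparsity-to-measurement ratio
upper_dev = (1 + np.sqrt(c))**2 - 1       # deviation above 1: 2*sqrt(c) + c
lower_dev = 1 - (1 - np.sqrt(c))**2       # deviation below 1: 2*sqrt(c) - c
gap = upper_dev - lower_dev               # asymmetry gap: exactly 2*c
```

Because the symmetric RIP constant must be at least `upper_dev`, symmetric analyses over-penalize the well-behaved lower tail; treating the two deviations separately is precisely the refinement the paper quantifies.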
A new and improved quantitative recovery analysis for iterative hard thresholding algorithms in compressed sensing
We present a new recovery analysis for a standard compressed sensing algorithm, Iterative Hard Thresholding (IHT) (Blumensath and Davies, 2008), which considers the fixed points of the algorithm. In the context of arbitrary measurement matrices, we derive a sufficient condition for convergence of IHT to a fixed point and a necessary condition for the existence of fixed points. These conditions allow us to perform a sparse signal recovery analysis in the deterministic noiseless case, by implying that the original sparse signal is the unique fixed point and limit point of IHT, and in the case of Gaussian measurement matrices and noise, by generating a bound on the approximation error of the IHT limit as a multiple of the noise level. By generalizing the notion of fixed points, we extend our analysis to the variable-stepsize Normalised IHT (N-IHT) (Blumensath and Davies, 2010). For both stepsize schemes, we obtain asymptotic phase transitions in a proportional-dimensional framework, quantifying the sparsity/undersampling trade-off for which recovery is guaranteed. Exploiting the reasonable average-case assumption that the underlying signal and measurement matrix are independent, comparison with previous results within this framework shows a substantial quantitative improvement.
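The variable-stepsize scheme analyzed here can be sketched as follows: at each iteration the stepsize is set to the exact line-search value of the gradient restricted to the current support, before hard thresholding to the s largest magnitudes. This follows the Blumensath-Davies normalised scheme only in spirit; safeguards such as the stepsize backtracking check are omitted, and all names and sizes are illustrative assumptions.

```python
import numpy as np

def niht(A, y, s, T=300):
    """Sketch of Normalised IHT: hard thresholding to the s largest entries,
    with the stepsize chosen as the optimal (line-search) value for the
    gradient restricted to the current support."""
    n = A.shape[1]
    x = np.zeros(n)
    support = np.argsort(-np.abs(A.T @ y))[:s]   # initial support guess
    for _ in range(T):
        g = A.T @ (y - A @ x)                    # negative gradient
        gs = np.zeros(n)
        gs[support] = g[support]                 # gradient on current support
        Ag = A @ gs
        denom = Ag @ Ag
        mu = (gs @ gs) / denom if denom > 0 else 1.0   # normalised stepsize
        z = x + mu * g
        support = np.argsort(-np.abs(z))[:s]     # keep s largest magnitudes
        x = np.zeros(n)
        x[support] = z[support]
    return x

# illustrative noiseless recovery demo
rng = np.random.default_rng(0)
m, n, s = 50, 120, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[3, 40, 88, 110]] = [1.0, -0.5, 2.0, 0.75]
y = A @ x_true
x_hat = niht(A, y, s)
err = np.linalg.norm(x_hat - x_true)
```

In the fixed-point language of the abstract, the demo terminates when the iterate no longer changes; the analysis then shows that, under the stated conditions, this fixed point must be the original sparse signal.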