911 research outputs found
Analyzing Weighted ℓ_1 Minimization for Sparse Recovery With Nonuniform Sparse Models
In this paper, we introduce a nonuniform sparsity model and analyze the performance of an optimized weighted ℓ_1 minimization over that sparsity model. In particular, we focus on a model where the entries of the unknown vector fall into two sets, with entries of each set having a specific probability of being nonzero. We propose a weighted ℓ_1 minimization recovery algorithm and analyze its performance using a Grassmann angle approach. We compute explicitly the relationship between the system parameters (the weights, the number of measurements, the size of the two sets, and the probabilities of being nonzero) so that when i.i.d. random Gaussian measurement matrices are used, the weighted ℓ_1 minimization recovers a randomly selected signal drawn from the considered sparsity model with overwhelming probability as the problem dimension increases. This allows us to compute the optimal weights. We demonstrate through rigorous analysis and simulations that for the case when the support of the signal can be divided into two different subclasses with unequal sparsity fractions, the weighted ℓ_1 minimization outperforms the regular ℓ_1 minimization substantially. We also generalize our results to signal vectors with an arbitrary number of subclasses for sparsity.
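The weighted ℓ_1 program described in the abstract above can be posed as a linear program. The following is a minimal sketch, not the authors' implementation: it solves min Σ_i w_i |x_i| subject to Ax = b via the standard split x = p − q using scipy.optimize.linprog, and the two-set weighting at the end is an illustrative choice rather than the optimal weights the paper computes.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_min(A, b, w):
    """Solve min_x sum_i w_i |x_i| subject to A x = b.

    Standard LP reformulation: write x = p - q with p, q >= 0, so the
    weighted l1 objective becomes linear in (p, q).
    """
    m, n = A.shape
    c = np.concatenate([w, w])          # objective weights on p and q
    A_eq = np.hstack([A, -A])           # encodes A (p - q) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    p, q = res.x[:n], res.x[n:]
    return p - q

# Illustrative two-set nonuniform model: the first 20 entries are the
# "likely nonzero" set, so the remaining entries get a larger weight.
rng = np.random.default_rng(1)
n, m = 40, 24
likely_set = np.arange(20)
x0 = np.zeros(n)
x0[rng.choice(likely_set, 8, replace=False)] = rng.standard_normal(8)
A = rng.standard_normal((m, n))
b = A @ x0
w = np.ones(n)
w[20:] = 3.0                            # heavier penalty outside the likely support
x_hat = weighted_l1_min(A, b, w)
```

The weight 3.0 here is arbitrary; the paper's contribution is precisely how to choose such weights optimally from the set sizes and nonzero probabilities.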
Weighted ℓ_1-minimization for generalized non-uniform sparse model
Model-based compressed sensing refers to compressed sensing with extra structure about the underlying sparse signal known a priori. Recent work has demonstrated that, for both deterministic and probabilistic models imposed on the signal, this extra information can be successfully exploited to enhance recovery performance. In particular, weighted ℓ_1-minimization with a suitable choice of weights has been shown to improve performance in the so-called non-uniform sparse model of signals. In this paper, we consider a full generalization of the non-uniform sparse model with very mild assumptions. We prove that when the measurements are obtained using a matrix with i.i.d. Gaussian entries, weighted ℓ_1-minimization successfully recovers the sparse signal from its measurements with overwhelming probability. We also provide a method to choose these weights for any general signal model from the non-uniform sparse class of signal models.
Numerical Studies of the Generalized l1 Greedy Algorithm for Sparse Signals
The generalized l1 greedy algorithm was recently introduced and used to reconstruct medical images in computerized tomography in the compressed sensing framework via total variation minimization. Experimental results showed that this algorithm is superior to the reweighted l1-minimization and l1 greedy algorithms in reconstructing these medical images. In this paper the effectiveness of the generalized l1 greedy algorithm in recovering random sparse signals from underdetermined linear systems is investigated. A series of numerical experiments demonstrates that the generalized l1 greedy algorithm is superior to the reweighted l1-minimization and l1 greedy algorithms in the successful recovery of randomly generated Gaussian sparse signals from data generated by Gaussian random matrices. In particular, the generalized l1 greedy algorithm performs extraordinarily well in recovering random sparse signals with small nonzero entries. The stability of the generalized l1 greedy algorithm with respect to its parameters and the impact of noise on the recovery of Gaussian sparse signals are also studied.
Improving the Thresholds of Sparse Recovery: An Analysis of a Two-Step Reweighted Basis Pursuit Algorithm
It is well known that ℓ_1 minimization can be used to recover sufficiently sparse unknown signals from compressed linear measurements. Exact thresholds on the sparsity, as a function of the ratio between the system dimensions, so that with high probability almost all sparse signals can be recovered from independent identically distributed (i.i.d.) Gaussian measurements, have been computed and are referred to as weak thresholds. In this paper, we introduce a reweighted ℓ_1 recovery algorithm composed of two steps: 1) a standard ℓ_1 minimization step to identify a set of entries where the signal is likely to reside and 2) a weighted ℓ_1 minimization step where entries outside this set are penalized. For signals whose non-sparse component entries are independent and identically drawn from certain classes of distributions (including most well-known continuous distributions), we prove a strict improvement in the weak recovery threshold. Our analysis suggests that the level of improvement in the weak threshold depends on the behavior of the distribution at the origin. Numerical simulations verify the distribution dependence of the threshold improvement very well, and suggest that in the case of i.i.d. Gaussian nonzero entries, the improvement can be quite impressive: over 20% in the example we consider.
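The two-step procedure in the abstract above can be sketched compactly: step 1 runs plain ℓ_1 minimization (basis pursuit) to estimate a likely support, and step 2 re-solves with entries outside that estimate penalized more heavily. This is an illustrative sketch, not the paper's code; the threshold tau and weight W below are arbitrary choices, and each ℓ_1 problem is solved as a linear program.

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(A, b, w):
    """Weighted l1 minimization min sum_i w_i |x_i| s.t. Ax = b, as an LP (x = p - q)."""
    n = A.shape[1]
    res = linprog(np.concatenate([w, w]), A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:]

def two_step_reweighted(A, b, tau=1e-3, W=4.0):
    """Two-step reweighted l1; tau and W are illustrative, not tuned values."""
    n = A.shape[1]
    x1 = l1_min(A, b, np.ones(n))            # step 1: standard l1 (basis pursuit)
    w = np.where(np.abs(x1) > tau, 1.0, W)   # step 2 weights: penalize entries
                                             # outside the estimated support
    return l1_min(A, b, w)

# Small demonstration on a random sparse instance.
rng = np.random.default_rng(0)
n, m = 30, 18
x0 = np.zeros(n)
x0[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
A = rng.standard_normal((m, n))
b = A @ x0
x_hat = two_step_reweighted(A, b)
```

The design choice mirrors the abstract: entries flagged as likely support in step 1 keep weight 1, while the rest receive the larger weight W, which biases the second solve toward the estimated support.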