
    Numerical Studies of the Generalized l₁ Greedy Algorithm for Sparse Signals

    The generalized l1 greedy algorithm was recently introduced and used to reconstruct medical images in computerized tomography in the compressed sensing framework via total variation minimization. Experimental results showed that this algorithm is superior to the reweighted l1-minimization and l1 greedy algorithms in reconstructing these medical images. In this paper, the effectiveness of the generalized l1 greedy algorithm in finding random sparse signals from underdetermined linear systems is investigated. A series of numerical experiments demonstrates that the generalized l1 greedy algorithm is superior to the reweighted l1-minimization and l1 greedy algorithms in the successful recovery of randomly generated Gaussian sparse signals from data generated by Gaussian random matrices. In particular, the generalized l1 greedy algorithm performs extraordinarily well in recovering random sparse signals with small nonzero entries. The stability of the generalized l1 greedy algorithm with respect to its parameters and the impact of noise on the recovery of Gaussian sparse signals are also studied.
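    The experimental setup described in this abstract (Gaussian random measurement matrix, randomly generated Gaussian sparse signal) is straightforward to reproduce. The sketch below is a minimal illustration of the reweighted l1-minimization baseline that the abstract compares against, with each weighted l1 subproblem solved as a linear program; it is not the generalized l1 greedy algorithm itself, and the weight update 1/(|x_i| + eps), the number of reweighting iterations, and the problem dimensions are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_min(A, y, w):
    """Solve min sum_i w_i |x_i| s.t. A x = y via the standard LP
    reformulation with auxiliary variables t >= |x|."""
    n, N = A.shape
    c = np.concatenate([np.zeros(N), w])            # objective acts on t only
    I = np.eye(N)
    A_ub = np.block([[I, -I], [-I, -I]])            # x - t <= 0 and -x - t <= 0
    b_ub = np.zeros(2 * N)
    A_eq = np.hstack([A, np.zeros((n, N))])         # A x = y
    bounds = [(None, None)] * N + [(0, None)] * N   # x free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=bounds, method="highs")
    return res.x[:N]

def reweighted_l1(A, y, iters=4, eps=0.1):
    """Iteratively reweighted l1 minimization (a baseline mentioned in the
    abstract, not the generalized l1 greedy algorithm): weights are updated
    as 1 / (|x_i| + eps) after each solve; iters and eps are illustrative."""
    x = weighted_l1_min(A, y, np.ones(A.shape[1]))
    for _ in range(iters - 1):
        x = weighted_l1_min(A, y, 1.0 / (np.abs(x) + eps))
    return x

# Setup as in the abstract: Gaussian random matrix, Gaussian sparse signal.
rng = np.random.default_rng(0)
n, N, k = 40, 100, 8
A = rng.standard_normal((n, N)) / np.sqrt(n)
x_true = np.zeros(N)
x_true[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = reweighted_l1(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```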

    A Simple Gaussian Measurement Bound for Exact Recovery of Block-Sparse Signals

    We present a probabilistic analysis of conditions for the exact recovery of block-sparse signals whose nonzero elements appear in fixed blocks. We mainly derive a simple lower bound on the necessary number of Gaussian measurements for exact recovery of such block-sparse signals via the mixed l2/lq (0 < q ≤ 1) norm minimization method. In addition, we present numerical examples to partially support the correctness of the theoretical results. The obtained results extend those known for the standard lq minimization and the mixed l2/l1 minimization methods to the mixed l2/lq (0 < q ≤ 1) minimization method in the context of block-sparse signal recovery.
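    To fix notation, the sketch below evaluates the mixed l2/lq objective on a block-sparse signal of the kind this abstract considers (nonzero entries confined to a few fixed, equal-size blocks) and generates the corresponding Gaussian measurements. The minimization itself, which is nonconvex for q < 1, is not implemented here; the block size, sparsity level, and dimensions are arbitrary illustrative values.

```python
import numpy as np

def mixed_l2_lq(x, block_size, q=1.0):
    """Mixed l2/lq (quasi-)norm of a vector partitioned into fixed,
    equal-size blocks: ( sum_i ||x_{B_i}||_2^q )^(1/q).
    For 0 < q < 1 this is a quasi-norm rather than a norm."""
    blocks = np.asarray(x).reshape(-1, block_size)  # assumes len(x) % block_size == 0
    return np.sum(np.linalg.norm(blocks, axis=1) ** q) ** (1.0 / q)

# Block-sparse signal: nonzeros confined to a few fixed blocks (illustrative sizes).
rng = np.random.default_rng(0)
N, d, k = 120, 4, 5                                 # length, block size, active blocks
x = np.zeros(N)
for b in rng.choice(N // d, size=k, replace=False):
    x[b * d:(b + 1) * d] = rng.standard_normal(d)

n = 60                                              # number of Gaussian measurements
A = rng.standard_normal((n, N)) / np.sqrt(n)
y = A @ x

print("l2/l1 value  :", mixed_l2_lq(x, d, q=1.0))
print("l2/l0.5 value:", mixed_l2_lq(x, d, q=0.5))
```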

    Sparsity-based Recovery of Finite Alphabet Solutions to Underdetermined Linear Systems

    We consider the problem of estimating a deterministic finite alphabet vector f from underdetermined measurements y = Af, where A is a given (random) n × N matrix. Two new convex optimization methods are introduced for the recovery of finite alphabet signals via l1-norm minimization. The first method is based on regularization. In the second approach, the problem is formulated as the recovery of a sparse signal after a suitable sparse transform. The regularization-based method is less complex than the transform-based one. When the alphabet size p equals 2 and (n, N) grows proportionally, the conditions under which the signal will be recovered with high probability are the same for the two methods. When p > 2, the behavior of the transform-based method is established. Experimental results support these theoretical results and show that the transform-based method outperforms the regularization-based one.
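    The transform-based approach is described only at a high level in the abstract. As a hypothetical illustration of the general lifting idea (each entry of f is represented by a one-hot indicator over the p alphabet symbols, which makes the lifted vector sparse and lets y = Af be rewritten as an underdetermined system in that sparse vector), the sketch below recovers a binary signal with an l1-style linear program. The lifting, the constraints, the decoding rule, and the dimensions are assumptions for illustration and need not match the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

def lift_recover(A, y, alphabet):
    """Hypothetical 'sparse transform' sketch for finite-alphabet recovery:
    each entry f_i is lifted to a one-hot indicator over the p symbols, so the
    lifted vector x (length N*p) is N-sparse and nonnegative.  Then
    f = (I_N kron a^T) x and y = A f = B x, and x is estimated by
    min sum(x) s.t. B x = y, x >= 0 (an l1 objective, since x >= 0)."""
    n, N = A.shape
    a = np.asarray(alphabet, dtype=float)
    p = a.size
    B = A @ np.kron(np.eye(N), a.reshape(1, p))     # shape (n, N*p)
    c = np.ones(N * p)                              # ||x||_1 = sum(x) for x >= 0
    res = linprog(c, A_eq=B, b_eq=y,
                  bounds=[(0, None)] * (N * p), method="highs")
    x = res.x.reshape(N, p)
    return a[np.argmax(x, axis=1)]                  # decode each block to a symbol

# Toy example: binary alphabet {-1, +1}, Gaussian measurements (illustrative sizes).
rng = np.random.default_rng(1)
n, N = 60, 80
alphabet = [-1.0, 1.0]
f_true = rng.choice(alphabet, size=N)
A = rng.standard_normal((n, N)) / np.sqrt(n)
y = A @ f_true
f_hat = lift_recover(A, y, alphabet)
print("symbol errors:", int(np.sum(f_hat != f_true)))
```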