    On Modified l_1-Minimization Problems in Compressed Sensing

    Sparse signal modeling has received much attention recently because of its applications in medical imaging, group testing and radar technology, among others. Compressed sensing, a recently coined term, has shown us, both in theory and in practice, that various signals of interest which are sparse or approximately sparse can be efficiently recovered using far fewer samples than the Shannon sampling theorem suggests. Sparsity is the only prior information about an unknown signal assumed in traditional compressed sensing techniques, but in many applications other kinds of prior information are also available, such as partial knowledge of the support, a tree structure on the signal, or clustering of large coefficients around a small set of coefficients. In this thesis, we consider compressed sensing problems with prior information on the support of the signal, together with sparsity. We modify the regular l_1-minimization problems considered in compressed sensing using this extra information, and call the results modified l_1-minimization problems. We show that partial knowledge of the support allows us to weaken the sufficient conditions for the recovery of sparse signals via modified l_1-minimization problems. In the case of deterministic compressed sensing, we show that a sharp condition for sparse recovery can be improved using modified l_1-minimization problems. We also derive an algebraic necessary and sufficient condition for the modified basis pursuit problem, and use an open-source solver, the l_1-homotopy algorithm, to perform numerical experiments comparing the performance of modified Basis Pursuit Denoising with that of regular Basis Pursuit Denoising.
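
    As a concrete illustration of the modified l_1-minimization idea, the sketch below solves a modified basis pursuit problem in which the l_1 penalty is applied only outside a known part T of the support. It is a minimal example assuming the cvxpy library and synthetic data (A, x_true and T are placeholders chosen here); it is not the thesis's own l_1-homotopy implementation.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    m, n, k = 40, 100, 8                          # measurements, ambient dimension, sparsity
    A = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
    x_true = np.zeros(n)
    supp = rng.choice(n, size=k, replace=False)
    x_true[supp] = rng.standard_normal(k)
    y = A @ x_true

    T = supp[: k // 2]                            # assumed partial support knowledge
    w = np.ones(n)
    w[T] = 0.0                                    # no penalty on the known support

    x = cp.Variable(n)
    # Modified basis pursuit: minimize the l_1 norm of x restricted to the complement of T.
    prob = cp.Problem(cp.Minimize(cp.norm(cp.multiply(w, x), 1)), [A @ x == y])
    prob.solve()
    print("recovery error:", np.linalg.norm(x.value - x_true))
    # A modified Basis Pursuit Denoising variant would replace the equality
    # constraint with cp.norm(A @ x - y, 2) <= eps for a noise level eps.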

    Sparse and low rank signal recovery with partial knowledge

    In the first part of this work, we study the sparse recovery problem in the presence of bounded noise. We obtain performance guarantees for modified-CS, and for its improved version modified-CS-Add-LS-Del, for recursive reconstruction of a time sequence of sparse signals from a reduced set of noisy measurements available at each time. Under mild assumptions, we show that the support recovery error and the reconstruction error of both algorithms are bounded by a time-invariant and small value at all times. In the second part of this work, we study the batch sparse recovery problem in the presence of large, low-rank noise, which is also known as the problem of Robust Principal Components Analysis (RPCA). In recent work, RPCA has been posed as the problem of recovering a low-rank matrix L and a sparse matrix S from their sum M := L + S, and a provably exact convex optimization solution called PCP has been proposed. We study the following problem: assuming a partial estimate of the column space of the low-rank matrix L is available, we propose a simple but useful modification of the PCP idea, called modified-PCP, that allows us to use this knowledge. We derive a correctness result which shows that modified-PCP indeed requires significantly weaker incoherence assumptions than PCP when the available subspace knowledge is accurate. In the third part of this work, we study the online sparse recovery problem in the presence of low-rank noise and bounded noise, also known as the online RPCA problem. Here we study a more general version of this problem, where the goal is to recover the low-rank matrix L and the sparse matrix S from M := L + S + W, where W is a matrix of unstructured small noise. We develop and study a novel online RPCA algorithm based on the recently introduced Recursive Projected Compressive Sensing (ReProCS) framework. The key contribution is a correctness result for this algorithm under relatively mild assumptions.
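
    For reference, the sketch below sets up the standard PCP convex program mentioned above (recover a low-rank L and a sparse S from M = L + S) with cvxpy and synthetic data; the modified-PCP constraint that exploits the partial column-space estimate is only indicated in a comment, since its exact form is not spelled out in this abstract.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(1)
    n1, n2, r = 30, 30, 2
    L_true = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))   # low rank
    S_true = np.zeros((n1, n2))
    mask = rng.random((n1, n2)) < 0.05
    S_true[mask] = 5.0 * rng.standard_normal(mask.sum())                   # sparse outliers
    M = L_true + S_true

    lam = 1.0 / np.sqrt(max(n1, n2))    # standard PCP weight
    L = cp.Variable((n1, n2))
    S = cp.Variable((n1, n2))
    prob = cp.Problem(cp.Minimize(cp.normNuc(L) + lam * cp.sum(cp.abs(S))),
                      [L + S == M])
    prob.solve()
    print("low-rank error:", np.linalg.norm(L.value - L_true, "fro"))
    # Modified-PCP additionally uses a partial estimate of the column space of
    # L_true, which weakens the incoherence assumptions needed for exact recovery.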

    Sparsity optimization and RRSP-based theory for 1-bit compressive sensing

    Because only a few significant components capture the key information of a signal, acquiring a sparse representation of the signal can be interpreted as finding a sparsest solution to an underdetermined system of linear equations. Theoretical results obtained from studying the sparsest solution to a system of linear equations provide the foundation for many practical problems in signal and image processing, sampling theory, statistical and machine learning, and error correction. The first contribution of this thesis is the development of sufficient conditions for the uniqueness of solutions of partial ℓ0-minimization, in which only part of the solution is sparse; standard ℓ0-minimization is a special case of partial ℓ0-minimization. To study and develop uniqueness conditions for the partial sparsest solution, some concepts, such as the ℓp-induced quasi-norm, maximal scaled spark and maximal scaled mutual coherence, are introduced. The main contribution of this thesis is the development of a framework for 1-bit compressive sensing and of support recovery theories based on the restricted range space property. One-bit compressive sensing is an extreme case of compressive sensing. We show that such a 1-bit framework can be reformulated equivalently as an ℓ0-minimization problem with linear equality and inequality constraints. We establish a decoding method, called 1-bit basis pursuit, to attack this 1-bit ℓ0-minimization problem. Support recovery theories for 1-bit basis pursuit are developed through the restricted range space property of transposed sensing matrices. In the last part of this thesis, we study the numerical performance of 1-bit basis pursuit. We present simulation results demonstrating that 1-bit basis pursuit achieves support recovery, approximate sparse recovery and cardinality recovery with Gaussian matrices and Bernoulli matrices. Because each measurement provides only a single bit, it is not necessary to require that the sensing matrix be underdetermined. Furthermore, we introduce the truncated 1-bit measurement method and the reweighted 1-bit ℓ1-minimization method to further enhance the numerical performance of 1-bit basis pursuit.
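
    To make the measurement model concrete, the sketch below generates 1-bit (sign) measurements and decodes them with a sign-consistency linear program of the kind common in the 1-bit compressive sensing literature. It is illustrative only and is not claimed to be the thesis's 1-bit basis pursuit formulation; cvxpy, the normalization constraint and all data are assumptions made here.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(2)
    m, n, k = 200, 50, 5
    A = rng.standard_normal((m, n))
    x_true = np.zeros(n)
    x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
    x_true /= np.linalg.norm(x_true)      # sign measurements carry no scale information
    y = np.sign(A @ x_true)               # one bit per measurement

    x = cp.Variable(n)
    constraints = [cp.multiply(y, A @ x) >= 0,            # consistent with observed signs
                   cp.sum(cp.multiply(y, A @ x)) == m]    # fixes the otherwise-free scale
    prob = cp.Problem(cp.Minimize(cp.norm(x, 1)), constraints)
    prob.solve()
    x_hat = x.value / np.linalg.norm(x.value)
    print("direction error:", np.linalg.norm(x_hat - x_true))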

    On Partial Sparse Recovery

    We consider the problem of recovering a partially sparse solution of an underdetermined system of linear equations by minimizing the ℓ1-norm of the part of the solution vector which is known to be sparse. Such a problem is closely related to a classical problem in Compressed Sensing where the ℓ1-norm of the whole solution vector is minimized. We introduce analogues of the restricted isometry and null space properties for the recovery of partially sparse vectors and show that these new properties are implied by their original counterparts. We also show how to extend recovery under noisy measurements to the partially sparse case. Index Terms—Partial sparse recovery, compressed sensing, ℓ1-minimization, sparse quadratic polynomial interpolation.
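
    In the notation assumed here (the block splitting is not given explicitly in this abstract), the partial ℓ1-minimization described above can be written as

        \min_{u,\,v} \ \|v\|_1 \quad \text{subject to} \quad Bu + Cv = b,

    where the columns of A are partitioned as A = [B C], the unknown is x = (u, v), and only the block v is assumed to be sparse; the classical Compressed Sensing decoder corresponds to minimizing \|x\|_1 = \|u\|_1 + \|v\|_1 instead.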