
    Recovery Guarantees for Rank Aware Pursuits


    On the stable recovery of the sparsest overcomplete representations in presence of noise

    Let x be a signal to be sparsely decomposed over a redundant dictionary A, i.e., a sparse coefficient vector s has to be found such that x = As. It is known that this problem is inherently unstable against noise, and to overcome this instability, the authors of [Stable Recovery; Donoho et al., 2006] proposed to use an "approximate" decomposition, that is, a decomposition satisfying ||x - As|| < \delta rather than the exact equality x = As. They then showed that if there is a decomposition with ||s||_0 < (1+M^{-1})/2, where M denotes the coherence of the dictionary, this decomposition is stable against noise. On the other hand, it is known that a sparse decomposition with ||s||_0 < spark(A)/2 is unique. In other words, although a decomposition with ||s||_0 < spark(A)/2 is unique, its stability against noise had been proved only for the much more restrictive decompositions satisfying ||s||_0 < (1+M^{-1})/2, since usually (1+M^{-1})/2 << spark(A)/2. This limitation may not have been very important before, because ||s||_0 < (1+M^{-1})/2 is also the bound that guarantees the sparse decomposition can be found by minimizing the L1 norm, a classic approach for sparse decomposition. However, with the availability of new algorithms for sparse decomposition, namely SL0 and Robust-SL0, it is important to know whether or not unique sparse decompositions with (1+M^{-1})/2 < ||s||_0 < spark(A)/2 are stable. In this paper, we show that such decompositions are indeed stable. In other words, we extend the stability bound from ||s||_0 < (1+M^{-1})/2 to the whole uniqueness range ||s||_0 < spark(A)/2. In summary, we show that "all unique sparse decompositions are stably recoverable". Moreover, we see that sparser decompositions are "more stable". Comment: Accepted in IEEE Trans. on SP on 4 May 2010. (c) 2010 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
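
    As a rough illustration of the two bounds compared above, the sketch below (our own, not from the paper) computes the mutual coherence M of a small random unit-norm dictionary, the coherence-based stability bound (1 + M^{-1})/2, and the spark-based uniqueness bound spark(A)/2 by brute force. The dictionary size and the exhaustive spark search are assumptions chosen only to keep the example tiny.

        # Illustrative sketch: coherence-based vs. spark-based sparsity bounds
        # for a small random dictionary A with unit-norm columns.
        import itertools
        import numpy as np

        rng = np.random.default_rng(0)
        n, m = 6, 10                          # small overcomplete dictionary (n < m)
        A = rng.standard_normal((n, m))
        A /= np.linalg.norm(A, axis=0)        # unit-norm columns

        # Mutual coherence M: largest |<a_i, a_j>| over distinct columns.
        G = np.abs(A.T @ A)
        np.fill_diagonal(G, 0.0)
        M = G.max()

        # spark(A): size of the smallest linearly dependent set of columns
        # (exhaustive search; only feasible for tiny dictionaries).
        spark = m + 1
        for k in range(2, n + 2):
            if any(np.linalg.matrix_rank(A[:, list(idx)]) < k
                   for idx in itertools.combinations(range(m), k)):
                spark = k
                break

        print("coherence stability bound (1 + 1/M)/2 :", (1 + 1 / M) / 2)
        print("spark uniqueness bound spark(A)/2     :", spark / 2)

    For a generic Gaussian dictionary, spark(A) = n + 1, so the uniqueness bound is typically far larger than the coherence bound; that gap is exactly the range of sparsity levels whose stability the paper establishes.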

    Matrix Recipes for Hard Thresholding Methods

    In this paper, we present and analyze a new set of low-rank recovery algorithms for linear inverse problems within the class of hard thresholding methods. We provide strategies for setting up these algorithms from basic ingredients, in different configurations, to achieve complexity vs. accuracy tradeoffs. Moreover, we study acceleration schemes via memory-based techniques and randomized, Δ-approximate matrix projections to decrease the computational costs in the recovery process. For most of the configurations, we present theoretical analysis that guarantees convergence under mild problem conditions. Simulation results demonstrate notable performance improvements as compared to state-of-the-art algorithms both in terms of reconstruction accuracy and computational complexity. Comment: 26 pages
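
    Since the abstract gives no pseudocode, the following is only a minimal sketch of the basic matrix hard-thresholding iteration such methods build on: a gradient step on the data-fit term followed by a truncated-SVD projection onto rank-r matrices. The operator names, the exact line-search step size, and the toy Gaussian measurement setup are our assumptions, not the paper's configurations or acceleration schemes.

        # Minimal sketch (assumed, simplified): matrix iterative hard thresholding
        # for a low-rank linear inverse problem y = A(X).
        import numpy as np

        def rank_r_projection(Z, r):
            """Project Z onto rank-r matrices via a truncated SVD."""
            U, s, Vt = np.linalg.svd(Z, full_matrices=False)
            return (U[:, :r] * s[:r]) @ Vt[:r, :]

        def matrix_iht(y, A_op, At_op, shape, r, n_iter=300):
            """Gradient step with an exact line-search step size, then rank-r projection."""
            X = np.zeros(shape)
            for _ in range(n_iter):
                G = At_op(y - A_op(X))            # negative gradient of 0.5*||y - A(X)||^2
                mu = np.sum(G * G) / max(np.sum(A_op(G) ** 2), 1e-12)
                X = rank_r_projection(X + mu * G, r)
            return X

        # Toy usage: Gaussian measurements of a rank-2 ground-truth matrix.
        rng = np.random.default_rng(1)
        n1, n2, r, p = 20, 20, 2, 300
        X_true = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))
        Phi = rng.standard_normal((p, n1 * n2)) / np.sqrt(p)
        A_op = lambda X: Phi @ X.ravel()
        At_op = lambda res: (Phi.T @ res).reshape(n1, n2)
        y = A_op(X_true)
        X_hat = matrix_iht(y, A_op, At_op, (n1, n2), r)
        print("relative error:", np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true))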

    Relaxed Recovery Conditions for OMP/OLS by Exploiting both Coherence and Decay

    We propose extended coherence-based conditions for exact sparse support recovery using orthogonal matching pursuit (OMP) and orthogonal least squares (OLS). Unlike standard uniform guarantees, we embed some information about the decay of the sparse vector coefficients in our conditions. As a result, the standard condition ÎŒ < 1/(2k-1) (where ÎŒ denotes the mutual coherence and k the sparsity level) can be weakened as soon as the non-zero coefficients obey some decay, both in the noiseless and the bounded-noise scenarios. Furthermore, the resulting condition approaches ÎŒ < 1/k for strongly decaying sparse signals. Finally, in the noiseless setting, we prove that the proposed conditions, in particular the bound ÎŒ < 1/k, are the tightest achievable guarantees based on mutual coherence.
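
    For reference, here is a minimal textbook implementation of OMP (our own sketch; it shows the algorithm being analysed, not the relaxed conditions themselves), run on a strongly decaying 3-sparse toy vector of the kind favoured by the conditions above. OLS differs only in that the next atom is chosen to minimise the residual after the least-squares refit rather than by maximal correlation.

        # Minimal sketch (assumed): standard orthogonal matching pursuit for x = A s.
        import numpy as np

        def omp(A, x, k):
            """Greedy OMP: pick the most correlated atom, then least-squares refit."""
            residual = x.copy()
            support = []
            for _ in range(k):
                j = int(np.argmax(np.abs(A.T @ residual)))
                if j not in support:
                    support.append(j)
                coeffs, *_ = np.linalg.lstsq(A[:, support], x, rcond=None)
                residual = x - A[:, support] @ coeffs
            s = np.zeros(A.shape[1])
            s[support] = coeffs
            return s

        # Toy usage: decaying non-zero coefficients (4.0 >> 1.0 >> 0.25).
        rng = np.random.default_rng(2)
        A = rng.standard_normal((30, 60))
        A /= np.linalg.norm(A, axis=0)            # unit-norm atoms
        s_true = np.zeros(60)
        s_true[[5, 17, 42]] = [4.0, 1.0, 0.25]
        x = A @ s_true
        print("recovered support:", np.nonzero(omp(A, x, 3))[0])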

    Constrained Overcomplete Analysis Operator Learning for Cosparse Signal Modelling

    We consider the problem of learning a low-dimensional signal model from a collection of training samples. The mainstream approach would be to learn an overcomplete dictionary that provides good approximations of the training samples using sparse synthesis coefficients. This well-known sparse model has a less familiar counterpart in analysis form, called the cosparse analysis model. In this model, signals are characterised by their parsimony in a transformed domain, using an overcomplete (linear) analysis operator. We propose to learn an analysis operator from a training corpus using a constrained optimisation framework based on L1 optimisation. The reason for introducing a constraint in the optimisation framework is to exclude trivial solutions. Although there is no definitive answer as to which constraint is the most relevant, we investigate some conventional constraints from the model adaptation field and use the uniformly normalised tight frame (UNTF) constraint for this purpose. We then derive a practical learning algorithm, based on projected subgradients and the Douglas-Rachford splitting technique, and demonstrate its ability to robustly recover a ground-truth analysis operator when provided with a clean training set of sufficient size. We also learn an analysis operator for images using noisy cosparse signals, which is a more realistic experiment. As the derived optimisation problem is not a convex program, such variational methods often find only a local minimum. Some local optimality conditions are derived for two different settings, providing preliminary theoretical support for the well-posedness of the learning problem under appropriate conditions. Comment: 29 pages, 13 figures, accepted to be published in TS
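
    To make the constrained formulation concrete, the sketch below (our own, heavily simplified; not the paper's algorithm) takes a subgradient step on the ℓ1 cosparsity objective ||Ω X||_1 and then applies a heuristic alternating projection towards the UNTF constraint set (unit-norm rows plus a scaled tight-frame polar factor). Problem sizes, step sizes, and the alternation count are illustrative assumptions; the paper handles the constraint and the ℓ1 term more carefully via projected subgradients and Douglas-Rachford splitting.

        # Heuristic sketch (assumed): constrained analysis operator learning,
        # min_Omega ||Omega X||_1 with Omega kept (approximately) a UNTF.
        import numpy as np

        def project_untf(Omega, n_alt=10):
            """Alternate unit-norm rows and the scaled tight-frame polar factor."""
            p, d = Omega.shape
            for _ in range(n_alt):
                Omega = Omega / np.linalg.norm(Omega, axis=1, keepdims=True)   # unit rows
                U, _, Vt = np.linalg.svd(Omega, full_matrices=False)
                Omega = np.sqrt(p / d) * U @ Vt                                # Omega^T Omega = (p/d) I
            return Omega

        def learning_step(Omega, X, lr=1e-3):
            """Projected subgradient step; sign(Omega X) X^T is a subgradient of ||Omega X||_1."""
            return project_untf(Omega - lr * np.sign(Omega @ X) @ X.T)

        # Toy usage: random training data and a random initial operator.
        rng = np.random.default_rng(3)
        d, p, n = 8, 12, 500
        X = rng.standard_normal((d, n))
        Omega = project_untf(rng.standard_normal((p, d)))
        for _ in range(50):
            Omega = learning_step(Omega, X)
        print("cosparsity objective ||Omega X||_1:", np.abs(Omega @ X).sum())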

    Greedy-Like Algorithms for the Cosparse Analysis Model

    The cosparse analysis model has been introduced recently as an interesting alternative to the standard sparse synthesis approach. A prominent question brought up by this new construction is the analysis pursuit problem -- the need to find a signal belonging to this model, given a set of corrupted measurements of it. Several pursuit methods have already been proposed based on ℓ1 relaxation and a greedy approach. In this work we pursue this question further, and propose a new family of pursuit algorithms for the cosparse analysis model, mimicking the greedy-like methods -- compressive sampling matching pursuit (CoSaMP), subspace pursuit (SP), iterative hard thresholding (IHT) and hard thresholding pursuit (HTP). Assuming the availability of a near optimal projection scheme that finds the nearest cosparse subspace to any vector, we provide performance guarantees for these algorithms. Our theoretical study relies on a restricted isometry property adapted to the context of the cosparse analysis model. We explore empirically the performance of these algorithms by adopting a plain thresholding projection, demonstrating their good performance.
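
    The sketch below (our own, simplified) shows the common structure of such algorithms in their simplest IHT-like form, using the plain thresholding projection mentioned above: the cosupport is taken as the ℓ rows of Ω with smallest |Ωx|, and the estimate is projected onto their null space. The problem sizes and the fixed step size are illustrative assumptions, and the toy run is meant to show the iteration, not a tuned recovery experiment.

        # Minimal sketch (assumed, simplified): IHT-style analysis pursuit with a
        # plain-thresholding cosupport selection, for measurements y = M x.
        import numpy as np

        def cosparse_projection(z, Omega, ell):
            """Project z onto the null space of the ell rows of Omega with smallest |Omega z|."""
            cosupport = np.argsort(np.abs(Omega @ z))[:ell]     # plain thresholding
            B = Omega[cosupport, :]                             # rows z should be orthogonal to
            return z - np.linalg.pinv(B) @ (B @ z)              # apply I - B^+ B

        def analysis_iht(y, M, Omega, ell, step=0.15, n_iter=200):
            """Gradient step on ||y - M x||^2 followed by the cosparse projection."""
            x = np.zeros(M.shape[1])
            for _ in range(n_iter):
                x = cosparse_projection(x + step * M.T @ (y - M @ x), Omega, ell)
            return x

        # Toy usage: a signal that is exactly cosparse with respect to a random Omega.
        rng = np.random.default_rng(4)
        d, p, m, ell = 50, 70, 25, 45
        Omega = rng.standard_normal((p, d))
        Omega /= np.linalg.norm(Omega, axis=1, keepdims=True)
        cosupport = rng.choice(p, size=ell, replace=False)
        _, _, Vt = np.linalg.svd(Omega[cosupport, :])           # x_true lies in the null space of these rows
        x_true = Vt[ell:, :].T @ rng.standard_normal(d - ell)
        M = rng.standard_normal((m, d)) / np.sqrt(m)
        y = M @ x_true
        x_hat = analysis_iht(y, M, Omega, ell)
        print("data-fit residual:", np.linalg.norm(y - M @ x_hat))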