
    On the Effective Measure of Dimension in the Analysis Cosparse Model

    Many applications have benefited remarkably from low-dimensional models in the past decade. The fact that many signals, though high dimensional, are intrinsically low dimensional has made it possible to recover them stably from a relatively small number of measurements. For example, in compressed sensing with the standard (synthesis) sparsity prior and in matrix completion, the number of measurements needed is proportional (up to a logarithmic factor) to the signal's manifold dimension. Recently, a new natural low-dimensional signal model has been proposed: the cosparse analysis prior. In the noiseless case, it is possible to recover signals from this model, using a combinatorial search, from a number of measurements proportional to the signal's manifold dimension. However, if we ask for stability to noise or an efficient (polynomial-complexity) solver, all existing results demand a number of measurements far removed from the manifold dimension, sometimes far greater. It is therefore natural to ask whether this gap is a deficiency of the theory and the solvers, or whether there is a real barrier to recovering cosparse signals by relying only on their manifold dimension. Is there an algorithm which, in the presence of noise, can accurately recover a cosparse signal from a number of measurements proportional to the manifold dimension? In this work, we prove that there is no such algorithm. Further, we show through numerical simulations that even in the noiseless case convex relaxations fail when the number of measurements is comparable to the manifold dimension. This gives a practical counter-example to the growing literature on compressed acquisition of signals based on manifold dimension. (19 pages, 6 figures)
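    As a rough illustration of the cosparse model described above, the NumPy sketch below builds a cosparse signal and reads off its manifold dimension. Everything here (the random analysis operator Omega, the dimensions) is an assumption for illustration, not the paper's construction: a signal orthogonal to ell rows of an operator in general position lives in a subspace of dimension d - ell.

```python
import numpy as np

rng = np.random.default_rng(0)
d, p, ell = 50, 70, 40      # signal dimension, analysis rows, cosparsity

# Random analysis operator Omega (p x d), assumed to be in general position.
Omega = rng.standard_normal((p, d))
Lambda = rng.choice(p, size=ell, replace=False)   # cosupport rows

# Build x orthogonal to the cosupport rows: Omega[Lambda] @ x = 0.
_, _, Vt = np.linalg.svd(Omega[Lambda])
rank = np.linalg.matrix_rank(Omega[Lambda])       # = ell in general position
null_basis = Vt[rank:].T                          # d x (d - rank)
x = null_basis @ rng.standard_normal(null_basis.shape[1])

# The signal lives on a union of subspaces of dimension d - ell = 10,
# its "manifold dimension", even though x itself has d = 50 entries.
print("manifold dimension:", null_basis.shape[1])
print("zeros in Omega @ x:", np.sum(np.abs(Omega @ x) < 1e-10))  # ell = 40
```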

    Sampling in the Analysis Transform Domain

    Many signal and image processing applications have benefited remarkably from the fact that the underlying signals reside in a low-dimensional subspace. One of the main models for such low dimensionality is sparsity. Within this framework there are two main options for sparse modeling: the synthesis approach and the analysis approach. The synthesis approach is considered the standard paradigm and has received far more research attention; in it, the signals are assumed to have a sparse representation under a given dictionary. In the analysis approach, by contrast, sparsity is measured in the coefficients of the signal after a certain transformation, the analysis dictionary, has been applied to it. Though several algorithms with some theory have been developed for this framework, they are outnumbered by those proposed for the synthesis methodology. Assuming that the analysis dictionary is either a frame or the two-dimensional finite difference operator, we propose a new sampling scheme for signals from the analysis model that allows them to be recovered from their samples using any existing algorithm from the synthesis model. The advantage of this new sampling strategy is that it makes the existing synthesis methods, together with their theory, available for signals from the analysis framework as well. (13 pages, 2 figures)
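    As a quick illustration of analysis sparsity, the sketch below uses the one-dimensional finite difference operator (a simplification of the two-dimensional operator the paper treats; all dimensions are arbitrary): a piecewise-constant signal is dense in the standard basis, yet its analysis coefficients are extremely sparse.

```python
import numpy as np

d = 100
# 1D finite difference operator as the analysis dictionary:
# (Omega @ x)[i] = x[i+1] - x[i].
Omega = np.eye(d - 1, d, k=1) - np.eye(d - 1, d)   # (d-1) x d

# A piecewise-constant signal: dense in the standard basis ...
x = np.concatenate([np.full(40, 2.0), np.full(35, -1.0), np.full(25, 0.5)])

# ... but its analysis coefficients Omega @ x are very sparse:
# nonzeros appear only at the two jump locations.
alpha = Omega @ x
print("nonzeros in x:        ", np.count_nonzero(x))        # 100
print("nonzeros in Omega @ x:", np.count_nonzero(alpha))    # 2
```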

    Online performance guarantees for sparse recovery

    A K*-sparse vector x* ∈ R^N produces measurements via linear dimensionality reduction as u = Φx* + n, where Φ ∈ R^{M×N} (M < N) and n ∈ R^M consists of independent and identically distributed, zero-mean Gaussian entries with variance σ². An algorithm, after its execution, determines a vector x̃ that has K nonzero entries and satisfies ||u − Φx̃|| ≤ ϵ. How far can x̃ be from x*? When the measurement matrix Φ provides a stable embedding of 2K-sparse signals (the so-called restricted isometry property), they must be very close. This paper therefore establishes worst-case bounds that characterize the distance ||x̃ − x*|| based on this online meta-information. These bounds improve on the pre-run algorithmic recovery guarantees and are quite useful in exploring various trade-offs between data error and solution sparsity. We also evaluate the performance of some sparse recovery algorithms in the context of our bounds.
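    To make the flavor of such a bound concrete: when K* ≤ K, the difference x̃ − x* is supported on T = supp(x̃) ∪ supp(x*) with |T| ≤ 2K, so ||x̃ − x*|| ≤ ||Φ(x̃ − x*)|| / σ_min(Φ_T) ≤ (ϵ + ||n||) / σ_min(Φ_T), where Φ_T is the column submatrix on T. The sketch below checks this simplified bound numerically; it is an illustration in the spirit of the paper's bounds, not the paper's exact result, and the stand-in "algorithm" (least squares on the true support) is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 400, 100, 10
sigma = 0.05

# Ground truth and measurements: u = Phi @ x_star + n.
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # columns ~ unit norm
x_star = np.zeros(N)
S = rng.choice(N, K, replace=False)
x_star[S] = rng.standard_normal(K)
n = sigma * rng.standard_normal(M)
u = Phi @ x_star + n

# Stand-in "algorithm output": least squares on the true support
# (illustrative only, not a real sparse solver).
x_tilde = np.zeros(N)
sol, *_ = np.linalg.lstsq(Phi[:, S], u, rcond=None)
x_tilde[S] = sol
eps = np.linalg.norm(u - Phi @ x_tilde)          # data error ||u - Phi x~||

# x_tilde - x_star is supported on T, so the smallest singular value of
# Phi restricted to T controls the distance:
# ||x_tilde - x_star|| <= (eps + ||n||) / sigma_min(Phi_T).
T = np.union1d(np.flatnonzero(x_tilde), S)
s_min = np.linalg.svd(Phi[:, T], compute_uv=False).min()
bound = (eps + np.linalg.norm(n)) / s_min

print("actual error:", np.linalg.norm(x_tilde - x_star))
print("bound       :", bound)
```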

    Oracle-order Recovery Performance of Greedy Pursuits with Replacement against General Perturbations

    Applying the theory of compressive sensing in practice always requires taking various kinds of perturbations into consideration. In this paper, the recovery performance of greedy pursuits with replacement for sparse recovery is analyzed when both the measurement vector and the sensing matrix are contaminated with additive perturbations. Specifically, greedy pursuits with replacement comprise three algorithms, compressive sampling matching pursuit (CoSaMP), subspace pursuit (SP), and iterative hard thresholding (IHT), in which the support estimate is re-evaluated and updated in each iteration. Based on the restricted isometry property, a unified form of the error bounds of these recovery algorithms is derived under general perturbations for compressible signals. The results reveal that the recovery performance is stable against both kinds of perturbation. In addition, these bounds are compared with that of oracle recovery: the least squares solution computed with the locations of the largest entries in magnitude known a priori. The comparison shows that, for certain signals and perturbations, the error bounds of these algorithms differ only in their coefficients from the lower bound of oracle recovery, which reveals that oracle-order recovery performance of greedy pursuits with replacement is guaranteed. Numerical simulations are performed to verify the conclusions. (27 pages, 4 figures, 5 tables)
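    Of the three algorithms, IHT is the easiest to state, and the sketch below shows the "with replacement" structure: the support estimate is recomputed from scratch at every iteration. This is a minimal textbook-style implementation with an assumed conservative step size, not the paper's perturbation analysis; all problem dimensions are arbitrary.

```python
import numpy as np

def iht(Phi, u, K, n_iter=500, mu=None):
    """Iterative hard thresholding (IHT): a gradient step on
    ||u - Phi x||^2 followed by keeping the K largest entries in
    magnitude, so the support estimate is replaced at every iteration."""
    if mu is None:
        mu = 1.0 / np.linalg.norm(Phi, ord=2) ** 2   # conservative step size
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        g = x + mu * Phi.T @ (u - Phi @ x)           # gradient step
        x = np.zeros_like(g)
        keep = np.argpartition(np.abs(g), -K)[-K:]   # K largest entries
        x[keep] = g[keep]                            # hard threshold
    return x

# Illustrative use with an additive perturbation on the measurement
# vector (dimensions, sparsity, and noise level are arbitrary choices):
rng = np.random.default_rng(3)
M, N, K = 100, 256, 5
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
u = Phi @ x_true + 0.01 * rng.standard_normal(M)
x_hat = iht(Phi, u, K)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```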

    Near Oracle Performance and Block Analysis of Signal Space Greedy Methods

    Compressive sampling (CoSa) is a new methodology which demonstrates that sparse signals can be recovered from a small number of linear measurements. Greedy algorithms such as CoSaMP have been designed for this recovery, and variants of these methods have been adapted to the case where sparsity is with respect to some arbitrary dictionary rather than an orthonormal basis. In this work we present an analysis of the so-called Signal Space CoSaMP method when the measurements are corrupted with mean-zero white Gaussian noise. We establish near-oracle performance for the recovery of signals sparse in some arbitrary dictionary. In addition, we analyze the block variant of the method for signals whose support obeys a block structure, extending the method into the model-based compressed sensing framework. Numerical experiments confirm that the block method significantly outperforms the standard method in these settings.
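    A minimal sketch of the block structure the block variant exploits (this illustrates block hard thresholding only, not the Signal Space CoSaMP algorithm itself; block size, dimensions, and noise level are arbitrary assumptions): selecting whole blocks by their l2 energy typically denoises block-sparse coefficients better than entrywise thresholding with the same total budget.

```python
import numpy as np

def block_hard_threshold(alpha, block_size, n_blocks_keep):
    """Keep the coefficients of the n_blocks_keep blocks with the largest
    l2 norms and zero out the rest; this block-level selection is what
    distinguishes the block variant from entrywise thresholding."""
    blocks = alpha.reshape(-1, block_size)
    energies = np.linalg.norm(blocks, axis=1)
    keep = np.argsort(energies)[-n_blocks_keep:]
    out = np.zeros_like(blocks)
    out[keep] = blocks[keep]
    return out.ravel()

# Illustrative comparison on a noisy block-sparse coefficient vector:
rng = np.random.default_rng(4)
alpha = np.zeros(60)
alpha[12:16] = rng.standard_normal(4)    # two active blocks of size 4
alpha[40:44] = rng.standard_normal(4)
noisy = alpha + 0.3 * rng.standard_normal(60)

block_est = block_hard_threshold(noisy, block_size=4, n_blocks_keep=2)
# Entrywise baseline: keep the 8 individually largest entries.
entry_est = np.where(np.abs(noisy) >= np.sort(np.abs(noisy))[-8], noisy, 0.0)
print("block err:", np.linalg.norm(block_est - alpha))
print("entry err:", np.linalg.norm(entry_est - alpha))
```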