ℓ1-Analysis Minimization and Generalized (Co-)Sparsity: When Does Recovery Succeed?
This paper investigates the problem of signal estimation from undersampled
noisy sub-Gaussian measurements under the assumption of a cosparse model. Based
on generalized notions of sparsity, we derive novel recovery guarantees for the
ℓ1-analysis basis pursuit, enabling highly accurate predictions of its
sample complexity. The corresponding bounds on the number of required
measurements explicitly depend on the Gram matrix of the analysis operator
and therefore particularly account for its mutual coherence structure. Our
findings defy conventional wisdom which promotes the sparsity of analysis
coefficients as the crucial quantity to study. In fact, this common paradigm
breaks down completely in many situations of practical interest, for instance,
when applying a redundant (multilevel) frame as analysis prior. By extensive
numerical experiments, we demonstrate that, in contrast, our theoretical
sampling-rate bounds reliably capture the recovery capability of various
examples, such as redundant Haar wavelet systems, total variation, or random
frames. The proofs of our main results build upon recent achievements in the
convex geometry of data mining problems. More precisely, we establish a
sophisticated upper bound on the conic Gaussian mean width that is associated
with the underlying ℓ1-analysis polytope. Due to a novel localization
argument, it turns out that the presented framework naturally extends to stable
recovery, allowing us to incorporate compressible coefficient sequences as
well.
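The ℓ1-analysis basis pursuit studied above can be made concrete. The sketch below (our own illustration, not code from the paper) solves the noiseless program min_x ||Ωx||_1 subject to Ax = y as a linear program with SciPy, splitting |Ωx| into an auxiliary variable t; the finite-difference operator and the toy piecewise-constant signal are assumptions of this example.

```python
import numpy as np
from scipy.optimize import linprog

def analysis_basis_pursuit(A, y, Omega):
    """Solve min_x ||Omega x||_1  s.t.  A x = y  as a linear program.

    Variables are z = [x; t], with t >= |Omega x| enforced via the
    inequalities Omega x - t <= 0 and -Omega x - t <= 0.
    """
    m, n = A.shape
    p = Omega.shape[0]
    c = np.concatenate([np.zeros(n), np.ones(p)])   # minimize sum(t)
    A_ub = np.block([[Omega, -np.eye(p)],
                     [-Omega, -np.eye(p)]])
    b_ub = np.zeros(2 * p)
    A_eq = np.hstack([A, np.zeros((m, p))])         # measurement constraint A x = y
    bounds = [(None, None)] * n + [(0, None)] * p
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=bounds, method="highs")
    return res.x[:n], res

# Toy setup: a piecewise-constant signal, whose finite differences are 1-sparse.
rng = np.random.default_rng(0)
n = 6
x_true = np.array([1., 1., 1., 3., 3., 3.])
Omega = np.diff(np.eye(n), axis=0)                  # (n-1) x n difference operator
A = rng.standard_normal((4, n))                     # undersampled Gaussian measurements
y = A @ x_true
x_hat, res = analysis_basis_pursuit(A, y, Omega)
```

Any LP solver would do here; the HiGHS backend of `linprog` is used only because it ships with SciPy.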
On the Effective Measure of Dimension in the Analysis Cosparse Model
Many applications have benefited remarkably from low-dimensional models in
the last decade. The fact that many signals, though high dimensional, are
intrinsically low dimensional has given the possibility to recover them stably
from a relatively small number of their measurements. For example, in
compressed sensing with the standard (synthesis) sparsity prior and in matrix
completion, the number of measurements needed is proportional (up to a
logarithmic factor) to the signal's manifold dimension.
Recently, a new natural low-dimensional signal model has been proposed: the
cosparse analysis prior. In the noiseless case, it is possible to recover
signals from this model, using a combinatorial search, from a number of
measurements proportional to the signal's manifold dimension. However, if we
ask for stability to noise or an efficient (polynomial complexity) solver, all
the existing results demand a number of measurements which is far removed from
the manifold dimension, sometimes far greater. Thus, it is natural to ask
whether this gap is a deficiency of the theory and the solvers, or if there
exists a real barrier in recovering the cosparse signals by relying only on
their manifold dimension. Is there an algorithm which, in the presence of
noise, can accurately recover a cosparse signal from a number of measurements
proportional to the manifold dimension? In this work, we prove that there is no
such algorithm. Further, we show through numerical simulations that even in the
noiseless case convex relaxations fail when the number of measurements is
comparable to the manifold dimension. This gives a practical counter-example to
the growing literature on compressed acquisition of signals based on manifold
dimension.
Comment: 19 pages, 6 figures
Sampling in the Analysis Transform Domain
Many signal and image processing applications have benefited remarkably from
the fact that the underlying signals reside in a low dimensional subspace. One
of the main models for such low dimensionality is sparsity. Within
this framework there are two main options for sparse modeling: the
synthesis approach and the analysis approach. The first is considered the standard
paradigm and has received far more research attention. In it, the signals
are assumed to have a sparse representation under a given dictionary. In the
analysis approach, on the other hand, sparsity is measured in the coefficients of
the signal after applying a certain transformation, the analysis dictionary, to
it. Though several algorithms with some theory have been developed for this
framework, they are outnumbered by the ones proposed for the synthesis
methodology.
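The contrast between the two models can be illustrated with a toy example (the choice of the 1-D finite-difference operator as the analysis dictionary and of step atoms as the synthesis dictionary is ours): a piecewise-constant signal has sparse analysis coefficients under finite differences, while its synthesis-sparse description uses step (Heaviside-like) atoms.

```python
import numpy as np

n = 8
x = np.array([2., 2., 2., 2., 5., 5., 5., 5.])   # piecewise constant: one jump

# Analysis view: the finite-difference transform of x is sparse.
Omega = np.diff(np.eye(n), axis=0)               # (n-1) x n analysis operator
analysis_coeffs = Omega @ x                      # a single nonzero (the jump)

# Synthesis view: x is a sparse combination of step atoms.
D = np.tril(np.ones((n, n)))                     # column j is a step starting at index j
alpha = np.linalg.solve(D, x)                    # representation x = D @ alpha
```

Here the analysis coefficients have one nonzero while the synthesis representation needs two (the offset atom and the jump atom), a small instance of the general fact that the two sparsity counts need not coincide.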
Given that the analysis dictionary is either a frame or the two-dimensional
finite difference operator, we propose a new sampling scheme for signals from
the analysis model that allows recovering them from their samples using any
existing algorithm from the synthesis model. The advantage of this new sampling
strategy is that it makes the existing synthesis methods, with their theory, also
available for signals from the analysis framework.
Comment: 13 pages, 2 figures
Greedy-Like Algorithms for the Cosparse Analysis Model
The cosparse analysis model has been introduced recently as an interesting
alternative to the standard sparse synthesis approach. A prominent question
brought up by this new construction is the analysis pursuit problem -- the need
to find a signal belonging to this model, given a set of corrupted measurements
of it. Several pursuit methods have already been proposed based on ℓ1
relaxation and a greedy approach. In this work we pursue this question further,
and propose a new family of pursuit algorithms for the cosparse analysis model,
mimicking the greedy-like methods -- compressive sampling matching pursuit
(CoSaMP), subspace pursuit (SP), iterative hard thresholding (IHT) and hard
thresholding pursuit (HTP). Assuming the availability of a near optimal
projection scheme that finds the nearest cosparse subspace to any vector, we
provide performance guarantees for these algorithms. Our theoretical study
relies on a restricted isometry property adapted to the context of the cosparse
analysis model. We explore empirically the performance of these algorithms by
adopting a plain thresholding projection, demonstrating their good performance.
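The greedy-like scheme is straightforward to sketch. Below is a minimal analysis-IHT loop with a plain thresholding projection of the kind the abstract mentions (the step size, dimensions, and toy signal are our own choices for illustration, not the paper's): the projection picks the ℓ rows of Ω with the smallest analysis coefficients as the cosupport and projects onto their null space.

```python
import numpy as np

def cosparse_project(v, Omega, ell):
    """Plain thresholding projection: take the ell rows of Omega where
    |Omega v| is smallest as the cosupport, then project v onto their null space."""
    Lam = np.argsort(np.abs(Omega @ v))[:ell]
    O = Omega[Lam]
    return v - np.linalg.pinv(O) @ (O @ v)

def analysis_iht(A, y, Omega, ell, mu, iters=300):
    """Analysis IHT: a gradient step on ||y - A x||^2 followed by the
    cosparse projection, iterated from x = 0."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = cosparse_project(x + mu * A.T @ (y - A @ x), Omega, ell)
    return x

# Toy demo (our own setup): a gradient-cosparse signal under 1-D differences.
rng = np.random.default_rng(2)
n = 20
Omega = np.diff(np.eye(n), axis=0)           # 1-D finite-difference operator
x_true = np.concatenate([np.ones(10), 3 * np.ones(10)])
A = rng.standard_normal((12, n))
y = A @ x_true
x_hat = analysis_iht(A, y, Omega, ell=18, mu=1.0 / np.linalg.norm(A, 2) ** 2)
```

The step size 1/||A||_2^2 is the usual conservative choice for the gradient step; swapping the projection or adding the CoSaMP/SP-style support merging changes only the inner loop.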