ℓ1-Analysis Minimization and Generalized (Co-)Sparsity: When Does Recovery Succeed?
This paper investigates the problem of signal estimation from undersampled
noisy sub-Gaussian measurements under the assumption of a cosparse model. Based
on generalized notions of sparsity, we derive novel recovery guarantees for the
ℓ1-analysis basis pursuit, enabling highly accurate predictions of its
sample complexity. The corresponding bounds on the number of required
measurements explicitly depend on the Gram matrix of the analysis operator
and therefore particularly account for its mutual coherence structure. Our
findings defy conventional wisdom which promotes the sparsity of analysis
coefficients as the crucial quantity to study. In fact, this common paradigm
breaks down completely in many situations of practical interest, for instance,
when applying a redundant (multilevel) frame as analysis prior. By extensive
numerical experiments, we demonstrate that, in contrast, our theoretical
sampling-rate bounds reliably capture the recovery capability of various
examples, such as redundant Haar wavelet systems, total variation, or random
frames. The proofs of our main results build upon recent achievements in the
convex geometry of linear inverse problems. More precisely, we establish a
sophisticated upper bound on the conic Gaussian mean width that is associated
with the underlying ℓ1-analysis polytope. Due to a novel localization
argument, it turns out that the presented framework naturally extends to stable
recovery, allowing us to incorporate compressible coefficient sequences as
well.
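As a minimal illustration of ℓ1-analysis basis pursuit, the program min ||Ωx||_1 s.t. Mx = y can be recast as a linear program and handed to an off-the-shelf solver. The sketch below is a toy setup, not the paper's experiments: a one-dimensional finite-difference analysis operator (a TV-like prior), a piecewise-constant ground truth, and Gaussian measurements.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

n, m = 20, 15                         # signal length, number of measurements
x0 = np.concatenate([np.ones(10), 3.0 * np.ones(10)])  # piecewise constant => Omega x0 is sparse

Omega = np.diff(np.eye(n), axis=0)    # analysis operator: first-order finite differences (19 x 20)
p = Omega.shape[0]

M = rng.standard_normal((m, n))       # Gaussian (sub-Gaussian) measurement matrix
y = M @ x0

# LP in z = [x; t]: minimize sum(t) subject to -t <= Omega x <= t and M x = y,
# so at the optimum t = |Omega x| and sum(t) = ||Omega x||_1
c = np.concatenate([np.zeros(n), np.ones(p)])
A_ub = np.block([[Omega, -np.eye(p)], [-Omega, -np.eye(p)]])
b_ub = np.zeros(2 * p)
A_eq = np.hstack([M, np.zeros((m, p))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * (n + p), method="highs")
x_hat = res.x[:n]
print(np.max(np.abs(x_hat - x0)))     # small error expected: gradient-sparse, well-sampled regime
```

With a single jump in the gradient and m = 15 Gaussian measurements of an n = 20 signal, this instance sits comfortably in the exact-recovery regime.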
Greedy-Like Algorithms for the Cosparse Analysis Model
The cosparse analysis model has been introduced recently as an interesting
alternative to the standard sparse synthesis approach. A prominent question
brought up by this new construction is the analysis pursuit problem -- the need
to find a signal belonging to this model, given a set of corrupted measurements
of it. Several pursuit methods have already been proposed based on ℓ1
relaxation and a greedy approach. In this work we pursue this question further,
and propose a new family of pursuit algorithms for the cosparse analysis model,
mimicking the greedy-like methods -- compressive sampling matching pursuit
(CoSaMP), subspace pursuit (SP), iterative hard thresholding (IHT) and hard
thresholding pursuit (HTP). Assuming the availability of a near optimal
projection scheme that finds the nearest cosparse subspace to any vector, we
provide performance guarantees for these algorithms. Our theoretical study
relies on a restricted isometry property adapted to the context of the cosparse
analysis model. We empirically explore the performance of these algorithms by
adopting a plain thresholding projection, demonstrating their good performance.
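To make the greedy-like scheme concrete, here is a hypothetical sketch of analysis IHT with the plain thresholding projection mentioned above: take the ℓ rows of Ω where |Ωx| is smallest as the cosupport Λ, then orthogonally project onto the cosparse subspace {v : Ω_Λ v = 0}. The operator, problem sizes, and step size are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def analysis_iht(y, M, Omega, cosparsity, n_iter=200, step=None):
    """Analysis IHT with a plain-thresholding cosupport projection (sketch)."""
    m, n = M.shape
    if step is None:
        step = 1.0 / np.linalg.norm(M, 2) ** 2   # safe step for the gradient part
    x = np.zeros(n)
    for _ in range(n_iter):
        x = x + step * M.T @ (y - M @ x)         # gradient step on ||y - Mx||^2
        # plain thresholding: cosupport = rows where |Omega x| is smallest
        idx = np.argsort(np.abs(Omega @ x))[:cosparsity]
        O_lam = Omega[idx]
        # orthogonal projection onto the null space of Omega_Lambda
        x = x - np.linalg.pinv(O_lam) @ (O_lam @ x)
    return x

rng = np.random.default_rng(1)
n = 30
Omega = np.diff(np.eye(n), axis=0)                     # finite-difference operator (29 x 30)
x0 = np.concatenate([np.zeros(15), 2.0 * np.ones(15)])  # one jump => cosparsity 28
M = rng.standard_normal((18, n))
y = M @ x0
x_hat = analysis_iht(y, M, Omega, cosparsity=28)
print(np.linalg.norm(x_hat - x0))
```

The near-optimal projection assumed by the theory is replaced here, as in the paper's experiments, by this cheap thresholding heuristic; for hard instances it may lock onto a wrong cosupport.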
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and town center. iTWIST'14 gathered about
70 international participants and featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.
Comment: 69 pages, 24 extended abstracts, iTWIST'14 website:
http://sites.google.com/site/itwist1
Theoretical and Numerical Approaches to Co-/Sparse Recovery in Discrete Tomography
We investigate theoretical and numerical results that guarantee the exact reconstruction of piecewise constant images from insufficient projections in Discrete Tomography. This is often the case in non-destructive quality inspection of industrial objects, made of few homogeneous materials, where fast scanning times do not allow for full sampling. As a consequence, this low number of projections presents us with an underdetermined linear system of equations. We restrict the solution space by requiring that solutions (a) must possess a sparse image gradient, and (b) have constrained pixel values.
To that end, we develop a lower bound, using compressed sensing theory, on the number of measurements required to uniquely recover, by convex programming, an image in our constrained setting. We also develop a second bound, in the non-convex setting, whose novelty lies in using the number of connected components to bound the number of linear measurements needed for unique reconstruction.
Having established theoretical lower bounds on the number of required measurements, we then examine several optimization models that enforce sparse gradients or restrict the image domain. We provide a novel convex relaxation that is provably tighter than existing models, assuming the target image to be gradient sparse and integer-valued. Given that the number of connected components in an image is critical for unique reconstruction, we provide an integer program model that restricts the maximum number of connected components in the reconstructed image.
When solving the convex models, we view the image domain as a manifold and use tools from differential geometry and optimization on manifolds to develop a first-order multilevel optimization algorithm.
The developed multilevel algorithm exhibits fast convergence and enables us to recover images of higher resolution.
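Since the second bound hinges on the number of connected components of the piecewise constant image, a small hypothetical helper makes that quantity concrete: counting 4-connected constant-value regions of a 2-D integer image by flood fill.

```python
import numpy as np
from collections import deque

def connected_components(img):
    """Count 4-connected constant-value regions of a 2-D integer image."""
    img = np.asarray(img)
    h, w = img.shape
    seen = np.zeros((h, w), dtype=bool)
    count = 0
    for i in range(h):
        for j in range(w):
            if seen[i, j]:
                continue
            count += 1                      # new region found; flood-fill it
            seen[i, j] = True
            queue = deque([(i, j)])
            while queue:
                a, b = queue.popleft()
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    na, nb = a + da, b + db
                    if (0 <= na < h and 0 <= nb < w
                            and not seen[na, nb] and img[na, nb] == img[a, b]):
                        seen[na, nb] = True
                        queue.append((na, nb))
    return count

# A 3x3 image with three constant regions (zeros, ones, twos)
print(connected_components([[0, 0, 1],
                            [0, 1, 1],
                            [2, 2, 2]]))   # → 3
```

The fewer the components, the fewer linear measurements the non-convex bound demands, which is exactly what the integer program model above exploits.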
Introducing SPAIN (SParse Audio INpainter)
A novel sparsity-based algorithm for audio inpainting is proposed. It is an
adaptation of the SPADE algorithm by Kitić et al., originally developed for
audio declipping, to the task of audio inpainting. The new SPAIN (SParse Audio
INpainter) comes in synthesis and analysis variants. Experiments show that both
A-SPAIN and S-SPAIN outperform other sparsity-based inpainting algorithms.
Moreover, A-SPAIN performs on a par with the state-of-the-art method based on
linear prediction in terms of the SNR, and, for larger gaps, SPAIN is even
slightly better in terms of the PEMO-Q psychoacoustic criterion.
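For intuition only (a toy sketch, not the SPAIN algorithm itself): the sparsity-based inpainting idea can be mimicked by alternating hard thresholding of DFT coefficients with re-imposing the reliable samples, gradually relaxing the sparsity level k in the spirit of SPADE/SPAIN. The DFT frame, gap length, and relaxation schedule below are illustrative assumptions.

```python
import numpy as np

def sparse_inpaint(y, mask, n_iter=300):
    """Fill the gap (mask == False) by alternating hard thresholding and consistency."""
    x = y.copy()
    x[~mask] = 0.0
    k = 1
    for it in range(n_iter):
        c = np.fft.fft(x)
        keep = np.argsort(np.abs(c))[-k:]   # hard thresholding: keep k largest coefficients
        c_thr = np.zeros_like(c)
        c_thr[keep] = c[keep]
        x = np.real(np.fft.ifft(c_thr))
        x[mask] = y[mask]                   # re-impose the reliable samples
        if it % 10 == 9:
            k += 1                          # relax the sparsity constraint, as in SPADE
    return x

t = np.arange(256)
clean = np.cos(2 * np.pi * 8 * t / 256)     # 2-sparse in the DFT domain
mask = np.ones(256, dtype=bool)
mask[100:110] = False                       # 10-sample gap to inpaint
y = clean * mask
x_hat = sparse_inpaint(y, mask)
print(np.max(np.abs(x_hat[~mask] - clean[~mask])))
```

The actual SPAIN variants operate frame-by-frame with overlapping windows and differ between synthesis (threshold, then synthesize) and analysis (keep the signal consistent with a cosparse spectrum) formulations.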