Sparse and Cosparse Audio Dequantization Using Convex Optimization
The paper shows the potential of sparsity-based methods in restoring
quantized signals. Following up on the study of Brauer et al. (IEEE ICASSP
2016), we significantly extend the range of evaluation scenarios: we
introduce the analysis (cosparse) model, we use more effective algorithms, and
we experiment with another time-frequency transform. We show that the
analysis-based model performs comparably to the synthesis model, and that the
Gabor transform produces better results than the originally used cosine
transform. Last but not least, we provide code and data in a reproducible
manner.
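As a concrete illustration of the setting above, here is a minimal sketch of the bin-consistency constraint that dequantization methods enforce: any admissible reconstruction must fall in the same quantization bin as the observed sample, which amounts to a simple element-wise projection. The uniform mid-rise quantizer, the step size, and all variable names are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def quantize(x, delta):
    """Uniform mid-rise quantizer with step delta."""
    return delta * (np.floor(x / delta) + 0.5)

def project_consistent(z, xq, delta):
    """Project z onto the signals consistent with the observation xq:
    each sample must stay inside its quantization bin of width delta."""
    return np.clip(z, xq - delta / 2, xq + delta / 2)

rng = np.random.default_rng(0)
delta = 0.5
x = rng.standard_normal(8)
xq = quantize(x, delta)                 # what the decoder observes
z = xq + rng.standard_normal(8)         # some sparsity-driven estimate
zp = project_consistent(z, xq, delta)   # estimate made consistent with xq
```

Sparsity-based dequantizers alternate such a projection with a step promoting (co)sparsity of the time-frequency coefficients.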
A review of cosparse signal recovery methods applied to sound source localization
This work aims at comparing several state-of-the-art methods for cosparse signal recovery in the context of sound source localization. We assess the performance of five cosparse recovery algorithms: Greedy Analysis Structured Pursuit, l1 and joint l1,2 minimization, Structured Analysis Iterative Hard Thresholding, and Structured Analysis Hard Thresholding Pursuit. In addition, we evaluate these methods against the sparse synthesis paradigm, solved with the corresponding joint l1,2 minimization method. For this evaluation, the chosen applicative showcase is sound source localization from simulated measurements of the acoustic pressure field.
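For readers unfamiliar with the joint l1,2 norm mentioned above: it takes the l2 norm within each group and the l1 norm across groups, so minimizing it promotes solutions with only a few active groups. A minimal sketch, where the row-wise grouping is an illustrative assumption:

```python
import numpy as np

def l12_norm(Z):
    """Mixed l1,2 norm: sum over rows of the l2 norm of each row.
    It is small when only a few rows are nonzero (joint sparsity)."""
    return np.sum(np.linalg.norm(Z, axis=1))

Z = np.array([[3.0, 4.0],   # active row, l2 norm 5
              [0.0, 0.0],   # inactive row
              [0.0, 2.0]])  # active row, l2 norm 2
print(l12_norm(Z))          # 7.0
```

In the localization context, a group typically collects the coefficients attached to one candidate source position, so joint minimization favors explanations with only a few active positions.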
Greedy-Like Algorithms for the Cosparse Analysis Model
The cosparse analysis model has been introduced recently as an interesting
alternative to the standard sparse synthesis approach. A prominent question
brought up by this new construction is the analysis pursuit problem -- the need
to find a signal belonging to this model, given a set of corrupted measurements
of it. Several pursuit methods have already been proposed based on l1
relaxation and a greedy approach. In this work we pursue this question further,
and propose a new family of pursuit algorithms for the cosparse analysis model,
mimicking the greedy-like methods -- compressive sampling matching pursuit
(CoSaMP), subspace pursuit (SP), iterative hard thresholding (IHT) and hard
thresholding pursuit (HTP). Assuming the availability of a near optimal
projection scheme that finds the nearest cosparse subspace to any vector, we
provide performance guarantees for these algorithms. Our theoretical study
relies on a restricted isometry property adapted to the context of the cosparse
analysis model. We explore empirically the behavior of these algorithms by
adopting a plain thresholding projection, demonstrating their good performance.
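The greedy-like scheme described above can be sketched for the IHT member of the family: the plain thresholding projection keeps the rows of the analysis operator with the smallest responses as the cosupport and projects onto their null space. The operator, problem sizes, and step size below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def cosparse_project(x, Omega, ell):
    """Plain thresholding projection: pick the ell rows of Omega with the
    smallest |Omega x| as the cosupport Lambda, then project x onto the
    null space of Omega_Lambda via a least-squares correction."""
    Lambda = np.argsort(np.abs(Omega @ x))[:ell]
    O = Omega[Lambda]
    # minimum-norm solution of O z = O x lies in row(O); subtracting it
    # orthogonally projects x onto {z : O z = 0}
    correction, *_ = np.linalg.lstsq(O, O @ x, rcond=None)
    return x - correction

def aiht(y, M, Omega, ell, mu=1.0, iters=100):
    """Analysis iterative hard thresholding for measurements y = M x."""
    x = np.zeros(M.shape[1])
    for _ in range(iters):
        x = x + mu * M.T @ (y - M @ x)       # gradient step on ||y - Mx||^2
        x = cosparse_project(x, Omega, ell)  # enforce ell-cosparsity
    return x

# tiny demo: with Omega = I, the cosparse model reduces to plain sparsity
rng = np.random.default_rng(1)
d, m = 20, 15
M = rng.standard_normal((m, d))
M /= np.linalg.norm(M, 2)                    # keep the step mu = 1 stable
Omega = np.eye(d)
x0 = np.zeros(d)
x0[[2, 7, 11]] = [1.0, -2.0, 1.5]            # 17-cosparse ground truth
xhat = aiht(M @ x0, M, Omega, ell=17, iters=200)
```

The near-optimal projection assumed by the theory is replaced here by this plain thresholding heuristic, mirroring the empirical part of the paper.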
l1-Analysis Minimization and Generalized (Co-)Sparsity: When Does Recovery Succeed?
This paper investigates the problem of signal estimation from undersampled
noisy sub-Gaussian measurements under the assumption of a cosparse model. Based
on generalized notions of sparsity, we derive novel recovery guarantees for the
l1-analysis basis pursuit, enabling highly accurate predictions of its
sample complexity. The corresponding bounds on the number of required
measurements do explicitly depend on the Gram matrix of the analysis operator
and therefore particularly account for its mutual coherence structure. Our
findings defy conventional wisdom which promotes the sparsity of analysis
coefficients as the crucial quantity to study. In fact, this common paradigm
breaks down completely in many situations of practical interest, for instance,
when applying a redundant (multilevel) frame as analysis prior. By extensive
numerical experiments, we demonstrate that, in contrast, our theoretical
sampling-rate bounds reliably capture the recovery capability of various
examples, such as redundant Haar wavelet systems, total variation, or random
frames. The proofs of our main results build upon recent achievements in the
convex geometry of data mining problems. More precisely, we establish a
sophisticated upper bound on the conic Gaussian mean width that is associated
with the underlying l1-analysis polytope. Due to a novel localization
argument, it turns out that the presented framework naturally extends to stable
recovery, allowing us to incorporate compressible coefficient sequences as
well.
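The l1-analysis basis pursuit studied above, min ||Omega x||_1 subject to A x = y, can be recast as a linear program by introducing auxiliary variables t with -t <= Omega x <= t. A minimal sketch using scipy; the operators, sizes, and the Omega = I demo are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

def analysis_bp(A, y, Omega):
    """Solve min ||Omega x||_1 s.t. A x = y as a linear program over
    variables (x, t): minimize sum(t) with -t <= Omega x <= t."""
    m, d = A.shape
    p = Omega.shape[0]
    c = np.concatenate([np.zeros(d), np.ones(p)])
    A_ub = np.block([[Omega, -np.eye(p)],     # Omega x - t <= 0
                     [-Omega, -np.eye(p)]])   # -Omega x - t <= 0
    b_ub = np.zeros(2 * p)
    A_eq = np.hstack([A, np.zeros((m, p))])   # A x = y (t unconstrained)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=(None, None), method="highs")
    return res.x[:d]

# tiny demo: Omega = I reduces to standard basis pursuit
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 12))
x0 = np.zeros(12)
x0[[1, 5]] = [2.0, -1.0]                      # sparse ground truth
xhat = analysis_bp(A, A @ x0, np.eye(12))
```

The sample-complexity bounds of the paper predict how many rows A needs, as a function of the Gram matrix of Omega, for such a program to succeed.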
A Proper Version of Synthesis-Based Sparse Audio Declipper
Methods based on sparse representation have found great use in the recovery
of audio signals degraded by clipping. The state of the art in declipping has
been achieved by the SPADE algorithm by Kitić et al. (LVA/ICA 2015). Our
recent study (LVA/ICA 2018) has shown that although the original S-SPADE can be
improved such that it converges significantly faster than A-SPADE, its
restoration quality is significantly worse. In the present paper, we propose a
new version of S-SPADE. Experiments show that the novel version of S-SPADE
outperforms its old version in terms of restoration quality, and that it is
comparable with A-SPADE while being even slightly faster.
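Both SPADE variants alternate a sparsifying step with a projection onto the set of signals consistent with the clipped observation: reliable samples are kept, while samples clipped at the threshold must be reconstructed at or beyond it. A minimal sketch of that consistency projection (the threshold and signals are illustrative; this is not the full SPADE algorithm):

```python
import numpy as np

def hard_clip(x, theta):
    """Hard clipping at magnitude theta."""
    return np.clip(x, -theta, theta)

def project_clip_consistent(z, y, theta):
    """Project a candidate z onto signals consistent with the clipped
    observation y: reliable samples are fixed to y, clipped-high samples
    must be >= theta, clipped-low samples must be <= -theta."""
    out = z.copy()
    reliable = np.abs(y) < theta
    out[reliable] = y[reliable]
    out[y >= theta] = np.maximum(z[y >= theta], theta)
    out[y <= -theta] = np.minimum(z[y <= -theta], -theta)
    return out

rng = np.random.default_rng(2)
theta = 1.0
x = 2.0 * rng.standard_normal(16)          # clean signal
y = hard_clip(x, theta)                    # clipped observation
z = project_clip_consistent(rng.standard_normal(16), y, theta)
```

Note that the clean signal is itself a member of the consistent set, which is what makes this constraint safe to enforce during restoration.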
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and town center. iTWIST'14 has gathered about
70 international participants and has featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problem; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.
Comment: 69 pages, 24 extended abstracts, iTWIST'14 website:
http://sites.google.com/site/itwist1
Revisiting Synthesis Model of Sparse Audio Declipper
The state of the art in audio declipping has currently been achieved by the
SPADE (SParse Audio DEclipper) algorithm by Kitić et al. Until now, the
synthesis/sparse variant, S-SPADE, has been considered significantly slower
than its analysis/cosparse counterpart, A-SPADE. It turns out that the opposite
is true: by exploiting a recent projection lemma, individual iterations of both
algorithms can be made equally computationally expensive, while S-SPADE tends
to require considerably fewer iterations to converge. In this paper, the two
algorithms are compared across a range of parameters such as the window length,
window overlap and redundancy of the transform. The experiments show that
although S-SPADE typically converges faster, its average restoration quality
is not superior to that of A-SPADE.