Denoising strategies for general finite frames
Overcomplete representations such as wavelets and windowed Fourier expansions have become mainstays of modern statistical data analysis. In the present work, in the context of general finite frames, we derive an oracle expression for the mean quadratic risk of a linear diagonal de-noising procedure, which immediately yields the optimal linear diagonal estimator. Moreover, we obtain an expression for an unbiased estimator of the risk of any smooth shrinkage rule. This last result motivates a set of practical estimation procedures for general finite frames that can be viewed as generalizations of the classical procedures for orthonormal bases. A simulation study verifies the effectiveness of the proposed procedures relative to the classical ones and confirms that the correlations induced by the frame structure must be treated explicitly to improve estimation precision.
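As a rough, hypothetical sketch of diagonal shrinkage in a frame domain (a generic soft-threshold rule, not the paper's optimal diagonal or unbiased-risk estimators, and with an arbitrary random frame standing in for wavelets): analyze with the frame, shrink coefficient-wise, and synthesize with the canonical dual.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical random frame for R^n: the m > n rows of F are the frame vectors.
n, m = 32, 64
F = rng.standard_normal((m, n))
F /= np.linalg.norm(F, axis=1, keepdims=True)
F_dual = np.linalg.pinv(F)      # canonical dual frame (synthesis operator)

def soft_threshold(c, t):
    """Diagonal shrinkage applied coefficient-wise in the frame domain."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

x = np.zeros(n)
x[[5, 12]] = [4.0, -3.0]                       # clean test signal
y = x + 0.3 * rng.standard_normal(n)           # noisy observation
x_hat = F_dual @ soft_threshold(F @ y, 0.2)    # analyze, shrink, synthesize

# Note: because m > n, the frame coefficients F @ y are correlated
# (F @ F_dual is not the identity) -- the point the abstract makes about
# why frame-induced correlations need explicit treatment.
```

Without shrinkage the round trip is exact, since the canonical dual gives perfect reconstruction: `F_dual @ (F @ y) == y`.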
A Hierarchical Bayesian Model for Frame Representation
In many signal processing problems, it may be fruitful to represent the
signal under study in a frame. If a probabilistic approach is adopted, it
becomes then necessary to estimate the hyper-parameters characterizing the
probability distribution of the frame coefficients. This problem is difficult
since in general the frame synthesis operator is not bijective. Consequently,
the frame coefficients are not directly observable. This paper introduces a
hierarchical Bayesian model for frame representation. The posterior
distribution of the frame coefficients and model hyper-parameters is derived.
Hybrid Markov Chain Monte Carlo algorithms are subsequently proposed to sample
from this posterior distribution. The generated samples are then exploited to
estimate the hyper-parameters and the frame coefficients of the target signal.
Validation experiments show that the proposed algorithms provide an accurate
estimation of the frame coefficients and hyper-parameters. Application to
practical problems of image denoising shows the impact of the resulting Bayesian
estimation on the recovered signal quality.
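The sampling idea can be caricatured with a toy random-walk Metropolis sampler; this is a stand-in for the paper's hybrid MCMC algorithms, and the synthesis operator, the Laplace prior, and every parameter value below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem (hypothetical): y = S @ c + noise, with a non-bijective
# synthesis operator S, so the frame coefficients c are not observable
# directly; an i.i.d. Laplace prior is placed on c.
n, m = 8, 12
S = rng.standard_normal((n, m)) / np.sqrt(m)
c_true = np.zeros(m)
c_true[[1, 7]] = [2.0, -1.5]
sigma, lam = 0.1, 1.0
y = S @ c_true + sigma * rng.standard_normal(n)

def log_post(c):
    """Log posterior up to a constant: Gaussian likelihood + Laplace prior."""
    resid = y - S @ c
    return -resid @ resid / (2 * sigma**2) - lam * np.abs(c).sum()

# Random-walk Metropolis over the coefficient vector.
c = np.zeros(m)
lp = log_post(c)
samples = []
for it in range(5000):
    prop = c + 0.05 * rng.standard_normal(m)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        c, lp = prop, lp_prop
    if it >= 1000:                       # discard burn-in
        samples.append(c.copy())

c_mmse = np.mean(samples, axis=0)        # posterior-mean coefficient estimate
```

In the same spirit as the abstract, the generated samples could also be used to estimate hyper-parameters such as `sigma` and `lam` by extending the state of the chain; here they are held fixed for brevity.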
Speech signal enhancement by EMD and the Teager-Kaiser operator
The authors would like to thank Professor Mohamed Bahoura from Université du Québec à Rimouski for fruitful discussions on time-adaptive thresholding.
In this paper a speech denoising strategy based on time-adaptive thresholding of the intrinsic mode functions (IMFs) of the signal, extracted by empirical mode decomposition (EMD), is introduced. The denoised signal is reconstructed by the superposition of its adaptively thresholded IMFs. Adaptive thresholds are estimated using the Teager-Kaiser energy operator (TKEO) of the signal IMFs. More precisely, TKEO identifies the type of frame by expanding the differences between speech and non-speech frames in each IMF. Being based on the EMD, the proposed speech denoising scheme is a fully data-driven approach. The method is tested on speech signals with different noise levels and the results are compared to EMD-shrinkage and to the wavelet transform (WT) coupled with TKEO. Speech enhancement performance is evaluated using the output signal-to-noise ratio (SNR) and the perceptual evaluation of speech quality (PESQ) measure. On the analyzed speech signals, the proposed enhancement scheme outperforms the WT-TKEO and EMD-shrinkage approaches in terms of output SNR and PESQ, and time-adaptive thresholding reduces noise considerably more than universal thresholding. The study is limited to signals corrupted by additive white Gaussian noise.
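The discrete Teager-Kaiser energy operator at the heart of the scheme is simple to state: psi[n] = x[n]^2 - x[n-1]*x[n+1]. A minimal NumPy version is sketched below, together with the classical sanity check that for a pure tone A*cos(w*n) the operator is exactly the constant A^2 * sin(w)^2 (the edge handling is an arbitrary choice here, not taken from the paper).

```python
import numpy as np

def tkeo(x):
    """Discrete Teager-Kaiser energy operator:
    psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
    x = np.asarray(x, dtype=float)
    psi = np.empty_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    psi[0], psi[-1] = psi[1], psi[-2]   # replicate the edge values
    return psi

# Sanity check: for x[n] = A*cos(w*n), the identity
# cos(t-w)*cos(t+w) = cos(t)^2 - sin(w)^2 gives psi[n] = A^2 * sin(w)^2.
n = np.arange(1000)
A, w = 2.0, 0.2
tone = A * np.cos(w * n)
```

In an EMD-based scheme like the one above, `tkeo` would be applied to each IMF, and a frame-wise summary of `psi` (high for speech-like bursts, low for noise-only frames) would drive the per-frame threshold.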
Sparse Modeling for Image and Vision Processing
In recent years, a large amount of multi-disciplinary research has been
conducted on sparse models and their applications. In statistics and machine
learning, the sparsity principle is used to perform model selection---that is,
automatically selecting a simple model among a large collection of them. In
signal processing, sparse coding consists of representing data with linear
combinations of a few dictionary elements. Subsequently, the corresponding
tools have been widely adopted by several scientific communities such as
neuroscience, bioinformatics, or computer vision. The goal of this monograph is
to offer a self-contained view of sparse modeling for visual recognition and
image processing. More specifically, we focus on applications where the
dictionary is learned and adapted to data, yielding a compact representation
that has been successful in various contexts.
Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics
and Vision
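The sparse-coding step described above can be illustrated with a small orthogonal matching pursuit (OMP) routine against a fixed random dictionary; this is a hedged sketch only, since dictionary *learning* would alternate this coding step with dictionary updates, and the sizes and signal below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical dictionary: k unit-norm atoms in R^n (overcomplete, k > n).
n, k = 16, 40
D = rng.standard_normal((n, k))
D /= np.linalg.norm(D, axis=0)

def omp(D, y, n_nonzero):
    """Greedy sparse coding: repeatedly pick the atom most correlated
    with the residual, then refit the selected atoms by least squares."""
    support, resid = [], y.copy()
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ resid))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        resid = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Represent a signal built from two dictionary atoms with a 2-sparse code.
y = 3.0 * D[:, 5] - 2.0 * D[:, 11]
x = omp(D, y, n_nonzero=2)
```

Each iteration refits *all* selected atoms jointly, which is what distinguishes OMP from plain matching pursuit and keeps the residual orthogonal to the chosen atoms.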