
    Morphological Diversity and Sparse Image Denoising

    Overcomplete representations are attracting interest in image processing theory, particularly because of their potential to generate sparse representations of data based on their morphological diversity. We here consider a scenario of image denoising using an overcomplete dictionary of sparse linear transforms. Rather than using the basic approach, in which the denoised image is obtained by simple averaging of the denoised estimates provided by each sparse transform, we develop an elegant Bayesian framework to optimally combine the individual estimates. Our derivation of the optimally combined denoiser relies on a scale mixture of Gaussians (SMG) prior on the coefficients in each representation transform. Exploiting this prior, we design a Bayesian L2-risk (posterior mean) nonlinear estimator and derive a closed-form expression for it when the SMG specializes to the Bessel K form prior. Experiments demonstrate the striking gains obtained by exploiting the sparsity of the data and their morphological diversity.
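
The combination step can be illustrated with a toy sketch (plain Python, not the paper's SMG/BKF machinery): two crude "sparse transforms" each produce a denoised estimate, and a weighted combination replaces plain averaging. The transforms, threshold values, and fixed weights here are hypothetical stand-ins for the quantities the Bayesian framework would derive.

```python
import math
import random

random.seed(0)

def soft(x, t):
    """Soft-threshold a single coefficient."""
    return math.copysign(max(abs(x) - t, 0.0), x)

# Toy 1-D signal and its noisy observation.
n = 64
clean = [math.sin(2 * math.pi * k / n) for k in range(n)]
noisy = [c + random.gauss(0.0, 0.3) for c in clean]

# Two crude "sparse transforms": thresholding in the sample domain, and
# thresholding the details of a Haar-like two-tap average/difference pair.
est_a = [soft(v, 0.3) for v in noisy]
est_b = []
for k in range(0, n, 2):
    s = (noisy[k] + noisy[k + 1]) / 2.0        # Haar smooth
    d = soft((noisy[k] - noisy[k + 1]) / 2.0, 0.3)  # thresholded detail
    est_b.extend([s + d, s - d])               # invert the Haar step

# Basic approach: plain averaging of the per-transform estimates.
avg = [(a + b) / 2.0 for a, b in zip(est_a, est_b)]

# Weighted combination with hypothetical fixed weights (the paper instead
# derives the combination from the SMG prior).
w_a, w_b = 0.3, 0.7
comb = [w_a * a + w_b * b for a, b in zip(est_a, est_b)]

def mse(x, y):
    return sum((u - v) ** 2 for u, v in zip(x, y)) / len(x)

print(mse(clean, avg), mse(clean, comb))
```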

    Information selection and fusion in vision systems

    Handling the enormous amounts of data produced by data-intensive imaging systems, such as multi-camera surveillance systems and microscopes, is technically challenging. While image and video compression help to manage the data volumes, they do not address the basic problem of information overflow. In this PhD we tackle the problem in a more drastic way. We select information of interest to a specific vision task, and discard the rest. We also combine data from different sources into a single output product, which presents the information of interest to end users in a suitable, summarized format. We treat two types of vision systems. The first type is conventional light microscopes. During this PhD, we have exploited for the first time the potential of the curvelet transform for image fusion for depth-of-field extension, allowing us to combine the advantages of multi-resolution image analysis for image fusion with increased directional sensitivity. As a result, the proposed technique clearly outperforms state-of-the-art methods, both on real microscopy data and on artificially generated images. The second type is camera networks with overlapping fields of view. To enable joint processing in such networks, inter-camera communication is essential. Because of infrastructure costs, power consumption for wireless transmission, etc., transmitting high-bandwidth video streams between cameras should be avoided. Fortunately, recently designed 'smart cameras', which have on-board processing and communication hardware, allow distributing the required image processing over the cameras. This permits compactly representing useful information from each camera. We focus on representing information for people localization and observation, which are important tools for statistical analysis of room usage, quick localization of people in case of building fires, etc. 
To further save bandwidth, we select which cameras should be involved in a vision task and transmit observations only from the selected cameras. We provide an information-theoretically founded framework for general-purpose camera selection based on the Dempster-Shafer theory of evidence. Applied to tracking, it allows following people with a dynamic selection of as few as three cameras, with the same accuracy as when using up to ten cameras.
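
Dempster's rule of combination, on which such an evidence-based selection framework rests, can be sketched in a few lines of Python. The cameras, zones, and mass values below are hypothetical; only the combination rule itself is standard.

```python
def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions over subsets of a
    frame of discernment.  Masses are dicts mapping frozenset -> mass;
    mass assigned to conflicting (disjoint) pairs is renormalized away."""
    out = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                out[inter] = out.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    k = 1.0 - conflict
    return {s: w / k for s, w in out.items()}

# Hypothetical example: two cameras each report evidence about which of
# three zones (A, B, C) a person occupies.
cam1 = {frozenset('AB'): 0.6, frozenset('ABC'): 0.4}   # camera 1: "A or B"
cam2 = {frozenset('B'): 0.7, frozenset('ABC'): 0.3}    # camera 2: "B"
fused = dempster_combine(cam1, cam2)
print(fused)
```

Fusing the two reports concentrates mass on zone B while retaining some ambiguity, which is the behavior a camera-selection criterion can then score.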

    Image Denoising in Mixed Poisson-Gaussian Noise

    We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.
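
The linear-expansion-of-thresholds (LET) mechanics can be sketched as follows. As a simplification, the weights below minimize an oracle MSE computed from the known clean signal, standing in for the PURE risk estimate that the paper actually optimizes; the two soft-thresholding functions are illustrative choices.

```python
import math
import random

random.seed(1)

def soft(x, t):
    """Soft-threshold a single coefficient."""
    return math.copysign(max(abs(x) - t, 0.0), x)

# Noisy coefficients y = x + Gaussian noise, with x sparse.
n = 256
x = [4.0 if k % 16 == 0 else 0.0 for k in range(n)]
y = [v + random.gauss(0.0, 1.0) for v in x]

# LET: write the denoiser as a1*f1(y) + a2*f2(y) with fixed elementary
# thresholding functions f1, f2, then solve a small linear system for
# the weights (here from the oracle MSE instead of PURE).
f1 = [soft(v, 1.0) for v in y]
f2 = [soft(v, 3.0) for v in y]

# Normal equations G a = b with G[i][j] = <fi, fj>, b[i] = <fi, x>.
g11 = sum(a * a for a in f1)
g12 = sum(a * b for a, b in zip(f1, f2))
g22 = sum(b * b for b in f2)
b1 = sum(a * t for a, t in zip(f1, x))
b2 = sum(b * t for b, t in zip(f2, x))
det = g11 * g22 - g12 * g12
a1 = (b1 * g22 - b2 * g12) / det
a2 = (g11 * b2 - g12 * b1) / det

denoised = [a1 * u + a2 * v for u, v in zip(f1, f2)]
mse = sum((d - t) ** 2 for d, t in zip(denoised, x)) / n
print(a1, a2, mse)
```

Because the optimization is a linear least-squares problem in the weights, the combined estimator can never do worse than the best single thresholding function in the expansion.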

    Multiresolution image models and estimation techniques


    The Fisz transform for estimating Poisson intensity images in the wavelet domain

    We present a new estimator for Poisson intensity images in the wavelet domain. The method is based on the asymptotic normality of a nonlinear function of the detail and scaling coefficients of the Haar transform, called the Fisz transform. We state several asymptotic results, such as the normality and decorrelation of the pixels of the transformed image. Building on these results, the original image, corrupted by Poisson noise, can be treated after the Fisz transform as if it were contaminated by additive white Gaussian noise, so that classical denoisers apply directly. Specifically, in this paper we apply a Bayesian estimator that we recently developed, which uses as a prior a new class of distributions, the Bessel K forms (BKF). Simulations show that the Fisz transform performs at least as well as variance-stabilizing transforms for images with smooth or piecewise-constant intensity, and clearly outperforms them at low count rates. Combining the Fisz transform with the BKF Bayesian denoiser gives the best results.
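
The core of the Fisz transform, dividing Haar detail coefficients by the square root of the corresponding scaling coefficients, can be sketched for a single decomposition level (a simplified one-level illustration, not the full multiscale transform):

```python
import math

def haar_fisz_level(counts):
    """One-level Haar-Fisz step on a sequence of Poisson counts:
    each detail coefficient is divided by the square root of its local
    scaling coefficient, which asymptotically Gaussianizes and
    stabilizes the variance, then the Haar step is inverted."""
    out = []
    for k in range(0, len(counts), 2):
        s = (counts[k] + counts[k + 1]) / 2.0   # Haar smooth
        d = (counts[k] - counts[k + 1]) / 2.0   # Haar detail
        f = d / math.sqrt(s) if s > 0 else 0.0  # Fisz ratio
        out.extend([s + f, s - f])              # inverse Haar step
    return out

counts = [12, 9, 30, 26, 3, 1, 50, 44]
print(haar_fisz_level(counts))
```

Note that the pairwise local means are preserved exactly; only the details are rescaled, so a Gaussian denoiser can then be applied to the transformed sequence.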

    Translation-Invariant Shrinkage/Thresholding of Group Sparse Signals

    This paper addresses signal denoising when large-amplitude coefficients form clusters (groups). The L1-norm and other separable sparsity models do not capture the tendency of coefficients to cluster (group sparsity). This work develops an algorithm, called 'overlapping group shrinkage' (OGS), based on the minimization of a convex cost function involving a group-sparsity-promoting penalty function. The groups are fully overlapping, so the denoising method is translation-invariant and blocking artifacts are avoided. Based on the principle of majorization-minimization (MM), we derive a simple iterative minimization algorithm that reduces the cost function monotonically. A procedure for setting the regularization parameter, based on attenuating the noise to a specified level, is also described. The proposed approach is illustrated on speech enhancement, wherein the OGS approach is applied in the short-time Fourier transform (STFT) domain. The denoised speech produced by OGS does not suffer from musical noise.
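
The MM iteration for OGS admits a compact sketch in the 1-D case: each update divides the data by one plus the regularization parameter times a sum of inverse group norms. The group size, regularization value, and test signal below are illustrative.

```python
import math

def ogs(y, lam, K, n_iter=50):
    """Overlapping group shrinkage (1-D sketch): MM iteration for
    min_x 0.5*||y - x||^2 + lam * sum_i sqrt(sum_{j<K} x[i+j]^2)
    with fully overlapping groups of size K."""
    n = len(y)
    x = list(y)                                  # initialize at the data
    for _ in range(n_iter):
        # r[i]: Euclidean norm of the group starting at index i.
        r = [math.sqrt(sum(x[i + j] ** 2 for j in range(K)))
             for i in range(n - K + 1)]
        x_new = []
        for k in range(n):
            # v: sum of 1/r over all groups containing index k.
            v = sum(1.0 / max(r[i], 1e-12)
                    for i in range(max(0, k - K + 1), min(k, n - K) + 1))
            x_new.append(y[k] / (1.0 + lam * v))
        x = x_new
    return x

# Small cluster of large coefficients surrounded by noise-like values.
y = [0.1, 0.2, 3.0, 3.2, 2.9, 0.1, -0.2, 0.0]
x = ogs(y, lam=0.5, K=3)
print([round(v, 3) for v in x])
```

Each MM update majorizes the non-smooth penalty by a quadratic, so the cost can only decrease from one iterate to the next.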

    Wavelet regression using a Lévy prior model

    This thesis is concerned with nonparametric regression and regularization. In particular, wavelet regression using a Lévy prior model is investigated. The use of this prior is motivated by statistical properties, such as heavy tails, common in many datasets of interest, such as financial time series. The Lévy process we propose captures the heavy tails of the wavelet coefficients of an unknown function. We study the Besov regularity of the wavelet coefficients and establish the connection between the parameters of the Lévy wavelet prior model and Besov spaces. We first give a necessary and sufficient condition for the realizations of the prior model to fall into a certain class of Besov spaces. We show that the tempered stable distribution preserves its functional form across time scales, and we prove that this scaling behaviour can model the exponential-decay-across-scale property of the wavelet coefficients without imposing any specified structure on the coefficients' energy. We also introduce a Lévy wavelet mixture model to capture the sparseness of the wavelet coefficients, and show that this sparse model exhibits a thresholding rule. We further study the Lévy tempered stable prior model in a Bayesian framework: for the prior specified, we give a closed form for the posterior Lévy measure of the wavelet coefficients and estimate the hyperparameters of the prior model both in a simulation study and for the S&P 500 time series. Finally, we focus on density estimation using a penalized likelihood approach. We study the wavelet Tsallis entropy and Fisher information and give closed-form expressions for these measures when the wavelet coefficients are driven by a tempered stable process. We then develop an entropic regularization based on the wavelet Tsallis entropy and show that the penalized maximum likelihood method improves the convergence of the estimates.

    Wavelet-based noise reduction of cDNA microarray images

    The advent of microarray imaging technology has led to enormous progress in the life sciences by allowing scientists to analyze the expression of thousands of genes at a time. For complementary DNA (cDNA) microarray experiments, the raw data are a pair of red and green channel images corresponding to the treatment and control samples. These images are contaminated by a high level of noise due to the numerous noise sources affecting the image formation. A major challenge of microarray image analysis is the extraction of accurate gene expression measurements from the noisy microarray images. A crucial step in this process is denoising, which consists of reducing the noise in the observed microarray images while preserving the signal information as much as possible. This thesis deals with the problem of developing novel methods for reducing noise in cDNA microarray images for accurate estimation of the gene expression levels. Denoising methods based on the wavelet transform have shown significant success when applied to natural images. However, these methods are not very efficient for reducing noise in cDNA microarray images. An important reason for this is that existing methods are only capable of processing the red and green channel images separately. In doing so, they ignore the signal correlation as well as the noise correlation that exists between the wavelet coefficients of the two channels. The primary objective of this research is to design efficient wavelet-based noise reduction algorithms for cDNA microarray images that take into account these inter-channel dependencies by 'jointly' estimating the noise-free coefficients in both the channels. Denoising algorithms are developed using two types of wavelet transforms, namely, the frequently-used discrete wavelet transform (DWT) and the complex wavelet transform (CWT). The main advantage of using the DWT for denoising is that this transform is computationally very efficient. 
In order to obtain a better denoising performance for microarray images, however, the CWT is preferred to the DWT because the former has good directional selectivity properties that are necessary for better representation of the circular edges of spots. The linear minimum mean squared error and maximum a posteriori estimation techniques are used to develop bivariate estimators for the noise-free coefficients of the two images. These estimators are derived by utilizing appropriate joint probability density functions for the image coefficients as well as the noise coefficients of the two channels. Extensive experiments are carried out on a large set of cDNA microarray images to evaluate the performance of the proposed denoising methods as compared to the existing ones. Comparisons are made using standard metrics such as the peak signal-to-noise ratio (PSNR) for measuring the amount of noise removed from the pixels of the images, and the mean absolute error for measuring the accuracy of the estimated log-intensity ratios obtained from the denoised version of the images. Results indicate that the proposed denoising methods that are developed specifically for the microarray images do, indeed, lead to more accurate estimation of gene expression levels. Thus, it is expected that the proposed methods will play a significant role in improving the reliability of the results obtained from practical microarray experiments.
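
As an illustration of joint estimation of a coefficient pair across two channels, here is the classical Sendur-Selesnick bivariate MAP shrinkage rule applied to a hypothetical red/green coefficient pair. This rule comes from the bivariate-shrinkage literature (originally for parent-child coefficient pairs) and is a stand-in illustration, not necessarily the exact estimator derived in the thesis.

```python
import math

def bivariate_shrink(y1, y2, sigma_n, sigma):
    """Joint MAP shrinkage of a coefficient pair (y1, y2) under a
    circularly symmetric heavy-tailed joint prior.  Both channels share
    one shrinkage factor, so a strong coefficient in either channel
    protects its partner from being zeroed out."""
    mag = math.sqrt(y1 ** 2 + y2 ** 2)
    num = max(mag - math.sqrt(3.0) * sigma_n ** 2 / sigma, 0.0)
    shrink = num / max(mag, 1e-12)
    return y1 * shrink, y2 * shrink

# Hypothetical red/green wavelet coefficients at the same spatial
# location, with assumed noise and signal standard deviations.
r, g = bivariate_shrink(2.5, 0.4, sigma_n=1.0, sigma=2.0)
print(r, g)
```

Note that a separate univariate soft threshold would likely kill the weak green coefficient, whereas the joint rule retains it because its red partner is strong; the two outputs keep the original ratio.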

    Parameter estimation for multivariate generalized Gaussian distributions

    Due to its heavy-tailed and fully parametric form, the multivariate generalized Gaussian distribution (MGGD) has been receiving much attention in signal and image processing applications. Considering the estimation issue of the MGGD parameters, the main contribution of this paper is to prove that the maximum likelihood estimator (MLE) of the scatter matrix exists and is unique up to a scalar factor, for a given shape parameter belonging to the interval [0,1]. Moreover, an estimation algorithm based on a Newton-Raphson recursion is proposed for computing the MLE of the MGGD parameters. Various experiments conducted on synthetic and real data are presented to illustrate the theoretical derivations in terms of number of iterations and number of samples for different values of the shape parameter. The main conclusion of this work is that the parameters of MGGDs can be estimated using the maximum likelihood principle with good performance.
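
The MLE stationarity condition for the scatter matrix suggests a simple fixed-point sketch (the paper itself uses a Newton-Raphson recursion for the same equation). The 2-D data below are synthetic Gaussian draws standing in for MGGD samples, and the shape parameter value is illustrative.

```python
import random

random.seed(2)

def inv2(S):
    """Inverse of a 2x2 matrix given as nested lists."""
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    return [[S[1][1] / det, -S[0][1] / det],
            [-S[1][0] / det, S[0][0] / det]]

def mggd_scatter(ys, beta, n_iter=100):
    """Fixed-point iteration for the 2-D MGGD scatter-matrix MLE,
    iterating the stationarity condition
        Sigma = (beta/N) * sum_i u_i^(beta-1) * y_i y_i^T,
    where u_i = y_i^T Sigma^{-1} y_i and beta is the shape parameter
    (beta = 1 recovers the Gaussian sample covariance in one step)."""
    N = len(ys)
    S = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(n_iter):
        Sinv = inv2(S)
        acc = [[0.0, 0.0], [0.0, 0.0]]
        for a, b in ys:
            u = (a * (Sinv[0][0] * a + Sinv[0][1] * b)
                 + b * (Sinv[1][0] * a + Sinv[1][1] * b))
            w = beta * u ** (beta - 1.0)
            acc[0][0] += w * a * a; acc[0][1] += w * a * b
            acc[1][0] += w * b * a; acc[1][1] += w * b * b
        S = [[acc[i][j] / N for j in range(2)] for i in range(2)]
    return S

# Synthetic correlated samples (Gaussian draws stand in for MGGD data;
# the underlying covariance is [[1.0, 0.8], [0.8, 1.0]]).
ys = []
for _ in range(2000):
    x = random.gauss(0.0, 1.0)
    z = random.gauss(0.0, 1.0)
    ys.append((x, 0.8 * x + 0.6 * z))

S = mggd_scatter(ys, beta=0.8)
print(S)
```

Since the scatter matrix is identified only up to a scalar factor, a practical implementation would normalize it (e.g. to unit trace) at each iteration; the correlation structure it recovers is unaffected.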