Generalized SURE for Exponential Families: Applications to Regularization
Stein's unbiased risk estimate (SURE) was proposed by Stein for the
independent, identically distributed (iid) Gaussian model in order to derive
estimates that dominate least-squares (LS). In recent years, the SURE criterion
has been employed in a variety of denoising problems for choosing
regularization parameters that minimize an estimate of the mean-squared error
(MSE). However, its use has been limited to the iid case which precludes many
important applications. In this paper we begin by deriving a SURE counterpart
for general, not necessarily iid distributions from the exponential family.
This enables extending the SURE design technique to a much broader class of
problems. Based on this generalization we suggest a new method for choosing
regularization parameters in penalized LS estimators. We then demonstrate its
superior performance over the conventional generalized cross validation
approach and the discrepancy method in the context of image deblurring and
deconvolution. The SURE technique can also be used to design estimates without
predefining their structure. However, allowing for too many free parameters
impairs the performance of the resulting estimates. To address this inherent
tradeoff we propose a regularized SURE objective. Based on this design
criterion, we derive a wavelet denoising strategy that is similar in spirit to
the standard soft-threshold approach but can lead to improved MSE performance.
Comment: to appear in the IEEE Transactions on Signal Processing
A nonlinear Stein based estimator for multichannel image denoising
The use of multicomponent images has become widespread with the improvement
of multisensor systems having increased spatial and spectral resolutions.
However, the observed images are often corrupted by an additive Gaussian noise.
In this paper, we are interested in multichannel image denoising based on a
multiscale representation of the images. A multivariate statistical approach is
adopted to take into account both the spatial and the inter-component
correlations existing between the different wavelet subbands. More precisely,
we propose a new parametric nonlinear estimator which generalizes many reported
denoising methods. The derivation of the optimal parameters is achieved by
applying Stein's principle in the multivariate case. Experiments performed on
multispectral remote sensing images clearly indicate that our method
outperforms conventional wavelet denoising techniques
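The multivariate use of Stein's principle can be illustrated with the classical positive-part James-Stein shrinker, the textbook rule of the kind such parametric nonlinear estimators generalize; this is not the paper's estimator, only the simplest multivariate Stein-type shrinkage of a block of coefficients across channels.

```python
import numpy as np

def james_stein_block(y, sigma):
    """Positive-part James-Stein shrinkage of one d-dimensional block of
    coefficients (e.g. the same wavelet location across channels) observed
    in iid Gaussian noise of std sigma: the block is scaled by
    max(0, 1 - (d-2)*sigma^2 / ||y||^2)."""
    d = y.size
    shrink = max(0.0, 1.0 - (d - 2) * sigma**2 / np.dot(y, y))
    return shrink * y
```

Blocks with large joint energy relative to the noise are kept almost unchanged, while blocks at the noise level are shrunk to zero, exploiting inter-component correlation without thresholding each channel separately.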
A Non-Local Structure Tensor Based Approach for Multicomponent Image Recovery Problems
Non-Local Total Variation (NLTV) has emerged as a useful tool in variational
methods for image recovery problems. In this paper, we extend the NLTV-based
regularization to multicomponent images by taking advantage of the Structure
Tensor (ST) resulting from the gradient of a multicomponent image. The proposed
approach allows us to penalize the non-local variations, jointly for the
different components, through various matrix norms.
To facilitate the choice of the hyper-parameters, we adopt a constrained convex
optimization approach in which we minimize the data fidelity term subject to a
constraint involving the ST-NLTV regularization. The resulting convex
optimization problem is solved with a novel epigraphical projection method.
This formulation can be efficiently implemented thanks to the flexibility
offered by recent primal-dual proximal algorithms. Experiments are carried out
for multispectral and hyperspectral images. The results demonstrate the
interest of introducing a non-local structure tensor regularization and show
that the proposed approach leads to significant improvements in terms of
convergence speed over current state-of-the-art methods.
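As a minimal illustration of a structure-tensor penalty, the following sums, over pixels, the nuclear norm of the per-pixel Jacobian of a multicomponent image. The paper's ST-NLTV regularizer replaces these local gradients with non-local ones and admits other matrix norms; the forward differences and the nuclear norm here are illustrative choices only.

```python
import numpy as np

def st_nuclear_penalty(img):
    """Sum over pixels of the nuclear norm (sum of singular values) of the
    C x 2 Jacobian of a multicomponent image of shape (H, W, C).
    Penalizing the singular values jointly couples the components, unlike
    channel-by-channel total variation."""
    gy = np.diff(img, axis=0, append=img[-1:, :, :])  # vertical forward diff
    gx = np.diff(img, axis=1, append=img[:, -1:, :])  # horizontal forward diff
    J = np.stack([gx, gy], axis=-1)                   # shape (H, W, C, 2)
    sv = np.linalg.svd(J, compute_uv=False)           # singular values per pixel
    return sv.sum()
```

A constant image has zero penalty, and an edge shared by all components is cheaper than independent edges in each component, which is the motivation for treating the gradient as a matrix rather than a stack of scalars.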
Geodesics on the manifold of multivariate generalized Gaussian distributions with an application to multicomponent texture discrimination
We consider the Rao geodesic distance (GD) based on the Fisher information as a similarity measure on the manifold of zero-mean multivariate generalized Gaussian distributions (MGGD). The MGGD is shown to be an adequate model for the heavy-tailed wavelet statistics in multicomponent images, such as color or multispectral images. We discuss the estimation of MGGD parameters using various methods. We apply the GD between MGGDs to color texture discrimination in several classification experiments, taking into account the correlation structure between the spectral bands in the wavelet domain. We compare the performance, both in terms of texture discrimination capability and computational load, of the GD and the Kullback-Leibler divergence (KLD). Likewise, both uni- and multivariate generalized Gaussian models are evaluated, characterized by a fixed or a variable shape parameter. The modeling of the interband correlation significantly improves classification efficiency, while the GD is shown to consistently outperform the KLD as a similarity measure.
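On the Gaussian slice of the MGGD manifold (fixed shape parameter), the Rao geodesic distance between two zero-mean distributions has a closed form in the eigenvalues of S1^{-1} S2, which can be sketched as follows. Normalization conventions for the Fisher-Rao metric differ by a constant factor across the literature, so the scaling below is one common choice.

```python
import numpy as np

def rao_gd_gaussian(S1, S2):
    """Rao geodesic distance between zero-mean multivariate Gaussians with
    covariances S1 and S2: sqrt(0.5 * sum_i log^2(lambda_i)), where the
    lambda_i are the eigenvalues of S1^{-1} S2, computed here via a
    Cholesky whitening of S1 to keep the problem symmetric."""
    L = np.linalg.cholesky(S1)
    Linv = np.linalg.inv(L)
    M = Linv @ S2 @ Linv.T          # symmetric PD, similar to S1^{-1} S2
    lam = np.linalg.eigvalsh(M)
    return np.sqrt(0.5 * np.sum(np.log(lam) ** 2))
```

Because the eigenvalues of S2^{-1} S1 are the reciprocals of those of S1^{-1} S2, the distance is symmetric in its arguments, a property the KLD lacks.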
Monte Carlo-based Noise Compensation in Coil Intensity Corrected Endorectal MRI
Background: Prostate cancer is one of the most common forms of cancer found
in males making early diagnosis important. Magnetic resonance imaging (MRI) has
been useful in visualizing and localizing tumor candidates and with the use of
endorectal coils (ERC), the signal-to-noise ratio (SNR) can be improved. The
coils introduce intensity inhomogeneities and the surface coil intensity
correction built into MRI scanners is used to reduce these inhomogeneities.
However, the correction typically performed at the MRI scanner level leads to
noise amplification and noise level variations. Methods: In this study, we
introduce a new Monte Carlo-based noise compensation approach for coil
intensity corrected endorectal MRI which allows for effective noise
compensation and preservation of details within the prostate. The approach
accounts for the ERC SNR profile via a spatially-adaptive noise model for
correcting non-stationary noise variations. Such a method is useful
particularly for improving the image quality of coil intensity corrected
endorectal MRI data performed at the MRI scanner level and when the original
raw data is not available. Results: SNR and contrast-to-noise ratio (CNR)
analysis in patient experiments demonstrate an average improvement of 11.7 dB
and 11.2 dB respectively over uncorrected endorectal MRI, and provides strong
performance when compared to existing approaches. Conclusions: A new noise
compensation method was developed for the purpose of improving the quality of
coil intensity corrected endorectal MRI data performed at the MRI scanner
level. We illustrate that promising noise compensation performance can be
achieved for the proposed approach, which is particularly important for
processing coil intensity corrected endorectal MRI data performed at the MRI
scanner level and when the original raw data is not available.
Comment: 23 pages
A SURE Approach for Digital Signal/Image Deconvolution Problems
In this paper, we are interested in the classical problem of restoring data
degraded by a convolution and the addition of a white Gaussian noise. The
originality of the proposed approach is two-fold. Firstly, we formulate the
restoration problem as a nonlinear estimation problem leading to the
minimization of a criterion derived from Stein's unbiased quadratic risk
estimate. Secondly, the deconvolution procedure is performed using any analysis
and synthesis frames that can be overcomplete or not. New theoretical results
concerning the calculation of the variance of the Stein's risk estimate are
also provided in this work. Simulations carried out on natural images show the
good performance of our method w.r.t. conventional wavelet-based restoration
methods.
Parameter optimization for local polynomial approximation based intersection confidence interval filter using genetic algorithm: an application for brain MRI image de-noising
Magnetic resonance imaging (MRI) is extensively exploited for the accurate assessment of pathological changes as well as for diagnosis. Conversely, MRI suffers from various shortcomings such as ambient noise from the environment, acquisition noise from the equipment, the presence of background tissue, breathing motion, body fat, etc. Consequently, noise reduction is critical, as the diverse types of generated noise limit the efficiency of medical image diagnosis. The local polynomial approximation based intersection confidence interval (LPA-ICI) filter is one of the effective de-noising filters. This filter requires an adjustment of the ICI parameters for efficient window size selection. From the wide range of ICI parametric values, finding the best set of tuned values is itself an optimization problem. The present study proposes a novel technique for parameter optimization of the LPA-ICI filter using a genetic algorithm (GA) for brain MR image de-noising. The experimental results prove that the proposed method outperforms the LPA-ICI method for de-noising in terms of various performance metrics for different noise variance levels. The obtained results also indicate that the ICI parameter values depend on the noise variance and the image under test.
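The GA-based parameter search can be sketched with a minimal real-coded genetic algorithm. The quadratic objective below is only a stand-in for the de-noising error of the LPA-ICI output as a function of the ICI parameter, and the population size, generation count, and mutation scale are illustrative settings, not the paper's.

```python
import numpy as np

def ga_minimize(obj, lo, hi, pop=30, gens=40, seed=0):
    """Minimal real-coded genetic algorithm (tournament selection, blend
    crossover, Gaussian mutation) minimizing obj over the interval [lo, hi],
    e.g. for tuning a scalar filter parameter such as the ICI threshold."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, pop)
    for _ in range(gens):
        f = np.array([obj(v) for v in x])
        # tournament selection: the fitter of two random individuals survives
        i, j = rng.integers(0, pop, (2, pop))
        parents = np.where(f[i] < f[j], x[i], x[j])
        # blend crossover with a randomly permuted partner
        partners = parents[rng.permutation(pop)]
        a = rng.uniform(0, 1, pop)
        x = a * parents + (1 - a) * partners
        # Gaussian mutation, clipped to the search range
        x = np.clip(x + rng.normal(0, 0.05 * (hi - lo), pop), lo, hi)
    f = np.array([obj(v) for v in x])
    return x[int(np.argmin(f))]

# Toy objective standing in for the de-noising error as a function of the
# ICI parameter; its minimizer 2.3 is arbitrary.
best = ga_minimize(lambda g: (g - 2.3) ** 2, 0.0, 5.0)
```

In the study's setting, `obj` would evaluate a de-noising quality metric of the LPA-ICI output for a candidate parameter value, so each fitness evaluation involves running the filter once.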
Fuzzy logic-based approach to wavelet denoising of 3D images produced by time-of-flight cameras
In this paper we present a new denoising method for the depth images of a 3D imaging sensor, based on the time-of-flight principle. We propose novel ways to use luminance-like information produced by a time-of-flight camera along with depth images. Firstly, we propose a wavelet-based method for estimating the noise level in depth images, using luminance information. The underlying idea is that luminance carries information about the power of the optical signal reflected from the scene and is hence related to the signal-to-noise ratio for every pixel within the depth image. In this way, we can efficiently solve the difficult problem of estimating the non-stationary noise within the depth images. Secondly, we use luminance information to better restore object boundaries masked with noise in the depth images. Information from luminance images is introduced into the estimation formula through the use of fuzzy membership functions. In particular, we take the correlation between the measured depth and luminance into account, and the fact that edges (object boundaries) present in the depth image are likely to occur in the luminance image as well. The results on real 3D images show a significant improvement over the state-of-the-art in the field. © 2010 Optical Society of America
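The fusion of depth and luminance evidence through fuzzy membership functions can be sketched as follows. The piecewise-linear memberships, the thresholds, and the probabilistic-sum fuzzy OR are illustrative stand-ins for the paper's actual estimation formula; they only show how a luminance edge can rescue a depth coefficient that would otherwise be shrunk as noise.

```python
import numpy as np

def membership_large(x, a, b):
    """Piecewise-linear fuzzy membership of |x| in the class 'large':
    0 below a, rising linearly to 1 at b. a and b are illustrative."""
    return np.clip((np.abs(x) - a) / (b - a), 0.0, 1.0)

def fuzzy_shrink(depth_coef, lum_coef, a_d, b_d, a_l, b_l):
    """Keep a wavelet coefficient of the depth image to the degree that it
    is 'large' in depth OR coincides with a luminance edge, combining the
    two memberships with the probabilistic-sum t-conorm (fuzzy OR)."""
    m_d = membership_large(depth_coef, a_d, b_d)
    m_l = membership_large(lum_coef, a_l, b_l)
    keep = m_d + m_l - m_d * m_l    # fuzzy OR of the two evidences
    return keep * depth_coef
```

A small depth coefficient with no luminance support is suppressed, a large one passes unchanged, and a small depth coefficient backed by a strong coincident luminance edge is preserved, which is how object boundaries masked by noise are restored.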