Multiplicative Noise Removal Using Variable Splitting and Constrained Optimization
Multiplicative noise (also known as speckle noise) models are central to the
study of coherent imaging systems, such as synthetic aperture radar and sonar,
and ultrasound and laser imaging. These models introduce two additional layers
of difficulties with respect to the standard Gaussian additive noise scenario:
(1) the noise is multiplied by (rather than added to) the original image; (2)
the noise is not Gaussian, with Rayleigh and Gamma being commonly used
densities. These two features of multiplicative noise models preclude the
direct application of most state-of-the-art algorithms, which are designed for
solving unconstrained optimization problems where the objective has two terms:
a quadratic data term (log-likelihood), reflecting the additive and Gaussian
nature of the noise, plus a convex (possibly nonsmooth) regularizer (e.g., a
total variation or wavelet-based regularizer/prior). In this paper, we address
these difficulties by: (1) converting the multiplicative model into an additive
one by taking logarithms, as proposed by some other authors; (2) using variable
splitting to obtain an equivalent constrained problem; and (3) dealing with
this optimization problem using the augmented Lagrangian framework. A set of
experiments shows that the proposed method, which we name MIDAL (multiplicative
image denoising by augmented Lagrangian), yields state-of-the-art results both
in terms of speed and denoising performance.
Comment: 11 pages, 7 figures, 2 tables. To appear in the IEEE Transactions on Image Processing.
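The recipe the abstract describes can be sketched in a few lines. The following is a minimal illustration, not the authors' MIDAL code: it converts the multiplicative model y = x · n into an additive one by taking logarithms, then runs a generic variable-splitting / augmented-Lagrangian (ADMM-style) loop. A soft-threshold proximal step stands in for the paper's TV or wavelet regularizer, and the penalty parameters `lam` and `mu` are illustrative choices.

```python
# Illustrative sketch of the log-transform + variable-splitting idea
# (NOT the authors' MIDAL implementation; the L1 prox is a stand-in
# for their TV/wavelet regularizer).
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (placeholder regularizer)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def midal_like_denoise(y, lam=0.5, mu=1.0, iters=50):
    z = np.log(y)                      # additive-noise (log) domain
    x = z.copy()                       # estimate of the log-image
    u = np.zeros_like(z)               # splitting variable (u = x)
    d = np.zeros_like(z)               # scaled dual variable
    for _ in range(iters):
        # x-update: quadratic data term + quadratic coupling (closed form)
        x = (z + mu * (u - d)) / (1.0 + mu)
        # u-update: prox of the regularizer evaluated at x + d
        u = soft_threshold(x + d, lam / mu)
        # dual ascent on the constraint u = x
        d += x - u
    return np.exp(x)                   # back to the intensity domain

rng = np.random.default_rng(0)
clean = np.ones((8, 8)) * 2.0
speckle = rng.gamma(shape=10.0, scale=0.1, size=clean.shape)  # mean ~1
noisy = clean * speckle
restored = midal_like_denoise(noisy)
print(restored.shape)  # (8, 8)
```

The closed-form x-update is what the Gaussian log-likelihood in the log domain buys: the data term becomes quadratic, so only the regularizer needs a proximal step.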
Multiplicative Noise Removal Using L1 Fidelity on Frame Coefficients
We address the denoising of images contaminated with multiplicative noise,
e.g. speckle noise. Classical ways to solve such problems are filtering,
statistical (Bayesian) methods, variational methods, and methods that convert
the multiplicative noise into additive noise (using a logarithm), shrink the
coefficients of the log-image data in a wavelet basis or a frame, and
transform the result back using an exponential function. We propose
a method composed of several stages: we use the log-image data and apply a
reasonable, deliberately under-optimal hard-thresholding to its curvelet
transform; then we apply a variational method that minimizes a specialized
criterion composed of an L1 data-fitting term on the thresholded coefficients
and a Total Variation (TV) regularization term in the image domain; the
restored image is the exponential of the obtained minimizer, weighted so that
the mean of the original image is preserved. Our restored images combine the advantages of
shrinkage and variational methods and avoid their main drawbacks. For the
minimization stage, we propose a properly adapted fast minimization scheme
based on Douglas-Rachford splitting. We prove the existence of a minimizer of
our specialized criterion and demonstrate the convergence of the minimization
scheme. The numerical results obtained outperform the main alternative
methods.
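The log/shrink/exponentiate pipeline described above can be outlined compactly. The sketch below is an illustrative stand-in, not the paper's method: a 2-D FFT replaces the curvelet frame, the hard-threshold level is an arbitrary choice, and the final rescaling enforces the mean-preservation step the abstract mentions.

```python
# Illustrative stand-in for the log-domain shrinkage pipeline
# (an FFT replaces the curvelet frame; threshold level is arbitrary).
import numpy as np

def hard_threshold(c, t):
    """Keep coefficients with magnitude >= t, zero the rest."""
    return np.where(np.abs(c) >= t, c, 0.0)

def log_shrink_denoise(y, thresh=0.5):
    z = np.log(y)
    coeffs = np.fft.fft2(z)                  # frame analysis (placeholder)
    coeffs = hard_threshold(coeffs, thresh * np.sqrt(z.size))
    z_hat = np.real(np.fft.ifft2(coeffs))    # frame synthesis
    x_hat = np.exp(z_hat)
    # weight the result so the mean of the observed image is preserved
    return x_hat * (y.mean() / x_hat.mean())

rng = np.random.default_rng(1)
clean = np.full((16, 16), 3.0)
noisy = clean * rng.gamma(4.0, 0.25, clean.shape)   # unit-mean speckle
restored = log_shrink_denoise(noisy)
print(abs(restored.mean() - noisy.mean()) < 1e-9)   # True: mean preserved
```

In the paper the thresholded coefficients then anchor an L1 data-fitting term inside a TV-regularized variational stage; the sketch stops at the shrinkage-and-rescale step.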
DoPAMINE: Double-sided Masked CNN for Pixel Adaptive Multiplicative Noise Despeckling
We propose DoPAMINE, a new neural network based multiplicative noise
despeckling algorithm. Our algorithm is inspired by Neural AIDE (N-AIDE), which
is a recently proposed neural adaptive image denoiser. While the original
N-AIDE was designed for the additive noise case, we show that the same
framework, i.e., adaptively learning a network for pixel-wise affine denoisers
by minimizing an unbiased estimate of MSE, can be applied to the multiplicative
noise case as well. Moreover, we derive a double-sided masked CNN architecture
which can control the variance of the activation values in each layer and
converge quickly to high denoising performance during supervised training. In
our experiments, we show that DoPAMINE possesses high adaptivity, fine-tuning
its network parameters on the given noisy image, and achieves significantly
better despeckling results than SAR-DRN, a state-of-the-art CNN-based
algorithm.
Comment: AAAI 2019 Camera-Ready Version.
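The architectural idea behind the masking can be shown without a neural network. The sketch below is illustrative only (not the DoPAMINE model): a filter whose receptive field excludes the center pixel produces a context-only prediction, so the pixel-wise affine map x̂ᵢ = a·yᵢ + bᵢ uses a bᵢ that does not depend on yᵢ itself; the fixed slope `a` and the 3×3 window are hypothetical choices.

```python
# Illustrative sketch of a center-masked receptive field feeding a
# pixel-wise affine denoiser (NOT the DoPAMINE network).
import numpy as np

def masked_average(y):
    """3x3 mean over neighbors with the center tap zeroed out."""
    k = np.ones((3, 3))
    k[1, 1] = 0.0
    k /= k.sum()
    p = np.pad(y, 1, mode="reflect")
    out = np.zeros_like(y)
    H, W = y.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = (p[i:i + 3, j:j + 3] * k).sum()
    return out

def affine_denoise(y, a=0.3):
    """x_hat_i = a * y_i + b_i, with b_i predicted from context only."""
    b = (1.0 - a) * masked_average(y)   # independent of y_i itself
    return a * y + b

rng = np.random.default_rng(2)
clean = np.full((12, 12), 5.0)
noisy = clean * rng.gamma(9.0, 1 / 9.0, clean.shape)  # unit-mean speckle
den = affine_denoise(noisy)
print(den.std() < noisy.std())   # neighbor averaging reduces speckle spread
```

Because bᵢ never sees yᵢ, an unbiased estimate of the MSE of the affine map can be formed from the noisy image alone, which is what lets the network be adapted (fine-tuned) to a single given image.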
Smoothing dynamic positron emission tomography time courses using functional principal components
A functional smoothing approach to the analysis of PET time course data is presented. By borrowing information across space and accounting for this pooling through a nonparametric covariate adjustment, it is possible to smooth the PET time course data and thus reduce the noise. A new model for functional data analysis, the Multiplicative Nonparametric Random Effects Model, is introduced to account more accurately for the variation in the data. A locally adaptive bandwidth choice helps determine the correct amount of smoothing at each time point. This preprocessing step to smooth the data then allows subsequent analysis by methods such as spectral analysis to be substantially improved in terms of mean squared error.
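The borrowing-across-space idea can be illustrated with a bare-bones functional PCA smoother. The sketch below is a stand-in, not the paper's model (no nonparametric covariate adjustment or adaptive bandwidth): voxel time courses are pooled, the leading principal component time shapes are estimated by SVD, and each course is replaced by its low-rank projection; the simulated exponential washout curve and component count are illustrative assumptions.

```python
# Illustrative functional-PCA smoothing of simulated voxel time courses
# (a stand-in for the paper's Multiplicative Nonparametric Random
# Effects Model, which this sketch does not implement).
import numpy as np

def fpca_smooth(courses, n_components=2):
    """courses: (n_voxels, n_times) array; returns rank-k reconstruction."""
    mean = courses.mean(axis=0)
    centered = courses - mean
    # SVD of the pooled data estimates the principal time-course shapes
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    k = n_components
    return (U[:, :k] * s[:k]) @ Vt[:k] + mean

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 60)
true = np.outer(rng.uniform(1, 3, 100), np.exp(-2 * t))  # shared shape
noisy = true + rng.normal(0, 0.2, true.shape)
smooth = fpca_smooth(noisy, n_components=1)
err_noisy = np.mean((noisy - true) ** 2)
err_smooth = np.mean((smooth - true) ** 2)
print(err_smooth < err_noisy)   # pooling across voxels reduces MSE
```

Projecting onto a few shared components is the crudest form of borrowing strength across space; the abstract's locally adaptive bandwidth would additionally tune the amount of smoothing at each time point.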