Poisson noise reduction with non-local PCA
Photon-limited imaging arises when the number of photons collected by a
sensor array is small relative to the number of detector elements. Photon
limitations are an important concern for many applications such as spectral
imaging, night vision, nuclear medicine, and astronomy. Typically a Poisson
distribution is used to model these observations, and the inherent
heteroscedasticity of the data combined with standard noise removal methods
yields significant artifacts. This paper introduces a novel denoising algorithm
for photon-limited images which combines elements of dictionary learning and
sparse patch-based representations of images. The method employs both an
adaptation of Principal Component Analysis (PCA) for Poisson noise and recently
developed sparsity-regularized convex optimization algorithms for
photon-limited images. A comprehensive empirical evaluation of the proposed
method helps characterize the performance of this approach relative to other
state-of-the-art denoising methods. The results reveal that, despite its
conceptual simplicity, Poisson PCA-based denoising appears to be highly
competitive in very low-light regimes.
Comment: erratum: the image Man is wrongly named Pepper in the journal version
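The heteroscedasticity mentioned in the abstract can be illustrated with the classical Anscombe transform, a standard variance stabilizer for Poisson data (this is background context, not the paper's Poisson-adapted PCA; all values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Poisson noise is signal-dependent: the variance equals the mean intensity,
# so dim and bright regions carry very different noise levels.
low = rng.poisson(2.0, size=100_000)    # dim region (low photon count)
high = rng.poisson(50.0, size=100_000)  # bright region

def anscombe(x):
    """Anscombe transform: approximately stabilizes Poisson variance to 1."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

print(low.var(), high.var())  # heteroscedastic: roughly 2 and 50
print(anscombe(low).var(), anscombe(high).var())
```

The stabilization is accurate for high counts but degrades at very low counts, which is precisely the regime that motivates exact Poisson models such as the one in this paper.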
Image denoising with multi-layer perceptrons, part 1: comparison with existing algorithms and with bounds
Image denoising can be described as the problem of mapping from a noisy image
to a noise-free image. The best currently available denoising methods
approximate this mapping with cleverly engineered algorithms. In this work we
attempt to learn this mapping directly with plain multi-layer perceptrons (MLPs)
applied to image patches. We will show that by training on large image
databases we are able to outperform the current state-of-the-art image
denoising methods. In addition, our method achieves results that are superior
to one type of theoretical bound and goes a large way toward closing the gap
with a second type of theoretical bound. Our approach is easily adapted to less
extensively studied types of noise, such as mixed Poisson-Gaussian noise, JPEG
artifacts, salt-and-pepper noise and noise resembling stripes, for which we
achieve excellent results as well. We will show that combining a block-matching
procedure with MLPs can further improve the results on certain images. In a
second paper, we detail the training trade-offs and the inner mechanisms of our
MLPs.
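The "noisy patch in, denoised patch out" mapping the abstract describes can be sketched as a toy forward pass (shapes and widths here are illustrative assumptions, not the paper's architecture, and the weights are untrained):

```python
import numpy as np

rng = np.random.default_rng(1)

patch_size = 17 * 17  # flattened input patch (illustrative size)
hidden = 512          # hidden-layer width (illustrative)

# Randomly initialized weights stand in for a trained network.
W1 = rng.normal(0.0, 0.01, (hidden, patch_size))
b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 0.01, (patch_size, hidden))
b2 = np.zeros(patch_size)

def denoise_patch(x):
    """Map a flattened noisy patch to a denoised-patch estimate."""
    h = np.tanh(W1 @ x + b1)  # nonlinear hidden layer
    return W2 @ h + b2        # linear output layer, same size as the patch

noisy = rng.normal(0.5, 0.1, patch_size)
estimate = denoise_patch(noisy)
print(estimate.shape)  # (289,)
```

In practice such a network would be trained by regression on pairs of noisy and clean patches drawn from a large image database, which is where the method's strength comes from.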
Sparsity Based Poisson Denoising with Dictionary Learning
The problem of Poisson denoising appears in various imaging applications,
such as low-light photography, medical imaging and microscopy. In cases of high
SNR, several transformations exist so as to convert the Poisson noise into an
additive i.i.d. Gaussian noise, for which many effective algorithms are
available. However, in a low SNR regime, these transformations are
significantly less accurate, and a strategy that relies directly on the true
noise statistics is required. A recent work by Salmon et al. took this route,
proposing a patch-based exponential image representation model based on GMM
(Gaussian mixture model), leading to state-of-the-art results. In this paper,
we propose to apply sparse-representation modeling to the image patches,
adopting the same exponential idea. Our scheme uses a greedy pursuit with a
bootstrapping-based stopping condition and dictionary learning within the
denoising process. The reconstruction performance of the proposed scheme is
competitive with leading methods in the high-SNR regime and achieves
state-of-the-art results in cases of low SNR.
Comment: 13 pages, 9 figures
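The greedy pursuit at the core of such schemes can be sketched with standard Orthogonal Matching Pursuit over a fixed dictionary (the paper adapts the pursuit and its stopping rule to Poisson statistics and learns the dictionary; the sketch below is the generic Gaussian-noise variant with an illustrative random dictionary):

```python
import numpy as np

rng = np.random.default_rng(2)

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select k atoms of D to fit y."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # Least-squares refit on all selected atoms.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Unit-norm random dictionary and a 3-sparse test signal.
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(128)
x_true[[5, 40, 99]] = [1.0, -2.0, 0.5]
y = D @ x_true
x_hat = omp(D, y, 3)
print(sorted(np.flatnonzero(x_hat)))
```

A dictionary-learning loop would alternate this sparse-coding step with dictionary updates over a training set of patches.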
Source detection using a 3D sparse representation: application to the Fermi gamma-ray space telescope
The Multiscale Variance Stabilization Transform (MSVST) has recently been
proposed for Poisson data denoising. This procedure, which is nonparametric, is
based on thresholding wavelet coefficients. We present in this paper an
extension of the MSVST to 3D data (in fact 2D-1D data) when the third dimension
is not a spatial dimension, but the wavelength, the energy, or the time. We
show that the MSVST can be used for detecting and characterizing astrophysical
sources of high-energy gamma rays, using realistic simulated observations with
the Large Area Telescope (LAT). The LAT was launched in June 2008 on the Fermi
Gamma-ray Space Telescope mission. The MSVST algorithm is very fast relative to
traditional likelihood model fitting, and permits efficient detection across
the time dimension and immediate estimation of spectral properties.
Astrophysical sources of gamma rays, especially active galaxies, are typically
quite variable, and our current work may lead to a reliable method to quickly
characterize the flaring properties of newly detected sources.
Comment: Accepted. Full paper with figures available at
http://jstarck.free.fr/aa08_msvst.pd
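The MSVST idea of coupling a variance-stabilizing transform with multiscale coefficient thresholding can be sketched in a deliberately simplified 1D form (the actual MSVST couples the stabilization with each wavelet scale and operates on 2D-1D data; here we assume a plain Anscombe transform followed by a one-level Haar transform, with illustrative intensities):

```python
import numpy as np

rng = np.random.default_rng(3)

# Piecewise-constant intensity observed through Poisson counting.
signal = np.repeat([5.0, 30.0, 5.0], 64)
counts = rng.poisson(signal).astype(float)

# Step 1: variance stabilization (Anscombe), so noise std is roughly 1.
z = 2.0 * np.sqrt(counts + 3.0 / 8.0)

# Step 2: one-level Haar analysis into approximation and detail bands.
approx = (z[0::2] + z[1::2]) / np.sqrt(2.0)
detail = (z[0::2] - z[1::2]) / np.sqrt(2.0)

# Step 3: hard-threshold small detail coefficients (~3-sigma rule).
detail[np.abs(detail) < 3.0] = 0.0

# Step 4: Haar synthesis, then the simple algebraic inverse of the VST.
even = (approx + detail) / np.sqrt(2.0)
odd = (approx - detail) / np.sqrt(2.0)
z_hat = np.empty_like(z)
z_hat[0::2], z_hat[1::2] = even, odd
estimate = (z_hat / 2.0) ** 2 - 3.0 / 8.0

print(np.abs(counts - signal).mean(), np.abs(estimate - signal).mean())
```

Even this one-level sketch reduces the mean absolute error in the flat regions; the full multiscale, source-detection pipeline applies the same stabilize-threshold-reconstruct logic across many scales and the extra (energy or time) dimension.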