Skellam shrinkage: Wavelet-based intensity estimation for inhomogeneous Poisson data
The ubiquity of integrating detectors in imaging and other applications
implies that a variety of real-world data are well modeled as Poisson random
variables whose means are in turn proportional to an underlying vector-valued
signal of interest. In this article, we first show how the so-called Skellam
distribution arises from the fact that Haar wavelet and filterbank transform
coefficients corresponding to measurements of this type are distributed as sums
and differences of Poisson counts. We then provide two main theorems on Skellam
shrinkage, one showing the near-optimality of shrinkage in the Bayesian setting
and the other providing for unbiased risk estimation in a frequentist context.
These results serve to yield new estimators in the Haar transform domain,
including an unbiased risk estimate for shrinkage of Haar-Fisz
variance-stabilized data, along with accompanying low-complexity algorithms for
inference. We conclude with a simulation study demonstrating the efficacy of
our Skellam shrinkage estimators, both for the standard univariate wavelet test
functions and for a variety of test images taken from the image processing
literature, confirming that they offer substantial performance improvements
over existing alternatives.
Comment: 27 pages, 8 figures, slight formatting changes; submitted for publication
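The abstract's starting observation can be illustrated numerically: an unnormalized Haar detail coefficient of two Poisson counts is their difference, which follows a Skellam distribution with mean lam1 - lam2 and variance lam1 + lam2. The sketch below is illustrative only (the intensities 7 and 3 are arbitrary choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
lam1, lam2 = 7.0, 3.0        # arbitrary illustrative intensities
n = 200_000

# Poisson counts from two adjacent detector bins
x1 = rng.poisson(lam1, size=n)
x2 = rng.poisson(lam2, size=n)

# Unnormalized Haar transform of the pair: scaling = sum, detail = difference.
# The scaling coefficient s is itself Poisson(lam1 + lam2), while the
# detail coefficient d = x1 - x2 is Skellam(lam1, lam2) distributed,
# with mean lam1 - lam2 and variance lam1 + lam2.
s = x1 + x2
d = x1 - x2

print(d.mean())  # close to lam1 - lam2 = 4
print(d.var())   # close to lam1 + lam2 = 10
```

The empirical mean and variance of `d` match the Skellam predictions, which is the structural fact the shrinkage theorems build on.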
Universal Denoising Networks: A Novel CNN Architecture for Image Denoising
We design a novel network architecture for learning discriminative image
models that are employed to efficiently tackle the problem of grayscale and
color image denoising. Based on the proposed architecture, we introduce two
different variants. The first network involves convolutional layers as a core
component, while the second one relies instead on non-local filtering layers
and thus it is able to exploit the inherent non-local self-similarity property
of natural images. As opposed to most of the existing deep network approaches,
which require the training of a specific model for each considered noise level,
the proposed models are able to handle a wide range of noise levels using a
single set of learned parameters, while they are very robust when the noise
degrading the latent image does not match the statistics of the noise used
during training. The latter argument is supported by results that we report on
publicly available images corrupted by unknown noise and which we compare
against solutions obtained by competing methods. At the same time the
introduced networks achieve excellent results under additive white Gaussian
noise (AWGN), which are comparable to those of the current state-of-the-art
network, while relying on a shallower architecture with one order of magnitude
fewer trained parameters. These properties make
the proposed networks ideal candidates to serve as sub-solvers in restoration
methods that deal with general inverse imaging problems such as deblurring,
demosaicking, super-resolution, etc.
Comment: Camera-ready paper to appear in the Proceedings of CVPR 201
Non-parametric PSF estimation from celestial transit solar images using blind deconvolution
Context: Characterization of instrumental effects in astronomical imaging is
important in order to extract accurate physical information from the
observations. The measured image in a real optical instrument is usually
represented by the convolution of an ideal image with a Point Spread Function
(PSF). Additionally, the image acquisition process is also contaminated by
other sources of noise (read-out, photon-counting). The problem of estimating
both the PSF and a denoised image is called blind deconvolution and is
ill-posed.
Aims: We propose a blind deconvolution scheme that relies on image
regularization. Contrary to most methods presented in the literature, our
method does not assume a parametric model of the PSF and can thus be applied to
any telescope.
Methods: Our scheme uses a wavelet analysis prior model on the image and weak
assumptions on the PSF. We use observations from a celestial transit, where the
occulting body can be assumed to be a black disk. These constraints allow us to
retain meaningful solutions for the filter and the image, eliminating trivial,
translated and interchanged solutions. Under an additive Gaussian noise
assumption, they also enforce noise canceling and avoid reconstruction
artifacts by promoting the whiteness of the residual between the blurred
observations and the cleaned data.
Results: Our method is applied to synthetic and experimental data. The PSF is
estimated for the SECCHI/EUVI instrument using the 2007 Lunar transit, and for
SDO/AIA using the 2012 Venus transit. Results show that the proposed
non-parametric blind deconvolution method is able to estimate the core of the
PSF with a similar quality to parametric methods proposed in the literature. We
also show that, if these parametric estimations are incorporated in the
acquisition model, the resulting PSF outperforms both the parametric and
non-parametric methods.
Comment: 31 pages, 47 figures
Denoising time-resolved microscopy image sequences with singular value thresholding.
Time-resolved imaging in microscopy is important for the direct observation of a range of dynamic processes in both the physical and life sciences. However, the image sequences are often corrupted by noise, either as a result of high frame rates or a need to limit the radiation dose received by the sample. Here we exploit both spatial and temporal correlations using low-rank matrix recovery methods to denoise microscopy image sequences. We also make use of an unbiased risk estimator to address the issue of how much thresholding to apply in a robust and automated manner. The performance of the technique is demonstrated using simulated image sequences, as well as experimental scanning transmission electron microscopy data, where surface adatom motion and nanoparticle structural dynamics are recovered at rates of up to 32 frames per second.
Funding: Junior Research Fellowship from Clare College.
This is the final version of the article. It first appeared from Elsevier via http://dx.doi.org/10.1016/j.ultramic.2016.05.00
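The core low-rank idea can be sketched as follows: stack the frames as rows of a matrix, so temporal correlation appears as low rank, and soft-threshold the singular values. This is a minimal sketch, not the authors' implementation; the function name and the fixed threshold `tau` are assumptions (the paper instead selects the threshold automatically via an unbiased risk estimate):

```python
import numpy as np

def svt_denoise(frames, tau):
    """Denoise an image sequence by soft-thresholding singular values.

    frames : array of shape (T, H, W); each frame becomes one row of a
             T x (H*W) matrix, so temporal correlation shows up as low rank.
    tau    : soft threshold on the singular values (user-supplied here;
             the paper chooses it via an unbiased risk estimator).
    """
    T, H, W = frames.shape
    M = frames.reshape(T, H * W).astype(float)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)      # shrink small (noise-dominated) values
    return ((U * s) @ Vt).reshape(T, H, W)
```

For a nearly static scene the stacked matrix is close to rank one, so most singular values carry only noise and thresholding them suppresses it.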
Enhancing Patch-Based Methods with Inter-frame Connectivity for Denoising Multi-frame Images
The 3D block matching (BM3D) method is among the state-of-the-art methods for
denoising images corrupted with additive white Gaussian noise. With the help of
a novel inter-frame connectivity strategy, we propose an extension of the BM3D
method for the scenario where we have multiple images of the same scene. Our
proposed extension outperforms all the existing trivial and non-trivial
extensions of patch-based denoising methods for multi-frame images. We can
achieve a quality difference of as high as 28% over the next best method
without using any additional parameters. Our method can also be easily
generalised to other similar existing patch-based methods.
The Effect of Radiometric Correction on Multicamera Algorithms
We present results confirming the importance of radiometric correction in multicamera applications. Although we compensate for systematic noise only, we review all noise sources in the video sensor (systematic and random). We use a simple model for radiometric correction of digital images. The correction procedure is tested on the disparity map computation in stereo matching, particularly in a case where stereo usually fails -- an almost textureless white surface. Without radiometric correction, the matching algorithm matches systematic noise components in the two images. With the correction, after removing the systematic noise, an improvement of 26% to 59% in relative RMS of the disparity map is demonstrated (the higher the intensity of the flat field, the greater the improvement).
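A standard form of the simple radiometric-correction model described above is dark-frame subtraction followed by flat-field gain normalization. The sketch below shows that generic scheme under stated assumptions; the function name and the unit-mean gain normalization are my choices, not necessarily the paper's exact model:

```python
import numpy as np

def radiometric_correct(raw, dark, flat):
    """Flat-field correction of systematic (fixed-pattern) sensor noise.

    raw  : image to be corrected
    dark : dark frame (sensor response with no illumination; the offset)
    flat : flat-field frame (image of a uniformly illuminated surface)

    The per-pixel gain is normalized to unit mean so the overall
    intensity scale of the corrected image is preserved.
    """
    gain = flat.astype(float) - dark       # per-pixel gain, up to a scale
    gain /= gain.mean()
    return (raw - dark) / np.maximum(gain, 1e-6)   # guard near-zero gains
```

With the offset and per-pixel gain removed, only random noise remains to disturb the stereo matcher, which is the effect the abstract measures on the disparity maps.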
Noise models for low counting rate coherent diffraction imaging
Coherent diffraction imaging (CDI) is a lens-less microscopy method that extracts the complex-valued exit field from intensity measurements alone. It is of particular importance for microscopy imaging with diffraction set-ups where high quality lenses are not available. The inversion scheme allowing the phase retrieval is based on the use of an iterative algorithm. In this work, we address the question of the choice of the iterative process in the case of data corrupted by photon or electron shot noise. Several noise models are presented and further used within two inversion strategies, the ordered subset and the scaled gradient. Based on analytical and numerical analysis together with Monte-Carlo studies, we show that any physical interpretation drawn from a CDI iterative technique requires a detailed understanding of the relationship between the noise model and the inversion method used. We observe that iterative algorithms often implicitly assume a noise model. For low counting rates, each noise model behaves differently. Moreover, the optimization strategy used introduces its own artefacts. Based on this analysis, we develop a hybrid strategy which works efficiently in the absence of an informed initial guess. Our work emphasises issues which should be considered carefully when inverting experimental data.
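The abstract's central point, that an iterative algorithm implicitly commits to a noise model through its data-fidelity term, can be made concrete by writing out the two most common choices. This is an illustrative sketch, not code from the paper; the function and its interface are assumptions:

```python
import numpy as np

def neg_log_lik(data, model, noise="poisson"):
    """Data-fidelity term implied by two common noise models.

    `model` holds the predicted intensities, `data` the measured counts.
    At high counts the two criteria nearly coincide; at low counting
    rates they differ, so an iterative scheme built on one of them
    implicitly assumes that noise model.
    """
    model = np.maximum(model, 1e-12)          # guard the logarithm
    if noise == "poisson":
        # Poisson NLL, dropping the data-only log-factorial constant
        return float(np.sum(model - data * np.log(model)))
    # Gaussian NLL with unit variance, i.e. ordinary least squares
    return float(0.5 * np.sum((data - model) ** 2))
```

Both criteria are minimized when the model matches the data, but they penalize mismatches differently at low counts, which is why the choice of fidelity term shapes the artefacts of the reconstruction.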