
    Enhancing Patch-Based Methods with Inter-frame Connectivity for Denoising Multi-frame Images

    The 3D block matching (BM3D) method is among the state-of-the-art methods for denoising images corrupted by additive white Gaussian noise. With the help of a novel inter-frame connectivity strategy, we propose an extension of the BM3D method to the scenario where multiple images of the same scene are available. Our proposed extension outperforms all existing trivial and non-trivial extensions of patch-based denoising methods for multi-frame images, achieving a quality improvement of up to 28% over the next-best method without any additional parameters. Our method can also be easily generalised to other similar patch-based methods.
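    The core idea, grouping mutually similar patches across frames before filtering them jointly, can be illustrated with a deliberately simplified sketch. Plain averaging of the matched patches stands in for BM3D's transform-domain collaborative filtering, and all names and parameters below are illustrative, not the authors' implementation:

```python
import numpy as np

def match_and_average(stack, y, x, size=8, k=16, stride=4):
    # Collect candidate patches over every frame on a coarse grid, rank them
    # by similarity to the reference patch, and average the k best matches
    # (a crude stand-in for BM3D's collaborative filtering of the 3D group).
    ref = stack[0, y:y + size, x:x + size]
    candidates = []
    for t in range(stack.shape[0]):
        for yy in range(0, stack.shape[1] - size + 1, stride):
            for xx in range(0, stack.shape[2] - size + 1, stride):
                patch = stack[t, yy:yy + size, xx:xx + size]
                candidates.append((((patch - ref) ** 2).mean(), patch))
    candidates.sort(key=lambda c: c[0])
    return np.mean([p for _, p in candidates[:k]], axis=0)

rng = np.random.default_rng(0)
clean = np.ones((4, 32, 32))                      # 4 frames of a flat scene
noisy = clean + rng.normal(0, 0.2, clean.shape)   # AWGN, sigma = 0.2
denoised = match_and_average(noisy, 8, 8)
mse_in = ((noisy[0, 8:16, 8:16] - 1.0) ** 2).mean()
mse_out = ((denoised - 1.0) ** 2).mean()
print(f"noisy patch MSE {mse_in:.4f} -> matched-average MSE {mse_out:.4f}")
```

    Even this naive averaging of matched patches reduces the error markedly, which is why the choice of inter-frame matching strategy matters so much in the multi-frame setting.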

    Poisson-Gaussian noise parameter estimation in fluorescence microscopy imaging

    In this paper, we present a new fully automatic approach to noise parameter estimation in the context of fluorescence imaging systems. In particular, we address the problem of Poisson-Gaussian noise modeling in the nonstationary case. In microscopy practice, the nonstationarity is due to the photobleaching effect. The proposed method consists of an adequate moment-based initialization followed by Expectation-Maximization iterations. This approach is shown to provide reliable estimates of the mean and variance of the Gaussian noise and of the scale parameter of the Poisson noise, as well as of the photobleaching rates. The algorithm's performance is demonstrated on both synthetic and real fluorescence microscopy image sequences.
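    The moment-based initialization can be sketched for the stationary case: under the model z = α·Poisson(λ) + N(μ, σ²), the variance of z is an affine function of its mean with slope α, so a line fit through binned (mean, variance) pairs recovers the Poisson scale. The following simulation uses illustrative parameters, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative ground truth: z = alpha*Poisson(lam) + N(mu, sigma^2)
alpha, mu, sigma = 2.0, 10.0, 3.0
lam = rng.uniform(5, 50, size=(200, 200))          # underlying intensities
z = alpha * rng.poisson(lam) + rng.normal(mu, sigma, lam.shape)

# Moment-based initialization, stationary case:
#   E[z]  = alpha*lam + mu,   Var[z] = alpha^2*lam + sigma^2
# hence Var[z] = alpha*E[z] + (sigma^2 - alpha*mu): the slope of the
# variance-vs-mean line is the Poisson scale alpha.
means, variances = [], []
for lo in np.arange(5, 50, 5):                     # bin pixels by intensity
    mask = (lam >= lo) & (lam < lo + 5)
    means.append(z[mask].mean())
    variances.append(z[mask].var())
slope, intercept = np.polyfit(means, variances, 1)
print(f"estimated Poisson scale: {slope:.2f} (true value {alpha})")
```

    Such a rough estimate is then refined by the EM iterations; in the nonstationary (photobleaching) case the binning would additionally have to track the decaying intensity over time.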

    Sensor noise measurement in the presence of a flickering illumination

    Raw data from a digital imaging sensor are impaired by heteroscedastic noise, with the variance of the pixel intensity depending linearly on its expected value. The most natural way of estimating the variance and the expected value at a given pixel is empirical estimation from the variations along a stack of images of a static scene acquired at different times under the same camera settings. However, the relation found between the sample variance and the sample expectation is actually not linear, especially in the presence of a flickering illumination. The contribution of this paper is twofold. First, a theoretical model of this phenomenon shows that the linear relation changes into a quadratic one. Second, an algorithm is designed which not only gives the parameters of the expected linear relation, but also the whole set of parameters governing image formation, namely the gain, the offset and the readout noise. The rolling-shutter effect is also considered.
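    The quadratic relation is easy to reproduce in simulation. Assuming a per-frame multiplicative flicker factor f_t ~ N(1, s_f²) on top of shot and readout noise (an illustrative model, not the paper's exact formulation), the sample variance acquires a term s_f²·m² in the sample mean m:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: each frame t is globally scaled by a flicker factor
# f_t ~ N(1, s_f^2); pixels then carry Poisson shot noise and Gaussian
# readout noise of standard deviation sigma_r.
n_frames, s_f, sigma_r = 500, 0.1, 2.0
mu = np.linspace(20, 200, 200)                    # expected pixel intensities
f = rng.normal(1.0, s_f, size=(n_frames, 1))      # per-frame flicker factors
frames = rng.poisson(f * mu) + rng.normal(0.0, sigma_r, (n_frames, mu.size))

m = frames.mean(axis=0)                           # sample expectation
v = frames.var(axis=0)                            # sample variance

# Without flicker, v ~ m + sigma_r^2 (linear); with flicker the model
# predicts v ~ s_f^2 * m^2 + m + sigma_r^2, i.e. a quadratic relation.
c2, c1, c0 = np.polyfit(m, v, 2)
print(f"fitted quadratic coefficient: {c2:.4f} "
      f"(model predicts s_f^2 = {s_f**2:.4f})")
```

    A naive linear fit to such data would therefore overestimate the gain, which is the bias the paper's algorithm is designed to remove.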

    An EM approach for Poisson-Gaussian noise modeling

    This paper deals with noise parameter estimation. We assume observations corrupted by noise modelled as the sum of two random processes: one Poisson and the other a (nonzero-mean) Gaussian. Such problems arise in various applications, e.g. in astronomy and confocal microscopy imaging. To estimate the noise parameters, we propose an iterative algorithm based on an Expectation-Maximization approach. This allows us to jointly estimate the scale parameter of the Poisson component and the mean and variance of the Gaussian one. Moreover, an adequate initialization based on cumulants is provided. Numerical difficulties arising from the procedure are also addressed. To validate the proposed method in terms of accuracy and robustness, tests are performed on synthetic data. The good performance of the method is also demonstrated in a denoising experiment on real data.

    The SURE-LET approach to image denoising

    Denoising is an essential step prior to any higher-level image-processing task such as segmentation or object tracking, because the undesirable corruption by noise is inherent to any physical acquisition device. When the measurements are performed by photosensors, one usually distinguishes between two main regimes: in the first scenario, the measured intensities are sufficiently high and the noise is assumed to be signal-independent; in the second scenario, only a few photons are detected, which leads to a strong signal-dependent degradation. When the noise is considered signal-independent, it is often modeled as an additive independent (typically Gaussian) random variable, whereas, otherwise, the measurements are commonly assumed to follow independent Poisson laws, whose underlying intensities are the unknown noise-free measurements. We first consider the reduction of additive white Gaussian noise (AWGN). Contrary to most existing denoising algorithms, our approach does not require an explicit prior statistical model of the unknown data. Our driving principle is the minimization of a purely data-adaptive unbiased estimate of the mean-squared error (MSE) between the processed and the noise-free data. In the AWGN case, such an MSE estimate was first proposed by Stein and is known as "Stein's unbiased risk estimate" (SURE). We further develop the original SURE theory and propose a general methodology for fast and efficient multidimensional image denoising, which we call the SURE-LET approach.
    While SURE allows the quantitative monitoring of the denoising quality, the flexibility and low computational complexity of our approach are ensured by a linear parameterization of the denoising process, expressed as a linear expansion of thresholds (LET). We propose several pointwise, multivariate, and multichannel thresholding functions applied to arbitrary (in particular, redundant) linear transformations of the input data, with a special focus on multiscale signal representations. We then transpose the SURE-LET approach to the estimation of Poisson intensities degraded by AWGN. The signal-dependent specificity of the Poisson statistics leads to the derivation of a new unbiased MSE estimate that we call "Poisson's unbiased risk estimate" (PURE), which requires more adaptive transform-domain thresholding rules. In a general PURE-LET framework, we first devise a fast interscale thresholding method restricted to the use of the (unnormalized) Haar wavelet transform. We then lift this restriction and show how the PURE-LET strategy can be used to design and optimize a wide class of nonlinear processing applied in an arbitrary (in particular, redundant) transform domain. We finally apply some of the proposed denoising algorithms to real multidimensional fluorescence microscopy images. Such in vivo imaging modalities often operate under low-illumination conditions and short exposure times; consequently, the random fluctuations of the measured fluorophore radiation are well described by a Poisson process degraded (or not) by AWGN. We validate this statistical measurement model experimentally and assess the performance of the PURE-LET algorithms in comparison with several state-of-the-art denoising methods. Our solution turns out to be very competitive both qualitatively and computationally, allowing for fast and efficient denoising of the huge volumes of data that are nowadays routinely produced in biomedical imaging.
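    The AWGN half of this programme can be condensed into a few lines. For soft-thresholding η_t(y) = sign(y)·max(|y|−t, 0), SURE reads −Nσ² + Σ min(y_i², t²) + 2σ²·#{|y_i| > t}, and minimizing it over t tracks the true (inaccessible) MSE. A toy sketch on a synthetic sparse signal (the signal model and grid of thresholds are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

n, sigma = 100_000, 1.0
x = rng.choice([0.0, 5.0], size=n, p=[0.9, 0.1])   # sparse synthetic signal
y = x + rng.normal(0.0, sigma, n)                  # AWGN observation

def soft(y, t):
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def sure(y, t, sigma):
    # Unbiased estimate of E||soft(y, t) - x||^2, computed from y alone.
    return (-y.size * sigma**2
            + np.minimum(y**2, t**2).sum()
            + 2.0 * sigma**2 * (np.abs(y) > t).sum())

ts = np.linspace(0.0, 3.0, 61)
risk_est = np.array([sure(y, t, sigma) for t in ts])
true_mse = np.array([((soft(y, t) - x) ** 2).sum() for t in ts])
t_star = ts[risk_est.argmin()]
print(f"SURE-optimal threshold: {t_star:.2f}; "
      f"per-sample gap to true MSE: {np.abs(risk_est - true_mse).max() / n:.4f}")
```

    The key practical point is that the risk curve is computed without ever seeing the clean signal x, yet its minimizer nearly coincides with that of the true MSE.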

    A CANDLE for a deeper in-vivo insight

    A new Collaborative Approach for eNhanced Denoising under Low-light Excitation (CANDLE) is introduced for the processing of 3D laser scanning multiphoton microscopy images. CANDLE is designed to be robust under the low signal-to-noise ratio (SNR) conditions typically encountered when imaging deep in scattering biological specimens. Based on an optimized non-local means filter involving the comparison of filtered patches, CANDLE locally adapts the amount of smoothing to deal with the noise inhomogeneity inherent to laser scanning fluorescence microscopy images. An extensive validation on synthetic data, images acquired on microspheres, and in vivo images is presented. These experiments show that the CANDLE filter obtains competitive results compared to a state-of-the-art method and a locally adaptive optimized non-local means filter, especially under low SNR conditions (PSNR < 8 dB). Finally, the deeper imaging capabilities enabled by the proposed filter are demonstrated on deep-tissue in vivo images of neurons and fine axonal processes in the Xenopus tadpole brain. Coupé, P.; Munz, M.; Manjón Herrera, J. V.; Ruthazer, E. S.; Collins, D. L. (2012). A CANDLE for a deeper in-vivo insight. Medical Image Analysis, 16(4):849-864. https://doi.org/10.1016/j.media.2012.01.002
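    The building block underlying CANDLE, the non-local means filter, replaces each pixel by a weighted average of pixels whose surrounding patches look similar. A minimal 2D sketch follows (plain NLM, without CANDLE's comparison of pre-filtered patches or its local adaptation of the smoothing; all parameters are illustrative):

```python
import numpy as np

def nlm(img, f=1, t=3, h=0.4):
    # f: patch half-size, t: search half-window, h: filtering parameter.
    # Each output pixel is a weighted average of search-window pixels,
    # weighted by the similarity of their surrounding (2f+1)^2 patches.
    pad = f + t
    p = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + pad, j + pad
            ref = p[ci - f:ci + f + 1, cj - f:cj + f + 1]
            acc, wsum = 0.0, 0.0
            for di in range(-t, t + 1):
                for dj in range(-t, t + 1):
                    qi, qj = ci + di, cj + dj
                    cand = p[qi - f:qi + f + 1, qj - f:qj + f + 1]
                    w = np.exp(-((ref - cand) ** 2).mean() / h**2)
                    acc += w * p[qi, qj]
                    wsum += w
            out[i, j] = acc / wsum
    return out

rng = np.random.default_rng(0)
clean = np.zeros((16, 16)); clean[:, 8:] = 1.0     # vertical step edge
noisy = clean + rng.normal(0.0, 0.2, clean.shape)
mse_in = ((noisy - clean) ** 2).mean()
mse_out = ((nlm(noisy) - clean) ** 2).mean()
print(f"noisy MSE {mse_in:.4f} -> NLM MSE {mse_out:.4f}")
```

    Because cross-edge patches receive near-zero weight, the step edge survives the smoothing; CANDLE builds on this property while adapting h-like smoothing locally to the inhomogeneous microscopy noise.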

    Image Denoising in Mixed Poisson-Gaussian Noise

    We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.
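    The LET half of the framework rests on the observation that if the denoiser is a linear combination of fixed thresholding functions, any quadratic risk estimate is quadratic in the weights, so the optimal weights solve a small linear system. In the sketch below the oracle MSE stands in for the PURE/SURE estimate, purely for illustration; a real PURE-LET denoiser never sees the clean signal:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 50_000
x = rng.choice([0.0, 4.0], size=n, p=[0.85, 0.15])  # synthetic clean signal
y = x + rng.normal(0.0, 1.0, n)                      # AWGN observation

def soft(y, t):
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

# LET: the denoiser is a0*y + a1*soft(y,1) + a2*soft(y,2); the squared
# error is quadratic in the weights a, so the best a solves F F^T a = F x.
F = np.stack([y, soft(y, 1.0), soft(y, 2.0)])
a = np.linalg.solve(F @ F.T, F @ x)
x_hat = a @ F
print(f"weights {np.round(a, 3)}; input MSE {((y - x) ** 2).mean():.3f} "
      f"-> LET MSE {((x_hat - x) ** 2).mean():.3f}")
```

    Replacing the oracle right-hand side with an unbiased risk estimate is precisely what makes the approach practical: the same 3-by-3 system is solved, but from the noisy data alone.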

    Sensor Noise Modeling by Stacking Pseudo-Periodic Grid Images Affected by Vibrations

    This letter addresses the problem of noise estimation in raw images from digital sensors. Assuming that a series of images of a static scene is available, a possibility is to characterize the noise at a given pixel by considering the random fluctuations of the gray level across the images. However, mechanical vibrations, even tiny ones, affect the experimental setup, making this approach ineffective. The contribution of this letter is twofold. It is shown that noise estimation in the presence of vibrations is actually biased. Focusing on images of a pseudo-periodic grid, an algorithm to discard their effect is also given. An application to the generalized Anscombe transform is discussed.
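    The generalized Anscombe transform mentioned above maps mixed Poisson-Gaussian data to approximately unit-variance Gaussian data, which is why accurate noise parameters matter. A quick numerical check of the stabilization, using the standard GAT formula with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def gat(z, alpha, mu, sigma):
    # Generalized Anscombe transform for z = alpha*Poisson(lam) + N(mu, sigma^2);
    # after the transform the noise variance is approximately 1.
    return (2.0 / alpha) * np.sqrt(np.maximum(
        alpha * z + 0.375 * alpha**2 + sigma**2 - alpha * mu, 0.0))

alpha, mu, sigma = 1.0, 0.0, 2.0                  # illustrative parameters
for lam in (10, 40, 160):
    z = alpha * rng.poisson(lam, 100_000) + rng.normal(mu, sigma, 100_000)
    print(f"lam={lam:3d}: raw variance {z.var():6.1f}, "
          f"GAT variance {gat(z, alpha, mu, sigma).var():.2f}")
```

    The raw variance grows with the intensity while the transformed variance stays near 1; biased estimates of alpha, mu or sigma would break this flat profile, which is the failure mode the letter's vibration-aware estimation guards against.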

    Non-parametric regression for patch-based fluorescence microscopy image sequence denoising

    We present a non-parametric regression method for denoising 3D image sequences acquired in fluorescence microscopy. The proposed method exploits 3D+time information to improve the signal-to-noise ratio of images corrupted by mixed Poisson-Gaussian noise. A variance-stabilizing transform is first applied to the image data to decouple the variance from the mean. This pre-processing requires knowledge of parameters related to the acquisition system, which are also estimated in our approach. In a second step, we propose an original statistical patch-based framework for noise reduction and preservation of space-time discontinuities. In our study, discontinuities correspond to small moving spots with high velocity observed in fluorescence video-microscopy. The idea is to minimize an objective nonlocal energy functional involving spatio-temporal image patches. The minimizer has a simple form: it is the weighted average of input data taken over spatially-varying neighborhoods, where the size of each neighborhood is optimized to improve the performance of the pointwise estimator. The performance of the algorithm, which requires no motion estimation, is then demonstrated on both synthetic and real image sequences using qualitative and quantitative criteria.