
    Recursive Non-Local Means Filter for Video Denoising with Poisson-Gaussian Noise

    In this paper, we describe a new recursive Non-Local Means (RNLM) algorithm for video denoising developed by the current authors, and we extend this work by incorporating a Poisson-Gaussian noise model. The new RNLM method provides a computationally efficient means of video denoising and yields improved performance compared with the single-frame NLM and BM3D benchmark methods. Non-Local Means (NLM) denoising has been applied successfully in a variety of image and video denoising applications; however, direct extension of the method from 2D to 3D for video processing can be computationally demanding. The RNLM approach exploits recursion for computational savings and spatio-temporal correlations for improved performance. In our approach, the first frame is processed with single-frame NLM. Subsequent frames are estimated using a weighted combination of the current-frame NLM estimate and the previous-frame estimate. Block-matching registration against the prior estimate is performed for each current pixel estimate to maximize temporal correlation. To address the Poisson-Gaussian noise model, we apply the Anscombe transformation prior to filtering to stabilize the noise variance. Experimental results are presented that demonstrate the effectiveness of the proposed method and show that it outperforms the single-frame NLM and BM3D benchmark methods.
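    A minimal sketch of the recursive update described above, assuming a fixed blend weight alpha and placeholder helpers nlm_single_frame and block_match (the paper's per-pixel weighting and registration details are not reproduced here):

    import numpy as np

    def generalized_anscombe(z, sigma=0.0, gain=1.0):
        # Variance-stabilizing transform for Poisson-Gaussian data
        # (assumed parameterization; applied before filtering, as in the paper).
        return 2.0 * np.sqrt(np.maximum(gain * z + 0.375 * gain**2 + sigma**2, 0.0))

    def rnlm_sketch(frames, nlm_single_frame, block_match, alpha=0.5):
        # First frame: single-frame NLM; later frames: blend of the
        # current-frame NLM estimate and the block-matched prior estimate.
        est = nlm_single_frame(frames[0])
        out = [est]
        for frame in frames[1:]:
            current = nlm_single_frame(frame)
            registered_prev = block_match(est, frame)  # align prior estimate to the current frame
            est = alpha * current + (1.0 - alpha) * registered_prev
            out.append(est)
        return out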

    Poisson noise reduction with non-local PCA

    Photon-limited imaging arises when the number of photons collected by a sensor array is small relative to the number of detector elements. Photon limitations are an important concern for many applications, such as spectral imaging, night vision, nuclear medicine, and astronomy. Typically, a Poisson distribution is used to model these observations, and the inherent heteroscedasticity of the data combined with standard noise-removal methods yields significant artifacts. This paper introduces a novel denoising algorithm for photon-limited images which combines elements of dictionary learning and sparse patch-based representations of images. The method employs both an adaptation of Principal Component Analysis (PCA) for Poisson noise and recently developed sparsity-regularized convex optimization algorithms for photon-limited images. A comprehensive empirical evaluation of the proposed method helps characterize its performance relative to other state-of-the-art denoising methods. The results reveal that, despite its conceptual simplicity, Poisson PCA-based denoising is highly competitive in very low-light regimes.
    Comment (erratum): the image "man" is wrongly named "pepper" in the journal version.
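    For orientation, a standard patch-PCA shrinkage step is sketched below; note this is ordinary linear PCA on patches, not the exponential-family Poisson PCA the paper develops, and patch extraction/aggregation is omitted:

    import numpy as np

    def patch_pca_denoise(patches, n_components=8):
        # patches: (num_patches, patch_dim) array of vectorized image patches.
        # Project onto the leading principal directions and reconstruct,
        # discarding low-variance directions assumed to be dominated by noise.
        mean = patches.mean(axis=0)
        centered = patches - mean
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        basis = vt[:n_components]            # top principal directions
        coeffs = centered @ basis.T
        return coeffs @ basis + mean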

    Sparsity Based Poisson Denoising with Dictionary Learning

    The problem of Poisson denoising appears in various imaging applications, such as low-light photography, medical imaging, and microscopy. In high-SNR cases, several transformations exist to convert the Poisson noise into additive i.i.d. Gaussian noise, for which many effective algorithms are available. However, in a low-SNR regime these transformations are significantly less accurate, and a strategy that relies directly on the true noise statistics is required. A recent work by Salmon et al. took this route, proposing a patch-based exponential image representation model based on a Gaussian mixture model (GMM), leading to state-of-the-art results. In this paper, we propose to harness sparse-representation modeling of the image patches, adopting the same exponential idea. Our scheme uses a greedy pursuit with a bootstrapping-based stopping condition and dictionary learning within the denoising process. The reconstruction performance of the proposed scheme is competitive with leading methods at high SNR and achieves state-of-the-art results at low SNR.
    Comment: 13 pages, 9 figures.
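    As a reference point for the greedy pursuit step, a basic orthogonal matching pursuit over a fixed dictionary is sketched below; the paper's bootstrapping-based stopping rule and in-loop dictionary learning are replaced here by a fixed sparsity level k:

    import numpy as np

    def omp(D, y, k):
        # D: (signal_dim, num_atoms) dictionary with unit-norm columns.
        # Greedily select k atoms that best explain the signal y.
        residual, support = y.copy(), []
        coef = np.zeros(0)
        for _ in range(k):
            j = int(np.argmax(np.abs(D.T @ residual)))   # atom most correlated with the residual
            support.append(j)
            coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            residual = y - D[:, support] @ coef
        x = np.zeros(D.shape[1])
        x[support] = coef
        return x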

    Medical image denoising using convolutional denoising autoencoders

    Image denoising is an important pre-processing step in medical image analysis. Different algorithms have been proposed over the past three decades with varying denoising performance. More recently, deep-learning-based models have shown great promise, outperforming conventional methods. These methods are, however, limited by the requirement of large training sample sizes and high computational costs. In this paper we show that, even with a small sample size, denoising autoencoders constructed from convolutional layers can be used for efficient denoising of medical images. Heterogeneous images can be combined to boost the sample size and increase denoising performance. Even the simplest of networks can reconstruct images with corruption levels so high that noise and signal are not distinguishable to the human eye.
    Comment: To appear: 6 pages, paper to be published at the Fourth Workshop on Data Mining in Biomedical Informatics and Healthcare at ICDM, 201
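    An illustrative convolutional denoising autoencoder of the kind described, trained by regressing clean images from noisy inputs; the layer widths and depths here are assumptions, not the architecture used in the paper:

    import torch
    import torch.nn as nn

    class SmallDenoisingAutoencoder(nn.Module):
        # Encoder downsamples the noisy image; decoder reconstructs it.
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    # Typical training step (sketch): minimize MSE between the reconstruction
    # of a noisy input and the corresponding clean image, e.g.
    # loss = nn.functional.mse_loss(model(noisy), clean)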

    Optimally Stabilized PET Image Denoising Using Trilateral Filtering

    The low resolution and signal-dependent noise distribution of positron emission tomography (PET) images make denoising an inevitable step prior to qualitative and quantitative image analysis tasks. Conventional PET denoising methods either over-smooth small structures due to resolution limitations or make incorrect assumptions about the noise characteristics, so clinically important quantitative information may be corrupted. To address these challenges, we introduce a novel approach to removing signal-dependent noise in PET images in which the noise distribution is modeled as mixed Poisson-Gaussian, and the generalized Anscombe transformation (GAT) is used to stabilize the varying nature of the PET noise. Beyond noise stabilization, it is also desirable for the noise-removal filter to preserve the boundaries of structures while smoothing noisy regions; in particular, it is important to avoid significant loss of quantitative information such as standard uptake value (SUV)-based metrics and metabolic lesion volume. To satisfy all these properties, we extend bilateral filtering to trilateral filtering through a multiscaling and optimal Gaussianization process. The proposed method was tested on more than 50 PET-CT images from patients with different cancers and achieved superior performance compared to widely used denoising techniques in the literature.
    Comment: 8 pages, 3 figures; to appear in the Lecture Notes in Computer Science (MICCAI 2014).
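    A plain bilateral filter over a GAT-stabilized image is sketched below as a reference point; the trilateral extension, multiscaling, and optimal Gaussianization described in the paper are not reproduced, and the parameter values are assumptions:

    import numpy as np

    def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=0.1):
        # img: 2-D float array (e.g. a GAT-stabilized PET slice).
        # Each output pixel is a weighted mean of its neighborhood, with
        # weights combining spatial closeness and intensity similarity.
        h, w = img.shape
        out = np.zeros_like(img)
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
        padded = np.pad(img, radius, mode="reflect")
        for i in range(h):
            for j in range(w):
                patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                rng = np.exp(-((patch - img[i, j]) ** 2) / (2.0 * sigma_r**2))
                weights = spatial * rng
                out[i, j] = (weights * patch).sum() / weights.sum()
        return out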