
    Poisson noise reduction with non-local PCA

    Photon-limited imaging arises when the number of photons collected by a sensor array is small relative to the number of detector elements. Photon limitations are an important concern for many applications such as spectral imaging, night vision, nuclear medicine, and astronomy. Typically a Poisson distribution is used to model these observations, and the inherent heteroscedasticity of the data combined with standard noise removal methods yields significant artifacts. This paper introduces a novel denoising algorithm for photon-limited images which combines elements of dictionary learning and sparse patch-based representations of images. The method employs both an adaptation of Principal Component Analysis (PCA) for Poisson noise and recently developed sparsity-regularized convex optimization algorithms for photon-limited images. A comprehensive empirical evaluation of the proposed method helps characterize the performance of this approach relative to other state-of-the-art denoising methods. The results reveal that, despite its conceptual simplicity, Poisson PCA-based denoising appears to be highly competitive in very low-light regimes. Comment: erratum: the image "man" is wrongly named "pepper" in the journal version.
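    As a rough illustration of the non-local patch-PCA structure described above, the sketch below groups similar patches and shrinks their principal-component coefficients. It deliberately substitutes a plain SVD with hard truncation for the paper's Poisson-adapted PCA and sparsity-regularized optimization, and the grouping heuristic, function name, and parameters are all illustrative.

```python
import numpy as np

def nonlocal_pca_denoise(img, patch=8, n_groups=64, keep=0.1):
    """Toy non-local PCA denoiser: group patches, truncate PCA coefficients.

    Simplified stand-in for the Poisson-adapted PCA in the paper:
    plain SVD with hard truncation of weak components.
    """
    H, W = img.shape
    # Extract all overlapping patches as rows of a matrix.
    ys, xs = np.mgrid[0:H - patch + 1, 0:W - patch + 1]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1)
    patches = np.stack([img[y:y + patch, x:x + patch].ravel()
                        for y, x in coords])
    # Crude non-local grouping: sort patches by mean intensity.
    order = np.argsort(patches.mean(axis=1))
    out = np.zeros_like(img, dtype=float)
    weight = np.zeros_like(img, dtype=float)
    for grp in np.array_split(order, n_groups):
        if len(grp) == 0:
            continue
        P = patches[grp]
        mu = P.mean(axis=0)
        # PCA of the group; keep only the strongest components.
        U, S, Vt = np.linalg.svd(P - mu, full_matrices=False)
        k = max(1, int(keep * len(S)))
        S[k:] = 0.0
        Pd = (U * S) @ Vt + mu
        # Aggregate denoised patches back into the image, averaging overlaps.
        for row, (y, x) in zip(Pd, coords[grp]):
            out[y:y + patch, x:x + patch] += row.reshape(patch, patch)
            weight[y:y + patch, x:x + patch] += 1.0
    return out / np.maximum(weight, 1e-8)
```

    The sketch only conveys the grouping-and-projection structure; the paper's contribution is precisely the replacement of the Gaussian-style SVD step with a Poisson-aware PCA solved by convex optimization.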

    Image denoising with multi-layer perceptrons, part 1: comparison with existing algorithms and with bounds

    Image denoising can be described as the problem of mapping from a noisy image to a noise-free image. The best currently available denoising methods approximate this mapping with cleverly engineered algorithms. In this work we attempt to learn this mapping directly with plain multi-layer perceptrons (MLPs) applied to image patches. We will show that by training on large image databases we are able to outperform the current state-of-the-art image denoising methods. In addition, our method achieves results that are superior to one type of theoretical bound and goes a long way toward closing the gap with a second type of theoretical bound. Our approach is easily adapted to less extensively studied types of noise, such as mixed Poisson-Gaussian noise, JPEG artifacts, salt-and-pepper noise and noise resembling stripes, for which we achieve excellent results as well. We will show that combining a block-matching procedure with MLPs can further improve the results on certain images. In a second paper, we detail the training trade-offs and the inner mechanisms of our MLPs.
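    A schematic of the patch-to-patch MLP mapping described above, assuming numpy only: each noisy patch is pushed through the network and overlapping outputs are averaged. The layer sizes, stride, and initialization are illustrative placeholders; the paper's networks are trained on millions of (noisy, clean) patch pairs, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, layers):
    """Plain MLP: affine layers with tanh, linear output layer."""
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = np.tanh(x)
    return x

def denoise_with_mlp(noisy, layers, patch=17, stride=3):
    """Map each noisy patch through the MLP and average overlapping outputs."""
    H, W = noisy.shape
    out = np.zeros_like(noisy, dtype=float)
    cnt = np.zeros_like(noisy, dtype=float)
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            p = noisy[y:y + patch, x:x + patch].ravel()
            q = mlp_forward(p, layers).reshape(patch, patch)
            out[y:y + patch, x:x + patch] += q
            cnt[y:y + patch, x:x + patch] += 1.0
    return out / np.maximum(cnt, 1.0)

# Untrained placeholder weights (illustrative sizes); in the paper the
# network is trained on large image databases before being applied.
dims = [17 * 17, 2047, 2047, 17 * 17]
layers = [(rng.normal(scale=1 / np.sqrt(m), size=(m, n)), np.zeros(n))
          for m, n in zip(dims[:-1], dims[1:])]
```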

    Recursive Non-Local Means Filter for Video Denoising with Poisson-Gaussian Noise

    In this paper, we describe a new recursive Non-Local Means (RNLM) algorithm for video denoising that we have developed, and extend it by incorporating a Poisson-Gaussian noise model. Our RNLM method provides a computationally efficient means for video denoising and yields improved performance compared with the single-frame NLM and BM3D benchmark methods. Non-Local Means (NLM) based methods have been applied successfully in various image and video sequence denoising applications. However, direct extension of this method from 2D to 3D for video processing can be computationally demanding. The RNLM approach takes advantage of recursion for computational savings, and of spatio-temporal correlations for improved performance. In our approach, the first frame is processed with single-frame NLM. Subsequent frames are estimated using a weighted combination of the current-frame NLM result and the previous frame's estimate. Block-matching registration against the prior estimate is performed for each current pixel estimate to maximize temporal correlation. To address the Poisson-Gaussian noise model, we apply the Anscombe transformation prior to filtering to stabilize the noise variance. Experimental results demonstrate the effectiveness of the proposed method: it outperforms both single-frame NLM and BM3D.
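    A minimal sketch of the recursive structure and variance stabilization described above. The block-matching registration step and the NLM weights themselves are omitted; `nlm` stands for any single-frame denoiser, `alpha` is an illustrative blending weight, and the plain Anscombe transform is shown, whereas the Poisson-Gaussian case would use the generalized Anscombe transform.

```python
import numpy as np

def anscombe(x):
    """Anscombe transform: approximately stabilizes Poisson variance to 1."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (an exact unbiased inverse is preferable)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

def rnlm_video(frames, nlm, alpha=0.5):
    """Recursive NLM sketch: frame 1 is single-frame NLM; each later frame
    blends its own NLM result with the running previous estimate.

    Block-matching registration of the previous estimate is omitted
    for brevity; `nlm` is any single-frame denoising function.
    """
    est = None
    out = []
    for f in frames:
        cur = nlm(anscombe(f))
        est = cur if est is None else alpha * cur + (1.0 - alpha) * est
        out.append(inverse_anscombe(est))
    return out
```

    The recursion is what keeps the cost near that of single-frame NLM: each frame is filtered once, and temporal information enters only through the running estimate rather than through a full 3D search window.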

    Post-Reconstruction Deconvolution of PET Images by Total Generalized Variation Regularization

    Improving the quality of positron emission tomography (PET) images, which suffer from low resolution and a high level of noise, is a challenging task in nuclear medicine and radiotherapy. This work proposes a restoration method applied after tomographic reconstruction of the images, targeting clinical situations where raw data are often not accessible. Based on inverse problem methods, our contribution introduces the recently developed total generalized variation (TGV) norm to regularize PET image deconvolution. Moreover, we stabilize this procedure with additional image constraints such as positivity and photometry invariance. A criterion for automatically updating and adjusting the regularization parameter in the case of Poisson noise is also presented. Experiments are conducted on both synthetic data and real patient images. Comment: First published in the Proceedings of the 23rd European Signal Processing Conference (EUSIPCO-2015) in 2015, published by EURASIP.
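    The sketch below illustrates only the two stabilizing constraints named in the abstract, positivity and photometry invariance, inside a plain least-squares deconvolution loop. The TGV regularizer and the automatic Poisson-aware parameter rule are the paper's actual contributions and are not reproduced here; the function name and parameters are illustrative.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def deconvolve_constrained(y, psf, n_iter=200, step=1.0):
    """Toy deconvolution with positivity and photometry-invariance projections.

    Assumes `psf` has the same shape as `y`, is centered at the array
    midpoint, is nonnegative, and sums to 1 (so a unit step is stable).
    """
    H = fft2(np.fft.ifftshift(psf))   # transfer function of the blur
    x = y.astype(float).copy()
    flux = y.sum()
    for _ in range(n_iter):
        # Gradient of 0.5 * ||h * x - y||^2, computed via the FFT.
        residual = np.real(ifft2(H * fft2(x))) - y
        grad = np.real(ifft2(np.conj(H) * fft2(residual)))
        x -= step * grad
        x = np.maximum(x, 0.0)            # positivity constraint
        x *= flux / max(x.sum(), 1e-12)   # photometry invariance (flux)
    return x
```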

    Skellam shrinkage: Wavelet-based intensity estimation for inhomogeneous Poisson data

    The ubiquity of integrating detectors in imaging and other applications implies that a variety of real-world data are well modeled as Poisson random variables whose means are in turn proportional to an underlying vector-valued signal of interest. In this article, we first show how the so-called Skellam distribution arises from the fact that Haar wavelet and filterbank transform coefficients corresponding to measurements of this type are distributed as sums and differences of Poisson counts. We then provide two main theorems on Skellam shrinkage, one showing the near-optimality of shrinkage in the Bayesian setting and the other providing for unbiased risk estimation in a frequentist context. These results yield new estimators in the Haar transform domain, including an unbiased risk estimate for shrinkage of Haar-Fisz variance-stabilized data, along with accompanying low-complexity algorithms for inference. We conclude with a simulation study demonstrating the efficacy of our Skellam shrinkage estimators both for the standard univariate wavelet test functions and for a variety of test images taken from the image processing literature, confirming that they offer substantial performance improvements over existing alternatives. Comment: 27 pages, 8 figures, slight formatting changes; submitted for publication.
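    A small illustration of the core observation above: one level of the unnormalized Haar transform of Poisson counts produces detail coefficients that are differences of Poisson variables, hence Skellam-distributed, with variance equal to the expected sum, which the scaling coefficient estimates. The soft-threshold rule below is a generic variance-aware stand-in for the paper's Bayesian and unbiased-risk shrinkage rules.

```python
import numpy as np

def haar_skellam_shrink(counts, t=1.0):
    """One-level unnormalized Haar analysis of Poisson counts with a
    variance-aware soft threshold on the detail coefficients.

    For neighbors x1, x2 ~ Poisson, d = x1 - x2 is Skellam with
    variance E[x1 + x2], estimated here by the scaling coefficient s.
    Assumes `counts` has even length.
    """
    x = np.asarray(counts, dtype=float)
    x1, x2 = x[0::2], x[1::2]
    s = x1 + x2              # scaling coefficients (sums of counts)
    d = x1 - x2              # detail coefficients (Skellam-distributed)
    sigma = np.sqrt(np.maximum(s, 1.0))   # per-coefficient noise scale
    d_hat = np.sign(d) * np.maximum(np.abs(d) - t * sigma, 0.0)
    # Inverse transform: reconstruct each pair from (sum, detail).
    out = np.empty_like(x)
    out[0::2] = (s + d_hat) / 2.0
    out[1::2] = (s - d_hat) / 2.0
    return out
```

    Because each coefficient carries its own variance estimate via the paired sum, no global noise level or variance-stabilizing transform is required, which is the practical appeal of working directly in the Skellam model.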