
    Bayesian multiscale deconvolution applied to gamma-ray spectroscopy

    A common task in gamma-ray astronomy is to extract spectral information, such as model constraints and incident photon spectrum estimates, from the measured energy deposited in a detector and the detector response. This is the classic problem of spectral "deconvolution" or spectral inversion. The methods of forward folding (i.e., parameter fitting) and maximum entropy "deconvolution" (i.e., estimating independent input photon rates for each individual energy bin) have been used successfully for gamma-ray solar flares (e.g., Rank, 1997; Share and Murphy, 1995). These methods have worked well under certain conditions, but there are situations where they do not apply: 1) for forward folding, when no reasonable model (i.e., one with fewer parameters than data bins) is yet known; 2) for the maximum entropy method, when one expects a mixture of broad and narrow features (e.g., solar flares); and 3) for both, at low count rates and low signal-to-noise. Low count rates are a problem because these methods, as they have been implemented, assume Gaussian statistics when Poisson statistics actually apply, and background subtraction techniques often lead to unphysical negative count rates. For Poisson data, the Maximum Likelihood Estimator (MLE) with a Poisson likelihood is appropriate. Without a regularization term, however, estimating the "true" individual input photon rate in each bin can be an ill-posed problem, even before accommodating both broad and narrow features in the spectrum (i.e., a multiscale approach). One way to implement this regularization is through the use of a suitable Bayesian prior. Nowak and Kolaczyk (1999) have developed a fast, robust technique using a Bayesian multiscale framework that addresses these problems with added algorithmic advantages. We outline this new approach and demonstrate its use with time-resolved solar flare gamma-ray spectroscopy.
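
    To make the low-count contrast concrete, here is a minimal forward-folding sketch. It is illustrative only: the detector response matrix, the two-parameter power-law photon model, and all numerical values are assumptions, and it shows just the Poisson-likelihood fitting baseline the abstract discusses, not the Nowak and Kolaczyk (1999) multiscale estimator.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical setup: n_in incident-photon energy bins, n_out detector bins.
n_in, n_out = 40, 30
R = np.abs(rng.normal(size=(n_out, n_in)))   # stand-in detector response
R /= R.sum(axis=0, keepdims=True)            # each column redistributes unit flux

energies = np.linspace(1.0, 10.0, n_in)      # illustrative energy grid

def model_flux(params, e):
    """Illustrative two-parameter photon model: power law A * E**(-gamma)."""
    amp, gamma = params
    return amp * e ** (-gamma)

observed = rng.poisson(R @ model_flux([200.0, 2.0], energies))

def neg_log_like(log_params):
    """Poisson negative log-likelihood of the folded model (Cash statistic,
    dropping the data-only log y! term); log-params keep A and gamma positive."""
    mu = R @ model_flux(np.exp(log_params), energies)
    return np.sum(mu - observed * np.log(mu + 1e-12))

fit = minimize(neg_log_like, x0=np.log([100.0, 1.5]), method="Nelder-Mead")
print("fitted amplitude and index:", np.exp(fit.x))
```

    Unlike a chi-square fit, this likelihood stays valid at arbitrarily low counts and never requires background-subtracted (possibly negative) rates.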

    Skellam shrinkage: Wavelet-based intensity estimation for inhomogeneous Poisson data

    The ubiquity of integrating detectors in imaging and other applications implies that a variety of real-world data are well modeled as Poisson random variables whose means are in turn proportional to an underlying vector-valued signal of interest. In this article, we first show how the so-called Skellam distribution arises from the fact that Haar wavelet and filterbank transform coefficients corresponding to measurements of this type are distributed as sums and differences of Poisson counts. We then provide two main theorems on Skellam shrinkage, one showing the near-optimality of shrinkage in the Bayesian setting and the other providing for unbiased risk estimation in a frequentist context. These results serve to yield new estimators in the Haar transform domain, including an unbiased risk estimate for shrinkage of Haar-Fisz variance-stabilized data, along with accompanying low-complexity algorithms for inference. We conclude with a simulation study demonstrating the efficacy of our Skellam shrinkage estimators both for the standard univariate wavelet test functions and for a variety of test images taken from the image processing literature, confirming that they offer substantial performance improvements over existing alternatives.
    Comment: 27 pages, 8 figures, slight formatting changes; submitted for publication.
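
    A minimal sketch of the core observation, assuming nothing beyond NumPy: unnormalized Haar sums of Poisson counts are again Poisson, each difference is Skellam-distributed with variance equal to the mean of the co-located sum, and a crude threshold at a multiple of that standard deviation stands in for the paper's Bayesian and unbiased-risk shrinkage rules. The piecewise-constant intensity and the constant c are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Piecewise-constant intensity observed through Poisson counts (illustrative).
lam = np.repeat([5.0, 20.0, 8.0, 30.0], 64)
counts = rng.poisson(lam).astype(float)

def analysis(x):
    """One level of the unnormalized Haar transform: pairwise sums and
    differences. Sums of Poisson counts are again Poisson; each difference
    is Skellam with variance equal to the mean of the co-located sum."""
    return x[0::2] + x[1::2], x[0::2] - x[1::2]

def synthesis(s, d):
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / 2.0
    x[1::2] = (s - d) / 2.0
    return x

def denoise(x, levels=4, c=2.0):
    """Recursively soft-threshold each difference at c * sqrt(sum), using the
    sum coefficient as a plug-in estimate of the Skellam standard deviation
    (a crude stand-in for the paper's Bayesian / unbiased-risk rules)."""
    if levels == 0:
        return x
    s, d = analysis(x)
    d = np.sign(d) * np.maximum(np.abs(d) - c * np.sqrt(np.maximum(s, 1.0)), 0.0)
    return synthesis(denoise(s, levels - 1, c), d)

est = denoise(counts)
print("RMSE raw:", np.sqrt(np.mean((counts - lam) ** 2)))
print("RMSE est:", np.sqrt(np.mean((est - lam) ** 2)))
```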

    Poisson inverse problems

    In this paper we focus on nonparametric estimators in inverse problems for Poisson processes involving the use of wavelet decompositions. Adopting an adaptive wavelet Galerkin discretization, we find that our method combines the well-known theoretical advantages of wavelet-vaguelette decompositions for inverse problems in terms of optimally adapting to the unknown smoothness of the solution, together with the remarkably simple closed-form expressions of Galerkin inversion methods. Adapting the results of Barron and Sheu [Ann. Statist. 19 (1991) 1347-1369] to the context of log-intensity functions approximated by wavelet series with the use of the Kullback-Leibler distance between two point processes, we also present an asymptotic analysis of convergence rates that justifies our approach. In order to shed some light on the theoretical results obtained and to examine the accuracy of our estimates in finite samples, we illustrate our method by the analysis of some simulated examples.
    Comment: Published at http://dx.doi.org/10.1214/009053606000000687 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
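
    As a toy illustration of one ingredient, fitting a wavelet series for the log-intensity to Poisson counts by penalized maximum likelihood, the sketch below uses the direct-observation case (identity operator) and a plain proximal-gradient loop, not the paper's adaptive Galerkin inversion; the intensity, step size, and penalty are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def haar(x):
    """Orthonormal Haar analysis (input length must be a power of two)."""
    out, s = [], x.astype(float)
    while len(s) > 1:
        d = (s[0::2] - s[1::2]) / np.sqrt(2.0)
        s = (s[0::2] + s[1::2]) / np.sqrt(2.0)
        out.append(d)
    return np.concatenate([s] + out[::-1])

def ihaar(c):
    """Inverse of haar()."""
    s, k = c[:1], 1
    while k < len(c):
        d = c[k:2 * k]
        x = np.empty(2 * k)
        x[0::2] = (s + d) / np.sqrt(2.0)
        x[1::2] = (s - d) / np.sqrt(2.0)
        s, k = x, 2 * k
    return s

# Counts y_i ~ Poisson(exp(f_i)) for a piecewise-constant log-intensity f.
f_true = np.repeat(np.log([5.0, 25.0, 10.0]), [64, 64, 128])
y = rng.poisson(np.exp(f_true))

theta = haar(np.log(y + 1.0))            # initialize from stabilized data
step, pen = 0.02, 0.5
for _ in range(3000):
    f = ihaar(theta)
    theta -= step * haar(np.exp(f) - y)  # chain rule: W (exp(f) - y)
    # soft-threshold the detail coefficients (l1 penalty on the wavelet series)
    theta[1:] = np.sign(theta[1:]) * np.maximum(np.abs(theta[1:]) - step * pen, 0.0)
print("max abs error in log-intensity:", np.abs(ihaar(theta) - f_true).max())
```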

    A proximal iteration for deconvolving Poisson noisy images using sparse representations

    We propose an image deconvolution algorithm for data contaminated by Poisson noise. The image to restore is assumed to be sparsely represented in a dictionary of waveforms such as the wavelet or curvelet transforms. Our key contributions are the following. First, we handle the Poisson noise properly by using the Anscombe variance-stabilizing transform, leading to a non-linear degradation equation with additive Gaussian noise. Second, the deconvolution problem is formulated as the minimization of a convex functional with a data-fidelity term reflecting the noise properties and a non-smooth sparsity-promoting penalty on the image representation coefficients (e.g., the ℓ1-norm). Third, a fast iterative backward-forward splitting algorithm is proposed to solve the minimization problem. We derive existence and uniqueness conditions for the solution, and establish convergence of the iterative algorithm. Finally, a GCV-based model selection procedure is proposed to objectively select the regularization parameter. Experimental results show the striking benefits gained from taking into account the Poisson statistics of the noise. These results also suggest that using sparse-domain regularization may be tractable in many deconvolution applications with Poisson noise, such as astronomy and microscopy.
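
    The overall recipe can be sketched in a few lines of NumPy. Everything here is a simplification for illustration: a 1-D Gaussian blur stands in for the instrument PSF, the sparsity penalty acts directly on the signal rather than on wavelet or curvelet coefficients, and the step size and regularization weight are ad hoc rather than GCV-selected.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative 1-D problem: blur a nonnegative signal, observe Poisson counts.
n = 256
x_true = np.zeros(n)
x_true[60:90] = 40.0
x_true[150:153] = 250.0
kernel = np.exp(-0.5 * (np.arange(-8, 9) / 2.5) ** 2)
kernel /= kernel.sum()

# Symmetric kernel, so the blur operator is self-adjoint up to boundary effects.
H = lambda v: np.convolve(v, kernel, mode="same")
# Anscombe transform: stabilized data are approximately N(mean, 1).
anscombe = lambda v: 2.0 * np.sqrt(v + 3.0 / 8.0)

y = rng.poisson(H(x_true))
z = anscombe(y)

x = y.astype(float)                      # nonnegative initial guess
step, lam = 0.3, 0.5
for _ in range(1000):
    # Forward step: gradient of 0.5 * ||anscombe(Hx) - z||^2 by the chain rule,
    # i.e. H^T [ (anscombe(Hx) - z) / sqrt(Hx + 3/8) ].
    hx = H(x)
    x = x - step * H((anscombe(hx) - z) / np.sqrt(hx + 3.0 / 8.0))
    # Backward (proximal) step: soft threshold combined with positivity.
    x = np.maximum(x - step * lam, 0.0)

print("RMSE raw data :", np.sqrt(np.mean((y - x_true) ** 2)))
print("RMSE estimate :", np.sqrt(np.mean((x - x_true) ** 2)))
```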

    Poisson noise reduction with non-local PCA

    Photon-limited imaging arises when the number of photons collected by a sensor array is small relative to the number of detector elements. Photon limitations are an important concern for many applications such as spectral imaging, night vision, nuclear medicine, and astronomy. Typically a Poisson distribution is used to model these observations, and the inherent heteroscedasticity of the data combined with standard noise removal methods yields significant artifacts. This paper introduces a novel denoising algorithm for photon-limited images which combines elements of dictionary learning and sparse patch-based representations of images. The method employs both an adaptation of Principal Component Analysis (PCA) for Poisson noise and recently developed sparsity-regularized convex optimization algorithms for photon-limited images. A comprehensive empirical evaluation of the proposed method helps characterize the performance of this approach relative to other state-of-the-art denoising methods. The results reveal that, despite its conceptual simplicity, Poisson PCA-based denoising appears to be highly competitive in very low light regimes.
    Comment: Erratum: the image "man" is wrongly named "pepper" in the journal version.
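
    A stripped-down sketch of the patch-based idea: ordinary PCA on Anscombe-stabilized overlapping patches, which replaces the paper's Poisson-adapted, non-local (cluster-wise) PCA with the simplest global variant. The test image, patch size, and rank are illustrative, and the inverse stabilization is the crude algebraic one.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative low-count image: smooth intensity observed as Poisson counts.
n, p, k = 64, 8, 6                       # image size, patch size, rank kept
yy, xx = np.mgrid[0:n, 0:n]
intensity = 2.0 + 6.0 * np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 200.0)
img = rng.poisson(intensity).astype(float)

# Anscombe-stabilize, then run plain PCA on all overlapping patches.
z = 2.0 * np.sqrt(img + 3.0 / 8.0)
patches = np.stack([z[i:i + p, j:j + p].ravel()
                    for i in range(n - p + 1) for j in range(n - p + 1)])
mean = patches.mean(axis=0)
u, s, vt = np.linalg.svd(patches - mean, full_matrices=False)
denoised = (u[:, :k] * s[:k]) @ vt[:k] + mean    # rank-k patch approximation

# Average the overlapping patch estimates back into the image.
acc, cnt, idx = np.zeros((n, n)), np.zeros((n, n)), 0
for i in range(n - p + 1):
    for j in range(n - p + 1):
        acc[i:i + p, j:j + p] += denoised[idx].reshape(p, p)
        cnt[i:i + p, j:j + p] += 1
        idx += 1
est = (acc / cnt / 2.0) ** 2 - 3.0 / 8.0         # crude inverse Anscombe
print("RMSE raw:", np.sqrt(np.mean((img - intensity) ** 2)))
print("RMSE est:", np.sqrt(np.mean((est - intensity) ** 2)))
```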

    Compressed sensing performance bounds under Poisson noise

    Full text link
    This paper describes performance bounds for compressed sensing (CS) where the underlying sparse or compressible (sparsely approximable) signal is a vector of nonnegative intensities whose measurements are corrupted by Poisson noise. In this setting, standard CS techniques cannot be applied directly for several reasons. First, the usual signal-independent and/or bounded noise models do not apply to Poisson noise, which is non-additive and signal-dependent. Second, the CS matrices typically considered are not feasible in real optical systems because they do not adhere to important constraints, such as nonnegativity and photon flux preservation. Third, the typical ℓ2-ℓ1 minimization leads to overfitting in the high-intensity regions and oversmoothing in the low-intensity areas. In this paper, we describe how a feasible positivity- and flux-preserving sensing matrix can be constructed, and then analyze the performance of a CS reconstruction approach for Poisson data that minimizes an objective function consisting of a negative Poisson log-likelihood term and a penalty term which measures signal sparsity. We show that, as the overall intensity of the underlying signal increases, an upper bound on the reconstruction error decays at an appropriate rate (depending on the compressibility of the signal), but that for a fixed signal intensity, the signal-dependent part of the error bound actually grows with the number of measurements or sensors. This surprising fact is both proved theoretically and justified based on physical intuition.
    Comment: 12 pages, 3 pdf figures; accepted for publication in IEEE Transactions on Signal Processing.
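
    A toy numerical illustration of the setting, not the paper's construction or bounds: a nonnegative sensing matrix whose columns each sum to one, so measurement never amplifies photon flux, followed by proximal gradient steps on the penalized Poisson negative log-likelihood. The Bernoulli ensemble, step size, penalty weight, and iteration count are all ad hoc assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

n, m, k = 128, 48, 5                     # signal length, measurements, sparsity
f_true = np.zeros(n)
f_true[rng.choice(n, k, replace=False)] = rng.uniform(200.0, 500.0, k)

# Toy flux-preserving sensing matrix: nonnegative entries, each column sums
# to one, so the measurements cannot "create" photons.
A = rng.integers(0, 2, size=(m, n)).astype(float)
A /= np.maximum(A.sum(axis=0, keepdims=True), 1.0)
y = rng.poisson(A @ f_true)

# Proximal gradient on F(f) = sum(A f - y log(A f)) + lam * ||f||_1, f >= 0.
f = np.full(n, y.mean())
step, lam = 2.0, 2.0
for _ in range(3000):
    mu = A @ f + 1e-9
    grad = A.T @ (1.0 - y / mu)          # gradient of the Poisson NLL
    f = np.maximum(f - step * (grad + lam), 0.0)   # shrink and project to f >= 0
print("relative error:", np.linalg.norm(f - f_true) / np.linalg.norm(f_true))
```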