
    From retrodiction to Bayesian quantum imaging

    We employ quantum retrodiction to develop a robust Bayesian algorithm for reconstructing the intensity values of an image from sparse photocount data, while also accounting for detector noise in the form of dark counts. This method yields not only a reconstructed image but also the full probability distribution function for the intensity at each pixel. We use simulated as well as real data to illustrate both the applications of the algorithm and the analysis options that become available only when the full probability distribution functions are known. These include calculating Bayesian credible regions for each pixel intensity, allowing an objective assessment of the reliability of the reconstructed image intensity values.
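
    As a concrete illustration of the sort of per-pixel posterior this enables, the sketch below implements a minimal gridded Bayesian update under a Poisson photocount model with an assumed dark-count rate, plus an equal-tailed credible interval. It is a hypothetical toy, not the authors' algorithm; all names and parameter values are illustrative.

```python
import numpy as np
from scipy import stats

def pixel_posterior(counts, dark_rate, lam_grid, prior=None):
    """Gridded posterior p(lambda | counts) for a single pixel.

    counts    : photocounts from repeated frames of the same pixel
    dark_rate : assumed mean dark counts per frame
    lam_grid  : uniform grid of candidate intensities (lambda >= 0)
    """
    if prior is None:
        prior = np.ones_like(lam_grid)  # flat prior on the grid
    log_post = np.log(prior)
    for n in np.atleast_1d(counts):
        # Dark counts add to the signal rate inside the Poisson mean.
        log_post += stats.poisson.logpmf(n, lam_grid + dark_rate)
    log_post -= log_post.max()  # guard against underflow
    post = np.exp(log_post)
    return post / np.trapz(post, lam_grid)

def credible_interval(post, lam_grid, level=0.95):
    """Equal-tailed Bayesian credible interval (uniform grid assumed)."""
    cdf = np.cumsum(post) * (lam_grid[1] - lam_grid[0])
    lo = lam_grid[np.searchsorted(cdf, (1.0 - level) / 2.0)]
    hi = lam_grid[np.searchsorted(cdf, 1.0 - (1.0 - level) / 2.0)]
    return lo, hi

# Example: three frames of a dim pixel, 0.1 dark counts per frame.
grid = np.linspace(0.0, 10.0, 2001)
post = pixel_posterior([0, 1, 0], dark_rate=0.1, lam_grid=grid)
print(credible_interval(post, grid))
```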

    Skellam shrinkage: Wavelet-based intensity estimation for inhomogeneous Poisson data

    The ubiquity of integrating detectors in imaging and other applications implies that a variety of real-world data are well modeled as Poisson random variables whose means are in turn proportional to an underlying vector-valued signal of interest. In this article, we first show how the so-called Skellam distribution arises from the fact that Haar wavelet and filterbank transform coefficients corresponding to measurements of this type are distributed as sums and differences of Poisson counts. We then provide two main theorems on Skellam shrinkage, one showing the near-optimality of shrinkage in the Bayesian setting and the other providing for unbiased risk estimation in a frequentist context. These results yield new estimators in the Haar transform domain, including an unbiased risk estimate for shrinkage of Haar-Fisz variance-stabilized data, along with accompanying low-complexity algorithms for inference. We conclude with a simulation study demonstrating the efficacy of our Skellam shrinkage estimators both for standard univariate wavelet test functions and for a variety of test images taken from the image processing literature, confirming that they offer substantial performance improvements over existing alternatives. Comment: 27 pages, 8 figures, slight formatting changes; submitted for publication.
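
    The distributional fact underpinning the paper is easy to verify numerically: the difference of two independent Poisson counts, i.e. an unnormalized Haar detail coefficient, is Skellam distributed. A small sanity-check sketch using scipy.stats.skellam (the shrinkage estimators themselves are not reproduced here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two neighbouring Poisson measurements with rates mu1 and mu2.
mu1, mu2 = 7.0, 3.0
x1 = rng.poisson(mu1, size=100_000)
x2 = rng.poisson(mu2, size=100_000)

# Their difference -- the unnormalized Haar detail coefficient --
# follows a Skellam(mu1, mu2) distribution.
d = x1 - x2
ks = np.arange(d.min(), d.max() + 1)
empirical = np.array([(d == k).mean() for k in ks])
theoretical = stats.skellam.pmf(ks, mu1, mu2)
print(np.abs(empirical - theoretical).max())  # small for this sample size
```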

    Local Retrodiction Models for Photon-Noise-Limited Images

    Imaging technologies working at very low light levels acquire data by attempting to count the number of photons impinging on each pixel. Especially in cases with, on average, less than one photocount per pixel, the resulting images are heavily corrupted by Poissonian noise, and a host of successful algorithms has been developed to reconstruct the original image from these noisy data. Here we review a recently proposed scheme that complements these algorithms by calculating the full probability distribution for the local intensity distribution behind the noisy photocount measurements. Such a probabilistic treatment opens the way to hypothesis testing and confidence levels for conclusions drawn from image analysis.
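
    Given a gridded per-pixel posterior of the kind sketched after the first abstract above, the hypothesis tests mentioned here reduce to integrating posterior mass; a minimal, hypothetical illustration:

```python
import numpy as np

def prob_above(post, lam_grid, threshold):
    """P(lambda > threshold | data) from a gridded pixel posterior,
    e.g. to test whether a pixel is genuinely brighter than a given
    background level before trusting a feature in the image."""
    mask = lam_grid > threshold
    return np.trapz(post[mask], lam_grid[mask])
```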

    Learning sparse representations of depth

    This paper introduces a new method for learning and inferring sparse representations of depth (disparity) maps. The proposed algorithm relaxes the usual assumption of a stationary noise model in sparse coding. This enables learning from data corrupted with spatially varying noise or uncertainty, as typically obtained by laser range scanners or structured-light depth cameras. Sparse representations are learned from the Middlebury database disparity maps and then exploited in a two-layer graphical model for inferring depth from stereo, by including a sparsity prior on the learned features. Since they capture higher-order dependencies in the depth structure, these priors can complement smoothness priors commonly used in depth inference based on Markov Random Field (MRF) models. Inference on the proposed graph is achieved using an alternating iterative optimization technique, where the first layer is solved using an existing MRF-based stereo matching algorithm, then held fixed as the second layer is solved using the proposed non-stationary sparse coding algorithm. This leads to a general method for improving solutions of state-of-the-art MRF-based depth estimation algorithms. Our experimental results first show that depth inference using learned representations leads to state-of-the-art denoising of depth maps obtained from laser range scanners and a time-of-flight camera. Furthermore, we show that adding sparse priors improves the results of two depth estimation methods: the classical graph cut algorithm by Boykov et al. and the more recent algorithm of Woodford et al. Comment: 12 pages.
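
    A generic way to relax the stationary-noise assumption in sparse coding is to weight the data-fidelity term by per-pixel inverse noise variances. The weighted ISTA sketch below illustrates that idea; it is a stand-in under stated assumptions, not the paper's exact algorithm:

```python
import numpy as np

def weighted_ista(x, D, w, lam, n_iter=200):
    """Sparse-code a signal x over dictionary D under a non-stationary
    noise model: minimize
        0.5 * sum_i w[i] * (x[i] - (D @ a)[i])**2 + lam * ||a||_1,
    where w holds per-pixel inverse noise variances (large w = trusted
    pixel, w = 0 = missing measurement)."""
    # Lipschitz constant of the gradient: largest eigenvalue of D^T diag(w) D.
    L = np.linalg.norm(D.T @ (w[:, None] * D), 2)
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (w * (D @ a - x))   # gradient of the weighted data term
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a
```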

    Poisson Denoising on the Sphere

    In the scope of the Fermi mission, Poisson noise removal should improve data quality and make source detection easier. This paper presents a method for Poisson data denoising on the sphere, called the Multi-Scale Variance Stabilizing Transform on the Sphere (MS-VSTS). This method is based on a Variance Stabilizing Transform (VST), a transform that aims to stabilize a Poisson data set such that each stabilized sample has an (asymptotically) constant variance; in addition, for the VST used in the method, the transformed data are asymptotically Gaussian. MS-VSTS thus consists of decomposing the data into a sparse multi-scale dictionary (wavelets, curvelets, ridgelets, ...) and then applying a VST to the coefficients in order to obtain quasi-Gaussian stabilized coefficients. In this article, the multi-scale transform used is the Isotropic Undecimated Wavelet Transform. Hypothesis tests are then made to detect significant coefficients, and the denoised image is reconstructed with an iterative method based on Hybrid Steepest Descent (HSD). The method is tested on simulated Fermi data.
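
    The flat-domain intuition behind a VST can be sketched with the classical Anscombe transform, which maps Poisson counts to approximately Gaussian values with near-unit variance. The toy routine below is a crude stand-in for the multiscale, on-the-sphere procedure (the threshold-against-median step is purely illustrative): stabilize, keep only values that deviate significantly from the background, then invert.

```python
import numpy as np

def anscombe(x):
    """Anscombe VST: Poisson counts -> approximately Gaussian values
    with variance ~1 (asymptotically, for large enough means)."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (biased at very low counts)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

def vst_denoise(counts, k=3.0):
    """Stabilize, suppress fluctuations within ~k sigma of the median
    background (sigma ~ 1 after the VST), then invert."""
    y = anscombe(np.asarray(counts, dtype=float))
    baseline = np.median(y)
    y_hat = np.where(np.abs(y - baseline) > k, y, baseline)
    return inverse_anscombe(y_hat)
```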

    Restoration of Poissonian Images Using Alternating Direction Optimization

    Much research has been devoted to the problem of restoring Poissonian images, namely for medical and astronomical applications. However, the restoration of these images using state-of-the-art regularizers (such as those based on multiscale representations or total variation) is still an active research area, since the associated optimization problems are quite challenging. In this paper, we propose an approach to deconvolving Poissonian images, which is based on an alternating direction optimization method. The standard regularization (or maximum a posteriori) restoration criterion, which combines the Poisson log-likelihood with a (non-smooth) convex regularizer (log-prior), leads to hard optimization problems: the log-likelihood is non-quadratic and non-separable, the regularizer is non-smooth, and there is a non-negativity constraint. Using standard convex analysis tools, we present sufficient conditions for existence and uniqueness of solutions of these optimization problems, for several types of regularizers: total variation, frame-based analysis, and frame-based synthesis. We attack these problems with an instance of the alternating direction method of multipliers (ADMM), which belongs to the family of augmented Lagrangian algorithms. We study sufficient conditions for convergence and show that these are satisfied under either total-variation or frame-based (analysis and synthesis) regularization. The resulting algorithms are shown to outperform alternative state-of-the-art methods, both in terms of speed and restoration accuracy. Comment: 12 pages, 12 figures, 2 tables. Submitted to the IEEE Transactions on Image Processing.
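
    One reason ADMM suits this problem is that the proximal map of the Poisson negative log-likelihood is available in closed form elementwise, so the non-quadratic, non-separable log-likelihood never has to be minimized jointly with the blur. A sketch of that prox (the blur and the regularizer enter through separate ADMM subproblems, not shown here):

```python
import numpy as np

def poisson_prox(v, y, rho):
    """Elementwise prox of f(x) = x - y*log(x), the Poisson negative
    log-likelihood for counts y (additive constants dropped):

        argmin_x  f(x) + (rho/2) * (x - v)**2

    Setting the derivative 1 - y/x + rho*(x - v) to zero gives a
    quadratic in x; its positive root is the minimizer, and it is
    automatically non-negative.
    """
    b = rho * v - 1.0
    return (b + np.sqrt(b * b + 4.0 * rho * y)) / (2.0 * rho)
```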

    Undecimated Haar Thresholding for Poisson Intensity Estimation
