
    Reduction of Quantum Noise in Transmittance Estimation Using Photon-Correlated Beams

    The accuracy of optical measurements at low light levels is limited by the quantum noise of the source and by the random nature of the interaction with the measured object. The source noise may be reduced by use of nonclassical photon-number-squeezed light. This paper considers the use of two photon-correlated beams (generated, for example, by spontaneous parametric downconversion) to measure the optical transmittance of an object. The photons of each beam obey a random Poisson process but are synchronized in time. One beam is used to probe the object while the other serves as a reference, providing information on the realization of the random arrival of photons at the object. The additional information available from such a measurement may be exploited to improve the accuracy of the estimate. Various estimators, including the maximum likelihood estimator, are considered, and their performance is evaluated and compared with measurements based on a conventional single-beam (Poisson) source and a maximally squeezed (fixed-photon-number) source. The performance advantage established in this paper depends on parameters such as the intensity of the source, the transmittance of the object, the quantum efficiency of the detectors, the background noise, and the degree of correlation of the photon numbers in the two beams.
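    As a rough intuition for the twin-beam advantage described above, the sketch below simulates an idealized case: both beams share the same Poisson photon number, detectors have unit quantum efficiency, and background noise is ignored. The parameter values and the simple ratio estimator are illustrative assumptions, not the estimators analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T_true = 0.7         # assumed object transmittance (illustrative value)
mean_n = 50.0        # assumed mean photon number per measurement window
trials = 200_000     # number of simulated measurement windows

# Idealized twin beams: both beams share the same Poisson photon number,
# detectors have unit quantum efficiency, and background noise is ignored.
n_ref = rng.poisson(mean_n, trials)       # reference-beam counts
m_probe = rng.binomial(n_ref, T_true)     # probe-beam counts after the object

# Single-beam estimator: only the probe counts are available, so the
# Poisson fluctuations of the source contribute to the estimation error.
t_single = m_probe / mean_n

# Twin-beam ratio estimator: the reference beam reveals the realized
# photon number, removing the source-noise contribution.
valid = n_ref > 0
t_twin = m_probe[valid] / n_ref[valid]

print("single-beam estimator variance:", t_single.var())
print("twin-beam   estimator variance:", t_twin.var())
```

    In this idealized setting the twin-beam (ratio) estimator's variance is roughly T(1-T)/N versus T/N for the single-beam estimator, which is the qualitative gap the paper quantifies under more realistic conditions.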

    Skellam shrinkage: Wavelet-based intensity estimation for inhomogeneous Poisson data

    The ubiquity of integrating detectors in imaging and other applications implies that a variety of real-world data are well modeled as Poisson random variables whose means are in turn proportional to an underlying vector-valued signal of interest. In this article, we first show how the so-called Skellam distribution arises from the fact that Haar wavelet and filterbank transform coefficients corresponding to measurements of this type are distributed as sums and differences of Poisson counts. We then provide two main theorems on Skellam shrinkage, one showing the near-optimality of shrinkage in the Bayesian setting and the other providing for unbiased risk estimation in a frequentist context. These results yield new estimators in the Haar transform domain, including an unbiased risk estimate for shrinkage of Haar-Fisz variance-stabilized data, along with accompanying low-complexity algorithms for inference. We conclude with a simulation study demonstrating the efficacy of our Skellam shrinkage estimators both for standard univariate wavelet test functions and for a variety of test images taken from the image processing literature, confirming that they offer substantial performance improvements over existing alternatives. Comment: 27 pages, 8 figures, slight formatting changes; submitted for publication.
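    The starting observation, that differences of independent Poisson counts follow a Skellam distribution, can be checked directly. The short sketch below compares an empirical Haar-style difference of simulated Poisson counts with scipy.stats.skellam; the means and sample size are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.stats import skellam

rng = np.random.default_rng(1)
mu1, mu2 = 8.0, 3.0     # assumed means of two adjacent Poisson counts
n = 100_000             # number of simulated count pairs

# An (unnormalized) Haar detail coefficient of a pair of Poisson counts is
# their difference; the difference of two independent Poisson variables
# with means mu1 and mu2 follows a Skellam(mu1, mu2) distribution.
x1 = rng.poisson(mu1, n)
x2 = rng.poisson(mu2, n)
d = x1 - x2

# Compare empirical frequencies with the Skellam probability mass function.
for k in (-3, 0, 3, 6):
    emp = np.mean(d == k)
    print(f"k={k:+d}  empirical={emp:.4f}  skellam pmf={skellam.pmf(k, mu1, mu2):.4f}")
```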

    Bregman Cost for Non-Gaussian Noise

    One of the tasks of the Bayesian inverse problem is to find a good estimate based on the posterior probability density. The most common point estimators are the conditional mean (CM) and maximum a posteriori (MAP) estimates, which correspond to the mean and the mode of the posterior, respectively. From a theoretical point of view it has been argued that the MAP estimate is a Bayes estimator for the uniform cost function only in an asymptotic sense, while the CM estimate is a Bayes estimator for the mean squared cost function. Recently, it has been proven that the MAP estimate is a proper Bayes estimator for the Bregman cost if the image is corrupted by Gaussian noise. In this work we extend this result to other noise models with log-concave likelihood density, by introducing two related Bregman cost functions for which the CM and MAP estimates are proper Bayes estimators. Moreover, we also prove that the CM estimate outperforms the MAP estimate when the error is measured in a certain Bregman distance, a result that was previously unknown even in the case of additive Gaussian noise.
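    To make the quantities in the abstract concrete, the toy sketch below computes the CM and MAP estimates for a one-dimensional posterior with a Poisson likelihood and a log-concave prior, and evaluates a Bregman distance induced by an assumed convex regularizer. The regularizer, observation, and grid are illustrative choices only; this does not reproduce the paper's cost functions or theorems.

```python
import numpy as np

# Assumed convex regularizer R(u) = u log u - u (negative entropy), giving a
# log-concave prior exp(-R(u)) and a non-trivial Bregman distance.
def R(u):
    return u * np.log(u) - u

def bregman(u, v):
    # Bregman distance D_R(u, v) = R(u) - R(v) - R'(v)(u - v), with R'(v) = log v.
    return R(u) - R(v) - np.log(v) * (u - v)

# One Poisson observation y of an unknown positive intensity u.
y = 4
u = np.linspace(1e-3, 15.0, 20_000)
du = u[1] - u[0]

log_post = y * np.log(u) - u - R(u)      # unnormalized log posterior
post = np.exp(log_post - log_post.max())
post /= post.sum() * du                  # normalize on the grid

u_cm = np.sum(u * post) * du             # conditional mean (CM) estimate
u_map = u[np.argmax(post)]               # maximum a posteriori (MAP) estimate

print("CM estimate :", round(float(u_cm), 3))
print("MAP estimate:", round(float(u_map), 3))
print("D_R(CM, MAP):", round(float(bregman(u_cm, u_map)), 4))
```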