
    An EM Approach for Time-Variant Poisson-Gaussian Model Parameter Estimation

    The problem of estimating the parameters of a Poisson-Gaussian model from experimental data has recently raised much interest in various applications, for instance in confocal fluorescence microscopy. In this context, a field of independent random variables is observed, varying in both time and space. Each variable is the sum of two components, one following a Poisson and the other a Gaussian distribution. In this paper, a general formulation is considered in which the associated Poisson process is nonstationary in space and also exhibits an exponential decay in time, whereas the Gaussian component corresponds to a stationary white noise with arbitrary mean. To solve this parametric estimation problem, we follow an iterative Expectation-Maximization (EM) approach. The parameter update equations involve deriving finite approximations of infinite sums. Expressions for the maximum error incurred in the process are also given. Since the problem is non-convex, we pay attention to the EM initialization, using a moment-based method where recent optimization tools come into play. We carry out a performance analysis by computing the Cramér-Rao bounds on the estimated variables. The practical performance of the proposed estimation procedure is illustrated on both synthetic data and real fluorescence macroscopy image sequences. The algorithm is shown to provide reliable estimates of the mean/variance of the Gaussian noise and of the scale parameter of the Poisson component, as well as of its exponential decay rate. In particular, the mean estimate of the Poisson component can be interpreted as a good-quality denoised version of the data.
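The moment-based initialization can be sketched on a simplified, stationary special case: y = α·P + g with P ~ Poisson(λ) and g ~ N(0, σ²), i.e. zero Gaussian mean. (The paper's model is a nonstationary field with time decay and arbitrary Gaussian mean, and its initialization also brings in optimization tools not shown here; all names below are illustrative.) Matching the first three cumulants — κ₁ = αλ, κ₂ = α²λ + σ², κ₃ = α³λ, since the Gaussian part contributes nothing to κ₃ — identifies all three parameters:

```python
import math
import random

random.seed(0)

def sample_poisson(lam):
    # Knuth's multiplicative method (fine for small lam)
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        p *= random.random()
        k += 1
    return k - 1

# Simulate y = alpha * Poisson(lam) + N(0, sigma^2)
alpha, lam, sigma = 2.0, 5.0, 2.0
n = 200_000
y = [alpha * sample_poisson(lam) + random.gauss(0.0, sigma) for _ in range(n)]

m1 = sum(y) / n                         # first moment  ~ alpha*lam
m2 = sum((v - m1) ** 2 for v in y) / n  # 2nd central   ~ alpha^2*lam + sigma^2
m3 = sum((v - m1) ** 3 for v in y) / n  # 3rd central   ~ alpha^3*lam

alpha_hat = math.sqrt(m3 / m1)          # kappa3/kappa1 = alpha^2
lam_hat = m1 / alpha_hat
sigma2_hat = m2 - alpha_hat * m1        # m2 - alpha_hat^2 * lam_hat
print(alpha_hat, lam_hat, sigma2_hat)
```

With 200,000 samples the sample cumulants are accurate enough that the recovered (α, λ, σ²) land close to (2, 5, 4); such estimates can then seed the EM iterations.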

    A Scalable MCEM Estimator for Spatio-Temporal Autoregressive Models

    Very large spatio-temporal lattice data are becoming increasingly common across a variety of disciplines. However, estimating interdependence across space and time in large areal datasets remains challenging, as existing approaches are often (i) not scalable, (ii) designed for conditionally Gaussian outcome data, or (iii) limited to cross-sectional and univariate outcomes. This paper proposes an MCEM estimation strategy for a family of latent-Gaussian multivariate spatio-temporal models that addresses these issues. The proposed estimator is applicable to a wide range of non-Gaussian outcomes, and implementations for binary and count outcomes are discussed explicitly. The methodology is illustrated on simulated data, as well as on weekly data of IS-related events in Syrian districts.
    Comment: 29 pages, 8 figures
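The MCEM idea for binary outcomes can be illustrated on a scalar latent-Gaussian toy problem — not the paper's spatio-temporal autoregressive model. Take z_i ~ N(θ, 1) observed only through y_i = 1{z_i > 0}; the exact E-step (a truncated-normal mean) is replaced by Monte Carlo imputation via rejection sampling:

```python
import random

random.seed(1)

theta_true = 0.5
n = 1000
# Observed binary outcomes from the latent-Gaussian model
y = [1 if random.gauss(theta_true, 1.0) > 0 else 0 for _ in range(n)]

theta = 0.0  # crude starting value
for _ in range(25):
    # Monte Carlo E-step: impute each latent z_i by rejection sampling
    # from N(theta, 1) restricted to the half-line implied by y_i.
    total, count = 0.0, 0
    for yi in y:
        accepted = 0
        while accepted < 10:          # 10 accepted draws per observation
            z = random.gauss(theta, 1.0)
            if (z > 0) == (yi == 1):
                total += z
                accepted += 1
                count += 1
    # M-step: the Gaussian mean maximizing Q is the imputed average
    theta = total / count
print(theta)
```

The fixed point of this iteration satisfies Φ(θ) = p̂ (the sample fraction of ones), so the chain settles near the maximum-likelihood estimate, up to Monte Carlo noise; real MCEM implementations scale the sampling effort up as the iterations converge.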

    Statistical unfolding of elementary particle spectra: Empirical Bayes estimation and bias-corrected uncertainty quantification

    We consider the high energy physics unfolding problem where the goal is to estimate the spectrum of elementary particles given observations distorted by the limited resolution of a particle detector. This important statistical inverse problem arising in data analysis at the Large Hadron Collider at CERN consists in estimating the intensity function of an indirectly observed Poisson point process. Unfolding typically proceeds in two steps: one first produces a regularized point estimate of the unknown intensity and then uses the variability of this estimator to form frequentist confidence intervals that quantify the uncertainty of the solution. In this paper, we propose forming the point estimate using empirical Bayes estimation which enables a data-driven choice of the regularization strength through marginal maximum likelihood estimation. Observing that neither Bayesian credible intervals nor standard bootstrap confidence intervals succeed in achieving good frequentist coverage in this problem due to the inherent bias of the regularized point estimate, we introduce an iteratively bias-corrected bootstrap technique for constructing improved confidence intervals. We show using simulations that this enables us to achieve nearly nominal frequentist coverage with only a modest increase in interval length. The proposed methodology is applied to unfolding the Z boson invariant mass spectrum as measured in the CMS experiment at the Large Hadron Collider.
    Comment: Published at http://dx.doi.org/10.1214/15-AOAS857 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org). arXiv admin note: substantial text overlap with arXiv:1401.827
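The bias problem that bias-corrected bootstrapping targets can be seen on a textbook example: the 1/n plug-in variance estimator, which underestimates the true variance by a factor (n−1)/n. A single round of bootstrap bias correction looks like this (the paper iterates the correction, via nested resampling, for regularized unfolding estimates; everything here is an illustrative stand-in):

```python
import random

random.seed(2)

def plug_in_var(xs):
    # Biased 1/n sample variance (the MLE under normality)
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

n = 30
data = [random.gauss(0.0, 2.0) for _ in range(n)]  # true variance is 4
raw = plug_in_var(data)

# Bootstrap bias estimate: average the estimator over resamples of the
# observed data and compare with its value on the original sample.
B = 2000
boot = [plug_in_var([random.choice(data) for _ in range(n)]) for _ in range(B)]
bias_hat = sum(boot) / B - raw       # approx. -raw/n for this estimator
corrected = raw - bias_hat           # bias-corrected estimate
print(raw, corrected)
```

Since the bootstrap bias estimate is negative here, the corrected value moves upward, toward the unbiased n/(n−1) rescaling; iterating the correction (re-estimating the bias of the corrected estimator with a second resampling level) is the "iteratively bias-corrected" step the abstract refers to.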

    Recent Progress in Image Deblurring

    This paper comprehensively reviews the recent development of image deblurring, including non-blind/blind and spatially invariant/variant deblurring techniques. These techniques share the same objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques must additionally derive an accurate blur kernel. Given the critical role of image restoration in modern imaging systems, which must provide high-quality images under complex conditions such as motion, undesirable lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how they handle the ill-posedness that is the crucial issue in deblurring tasks, existing methods can be grouped into five categories: Bayesian inference framework, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. Despite this progress, image deblurring, especially in the blind case, remains limited by complex application conditions that make the blur kernel spatially variant and hard to estimate. We provide a holistic understanding and deep insight into image deblurring in this review. An analysis of the empirical evidence for representative methods, practical issues, as well as a discussion of promising future directions are also presented.
    Comment: 53 pages, 17 figures
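As a minimal concrete instance of the Bayesian-inference-framework category, here is a toy 1-D non-blind Richardson-Lucy deconvolution with a known, symmetric blur kernel (a sketch only; practical deblurring operates on 2-D images with noise, regularization, and careful boundary handling):

```python
def conv(x, k):
    # 'same'-size convolution with zero padding; the kernel is symmetric
    # here, so this operator equals its own adjoint.
    h = len(k) // 2
    out = []
    for i in range(len(x)):
        s = 0.0
        for j, kj in enumerate(k):
            idx = i + j - h
            if 0 <= idx < len(x):
                s += kj * x[idx]
        out.append(s)
    return out

kernel = [0.25, 0.5, 0.25]
true = [0.1] * 15
true[7] = 5.0                       # a sharp spike on a low baseline
blurred = conv(true, kernel)        # noiseless blurry observation

est = [sum(blurred) / len(blurred)] * len(blurred)  # flat positive start
for _ in range(300):
    # Richardson-Lucy multiplicative update: est *= K^T(blurred / K est)
    ratio = [b / max(c, 1e-12) for b, c in zip(blurred, conv(est, kernel))]
    corr = conv(ratio, kernel)      # adjoint = same conv (symmetric kernel)
    est = [e * c for e, c in zip(est, corr)]
print(round(est[7], 2))
```

The multiplicative update keeps the estimate nonnegative and, on noiseless data, climbs toward the Poisson maximum-likelihood solution, so the blurred-out spike is progressively re-sharpened.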

    Efficient training algorithms for HMMs using incremental estimation

    Typically, parameter estimation for a hidden Markov model (HMM) is performed using an expectation-maximization (EM) algorithm with the maximum-likelihood (ML) criterion. The EM algorithm is an iterative scheme that is well-defined and numerically stable, but convergence may require a large number of iterations. For speech recognition systems utilizing large amounts of training material, this results in long training times. This paper presents an incremental estimation approach to speed up the training of HMMs without any loss of recognition performance. The algorithm selects a subset of data from the training set, updates the model parameters based on the subset, and then iterates the process until convergence of the parameters. The advantage of this approach is a substantial increase in the number of iterations of the EM algorithm per training token, which leads to faster training. In order to achieve reliable estimation from a small fraction of the complete data set at each iteration, two training criteria are studied: ML and maximum a posteriori (MAP) estimation. Experimental results show that the training of the incremental algorithms is substantially faster than the conventional (batch) method and suffers no loss of recognition performance. Furthermore, the incremental MAP based training algorithm improves performance over the batch version.
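The subset-update idea can be sketched on a toy model rather than a full HMM trainer. The example below runs incremental EM on a two-component Gaussian mixture with known unit variances and equal weights: each iteration performs the E-step only on a random subset and blends the subset's M-step answer into the running parameters (the blending weight eta and subset size are illustrative choices, not from the paper):

```python
import math
import random

random.seed(3)

# Two well-separated components; only the means are estimated.
data = [random.gauss(0.0, 1.0) for _ in range(1000)] + \
       [random.gauss(5.0, 1.0) for _ in range(1000)]

mu = [1.0, 4.0]   # rough initial means
eta = 0.2         # blending weight for each subset update
for _ in range(200):
    subset = random.sample(data, 100)   # E-step only on a small subset
    r0 = r1 = s0 = s1 = 0.0
    for x in subset:
        p0 = math.exp(-0.5 * (x - mu[0]) ** 2)
        p1 = math.exp(-0.5 * (x - mu[1]) ** 2)
        g = p0 / (p0 + p1)              # responsibility of component 0
        r0 += g
        s0 += g * x
        r1 += 1 - g
        s1 += (1 - g) * x
    # Incremental M-step: blend subset estimates into running parameters
    mu[0] = (1 - eta) * mu[0] + eta * s0 / r0
    mu[1] = (1 - eta) * mu[1] + eta * s1 / r1
print(mu)
```

Each data point is visited many times across subsets, so the model performs far more EM updates per pass over the data than batch EM would — the same effect the paper exploits for faster HMM training.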