
    A Simple Approach to Maximum Intractable Likelihood Estimation

    Approximate Bayesian Computation (ABC) can be viewed as an analytic approximation of an intractable likelihood coupled with an elementary simulation step. This view, combined with a suitable instrumental prior distribution, permits maximum-likelihood (or maximum-a-posteriori) inference to be conducted, approximately, using essentially the same techniques. An elementary approach to this problem is developed here: a nonparametric approximation of the likelihood surface is obtained and then used as a smooth proxy for the likelihood in a subsequent maximisation step, and the convergence of this class of algorithms is characterised theoretically. The use of non-sufficient summary statistics in this context is considered. Applying the proposed method to four problems demonstrates good performance. The proposed approach provides an alternative for approximating the maximum likelihood estimator (MLE) in complex scenarios.
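
    As a concrete illustration, the sketch below accepts ABC draws under a uniform instrumental prior, smooths them with a kernel density estimate, and maximises that smooth proxy to approximate the MLE. It is a minimal one-dimensional reading of the approach, not the paper's exact algorithm; `simulate` and `prior_sample` are hypothetical user-supplied hooks.

```python
import numpy as np
from scipy.stats import gaussian_kde

def abc_mle(s_obs, simulate, prior_sample, n_sims=100_000, eps=0.1):
    """Approximate the MLE by maximising a KDE of accepted ABC draws.

    Under a uniform instrumental prior, the ABC posterior is (roughly)
    proportional to the approximate likelihood, so the mode of a smooth
    nonparametric estimate of it approximates the MLE.

    simulate(theta) -> scalar summary of one synthetic dataset (hypothetical)
    prior_sample()  -> draw from the uniform instrumental prior (hypothetical)
    """
    thetas = np.array([prior_sample() for _ in range(n_sims)])
    summaries = np.array([simulate(t) for t in thetas])
    accepted = thetas[np.abs(summaries - s_obs) < eps]  # elementary ABC step
    proxy = gaussian_kde(accepted)          # smooth proxy for the likelihood
    grid = np.linspace(accepted.min(), accepted.max(), 2000)
    return grid[np.argmax(proxy(grid))]     # subsequent maximisation step
```

    For instance, with `simulate = lambda t: np.random.normal(t, 1.0, 50).mean()`, `prior_sample = lambda: np.random.uniform(-5, 5)` and `s_obs` set to an observed sample mean, the returned value approximates the familiar Gaussian MLE.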

    Revisiting maximum-a-posteriori estimation in log-concave models

    Maximum-a-posteriori (MAP) estimation is the main Bayesian estimation methodology in imaging sciences, where high dimensionality is often addressed by using Bayesian models that are log-concave and whose posterior mode can be computed efficiently by convex optimisation. Despite its success and wide adoption, MAP estimation is not yet theoretically well understood. The prevalent view in the community is that MAP estimation is not proper Bayesian estimation in a decision-theoretic sense because it does not minimise a meaningful expected loss function (unlike the minimum mean squared error (MMSE) estimator, which minimises the mean squared loss). This paper addresses this theoretical gap by presenting a decision-theoretic derivation of MAP estimation in Bayesian models that are log-concave. A main novelty is that our analysis is based on differential geometry, and proceeds as follows. First, we use the underlying convex geometry of the Bayesian model to induce a Riemannian geometry on the parameter space. We then use differential geometry to identify the so-called natural or canonical loss function for Bayesian point estimation in that Riemannian manifold. For log-concave models, this canonical loss is the Bregman divergence associated with the negative log posterior density. We then show that the MAP estimator is the only Bayesian estimator that minimises the expected canonical loss, and that the posterior mean or MMSE estimator minimises the dual canonical loss. We also study MAP and MMSE estimation performance in large-scale settings and establish a universal bound on the expected canonical error as a function of dimension, offering new insights into the good performance observed in convex problems. These results provide a new understanding of MAP and MMSE estimation in log-concave settings, and of the multiple roles that convex geometry plays in imaging problems. Comment: Accepted for publication in SIAM Imaging Sciences
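
    For reference, the canonical loss the abstract refers to is a Bregman divergence; a minimal statement in LaTeX, with the identification of the potential taken from the abstract itself:

```latex
% Bregman divergence generated by a convex potential \phi; in this setting
% \phi(x) = -\log p(x \mid y), which is convex when the model is log-concave.
\[
  D_{\phi}(u, x) \;=\; \phi(u) - \phi(x) - \big\langle \nabla\phi(x),\, u - x \big\rangle .
\]
% Per the abstract: the MAP estimator minimises the posterior expectation of
% this canonical loss, while the posterior mean (MMSE estimator) minimises
% the dual, argument-swapped divergence.
```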

    Statistical unfolding of elementary particle spectra: Empirical Bayes estimation and bias-corrected uncertainty quantification

    We consider the high energy physics unfolding problem, where the goal is to estimate the spectrum of elementary particles given observations distorted by the limited resolution of a particle detector. This important statistical inverse problem, arising in data analysis at the Large Hadron Collider at CERN, consists in estimating the intensity function of an indirectly observed Poisson point process. Unfolding typically proceeds in two steps: one first produces a regularized point estimate of the unknown intensity and then uses the variability of this estimator to form frequentist confidence intervals that quantify the uncertainty of the solution. In this paper, we propose forming the point estimate using empirical Bayes estimation, which enables a data-driven choice of the regularization strength through marginal maximum likelihood estimation. Observing that neither Bayesian credible intervals nor standard bootstrap confidence intervals succeed in achieving good frequentist coverage in this problem, due to the inherent bias of the regularized point estimate, we introduce an iteratively bias-corrected bootstrap technique for constructing improved confidence intervals. We show using simulations that this enables us to achieve nearly nominal frequentist coverage with only a modest increase in interval length. The proposed methodology is applied to unfolding the Z boson invariant mass spectrum as measured in the CMS experiment at the Large Hadron Collider. Comment: Published at http://dx.doi.org/10.1214/15-AOAS857 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org). arXiv admin note: substantial text overlap with arXiv:1401.827
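
    The iterative bias correction can be sketched generically: pick the underlying intensity so that the bootstrap mean of the estimator reproduces the estimate observed on the real data. The Python sketch below is one minimal rendering of that idea, not the paper's exact procedure; `estimate` and `resample` are hypothetical hooks for the fitted Poisson model.

```python
import numpy as np

def iterated_bc_bootstrap(y, estimate, resample, n_boot=200, n_iter=5):
    """Sketch of an iteratively bias-corrected bootstrap.

    estimate(y)     -> regularised point estimate (e.g. unfolded intensity)
    resample(theta) -> synthetic data from the fitted model at theta
    Both hooks are hypothetical. The loop seeks theta_c whose bootstrap
    mean of re-estimates matches the estimate from the real data, i.e. it
    inverts the estimator's bias rather than subtracting it only once.
    """
    theta_hat = np.asarray(estimate(y), dtype=float)
    theta_c = theta_hat.copy()
    for _ in range(n_iter):
        reps = np.array([estimate(resample(theta_c)) for _ in range(n_boot)])
        bias = reps.mean(axis=0) - theta_c        # bias if theta_c were true
        theta_c = theta_hat - bias                # re-centre and iterate
    lo, hi = np.percentile(reps - bias, [2.5, 97.5], axis=0)  # 95% intervals
    return theta_c, lo, hi
```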

    Uncertainty quantification for radio interferometric imaging: II. MAP estimation

    Uncertainty quantification is a critical missing component in radio interferometric imaging that will only become increasingly important as the big-data era of radio interferometry emerges. Statistical sampling approaches to perform Bayesian inference, like Markov Chain Monte Carlo (MCMC) sampling, can in principle recover the full posterior distribution of the image, from which uncertainties can then be quantified. However, for massive data sizes, like those anticipated from the Square Kilometre Array (SKA), it will be difficult if not impossible to apply any MCMC technique due to its inherent computational cost. We formulate Bayesian inference problems with sparsity-promoting priors (motivated by compressive sensing), for which we recover maximum a posteriori (MAP) point estimators of radio interferometric images by convex optimisation. Exploiting recent developments in the theory of probability concentration, we quantify uncertainties by post-processing the recovered MAP estimate. Three strategies to quantify uncertainties are developed: (i) highest posterior density credible regions; (ii) local credible intervals (cf. error bars) for individual pixels and superpixels; and (iii) hypothesis testing of image structure. These forms of uncertainty quantification provide rich information for analysing radio interferometric observations in a statistically robust manner. Our MAP-based methods are approximately 10^5 times faster computationally than state-of-the-art MCMC methods and, in addition, support highly distributed and parallelised algorithmic structures. For the first time, our MAP-based techniques provide a means of quantifying uncertainties for radio interferometric imaging for realistic data volumes and practical use, and scale to the emerging big-data era of radio astronomy. Comment: 13 pages, 10 figures, see companion article in this arXiv listing
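
    Strategy (i) hinges on converting the MAP objective value into an approximate highest-posterior-density threshold without any sampling. The sketch below assumes the concentration bound from Pereyra's line of work (the constant 16 and the form of the threshold are stated as an assumption, not quoted from this paper's text); `g` denotes the negative log posterior.

```python
import numpy as np

def hpd_threshold(g_map, N, alpha=0.01):
    """Approximate (1 - alpha) highest-posterior-density threshold.

    g_map: negative log posterior g evaluated at the MAP image.
    N:     dimension of the image (number of pixels).
    Assumed concentration-based approximation:
        gamma_alpha = g(x_MAP) + sqrt(N) * tau_alpha + N,
        tau_alpha   = sqrt(16 * log(3 / alpha)).
    An image x belongs to the credible region iff g(x) <= gamma_alpha.
    """
    tau = np.sqrt(16.0 * np.log(3.0 / alpha))
    return g_map + np.sqrt(N) * tau + N
```

    In this framework, hypothesis tests of image structure (strategy iii) reduce to checking whether a surrogate image with the structure removed still satisfies g(x) <= gamma_alpha.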

    Robust approximate Bayesian inference

    We discuss an approach for deriving robust posterior distributions from M-estimating functions using Approximate Bayesian Computation (ABC) methods. In particular, we use M-estimating functions to construct suitable summary statistics in ABC algorithms. The theoretical properties of the robust posterior distributions are discussed. Special attention is given to the application of the method to linear mixed models. Simulation results and an application to a clinical study demonstrate the usefulness of the method. An R implementation is also provided in the robustBLME package. Comment: This is a revised and personal manuscript version of the article that has been accepted for publication by the Journal of Statistical Planning and Inference
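
    To make the construction concrete, the sketch below uses the simplest instance: a Huber M-estimate of location as the ABC summary statistic. This is an illustrative simplification (the paper works with scaled M-estimating functions and linear mixed models); `simulate` and `prior_sample` are hypothetical hooks.

```python
import numpy as np
from scipy.optimize import brentq

def huber_psi(r, c=1.345):
    """Huber psi function: linear near zero, clipped beyond +-c."""
    return np.clip(r, -c, c)

def m_estimate(x, c=1.345):
    """Huber M-estimate of location: root of sum(psi(x - mu)) = 0."""
    score = lambda mu: huber_psi(x - mu, c).sum()
    return brentq(score, x.min() - 1.0, x.max() + 1.0)  # sign change guaranteed

def abc_robust(y_obs, simulate, prior_sample, n_sims=50_000, eps=0.05):
    """ABC with a robust M-estimate as the summary statistic (sketch)."""
    s_obs = m_estimate(np.asarray(y_obs, dtype=float))
    draws = []
    for _ in range(n_sims):
        theta = prior_sample()                       # hypothetical prior hook
        s_sim = m_estimate(np.asarray(simulate(theta), dtype=float))
        if abs(s_sim - s_obs) < eps:
            draws.append(theta)
    return np.array(draws)    # draws from the robust approximate posterior
```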

    Nonlocal Myriad Filters for Cauchy Noise Removal

    The contribution of this paper is two-fold. First, we introduce a generalized myriad filter, a method to compute the joint maximum likelihood estimator of the location and scale parameters of the Cauchy distribution. (Estimating only the location parameter is known as myriad filtering.) We propose an efficient algorithm to compute the generalized myriad filter and prove its convergence. Special cases of this algorithm yield the classical myriad filter and, respectively, an algorithm for estimating only the scale parameter. Based on an asymptotic analysis, we develop a second, even faster generalized myriad filtering technique. Second, we use our new approaches within a nonlocal, fully unsupervised method to denoise images corrupted by Cauchy noise. Special attention is paid to the determination of similar patches in noisy images. Numerical examples demonstrate the excellent performance of our algorithms, which moreover have the advantage of being robust with respect to the choice of parameters.
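
    The joint location/scale fixed point can be written directly from the Cauchy score equations. The following Python sketch is an IRLS-style iteration in that spirit; it is not claimed to be the paper's exact update or its convergence-proved variant.

```python
import numpy as np

def generalized_myriad(x, n_iter=200, tol=1e-10):
    """Joint MLE of Cauchy location theta and scale gamma (sketch).

    Fixed-point updates derived from setting the score equations to zero:
        w_i     = 1 / (gamma^2 + (x_i - theta)^2)
        theta   = sum(w * x) / sum(w)
        gamma^2 = N / (2 * sum(w))
    """
    x = np.asarray(x, dtype=float)
    theta = np.median(x)                                      # robust start
    gamma = max(0.5 * (np.percentile(x, 75) - np.percentile(x, 25)), 1e-6)
    g2 = gamma ** 2
    for _ in range(n_iter):
        w = 1.0 / (g2 + (x - theta) ** 2)                     # Cauchy weights
        theta_new = (w * x).sum() / w.sum()                   # location update
        g2_new = x.size / (2.0 * w.sum())                     # scale update
        converged = abs(theta_new - theta) < tol and abs(g2_new - g2) < tol
        theta, g2 = theta_new, g2_new
        if converged:
            break
    return theta, np.sqrt(g2)
```

    Fixing the scale and iterating only the location update recovers the classical myriad filter; fixing the location gives the scale-only estimator mentioned in the abstract.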