11 research outputs found

    Revisiting maximum-a-posteriori estimation in log-concave models

    Maximum-a-posteriori (MAP) estimation is the main Bayesian estimation methodology in imaging sciences, where high dimensionality is often addressed by using Bayesian models that are log-concave and whose posterior mode can be computed efficiently by convex optimisation. Despite its success and wide adoption, MAP estimation is not yet well understood theoretically. The prevalent view in the community is that MAP estimation is not proper Bayesian estimation in a decision-theoretic sense because it does not minimise a meaningful expected loss function (unlike the minimum mean squared error (MMSE) estimator, which minimises the mean squared loss). This paper addresses this theoretical gap by presenting a decision-theoretic derivation of MAP estimation in Bayesian models that are log-concave. A main novelty is that our analysis is based on differential geometry and proceeds as follows. First, we use the underlying convex geometry of the Bayesian model to induce a Riemannian geometry on the parameter space. We then use differential geometry to identify the so-called natural or canonical loss function for Bayesian point estimation in that Riemannian manifold. For log-concave models, this canonical loss is the Bregman divergence associated with the negative log posterior density. We then show that the MAP estimator is the only Bayesian estimator that minimises the expected canonical loss, and that the posterior mean or MMSE estimator minimises the dual canonical loss. We also study the performance of MAP and MMSE estimation in large-scale settings and establish a universal bound on the expected canonical error as a function of dimension, offering new insights into the good performance observed in convex problems. These results provide a new understanding of MAP and MMSE estimation in log-concave settings, and of the multiple roles that convex geometry plays in imaging problems. Comment: Accepted for publication in the SIAM Journal on Imaging Sciences.
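
    For readers unfamiliar with Bregman divergences, the following sketch records the standard definitions underlying the statement above; the exact argument order and regularity conditions used in the paper may differ.

```latex
% A sketch of the standard definitions (the paper's exact conventions may differ).
% Let \phi(x) = -\log p(x \mid y) be convex and differentiable on its domain.
\[
  D_\phi(u, v) \;=\; \phi(u) - \phi(v) - \langle \nabla\phi(v),\, u - v \rangle ,
\]
% and the dual divergence is the Bregman divergence of the convex conjugate
% \phi^*(z) = \sup_x \{\langle z, x\rangle - \phi(x)\}, which satisfies the identity
\[
  D_{\phi^*}\!\big(\nabla\phi(v), \nabla\phi(u)\big) \;=\; D_\phi(u, v).
\]
% The abstract's claim can then be read as: the MAP estimate is the Bayes estimator
% for the canonical loss D_\phi, while the posterior mean (MMSE estimator) is the
% Bayes estimator for the dual loss.
```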

    Quantifying Uncertainty in High Dimensional Inverse Problems by Convex Optimisation

    Inverse problems play a key role in modern image and signal processing methods. However, since they are generally ill-conditioned or ill-posed due to a lack of observations, their solutions may carry significant intrinsic uncertainty. Analysing and quantifying this uncertainty is very challenging, particularly in high-dimensional problems and problems with non-smooth objective functionals (e.g. sparsity-promoting priors). In this article, a series of strategies to visualise this uncertainty are presented, e.g. highest posterior density credible regions and local credible intervals (cf. error bars) for individual pixels and superpixels. Our methods support non-smooth priors for inverse problems and can be scaled to high-dimensional settings. Moreover, we present strategies to automatically set regularisation parameters so that the proposed uncertainty quantification (UQ) strategies become much easier to use. Also, different kinds of dictionaries (complete and over-complete) are used to represent the image/signal, and their performance in the proposed UQ methodology is investigated. Comment: 5 pages, 5 figures.
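
    A minimal sketch of the idea of local credible intervals at superpixel level. For simplicity it computes the intervals from posterior samples rather than by the convex-optimisation route the paper develops; the function name, block size and credibility level are illustrative assumptions.

```python
import numpy as np

def local_credible_intervals(samples, block=8, alpha=0.05):
    """Per-superpixel credible intervals from posterior samples.

    samples : array of shape (n_samples, H, W) -- posterior samples of the image
    block   : superpixel side length in pixels
    alpha   : 1 - credibility level (0.05 -> 95% intervals)

    Returns two (H//block, W//block) arrays with the lower and upper bounds
    of the mean intensity of each superpixel.
    """
    n, H, W = samples.shape
    Hb, Wb = H // block, W // block
    # Average each sample over non-overlapping block x block superpixels.
    blocks = samples[:, :Hb * block, :Wb * block]
    blocks = blocks.reshape(n, Hb, block, Wb, block).mean(axis=(2, 4))
    lo = np.quantile(blocks, alpha / 2, axis=0)
    hi = np.quantile(blocks, 1 - alpha / 2, axis=0)
    return lo, hi

# Illustrative usage with synthetic "posterior samples".
rng = np.random.default_rng(0)
fake_samples = rng.normal(loc=1.0, scale=0.1, size=(500, 64, 64))
lo, hi = local_credible_intervals(fake_samples, block=8, alpha=0.05)
print(hi - lo)  # width of the 95% interval for each superpixel
```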

    Makeham Mortality Models as Mixtures

    Mortality modeling is crucial to understanding the complex nature of population aging and projecting future trends. The Makeham term is a commonly used constant additive hazard in mortality modeling that captures background mortality unrelated to aging. In this manuscript, we propose representing Makeham mortality models as mixtures that describe lifetimes in a competing-risk framework: an individual dies either according to a baseline mortality mechanism or according to an exponential distribution, whichever strikes first. The baseline can describe mortality at all ages or just mortality due to aging. Using this approach, we can estimate the share of non-senescent mortality at each adult age, which is an essential contribution to the study of premature and senescent mortality. Our results allow for a better understanding of the underlying mechanisms of mortality and provide a more accurate picture of mortality dynamics in populations.
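
    A minimal simulation sketch of the competing-risk reading of the Makeham term: a Gompertz "senescent" lifetime competes with an independent exponential "background" lifetime, and the empirical share of non-senescent deaths at each age is compared with the analytic hazard ratio c / (c + a e^{bx}). All parameter values are purely illustrative.

```python
import numpy as np

# Illustrative Gompertz-Makeham parameters (not fitted to any real data).
a, b, c = 2e-5, 0.1, 5e-4   # Gompertz level, Gompertz slope, Makeham constant
n = 1_000_000
rng = np.random.default_rng(1)

# Gompertz lifetimes by inverse-CDF sampling: S(x) = exp(-(a/b)(e^{bx} - 1)).
u = rng.uniform(size=n)
t_senescent = np.log(1.0 - (b / a) * np.log(u)) / b

# Exponential "background" lifetimes with constant hazard c.
t_background = rng.exponential(scale=1.0 / c, size=n)

# Competing risks: the individual dies from whichever cause strikes first.
age_at_death = np.minimum(t_senescent, t_background)
died_background = t_background < t_senescent

# Empirical vs analytic share of non-senescent deaths in one-year age bands.
for x in (40, 60, 80):
    band = (age_at_death >= x) & (age_at_death < x + 1)
    empirical = died_background[band].mean()
    analytic = c / (c + a * np.exp(b * (x + 0.5)))
    print(f"age {x}: empirical {empirical:.3f}, analytic {analytic:.3f}")
```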

    Using a Penalized Likelihood to Detect Mortality Deceleration

    In this paper, we suggest a novel method for detecting mortality deceleration. We focus on the gamma-Gompertz frailty model and suggest subtracting a penalty from the log-likelihood function as an alternative to traditional likelihood inference and hypothesis testing. Compared with existing methods, our approach offers advantages such as avoiding reliance on p-values, hypothesis tests, and asymptotic distributions. We evaluate the performance of our approach by comparing it with traditional likelihood inference on both simulated and real mortality data. The results show that our approach is more accurate in detecting mortality deceleration and provides more reliable estimates of the underlying parameters. The proposed method is a significant contribution to the literature, as it offers a powerful tool for analyzing mortality patterns.
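
    A minimal sketch of a penalised-likelihood fit of a gamma-Gompertz frailty model to simulated death ages. The specific penalty shown (a simple quadratic term on the frailty variance) and all parameter values are illustrative assumptions, not the penalty derived in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def gamma_gompertz_loglik(theta, ages):
    """Log-likelihood of death ages (measured from the start age) under a
    gamma-Gompertz frailty model with population hazard
    a e^{bx} / (1 + (a s2 / b)(e^{bx} - 1))."""
    a, b, s2 = np.exp(theta)                   # positivity via log-parametrisation
    z = 1.0 + (a * s2 / b) * np.expm1(b * ages)
    log_hazard = np.log(a) + b * ages - np.log(z)
    log_survival = -np.log(z) / s2
    return np.sum(log_hazard + log_survival)

def penalised_negloglik(theta, ages, lam):
    """Negative log-likelihood plus an illustrative penalty on the frailty
    variance (a placeholder for the paper's penalty)."""
    s2 = np.exp(theta[2])
    return -gamma_gompertz_loglik(theta, ages) + lam * s2 ** 2

# Simulate gamma-Gompertz death ages: gamma frailty times a Gompertz baseline.
rng = np.random.default_rng(2)
a_true, b_true, s2_true = 0.02, 0.11, 0.2
frailty = rng.gamma(shape=1.0 / s2_true, scale=s2_true, size=20_000)
u = rng.uniform(size=20_000)
ages = np.log(1.0 - (b_true / (a_true * frailty)) * np.log(u)) / b_true

theta0 = np.log([0.015, 0.1, 0.1])             # starting values for (a, b, sigma^2)
fit = minimize(penalised_negloglik, theta0, args=(ages, 1.0), method="Nelder-Mead")
print(np.exp(fit.x))                           # estimated (a, b, sigma^2)
```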

    Accelerated Bayesian imaging by relaxed proximal-point Langevin sampling

    This paper presents a new accelerated proximal Markov chain Monte Carlo methodology to perform Bayesian inference in imaging inverse problems with an underlying convex geometry. The proposed strategy takes the form of a stochastic relaxed proximal-point iteration that admits two complementary interpretations. For models that are smooth or regularised by Moreau-Yosida smoothing, the algorithm is equivalent to an implicit midpoint discretisation of an overdamped Langevin diffusion targeting the posterior distribution of interest. This discretisation is asymptotically unbiased for Gaussian targets and shown to converge in an accelerated manner for any target that is κ-strongly log-concave (i.e., requiring on the order of √κ iterations to converge, similarly to accelerated optimisation schemes), comparing favourably to [M. Pereyra, L. Vargas Mieles, K. C. Zygalakis, SIAM J. Imaging Sciences, 13(2) (2020), pp. 905-935], which is only provably accelerated for Gaussian targets and has bias. For models that are not smooth, the algorithm is equivalent to a Leimkuhler-Matthews discretisation of a Langevin diffusion targeting a Moreau-Yosida approximation of the posterior distribution of interest, and hence achieves a significantly lower bias than conventional unadjusted Langevin strategies based on the Euler-Maruyama discretisation. For targets that are κ-strongly log-concave, the provided non-asymptotic convergence analysis also identifies the optimal time step that maximises the convergence speed. The proposed methodology is demonstrated through a range of experiments related to image deconvolution with Gaussian and Poisson noise, with assumption-driven and data-driven convex priors. Source codes for the numerical experiments of this paper are available from https://github.com/MI2G/accelerated-langevin-imla. Comment: 34 pages, 13 figures.
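
    The non-smooth case can be illustrated with a short sketch of a Leimkuhler-Matthews discretisation targeting a Moreau-Yosida smoothed posterior. This is a generic illustration of that class of samplers, not the authors' relaxed proximal-point algorithm; the denoising model, step size and proximal operator below are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def grad_smoothed_potential(x, y, sigma2, mu, lam):
    """Gradient of the Moreau-Yosida smoothed potential
    U_lam(x) = ||y - x||^2 / (2 sigma2) + (mu ||.||_1)_lam(x)."""
    grad_likelihood = (x - y) / sigma2
    grad_prior = (x - soft_threshold(x, lam * mu)) / lam   # gradient of the MY envelope
    return grad_likelihood + grad_prior

def leimkuhler_matthews_sampler(y, sigma2=0.01, mu=5.0, lam=0.01,
                                delta=2e-3, n_iter=5000, rng=None):
    """Unadjusted Langevin sampling with the Leimkuhler-Matthews rule:
    consecutive Gaussian increments are averaged, which reduces the bias of
    the invariant measure compared with the Euler-Maruyama discretisation."""
    rng = np.random.default_rng() if rng is None else rng
    x = y.copy()
    xi_prev = rng.standard_normal(y.shape)
    samples = []
    for _ in range(n_iter):
        xi_next = rng.standard_normal(y.shape)
        noise = np.sqrt(2.0 * delta) * 0.5 * (xi_prev + xi_next)
        x = x - delta * grad_smoothed_potential(x, y, sigma2, mu, lam) + noise
        xi_prev = xi_next
        samples.append(x.copy())
    return np.array(samples)

# Illustrative 1D denoising example with a sparse ground truth.
rng = np.random.default_rng(3)
x_true = np.zeros(64); x_true[[10, 30, 50]] = 1.0
y = x_true + 0.1 * rng.standard_normal(64)
samples = leimkuhler_matthews_sampler(y, rng=rng)
print(samples[1000:].mean(axis=0).round(2))   # posterior-mean estimate after burn-in
```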

    Learned reconstruction methods with convergence guarantees

    In recent years, deep learning has achieved remarkable empirical success for image reconstruction. This has catalyzed an ongoing quest for precise characterization of the correctness and reliability of data-driven methods in critical use-cases, for instance in medical imaging. Notwithstanding the excellent performance and efficacy of deep learning-based methods, concerns have been raised regarding their stability, or lack thereof, with serious practical implications. Significant advances have been made in recent years to unravel the inner workings of data-driven image recovery methods, challenging their widely perceived black-box nature. In this article, we will specify relevant notions of convergence for data-driven image reconstruction, which will form the basis of a survey of learned methods with mathematically rigorous reconstruction guarantees. A highlighted example is the role of input convex neural networks (ICNNs), which offer the possibility of combining the power of deep learning with classical convex regularization theory to devise methods that are provably convergent. This survey article is aimed both at methodological researchers seeking to advance the frontiers of our understanding of data-driven image reconstruction methods and at practitioners, by providing an accessible description of useful convergence concepts and by placing some of the existing empirical practices on a solid mathematical foundation.
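
    A minimal PyTorch sketch of an input convex neural network (ICNN) of the kind referenced above. The layer sizes and the weight-clipping strategy used to enforce non-negativity are illustrative choices, not the specific construction used in any of the surveyed methods.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """Fully input convex network: z_{k+1} = act(W_z^k z_k + W_x^k x + b_k),
    convex in x provided every W_z^k is non-negative and the activation is
    convex and non-decreasing (softplus here)."""

    def __init__(self, dim_in, dim_hidden=64, n_layers=3):
        super().__init__()
        self.Wx = nn.ModuleList([nn.Linear(dim_in, dim_hidden) for _ in range(n_layers)])
        self.Wz = nn.ModuleList([nn.Linear(dim_hidden, dim_hidden, bias=False)
                                 for _ in range(n_layers - 1)])
        self.out = nn.Linear(dim_hidden, 1, bias=False)

    def clamp_weights(self):
        """Project the z-path weights onto the non-negative orthant (to be
        called after each optimiser step to preserve convexity in the input)."""
        for layer in list(self.Wz) + [self.out]:
            layer.weight.data.clamp_(min=0.0)

    def forward(self, x):
        z = F.softplus(self.Wx[0](x))
        for Wx, Wz in zip(self.Wx[1:], self.Wz):
            z = F.softplus(Wx(x) + Wz(z))
        return self.out(z)

# Illustrative usage: the scalar output can serve as a learned convex regulariser.
model = ICNN(dim_in=32)
model.clamp_weights()
x = torch.randn(8, 32, requires_grad=True)
reg_value = model(x).sum()
grad = torch.autograd.grad(reg_value, x)[0]   # gradient of the convex regulariser
print(reg_value.item(), grad.shape)
```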

    Accelerating Bayesian computation in imaging

    The dimensionality and ill-posedness often encountered in imaging inverse problems are a challenge for Bayesian computational methods, particularly for state-of-the-art sampling alternatives based on the Euler-Maruyama discretisation of the Langevin diffusion process. In this thesis, we address this difficulty and propose alternatives to accelerate Bayesian computation in imaging inverse problems, focusing on their computational aspects. We introduce, as our first contribution, a highly efficient proximal Markov chain Monte Carlo (MCMC) methodology based on a state-of-the-art scheme known as the proximal stochastic orthogonal Runge-Kutta-Chebyshev (SK-ROCK) method. It has the advantage of cleverly combining multiple gradient evaluations to significantly speed up convergence, similar to accelerated gradient optimisation techniques. We rigorously demonstrate the acceleration of the Markov chains in the 2-Wasserstein distance for Gaussian models as a function of the condition number κ. In our second contribution, we propose a more sophisticated MCMC sampler, based on the careful integration of two advanced proximal Langevin MCMC methods, SK-ROCK and split Gibbs sampling (SGS), each of which uses a unique approach to accelerate convergence. More precisely, we show how to integrate the proximal SK-ROCK sampler with the model augmentation and relaxation method used by SGS at the level of the Langevin diffusion process, to speed up Bayesian computation at the expense of asymptotic bias. This leads to a new, faster proximal SK-ROCK sampler that combines the acceleration properties of the original sampler with the computational advantages of augmentation and relaxation. Additionally, we propose that the augmented and relaxed model be considered a generalisation of the target model rather than an approximation, which situates relaxation within a bias-variance trade-off. As a result, we can carefully calibrate the amount of relaxation to boost both model accuracy (as determined by the model evidence) and sampler convergence speed. To achieve this, we derive an empirical Bayesian method that automatically estimates the appropriate level of relaxation via maximum marginal likelihood estimation. The proposed methodologies are demonstrated in several numerical experiments related to image deblurring, hyperspectral unmixing, tomographic reconstruction and inpainting. Comparisons with Euler-type proximal Monte Carlo approaches confirm that the Markov chains generated with our methods exhibit significantly faster convergence speeds, achieve larger effective sample sizes, and produce lower mean square estimation errors with the same computational budget.
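
    The augmentation-and-relaxation device referred to above can be summarised as follows; this is a generic statement of the split Gibbs construction, with notation chosen here rather than the thesis's own.

```latex
% Generic split-Gibbs-style augmentation of a posterior
%   \pi(x \mid y) \propto \exp\{-f_y(x) - g(x)\}.
% Introduce an auxiliary variable z and a relaxation parameter \rho > 0:
\[
  \pi_\rho(x, z \mid y) \;\propto\;
  \exp\Big\{ -f_y(x) \;-\; g(z) \;-\; \tfrac{1}{2\rho^2}\,\|x - z\|_2^2 \Big\}.
\]
% The x-marginal of \pi_\rho approaches \pi(x \mid y) as \rho \to 0, while larger
% \rho decouples the two potentials and speeds up sampling; this is the
% accuracy-speed trade-off that the relaxation parameter controls, and the
% thesis selects \rho automatically by maximum marginal likelihood estimation.
```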