
    Concave Gaussian variational approximations for inference in large-scale Bayesian linear models

    Two popular approaches to forming bounds in approximate Bayesian inference are local variational methods and minimal Kullback-Leibler divergence methods. For a large class of models we explicitly relate the two approaches, showing that the local variational method is equivalent to a weakened form of Kullback-Leibler Gaussian approximation. This gives a strong motivation to develop efficient methods for KL minimisation. An important and previously unproven property of the KL variational Gaussian bound is that it is a concave function in the parameters of the Gaussian for log-concave sites. This observation, along with compact concave parametrisations of the covariance, enables us to develop fast, scalable optimisation procedures to obtain lower bounds on the marginal likelihood in large-scale Bayesian linear models.
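
    As an illustration of the kind of bound discussed above, here is a minimal sketch, not the authors' code, of the KL variational Gaussian bound for Bayesian logistic regression, a linear model with log-concave sites. The data, the logistic site model, the Gauss-Hermite quadrature, and the use of a generic quasi-Newton optimiser are all assumptions made for the example; the covariance is parametrised through a square-root factor C with S = CC^T.

```python
# Minimal sketch (not the authors' code): the KL variational Gaussian bound
# for Bayesian logistic regression, a linear model with log-concave sites.
# q(w) = N(m, CC^T); the bound is concave in (m, C) for log-concave sites.
import numpy as np
from scipy.optimize import minimize
from scipy.special import log_expit  # log of the logistic sigmoid

rng = np.random.default_rng(0)
n, d = 40, 3
X = rng.normal(size=(n, d))
y = np.sign(X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=n))

# Gauss-Hermite nodes rescaled for 1-D expectations under N(0, 1)
gh_x, gh_w = np.polynomial.hermite.hermgauss(32)
z, wts = np.sqrt(2.0) * gh_x, gh_w / np.sqrt(np.pi)

def neg_bound(theta):
    m, C = theta[:d], theta[d:].reshape(d, d)
    mu = X @ m                                   # site means x_n^T m
    sd = np.sqrt(np.sum((X @ C) ** 2, axis=1))   # site std devs ||C^T x_n||
    # sum of E_q[log sigma(y_n w^T x_n)] over sites, by quadrature
    like = np.sum(wts * log_expit(y[:, None] * (mu[:, None] + sd[:, None] * z)))
    S = C @ C.T + 1e-10 * np.eye(d)              # tiny jitter keeps logdet finite
    # KL(N(m, CC^T) || N(0, I)) against a standard-normal prior (an assumption)
    kl = 0.5 * (np.trace(S) + m @ m - d - np.linalg.slogdet(S)[1])
    return -(like - kl)

theta0 = np.concatenate([np.zeros(d), np.eye(d).ravel()])
res = minimize(neg_bound, theta0, method="L-BFGS-B")
print("lower bound on log marginal likelihood:", -res.fun)
```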

    Maximum-a-posteriori estimation with Bayesian confidence regions

    Solutions to inverse problems that are ill-conditioned or ill-posed may have significant intrinsic uncertainty. Unfortunately, analysing and quantifying this uncertainty is very challenging, particularly in high-dimensional problems. As a result, while most modern mathematical imaging methods produce impressive point estimation results, they are generally unable to quantify the uncertainty in the solutions delivered. This paper presents a new general methodology for approximating Bayesian high-posterior-density credibility regions in inverse problems that are convex and potentially very high-dimensional. The approximations are derived by using recent concentration of measure results related to information theory for log-concave random vectors. A remarkable property of the approximations is that they can be computed very efficiently, even in large-scale problems, by using standard convex optimisation techniques. In particular, they are available as a by-product in problems solved by maximum-a-posteriori estimation. The approximations also have favourable theoretical properties, namely they outer-bound the true high-posterior-density credibility regions, and they are stable with respect to model dimension. The proposed methodology is illustrated on two high-dimensional imaging inverse problems related to tomographic reconstruction and sparse deconvolution, where the approximations are used to perform Bayesian hypothesis tests and explore the uncertainty about the solutions, and where proximal Markov chain Monte Carlo algorithms are used as a benchmark to compute exact credible regions and measure the approximation error.
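
    To make the methodology concrete, the sketch below runs a toy sparse-deconvolution version of such a test. The model, the ISTA solver, and in particular the threshold form gamma_hat = g(x_MAP) + N + tau_alpha*sqrt(N) with tau_alpha = sqrt(16*log(3/alpha)) are assumptions recalled from this line of work, and the constants should be verified against the paper before use.

```python
# Hedged sketch of a MAP-based credible-region test (threshold constants are
# quoted from memory of this line of work; verify them against the paper).
import numpy as np

rng = np.random.default_rng(1)
N = 100                       # problem dimension
H = rng.normal(size=(N, N)) / np.sqrt(N)   # toy forward operator (assumption)
x_true = np.zeros(N); x_true[::10] = 1.0
sigma, lam = 0.1, 0.5
y = H @ x_true + sigma * rng.normal(size=N)

def g(x):                     # negative log-posterior, up to a constant
    return 0.5 * np.sum((y - H @ x) ** 2) / sigma**2 + lam * np.abs(x).sum()

# MAP estimate by ISTA (proximal gradient descent on g)
L = np.linalg.norm(H, 2) ** 2 / sigma**2   # Lipschitz constant of the smooth part
x = np.zeros(N)
for _ in range(2000):
    grad = H.T @ (H @ x - y) / sigma**2
    z = x - grad / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

# Approximate HPD threshold: gamma_hat = g(x_MAP) + N + tau_alpha * sqrt(N),
# with tau_alpha = sqrt(16 log(3/alpha)); this exact form is an assumption.
alpha = 0.05
tau = np.sqrt(16.0 * np.log(3.0 / alpha))
gamma_hat = g(x) + N + tau * np.sqrt(N)

# Bayesian hypothesis test: is the "no-signal" image plausible at level alpha?
x_null = np.zeros(N)
print("null hypothesis inside credible region:", g(x_null) <= gamma_hat)
```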

    Multiple Kernel Learning: A Unifying Probabilistic Viewpoint

    We present a probabilistic viewpoint on multiple kernel learning that unifies well-known regularised risk approaches and recent advances in approximate Bayesian inference relaxations. The framework proposes a general objective function, suitable for regression, robust regression and classification, that is a lower bound of the marginal likelihood and contains many regularised risk approaches as special cases. Furthermore, we derive an efficient and provably convergent optimisation algorithm.
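
    As a concrete instance of marginal-likelihood-driven kernel weighting (a simplified stand-in, not the paper's algorithm), the sketch below fits nonnegative combination weights over a few base kernels by maximising the exact Gaussian-process regression log marginal likelihood; in the Gaussian regression case no variational relaxation is needed. The data, kernels, and hyperparameters are assumptions for the example.

```python
# Illustrative sketch (not the paper's algorithm): multiple kernel learning
# for GP regression via the exact log marginal likelihood over kernel weights.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=60)

def rbf(A, B, ell):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

Ks = [rbf(X, X, ell) for ell in (0.3, 1.0, 3.0)]   # base kernels (assumption)
noise = 0.1 ** 2

def neg_log_ml(log_beta):
    beta = np.exp(log_beta)                         # keeps weights nonnegative
    K = sum(b * Kk for b, Kk in zip(beta, Ks)) + noise * np.eye(len(y))
    L = np.linalg.cholesky(K)
    a = np.linalg.solve(L.T, np.linalg.solve(L, y))  # a = K^{-1} y
    # negative log marginal likelihood, up to an additive constant
    return 0.5 * y @ a + np.log(np.diag(L)).sum()

res = minimize(neg_log_ml, np.zeros(len(Ks)), method="L-BFGS-B")
print("learned kernel weights:", np.exp(res.x))
```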

    Expectation Propagation for Nonlinear Inverse Problems -- with an Application to Electrical Impedance Tomography

    In this paper, we study a fast approximate inference method based on expectation propagation for exploring the posterior probability distribution arising from the Bayesian formulation of nonlinear inverse problems. It is capable of efficiently delivering reliable estimates of the posterior mean and covariance, thereby providing an inverse solution together with quantified uncertainties. Some theoretical properties of the iterative algorithm are discussed, and the efficient implementation for an important class of problems of projection type is described. The method is illustrated with one typical nonlinear inverse problem, electrical impedance tomography with the complete electrode model, under sparsity constraints. Numerical results for real experimental data are presented and compared with those obtained by Markov chain Monte Carlo. The results indicate that the method is accurate and computationally very efficient. Comment: Journal of Computational Physics, to appear.
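
    The following is a generic scalar EP skeleton on a toy sign-observation problem, showing the cavity, moment-matching, and site-update cycle the abstract refers to; the paper's projection-type implementation for electrical impedance tomography is substantially more involved. The probit-site moment formulas are the standard ones, the toy model and parameter values are assumptions for illustration, and the loop omits the damping safeguards a production EP solver would need.

```python
# Generic EP skeleton on a toy 1-D problem (probit sites, Gaussian prior);
# not the paper's projection-type implementation, just the EP update cycle.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
theta_true = 1.5
y = np.sign(theta_true + rng.normal(size=50))   # noisy sign observations

prior_var = 10.0
tau = np.zeros(len(y)); nu = np.zeros(len(y))   # site natural parameters
mu, sig2 = 0.0, prior_var                       # current Gaussian posterior

for sweep in range(10):
    for i in range(len(y)):
        # cavity distribution: remove site i from the current posterior
        tau_c = 1.0 / sig2 - tau[i]
        nu_c = mu / sig2 - nu[i]
        m_c, v_c = nu_c / tau_c, 1.0 / tau_c
        # moments of the tilted distribution Phi(y_i * theta) * cavity
        s = np.sqrt(1.0 + v_c)
        z = y[i] * m_c / s
        ratio = norm.pdf(z) / norm.cdf(z)
        mu_hat = m_c + y[i] * v_c * ratio / s
        v_hat = v_c - v_c**2 * ratio * (z + ratio) / (1.0 + v_c)
        # update site i by moment matching, then refresh the posterior
        tau[i] = 1.0 / v_hat - tau_c
        nu[i] = mu_hat / v_hat - nu_c
        mu, sig2 = mu_hat, v_hat

print(f"EP posterior: mean {mu:.3f}, std {np.sqrt(sig2):.3f}")
```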

    Large Scale Variational Bayesian Inference for Structured Scale Mixture Models

    Natural image statistics exhibit hierarchical dependencies across multiple scales. Representing such prior knowledge in non-factorial latent tree models can boost performance of image denoising, inpainting, deconvolution or reconstruction substantially, beyond standard factorial "sparse" methodology. We derive a large-scale approximate Bayesian inference algorithm for linear models with non-factorial (latent tree-structured) scale mixture priors. Experimental results on a range of denoising and inpainting problems demonstrate substantially improved performance compared to MAP estimation or to inference with factorial priors. Comment: Appears in Proceedings of the 29th International Conference on Machine Learning (ICML 2012).
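
    For contrast with the non-factorial priors the paper advocates, here is a sketch of a simplified, MAP-level version of the factorial baseline: an independent Laplace scale-mixture prior, for which scale updates reduce inference to iteratively reweighted ridge regression via the variational bound |x| = min over gamma > 0 of (x^2/gamma + gamma)/2. The data and hyperparameters are assumptions for the example.

```python
# Sketch of the factorial baseline the paper improves on: scale-mixture
# inference with an independent Laplace prior, reduced to iteratively
# reweighted ridge regression via |x| = min_{gamma>0} (x^2/gamma + gamma)/2.
import numpy as np

rng = np.random.default_rng(4)
n, d = 50, 100
A = rng.normal(size=(n, d)) / np.sqrt(n)
x_true = np.zeros(d)
x_true[rng.choice(d, 5, replace=False)] = 3.0 * rng.normal(size=5)
y = A @ x_true + 0.05 * rng.normal(size=n)

tau, sigma2 = 5.0, 0.05**2
gamma = np.ones(d)                       # variational scale parameters
for _ in range(50):
    # ridge step: minimise ||y - Ax||^2/(2 sigma2) + (tau/2) sum_i x_i^2/gamma_i
    P = A.T @ A / sigma2 + tau * np.diag(1.0 / gamma)
    x = np.linalg.solve(P, A.T @ y / sigma2)
    gamma = np.abs(x) + 1e-8             # closed-form scale update
print("nonzeros recovered:", np.sum(np.abs(x) > 0.05))
```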