Fast Convergent Algorithms for Expectation Propagation Approximate Bayesian Inference
We propose a novel algorithm to solve the expectation propagation relaxation
of Bayesian inference for continuous-variable graphical models. In contrast to
most previous algorithms, our method is provably convergent. By marrying
convergent EP ideas from (Opper&Winther 05) with covariance decoupling
techniques (Wipf&Nagarajan 08, Nickisch&Seeger 09), it runs at least an order
of magnitude faster than the most commonly used EP solver.
Comment: 16 pages, 3 figures, submitted for conference publication
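The abstract above does not include the algorithm itself. As background, the core EP loop — remove a site, moment-match against the tilted distribution, write the site back — can be sketched on a toy one-dimensional probit model. Everything below is an illustrative sketch of standard EP, not the paper's provably convergent method; all names and parameters are assumptions.

```python
import math

def phi(x):   # standard normal pdf
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def Phi(x):   # standard normal cdf
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ep_probit(y, v0=1.0, iters=50):
    """EP for p(x) ∝ N(x; 0, v0) * prod_i Phi(y_i * x), y_i in {-1, +1}.

    Each likelihood factor is approximated by an unnormalised Gaussian
    site with natural parameters (tau_i, nu_i).
    """
    n = len(y)
    tau = [0.0] * n            # site precisions
    nu = [0.0] * n             # site precision-times-mean
    tau_post, nu_post = 1.0 / v0, 0.0
    for _ in range(iters):
        for i in range(n):
            # cavity: remove site i from the current posterior
            tau_cav = tau_post - tau[i]
            nu_cav = nu_post - nu[i]
            if tau_cav <= 0.0:                 # skip ill-defined cavities
                continue
            m_cav, v_cav = nu_cav / tau_cav, 1.0 / tau_cav
            # moments of the tilted distribution N(m_cav, v_cav) * Phi(y_i x)
            s = math.sqrt(1.0 + v_cav)
            z = y[i] * m_cav / s
            r = phi(z) / max(Phi(z), 1e-12)
            m_new = m_cav + y[i] * v_cav * r / s
            v_new = v_cav - v_cav**2 * r * (z + r) / (1.0 + v_cav)
            # update the site so that cavity * site matches the tilted moments
            tau[i] = 1.0 / v_new - tau_cav
            nu[i] = m_new / v_new - nu_cav
            tau_post = 1.0 / v0 + sum(tau)
            nu_post = sum(nu)
    return nu_post / tau_post, 1.0 / tau_post   # posterior mean, variance
```

The sequential sweep above is exactly the kind of fixed-point iteration whose convergence is not guaranteed in general, which is the gap the paper addresses.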
Expectation Propagation for Nonlinear Inverse Problems -- with an Application to Electrical Impedance Tomography
In this paper, we study a fast approximate inference method based on
expectation propagation for exploring the posterior probability distribution
arising from the Bayesian formulation of nonlinear inverse problems. It is
capable of efficiently delivering reliable estimates of the posterior mean and
covariance, thereby providing an inverse solution together with quantified
uncertainties. Some theoretical properties of the iterative algorithm are
discussed, and the efficient implementation for an important class of problems
of projection type is described. The method is illustrated with one typical
nonlinear inverse problem, electrical impedance tomography with complete
electrode model, under sparsity constraints. Numerical results for real
experimental data are presented and compared with those obtained by Markov chain
Monte Carlo. The results indicate that the method is accurate and computationally
very efficient.
Comment: Journal of Computational Physics, to appear
Cluster Variation Method in Statistical Physics and Probabilistic Graphical Models
The cluster variation method (CVM) is a hierarchy of approximate variational
techniques for discrete (Ising-like) models in equilibrium statistical
mechanics, improving on the mean-field approximation and the Bethe-Peierls
approximation, which can be regarded as the lowest level of the CVM. In recent
years it has been applied both in statistical physics and to inference and
optimization problems formulated in terms of probabilistic graphical models.
The foundations of the CVM are briefly reviewed, and the relations with
similar techniques are discussed. The main properties of the method are
considered, with emphasis on its exactness for particular models and on its
asymptotic properties.
The problem of the minimization of the variational free energy, which arises
in the CVM, is also addressed, and recent results about both provably
convergent and message-passing algorithms are discussed.
Comment: 36 pages, 17 figures
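As a concrete reference point, the mean-field approximation that the CVM hierarchy improves upon reduces to a fixed-point iteration on single-spin magnetisations, m_i = tanh(beta * (h_i + sum_j J_ij m_j)), which is the stationarity condition of the mean-field variational free energy. A minimal sketch for a small Ising model follows; the damping factor and parameters are illustrative choices.

```python
import math

def mean_field_ising(J, h, beta=1.0, iters=200, damping=0.5):
    """Naive mean-field fixed point for an Ising model with couplings J
    (symmetric matrix, zero diagonal) and external fields h.

    Returns the magnetisations m_i ≈ <s_i>. Damped updates help the
    iteration converge; stronger couplings may need smaller damping.
    """
    n = len(h)
    m = [0.1] * n                       # small symmetric-breaking start
    for _ in range(iters):
        for i in range(n):
            field = h[i] + sum(J[i][j] * m[j] for j in range(n) if j != i)
            m[i] = (1.0 - damping) * m[i] + damping * math.tanh(beta * field)
    return m
```

Higher CVM levels replace these single-site beliefs with beliefs over larger clusters (pairs at the Bethe level, plaquettes and beyond above it), which is where the message-passing algorithms mentioned in the abstract enter.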
Scalable Bayesian inversion with Poisson data
Poisson data arise in many important inverse problems, e.g., medical imaging. The stochastic nature of noisy observation processes and imprecise prior information implies that there exists an ensemble of solutions consistent with the given Poisson data to varying extents. Existing approaches, e.g., maximum likelihood and penalised maximum likelihood, incorporate the statistical information for point estimates, but fail to provide the important uncertainty information about the various possible solutions. While full Bayesian approaches can solve this problem, the posterior distributions are often intractable due to their complicated form and the curse of dimensionality.
In this thesis, we investigate approximate Bayesian inference techniques, i.e., variational inference (VI), expectation propagation (EP) and Bayesian deep learning (BDL), for scalable posterior exploration. The scalability relies on leveraging 1) mathematical structures emerging in the problems, i.e., the low-rank structure of forward operators and the rank-one projection form of factors in the posterior distribution, and 2) the efficient feed-forward evaluation of neural networks, with training time further reduced by the dimensional flexibility gained from incorporating the forward and adjoint operators. Apart from scalability, we also address theoretical analysis, algorithmic design and practical implementation. For VI, we derive the explicit functional form and analyse the convergence of the algorithms, which are long-standing problems in the literature. For EP, we discuss how to incorporate nonnegativity constraints and how to design stable moment evaluation schemes, which are vital and nontrivial practical concerns. For BDL, specifically conditional variational auto-encoders (CVAEs), we investigate how to apply them to uncertainty quantification of inverse problems and develop flexible and novel frameworks for general Bayesian inversion.
Finally, we justify these contributions with numerical experiments and show the competitiveness of our proposed methods by comparison with state-of-the-art benchmarks.
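None of the thesis's VI, EP, or BDL machinery fits in a snippet, but the simplest Gaussian posterior approximation for Poisson data — a Laplace approximation to a one-parameter log-rate model — conveys the flavour of what "scalable posterior exploration" produces: a mean and a variance rather than a single point estimate. Everything below is an illustrative toy under that stated model, not the thesis's methods.

```python
import math

def laplace_poisson_rate(counts, mu0=0.0, v0=1.0, iters=25):
    """Laplace (Gaussian) approximation to p(theta | counts) where
    counts y_i ~ Poisson(exp(theta)) and theta ~ N(mu0, v0).

    Newton's method finds the posterior mode; the Gaussian approximation
    takes that mode as its mean and the negative inverse Hessian of the
    log-posterior as its variance.
    """
    n, s = len(counts), sum(counts)
    theta = math.log(max(s / n, 1e-3))            # start near the rate MLE
    for _ in range(iters):
        lam = math.exp(theta)
        grad = s - n * lam - (theta - mu0) / v0   # d/dtheta log posterior
        hess = -n * lam - 1.0 / v0                # always negative: concave
        theta -= grad / hess                      # Newton step toward the mode
    var = 1.0 / (n * math.exp(theta) + 1.0 / v0)  # -1 / Hessian at the mode
    return theta, var
```

The returned variance is the quantified uncertainty a point estimate alone cannot provide; VI and EP produce analogous Gaussian summaries by optimisation and moment matching instead of mode finding.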