Task adapted reconstruction for inverse problems
The paper considers the problem of performing a task defined on a model
parameter that is only observed indirectly through noisy data in an ill-posed
inverse problem. A key aspect is to formalize the steps of reconstruction and
task as appropriate estimators (non-randomized decision rules) in statistical
estimation problems. The implementation makes use of (deep) neural networks to
provide a differentiable parametrization of the family of estimators for both
steps. These networks are combined and jointly trained against suitable
supervised training data in order to minimize a joint differentiable loss
function, resulting in an end-to-end task adapted reconstruction method. The
suggested framework is generic, yet adaptable, with a plug-and-play structure
for adjusting both the inverse problem and the task at hand. More precisely,
the data model (forward operator and statistical model of the noise) associated
with the inverse problem is exchangeable, e.g., by using a neural network
architecture given by a learned iterative method. Furthermore, any task that is
encodable as a trainable neural network can be used. The approach is
demonstrated on joint tomographic image reconstruction and classification, and
on joint tomographic image reconstruction and segmentation.
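As a rough illustration of this end-to-end structure, the sketch below jointly trains a reconstruction network and a task network against a weighted sum of the two losses. The architectures, dimensions, and trade-off weight `alpha` are hypothetical placeholders, not the paper's actual design (PyTorch assumed):

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the two estimators: a reconstruction network
# mapping noisy data y to a model-parameter estimate x_hat, and a task
# network mapping x_hat to the task output (e.g. class scores).
recon_net = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 64))
task_net = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))

opt = torch.optim.Adam(list(recon_net.parameters()) + list(task_net.parameters()))
alpha = 0.5  # assumed trade-off between reconstruction loss and task loss

def joint_step(y, x_true, label):
    """One end-to-end training step on a supervised triple (y, x_true, label)."""
    x_hat = recon_net(y)        # reconstruction step
    t_hat = task_net(x_hat)     # task step, applied to the reconstruction
    loss = (alpha * nn.functional.mse_loss(x_hat, x_true)
            + (1 - alpha) * nn.functional.cross_entropy(t_hat, label))
    opt.zero_grad()
    loss.backward()             # gradients flow through both networks jointly
    opt.step()
    return loss.item()
```

Swapping the reconstruction network for a learned iterative method, or the task head for a segmentation network, leaves the joint training loop unchanged, which is the plug-and-play property the abstract describes.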
Bregman Cost for Non-Gaussian Noise
A central task in Bayesian inverse problems is to find a good estimate
based on the posterior probability density. The most common point estimators
are the conditional mean (CM) and maximum a posteriori (MAP) estimates, which
correspond to the mean and the mode of the posterior, respectively. From a
theoretical point of view it has been argued that the MAP estimate is only in
an asymptotic sense a Bayes estimator for the uniform cost function, while the
CM estimate is a Bayes estimator for the mean squared cost function. Recently,
it has been proven that the MAP estimate is a proper Bayes estimator for the
Bregman cost if the image is corrupted by Gaussian noise. In this work we
extend this result to other noise models with log-concave likelihood density,
by introducing two related Bregman cost functions for which the CM and the MAP
estimates are proper Bayes estimators. Moreover, we also prove that the CM
estimate outperforms the MAP estimate when the error is measured in a certain
Bregman distance, a result that was previously unknown even in the case of
additive Gaussian noise.
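For reference, the Bregman distance of a convex functional $J$ that underlies these cost functions can be written in the standard form (this definition is supplied here for context, not quoted from the abstract):

```latex
D_J^{p}(u, v) \;=\; J(u) - J(v) - \langle p,\, u - v \rangle,
\qquad p \in \partial J(v),
```

so the claim is that the CM estimate achieves a smaller expected error than the MAP estimate when the error is measured in such a distance.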
Variational Hamiltonian Monte Carlo via Score Matching
Traditionally, the field of computational Bayesian statistics has been
divided into two main subfields: variational methods and Markov chain Monte
Carlo (MCMC). In recent years, however, several methods have been proposed
based on combining variational Bayesian inference and MCMC simulation in order
to improve their overall accuracy and computational efficiency. This marriage
of fast evaluation and flexible approximation provides a promising means of
designing scalable Bayesian inference methods. In this paper, we explore the
possibility of incorporating variational approximation into a state-of-the-art
MCMC method, Hamiltonian Monte Carlo (HMC), to reduce the required gradient
computation in the simulation of Hamiltonian flow, which is the bottleneck for
many applications of HMC in big data problems. To this end, we use a free-form
approximation induced by a fast and flexible surrogate function
based on single-hidden layer feedforward neural networks. The surrogate
provides sufficiently accurate approximation while allowing for fast
exploration of parameter space, resulting in an efficient approximate inference
algorithm. We demonstrate the advantages of our method on both synthetic and
real data problems.
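A minimal NumPy sketch of the core idea: inside the leapfrog integrator, the expensive gradient of the exact log-posterior is replaced by the gradient of a cheap single-hidden-layer surrogate. The surrogate weights below are random placeholders standing in for a fitted approximation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed pre-fitted single-hidden-layer surrogate s(q) ~ log posterior;
# in practice W1, b1, w2 come from fitting the surrogate, not random draws.
W1, b1, w2 = rng.normal(size=(16, 2)), rng.normal(size=16), rng.normal(size=16)

def surrogate_grad(q):
    """Gradient of s(q) = w2 . tanh(W1 q + b1): cheap, no full-data pass."""
    h = np.tanh(W1 @ q + b1)
    return W1.T @ (w2 * (1.0 - h**2))

def leapfrog(q, p, eps=0.1, n_steps=20):
    """Simulate the Hamiltonian flow using only the surrogate gradient."""
    p = p + 0.5 * eps * surrogate_grad(q)
    for _ in range(n_steps - 1):
        q = q + eps * p
        p = p + eps * surrogate_grad(q)
    q = q + eps * p
    p = p + 0.5 * eps * surrogate_grad(q)
    return q, p
```

In a full sampler the proposal produced by this integrator would still pass through a Metropolis accept/reject step, which, depending on the variant, can use either the surrogate or the exact posterior density.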
Inverse Problems and Data Assimilation
These notes are designed with the aim of providing a clear and concise
introduction to the subjects of Inverse Problems and Data Assimilation, and
their inter-relations, together with citations to some relevant literature in
this area. The first half of the notes is dedicated to studying the Bayesian
framework for inverse problems. Techniques such as importance sampling and
Markov Chain Monte Carlo (MCMC) methods are introduced; these methods have the
desirable property that in the limit of an infinite number of samples they
reproduce the full posterior distribution. Since it is often computationally
intensive to implement these methods, especially in high dimensional problems,
approximate techniques, such as replacing the posterior by a Dirac or a
Gaussian distribution, are discussed. The second half of the notes covers data
assimilation. This refers to a particular class of inverse problems in which
the unknown parameter is the initial condition of a dynamical system (and, in
the stochastic dynamics case, the subsequent states of the system), and the data
comprises partial and noisy observations of that (possibly stochastic)
dynamical system. We will also demonstrate that methods developed in data
assimilation may be employed to study generic inverse problems, by introducing
an artificial time to generate a sequence of probability measures interpolating
from the prior to the posterior.
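One standard form of the artificial-time construction mentioned at the end, written here for context with prior $\mu_0$ and negative log-likelihood $\Phi$ (the notes may use different notation):

```latex
\mu_n(\mathrm{d}u) \;\propto\; \exp\bigl(-nh\,\Phi(u)\bigr)\,\mu_0(\mathrm{d}u),
\qquad n = 0, 1, \dots, N, \quad Nh = 1,
```

so that $\mu_0$ is the prior, $\mu_N$ is the posterior, and each intermediate measure differs from its neighbour only by a small reweighting that particle or MCMC methods from data assimilation can track.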
Variational data assimilation using targetted random walks
The variational approach to data assimilation is a widely used methodology for both online prediction and for reanalysis (offline hindcasting). In either of these scenarios it can be important to assess uncertainties in the assimilated state. Ideally it would be desirable to have complete information concerning the Bayesian posterior distribution for the unknown state, given data. The purpose of this paper is to show that complete computational probing of this posterior distribution is now within reach in the offline situation. In this paper we introduce an MCMC method which enables us to sample directly from the Bayesian posterior distribution on the unknown functions of interest, given observations. Since these methods are currently too computationally expensive to use in an online filtering scenario, we frame them in the context of offline reanalysis. Using a simple random walk-type MCMC method, we are able to characterize the posterior distribution using only evaluations of the forward model of the problem and of the model and data mismatch. No adjoint model is required for the method we use; however, more sophisticated MCMC methods which do exploit derivative information are available. For simplicity of exposition we consider the problem of assimilating data, either Eulerian or Lagrangian, into a low Reynolds number (Stokes flow) scenario in a two-dimensional periodic geometry. We show that in many cases it is possible to recover the initial condition and model error (which we describe as unknown forcing to the model) from data, and that with increasing amounts of informative data the uncertainty in our estimates decreases.
Bayesian Deep Net GLM and GLMM
Deep feedforward neural networks (DFNNs) are a powerful tool for functional
approximation. We describe flexible versions of generalized linear and
generalized linear mixed models incorporating basis functions formed by a DFNN.
Neural networks with random effects are not widely used in the literature,
perhaps because of the computational challenges of incorporating
subject-specific parameters into already complex models.
Efficient computational methods for high-dimensional Bayesian inference are
developed using Gaussian variational approximation, with a parsimonious but
flexible factor parametrization of the covariance matrix. We implement natural
gradient methods for the optimization, exploiting the factor structure of the
variational covariance matrix in computation of the natural gradient. Our
flexible DFNN models and Bayesian inference approach lead to a regression and
classification method that has a high prediction accuracy, and is able to
quantify the prediction uncertainty in a principled and convenient way. We also
describe how to perform variable selection in our deep learning method. The
proposed methods are illustrated in a wide range of simulated and real-data
examples, and the results compare favourably to a state-of-the-art flexible
regression and classification method in the statistical literature, the
Bayesian additive regression trees (BART) method. User-friendly software
packages in Matlab, R and Python implementing the proposed methods are
available at https://github.com/VBayesLab.
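The parsimonious factor parametrization can be sketched as follows: with variational covariance $\Sigma = BB^{\top} + D^2$ for a tall $p \times k$ loading matrix $B$ and diagonal $D$, samples are drawn without ever forming the $p \times p$ matrix. Dimensions and values below are illustrative placeholders:

```python
import numpy as np

rng = np.random.default_rng(2)
p, k = 1000, 4  # illustrative: p variational parameters, k factors, k << p

mu = np.zeros(p)                       # variational mean
B = 0.1 * rng.standard_normal((p, k))  # factor loadings (p x k)
d = np.full(p, 0.5)                    # diagonal of D

def sample_theta(n):
    """Draw theta ~ N(mu, B B^T + D^2) without forming the p x p covariance:
    theta = mu + B z + d * eps with z ~ N(0, I_k), eps ~ N(0, I_p)."""
    z = rng.standard_normal((n, k))
    eps = rng.standard_normal((n, p))
    return mu + z @ B.T + d * eps
```

The same low-rank-plus-diagonal structure is what the natural-gradient computation can exploit, e.g. by applying matrix identities such as Woodbury instead of dense $p \times p$ linear algebra.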