Quantifying Model Uncertainty in Inverse Problems via Bayesian Deep Gradient Descent
Recent advances in reconstruction methods for inverse problems leverage powerful data-driven models, e.g., deep neural networks. These techniques have demonstrated state-of-the-art performance for several imaging tasks, but they often do not quantify the uncertainty of the obtained reconstructions. In this work, we develop a novel scalable, data-driven, knowledge-aided computational framework to quantify model uncertainty via Bayesian neural networks. The approach builds on and extends deep gradient descent, a recently developed greedy iterative training scheme, and recasts it within a probabilistic framework. Scalability is achieved through a hybrid architecture, in which only the last layer of each block is Bayesian while the others remain deterministic, and through greedy training. The framework is showcased on one representative medical imaging modality, viz. computed tomography with either sparse-view or limited-view data, and exhibits competitive performance with respect to state-of-the-art benchmarks, e.g., total variation, deep gradient descent, and learned primal-dual.
Comment: 8 pages, 6 figures
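To make the hybrid architecture concrete, below is a minimal numpy sketch of one such block for a toy linear inverse problem y = A x + noise: deterministic hidden features feed a Bayesian last layer whose weights are mean-field Gaussians sampled via the reparameterization trick. All names and shapes (A, W1, mu, rho, block) are illustrative assumptions, the weights are untrained, and this is not the authors' trained network; it only shows how repeating the stochastic forward pass yields an ensemble whose spread serves as an uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 32, 20
A = rng.normal(size=(m, n)) / np.sqrt(m)   # hypothetical forward operator
x_true = rng.normal(size=n)
y = A @ x_true + 0.01 * rng.normal(size=m)

# Hybrid block: a deterministic hidden layer followed by a Bayesian last
# layer with mean-field Gaussian weights (reparameterization trick).
h = 64
W1 = rng.normal(size=(h, 2 * n)) * 0.1     # deterministic layer (untrained)
mu = rng.normal(size=(n, h)) * 0.1         # Bayesian layer: weight means
rho = np.full((n, h), -4.0)                # Bayesian layer: pre-softplus stds

def block(x, grad):
    z = np.maximum(0.0, W1 @ np.concatenate([x, grad]))  # ReLU features
    sigma = np.log1p(np.exp(rho))                        # softplus
    W2 = mu + sigma * rng.normal(size=mu.shape)          # sampled weights
    return x + W2 @ z                                    # residual update

# One probabilistic gradient-descent iteration: the block sees the current
# iterate and the data-fidelity gradient, as in deep gradient descent.
x0 = A.T @ y                               # crude initial reconstruction
grad = A.T @ (A @ x0 - y)                  # data-fidelity gradient at x0
recon_samples = np.stack([block(x0, grad) for _ in range(50)])
pixelwise_std = recon_samples.std(axis=0)  # per-pixel uncertainty estimate
```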
Preconditioned Stochastic Gradient Langevin Dynamics for Deep Neural Networks
Effective training of deep neural networks suffers from two main issues. The first is that the parameter spaces of these models exhibit pathological curvature. Recent methods address this problem by using adaptive preconditioning for Stochastic Gradient Descent (SGD); these methods improve convergence by adapting to the local geometry of the parameter space. A second issue is overfitting, which is typically addressed by early stopping. However, recent work has demonstrated that Bayesian model averaging mitigates this problem. The posterior can be sampled using Stochastic Gradient Langevin Dynamics (SGLD); however, the rapidly changing curvature renders default SGLD methods inefficient. Here, we propose combining adaptive preconditioners with SGLD. In support of this idea, we establish theoretical results on asymptotic convergence and predictive risk. We also provide empirical results for logistic regression, feedforward neural nets, and convolutional neural nets, demonstrating that our preconditioned SGLD method gives state-of-the-art performance on these models.
Comment: AAAI 2016
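As a concrete reference for the proposed update, here is a minimal numpy sketch of preconditioned SGLD with an RMSProp-style diagonal preconditioner on a synthetic Bayesian logistic regression problem. The hyperparameter names and values (eps, alpha, lam, batch) are illustrative assumptions, and the small correction term arising from the state-dependent preconditioner is dropped, as is common in practice; this is a sketch of the update rule, not the paper's reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 1000, 5
X = rng.normal(size=(N, D))
w_true = rng.normal(size=D)
y = (rng.random(N) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)

def grad_log_post(w, Xb, yb, scale):
    """Stochastic gradient of the log posterior: N(0, I) prior + logistic likelihood."""
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    return scale * Xb.T @ (yb - p) - w   # minibatch gradient rescaled to full data

eps, alpha, lam, batch = 1e-3, 0.99, 1e-5, 100
w = np.zeros(D)
V = np.zeros(D)                          # running average of squared gradients
samples = []
for t in range(5000):
    idx = rng.integers(0, N, size=batch)
    g = grad_log_post(w, X[idx], y[idx], N / batch)
    V = alpha * V + (1 - alpha) * g * g  # RMSProp-style second-moment estimate
    G = 1.0 / (lam + np.sqrt(V))         # diagonal preconditioner
    noise = rng.normal(size=D) * np.sqrt(eps * G)
    w = w + 0.5 * eps * G * g + noise    # pSGLD step (correction term omitted)
    if t > 1000:                         # discard burn-in
        samples.append(w.copy())

post_mean = np.mean(samples, axis=0)     # Bayesian model average of the weights
```

The preconditioner rescales both the drift and the injected noise, so flat directions take larger steps while sharply curved ones take smaller steps, which is what lets the sampler cope with the pathological curvature the abstract describes.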