Quantifying Model Uncertainty in Inverse Problems via Bayesian Deep Gradient Descent
Recent advances in reconstruction methods for inverse problems leverage
powerful data-driven models, e.g., deep neural networks. These techniques have
demonstrated state-of-the-art performance for several imaging tasks, but they
often do not provide uncertainty estimates for the obtained reconstructions. In this work,
we develop a novel scalable data-driven knowledge-aided computational framework
to quantify the model uncertainty via Bayesian neural networks. The approach
builds on and extends deep gradient descent, a recently developed greedy
iterative training scheme, and recasts it within a probabilistic framework.
Scalability is achieved by a hybrid architecture, in which only the last
layer of each block is Bayesian while the others remain deterministic, and by
greedy training. The framework is showcased on one representative
medical imaging modality, viz. computed tomography with either sparse view or
limited view data, and exhibits competitive performance with respect to
state-of-the-art benchmarks, e.g., total variation, deep gradient descent and
learned primal-dual.
Comment: 8 pages, 6 figures
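The hybrid last-layer idea can be sketched in a few lines. The toy below is purely illustrative (stand-in feature map, toy shapes, and an assumed Gaussian weight posterior with hypothetical names `w_mu`, `w_sigma`), not the paper's actual architecture: repeated forward passes with sampled last-layer weights yield a predictive mean and an uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

def deterministic_features(x):
    # Stand-in for the deterministic layers of one unrolled block.
    return np.tanh(x @ W_det)

def bayesian_last_layer(h, n_samples=100):
    # Sample last-layer weights from an assumed Gaussian posterior
    # q(w) = N(w_mu, w_sigma^2) and propagate each sample.
    samples = []
    for _ in range(n_samples):
        w = w_mu + w_sigma * rng.standard_normal(w_mu.shape)
        samples.append(h @ w)
    samples = np.stack(samples)          # (n_samples, batch, out_dim)
    return samples.mean(axis=0), samples.std(axis=0)

d_in, d_hidden, d_out = 8, 16, 4
W_det = rng.standard_normal((d_in, d_hidden)) * 0.1   # deterministic weights
w_mu = rng.standard_normal((d_hidden, d_out)) * 0.1   # posterior mean
w_sigma = np.full((d_hidden, d_out), 0.05)            # posterior scale

x = rng.standard_normal((2, d_in))
mean, std = bayesian_last_layer(deterministic_features(x))
print(mean.shape, std.shape)  # (2, 4) (2, 4)
```

Keeping only the last layer Bayesian is what makes the Monte Carlo loop cheap: the expensive deterministic features are computed once and reused for every weight sample.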
Uncertainty quantification in medical image synthesis
Machine learning approaches to medical image synthesis have shown
outstanding performance, but often do not convey uncertainty information. In this chapter, we survey uncertainty quantification methods in
medical image synthesis and advocate the use of uncertainty for improving clinicians’ trust in machine learning solutions. First, we describe basic
concepts in uncertainty quantification and discuss its potential benefits in
downstream applications. We then review computational strategies that
facilitate inference, and identify the main technical and clinical challenges.
We provide a first comprehensive review of how to quantify, communicate, and use uncertainty in medical image synthesis applications.
Unsupervised Knowledge-Transfer for Learned Image Reconstruction
Deep learning-based image reconstruction approaches have demonstrated
impressive empirical performance in many imaging modalities. These approaches
generally require a large amount of high-quality training data, which is often
not available. To circumvent this issue, we develop a novel unsupervised
knowledge-transfer paradigm for learned iterative reconstruction within a
Bayesian framework. The proposed approach learns an iterative reconstruction
network in two phases. The first phase trains a reconstruction network with a
set of ordered pairs comprising ground truth images and measurement data.
The second phase fine-tunes the pretrained network to the measurement data
without supervision. Furthermore, the framework delivers uncertainty
information over the reconstructed image. We present extensive experimental
results on low-dose and sparse-view computed tomography, showing that the
proposed framework significantly improves reconstruction quality not only
visually, but also quantitatively in terms of PSNR and SSIM, and is competitive
with several state-of-the-art supervised and unsupervised reconstruction
techniques.
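The two-phase scheme can be sketched with a linear reconstruction "network" x = y @ R standing in for the learned iterative network; everything here (the operator A, shapes, step size) is an illustrative assumption, not the paper's setup. Phase 1 is supervised on paired data; phase 2 fine-tunes on new measurements alone via a forward-operator consistency loss.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 10)) * 0.3         # toy forward operator

# Phase 1: supervised pretraining on ordered (image, measurement) pairs.
X_true = rng.standard_normal((50, 10))
Y = X_true @ A.T + 0.01 * rng.standard_normal((50, 6))
R = np.linalg.lstsq(Y, X_true, rcond=None)[0]  # maps measurements to images

# Phase 2: unsupervised fine-tuning on new measurements y_new only,
# minimizing the data misfit ||A x - y||^2 by a few gradient steps.
y_new = rng.standard_normal((20, 6))
lr = 0.01
for _ in range(100):
    recon = y_new @ R                          # current reconstructions
    residual = recon @ A.T - y_new             # data-consistency misfit
    grad = y_new.T @ (residual @ A)            # gradient w.r.t. R (up to scale)
    R -= lr * grad / len(y_new)

recon = y_new @ R
print(np.mean((recon @ A.T - y_new) ** 2))     # data-consistency error
```

The point of the sketch is only the split: supervision is needed once, on a source dataset, and adaptation to new measurement data requires no ground truth.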
Discovering and forecasting extreme events via active learning in neural operators
Extreme events in society and nature, such as pandemic spikes or rogue waves,
can have catastrophic consequences. Characterizing extremes is difficult as
they occur rarely, arise from seemingly benign conditions, and belong to
complex and often unknown infinite-dimensional systems. Such challenges can
render attempts at characterizing them moot. We address each of these difficulties
by combining novel training schemes in Bayesian experimental design (BED) with
an ensemble of deep neural operators (DNOs). This model-agnostic framework
pairs a BED scheme that actively selects data for quantifying extreme events
with an ensemble of DNOs that approximate infinite-dimensional nonlinear
operators. We find not only that this framework clearly outperforms Gaussian
processes (GPs), but also that 1) shallow ensembles of just two members perform best;
2) extremes are uncovered regardless of the state of initial data (i.e. with or
without extremes); 3) our method eliminates "double-descent" phenomena; 4) the
use of batches of suboptimal acquisition points compared to step-by-step global
optima does not hinder BED performance; and 5) Monte Carlo acquisition
outperforms standard minimizers in high-dimensions. Together these conclusions
form the foundation of an AI-assisted experimental infrastructure that can
efficiently infer and pinpoint critical situations across many domains, from
physical to societal systems.
Comment: 19 pages, 7 figures, submitted to Nature Computational Science
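The acquisition step pairing BED with an ensemble can be sketched as follows. This is a toy stand-in, not the authors' implementation: ridge regression on random features plays the role of each deep neural operator, and a 2-D input plays the role of the infinite-dimensional one. The next acquisition is the candidate where the shallow two-member ensemble disagrees most.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_member(X, y, seed):
    # Toy surrogate: ridge regression on random tanh features, standing in
    # for one deep neural operator in the ensemble.
    r = np.random.default_rng(seed)
    W = r.standard_normal((X.shape[1], 32))
    Phi = np.tanh(X @ W)
    coef = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(32), Phi.T @ y)
    return lambda Xq: np.tanh(Xq @ W) @ coef

X = rng.uniform(-1, 1, (20, 2))                # initial design
y = np.sin(3 * X[:, 0]) * X[:, 1]              # toy response

ensemble = [fit_member(X, y, s) for s in range(2)]  # shallow, two members
pool = rng.uniform(-1, 1, (200, 2))                 # candidate acquisitions
preds = np.stack([m(pool) for m in ensemble])       # (2, 200)
next_idx = preds.var(axis=0).argmax()               # most-disagreed candidate
print(pool[next_idx])
```

A real BED loop would weight this disagreement by an output-likelihood term targeting extremes, then evaluate the chosen point and refit; the sketch shows only the ensemble-disagreement core.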
Task adapted reconstruction for inverse problems
The paper considers the problem of performing a task defined on a model
parameter that is only observed indirectly through noisy data in an ill-posed
inverse problem. A key aspect is to formalize the steps of reconstruction and
task as appropriate estimators (non-randomized decision rules) in statistical
estimation problems. The implementation makes use of (deep) neural networks to
provide a differentiable parametrization of the family of estimators for both
steps. These networks are combined and jointly trained against suitable
supervised training data in order to minimize a joint differentiable loss
function, resulting in an end-to-end task adapted reconstruction method. The
suggested framework is generic, yet adaptable, with a plug-and-play structure
for adjusting both the inverse problem and the task at hand. More precisely,
the data model (forward operator and statistical model of the noise) associated
with the inverse problem is exchangeable, e.g., by using a neural network
architecture given by a learned iterative method. Furthermore, any task that is
encodable as a trainable neural network can be used. The approach is
demonstrated on joint tomographic image reconstruction and classification, and
on joint tomographic image reconstruction and segmentation.
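The joint objective can be written as a convex combination of a reconstruction loss and a task loss. The sketch below is a minimal illustration under assumed squared-error losses, with a hypothetical weight `C` in [0, 1] trading off the two steps (C = 0 is pure reconstruction, C = 1 pure task):

```python
import numpy as np

def joint_loss(recon, target_img, task_out, task_label, C=0.5):
    # Convex combination of the reconstruction-step and task-step losses.
    l_recon = np.mean((recon - target_img) ** 2)    # reconstruction loss
    l_task = np.mean((task_out - task_label) ** 2)  # task loss
    return (1 - C) * l_recon + C * l_task

val = joint_loss(np.array([1.0, 2.0]), np.array([1.0, 1.0]),
                 np.array([0.0]), np.array([1.0]), C=0.5)
print(val)  # 0.5 * 0.5 + 0.5 * 1.0 = 0.75
```

Because both networks are differentiable, gradients of this single scalar flow through the task network into the reconstruction network, which is what makes the end-to-end training possible.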
Conditional Variational Autoencoder for Learned Image Reconstruction
Learned image reconstruction techniques using deep neural networks have
recently gained popularity and have delivered promising empirical results.
However, most approaches focus on one single recovery for each observation,
and thus neglect uncertainty information. In this work, we develop a novel
computational framework that approximates the posterior distribution of the
unknown image at each query observation. The proposed framework is very
flexible: it handles implicit noise models and priors, it incorporates the
data formation process (i.e., the forward operator), and the learned
reconstructive properties are transferable between different datasets. Once
the network is trained using the conditional variational autoencoder loss, it
provides a computationally efficient sampler for the approximate posterior
distribution via feed-forward propagation, and the summarizing statistics of
the generated samples are used for both point estimation and uncertainty
quantification. We illustrate the proposed framework with extensive numerical
experiments on positron emission tomography (with both moderate and low-count
levels), showing that the framework generates high-quality samples when
compared with state-of-the-art methods.
- …