Deep Unfolding with Normalizing Flow Priors for Inverse Problems
Many application domains, spanning from computational photography to medical
imaging, require recovery of high-fidelity images from noisy, incomplete, or
compressed measurements. State-of-the-art methods for solving these
inverse problems combine deep learning with iterative model-based solvers, a
concept known as deep algorithm unfolding. By combining a priori knowledge of
the forward measurement model with learned (proximal) mappings based on deep
networks, these methods yield solutions that are both physically feasible
(data-consistent) and perceptually plausible. However, current proximal
mappings only implicitly learn such image priors. In this paper, we propose to
make these image priors fully explicit by embedding deep generative models in
the form of normalizing flows within the unfolded proximal gradient algorithm.
We demonstrate that the proposed method outperforms competitive baselines on
various image recovery tasks, including image denoising, inpainting, and
deblurring.
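The core idea sketched above can be illustrated in a few lines. The following is a minimal numpy sketch, with a fixed linear forward model `A` and a simple soft-thresholding step standing in for the learned, flow-based proximal mapping described in the abstract; the function names and parameters are illustrative, not from the paper:

```python
import numpy as np

def soft_threshold(v, t):
    """Stand-in proximal operator (an l1 prior). In the paper, this role is
    played by a learned normalizing-flow-based mapping instead."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def unfolded_pgd(y, A, n_layers=10, step=0.1, thresh=0.01):
    """A fixed number of proximal gradient steps, 'unfolded' so that each
    iteration becomes one network layer whose parameters could be learned.
    step should satisfy step < 1 / ||A^T A|| for stable iterations."""
    x = A.T @ y                                  # initialize from the adjoint
    for _ in range(n_layers):
        grad = A.T @ (A @ x - y)                 # gradient of the data-fit term
        x = soft_threshold(x - step * grad, thresh)  # proximal (prior) step
    return x
```

Each layer thus alternates a data-consistency gradient step with a prior (proximal) step, which is what makes the final estimate both data-consistent and prior-plausible.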
Deep Probabilistic Imaging: Uncertainty Quantification and Multi-modal Solution Characterization for Computational Imaging
Computational image reconstruction algorithms generally produce a single
image without any measure of uncertainty or confidence. Regularized Maximum
Likelihood (RML) and feed-forward deep learning approaches for inverse problems
typically focus on recovering a point estimate. This is a serious limitation
when working with underdetermined imaging systems, where it is conceivable that
multiple image modes would be consistent with the measured data. Characterizing
the space of probable images that explain the observational data is therefore
crucial. In this paper, we propose a variational deep probabilistic imaging
approach to quantify reconstruction uncertainty. Deep Probabilistic Imaging
(DPI) employs an untrained deep generative model to estimate a posterior
distribution of an unobserved image. This approach does not require any
training data; instead, it optimizes the weights of a neural network to
generate image samples that fit a particular measurement dataset. Once the
network weights have been learned, the posterior distribution can be
efficiently sampled. We demonstrate this approach in the context of
interferometric radio imaging, which is used for black hole imaging with the
Event Horizon Telescope, and compressed sensing Magnetic Resonance Imaging
(MRI).
Comment: This paper has been accepted to AAAI 2021. Keywords: Computational
Imaging, Normalizing Flow, Uncertainty Quantification, Interferometry, MR
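The DPI objective described above can be illustrated in a deliberately simplified setting. The sketch below replaces the normalizing flow with a *linear* generator x = Wz + b (for which the log-determinant term is available in closed form) and assumes a linear forward model with Gaussian noise; all names (`fit_dpi_linear`, `sample_posterior`) and the learning-rate choice are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def fit_dpi_linear(y, A, sigma, n_steps=2000, lr=0.005):
    """Toy DPI: fit a linear generator x = W z + b, z ~ N(0, I), by gradient
    descent on the closed-form objective
        E_z[ ||A (W z + b) - y||^2 ] / (2 sigma^2) - log|det W|,
    i.e. a data-fit term minus an entropy-like term. In the paper the
    generator is a normalizing flow, not a linear map; lr here is tuned for
    sigma on the order of 0.1."""
    n = A.shape[1]
    W = 0.5 * np.eye(n)
    b = np.zeros(n)
    AtA = A.T @ A
    for _ in range(n_steps):
        grad_W = AtA @ W / sigma**2 - np.linalg.inv(W).T  # data fit - entropy
        grad_b = A.T @ (A @ b - y) / sigma**2
        W -= lr * grad_W
        b -= lr * grad_b
    return W, b

def sample_posterior(W, b, n_samples, seed=1):
    """Once the generator is fit, approximate posterior samples are cheap:
    push Gaussian latents through x = W z + b."""
    z = np.random.default_rng(seed).normal(size=(n_samples, b.size))
    return z @ W.T + b
```

In this linear-Gaussian toy case the fitted generator recovers the exact posterior (mean at the data-consistent solution, covariance set by the noise level), which is the behavior the flow-based version approximates for nonlinear generators.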
Deep Learning for Inverse Problems (hybrid meeting)
Machine learning, and in particular deep learning, offers several data-driven methods to amend the typical shortcomings of purely analytical approaches. Mathematical research on these combined models is presently exploding on the experimental side but still lacking on the theoretical side. This workshop addresses the challenge of developing a solid mathematical theory for analyzing deep neural networks for inverse problems.