4 research outputs found
Solving inverse problems via auto-encoders
Compressed sensing (CS) is about recovering a structured signal from its
under-determined linear measurements. Starting from sparsity, recovery methods
have steadily moved towards more complex structures. Emerging machine learning
tools, such as generative functions based on neural networks, are able
to learn general complex structures from training data. This makes them
potentially powerful tools for designing CS algorithms. Consider a desired
class of signals $\mathcal{Q}$, $\mathcal{Q} \subset \mathbb{R}^n$, and a corresponding
generative function $g: \mathcal{U}^k \to \mathbb{R}^n$, $\mathcal{U} \subset \mathbb{R}$,
such that $\sup_{x \in \mathcal{Q}} \min_{u \in \mathcal{U}^k} \frac{1}{\sqrt{n}} \|x - g(u)\|_2 \le \delta$. A recovery method based on $g$
seeks $g(u)$ with minimum measurement error. In this paper, the
performance of such a recovery method is studied, under both noisy and
noiseless measurements. In the noiseless case, roughly speaking, it is proven
that, as $n$ and $k$ grow without bound and $\delta$ converges to zero, if the
number of measurements ($m$) is larger than the input dimension of the
generative model ($k$), then asymptotically, almost lossless recovery is
possible. Furthermore, the performance of an efficient iterative algorithm
based on projected gradient descent is studied. In this case, an auto-encoder
is used to define and enforce the source structure at the projection step. The
auto-encoder is defined by encoder and decoder (generative) functions
$f: \mathbb{R}^n \to \mathcal{U}^k$ and $g: \mathcal{U}^k \to \mathbb{R}^n$, respectively. We
theoretically prove that, roughly, given $m = \Omega(k \log \frac{1}{\delta})$
measurements, such an algorithm converges to the vicinity of the desired
result, even in the presence of additive white Gaussian noise. Numerical
results exploring the effectiveness of the proposed method are presented.
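To make the iteration concrete, here is a minimal sketch (our own illustration, not the authors' implementation) of projected gradient descent with an auto-encoder projection step. A toy linear auto-encoder with orthonormal columns stands in for the trained encoder $f$ and decoder $g$; the dimensions, step size, and iteration count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 100, 5, 40              # ambient dim, latent dim, measurements (m > k)

# Toy auto-encoder: B has orthonormal columns, so g(f(x)) projects x onto a
# k-dimensional subspace. In the paper, f and g are trained encoder/decoder
# networks rather than linear maps.
B, _ = np.linalg.qr(rng.standard_normal((n, k)))
f = lambda x: B.T @ x             # encoder  f: R^n -> U^k
g = lambda u: B @ u               # decoder  g: U^k -> R^n

x_true = g(rng.standard_normal(k))              # signal in the range of g
A = rng.standard_normal((m, n)) / np.sqrt(m)    # Gaussian measurement matrix
y = A @ x_true + 0.01 * rng.standard_normal(m)  # noisy linear measurements

# Projected gradient descent on ||y - Ax||^2: a gradient step on the
# measurement error, then the auto-encoder enforces the source structure
# at the projection step.
x = np.zeros(n)
eta = 0.5
for _ in range(200):
    x = x + eta * A.T @ (y - A @ x)   # gradient step
    x = g(f(x))                       # projection through the auto-encoder
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```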
Deep Learning Techniques for Inverse Problems in Imaging
Recent work in machine learning shows that deep neural networks can be used
to solve a wide variety of inverse problems arising in computational imaging.
We explore the central prevailing themes of this emerging area and present a
taxonomy that can be used to categorize different problems and reconstruction
methods. Our taxonomy is organized along two central axes: (1) whether or not a
forward model is known and to what extent it is used in training and testing,
and (2) whether the learning is supervised or unsupervised, i.e.,
whether or not the training relies on access to matched ground truth image and
measurement pairs. We also discuss the trade-offs associated with these
different reconstruction approaches, caveats and common failure modes, plus
open problems and avenues for future work.
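To make the two axes concrete, here is a small illustrative sketch (ours, not the survey's) that encodes them as Python enums and tags a few representative method families; the placements are indicative only.

```python
from dataclasses import dataclass
from enum import Enum

class ForwardModel(Enum):
    KNOWN_AND_USED = "forward model known and used in training/testing"
    KNOWN_ONLY_AT_TEST = "forward model known but used only at test time"
    UNKNOWN = "forward model unknown"

class Supervision(Enum):
    SUPERVISED = "trained on matched ground-truth/measurement pairs"
    UNSUPERVISED = "no matched ground-truth/measurement pairs"

@dataclass
class Method:
    name: str
    forward_model: ForwardModel
    supervision: Supervision

# Indicative placements of common method families along the two axes.
taxonomy = [
    Method("unrolled optimization network", ForwardModel.KNOWN_AND_USED,
           Supervision.SUPERVISED),
    Method("deep image prior", ForwardModel.KNOWN_AND_USED,
           Supervision.UNSUPERVISED),
    Method("end-to-end learned inverse mapping", ForwardModel.UNKNOWN,
           Supervision.SUPERVISED),
]
for method in taxonomy:
    print(f"{method.name}: {method.forward_model.value}; "
          f"{method.supervision.value}")
```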
Robust compressed sensing of generative models
The goal of compressed sensing is to estimate a high-dimensional vector from
an underdetermined system of noisy linear equations. In analogy to classical
compressed sensing, here we assume a generative model as a prior, that is, we
assume the vector is represented by a deep generative model $G: \mathbb{R}^k \to \mathbb{R}^n$.
Classical recovery approaches such as empirical risk
minimization (ERM) are guaranteed to succeed when the measurement matrix is
sub-Gaussian. However, when the measurement matrix and measurements are
heavy-tailed or have outliers, recovery may fail dramatically. In this paper we
propose an algorithm inspired by the Median-of-Means (MOM). Our algorithm
guarantees recovery for heavy-tailed data, even in the presence of outliers.
Theoretically, our results show our novel MOM-based algorithm enjoys the same
sample complexity guarantees as ERM under sub-Gaussian assumptions. Our
experiments validate both aspects of our claims: other algorithms are indeed
fragile and fail under heavy-tailed and/or corrupted data, while our approach
exhibits the predicted robustness.
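To illustrate the median-of-means idea (a simplified sketch, not the paper's full MOM tournament), the code below partitions the measurements into batches and, at each step, descends the gradient of the loss on the batch whose mean loss is the median, so batches contaminated by outliers never drive the update. The toy linear generative model, batch count, and step size are our own arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, m, n_batches = 400, 5, 200, 10

G = rng.standard_normal((n, k)) / np.sqrt(n)   # toy linear "generative model"
x_true = G @ rng.standard_normal(k)

A = rng.standard_normal((m, n))                # measurement matrix
y = A @ x_true
y[:3] += 100.0                                 # gross outliers in the measurements

batches = np.array_split(rng.permutation(m), n_batches)

def mom_gradient(u):
    # Median-of-means: compute the mean loss on each batch and take the
    # gradient only on the batch with the median loss; outlier-heavy batches
    # have the largest mean losses and are therefore ignored.
    losses = [np.mean((y[b] - A[b] @ G @ u) ** 2) for b in batches]
    med = batches[int(np.argsort(losses)[n_batches // 2])]
    return -2.0 / len(med) * G.T @ A[med].T @ (y[med] - A[med] @ G @ u)

u = np.zeros(k)
for _ in range(300):
    u -= 0.2 * mom_gradient(u)
print("relative error:", np.linalg.norm(G @ u - x_true) / np.linalg.norm(x_true))
```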
Instance-Optimal Compressed Sensing via Posterior Sampling
We characterize the measurement complexity of compressed sensing of signals
drawn from a known prior distribution, even when the support of the prior is
the entire space (rather than, say, sparse vectors). We show for Gaussian
measurements and \emph{any} prior distribution on the signal, that the
posterior sampling estimator achieves near-optimal recovery guarantees.
Moreover, this result is robust to model mismatch, as long as the distribution
estimate (e.g., from an invertible generative model) is close to the true
distribution in Wasserstein distance. We implement the posterior sampling
estimator for deep generative priors using Langevin dynamics, and empirically
find that it produces accurate estimates with more diversity than MAP.
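As a sketch of the estimator's computational side (our own illustration, not the paper's code), the following runs unadjusted Langevin dynamics on the posterior. A toy Gaussian prior whose score $\nabla \log p(x)$ is available in closed form stands in for the deep generative prior; the step size, chain length, and burn-in are arbitrary choices. The final line reports the spread of the samples, illustrating the diversity that distinguishes posterior sampling from a MAP point estimate.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, sigma = 50, 20, 0.1

# Toy Gaussian prior N(0, C): its score grad log p(x) = -C^{-1} x is closed
# form; in the paper, the score would come from a deep generative prior.
C = np.diag(1.0 / np.arange(1, n + 1))
C_inv = np.linalg.inv(C)
x_true = rng.multivariate_normal(np.zeros(n), C)

A = rng.standard_normal((m, n)) / np.sqrt(m)    # Gaussian measurements
y = A @ x_true + sigma * rng.standard_normal(m)

# Unadjusted Langevin dynamics targeting the posterior p(x | y):
#   x <- x + (eta / 2) * grad log p(x | y) + sqrt(eta) * standard normal,
# with grad log p(x | y) = A^T (y - A x) / sigma^2 + grad log p(x).
eta = 2e-4
x = np.zeros(n)
samples = []
for t in range(15000):
    score = A.T @ (y - A @ x) / sigma**2 - C_inv @ x
    x = x + 0.5 * eta * score + np.sqrt(eta) * rng.standard_normal(n)
    if t >= 5000:                     # keep samples only after burn-in
        samples.append(x.copy())

samples = np.asarray(samples)
post_mean = samples.mean(axis=0)      # average of posterior samples
print("relative error of posterior mean:",
      np.linalg.norm(post_mean - x_true) / np.linalg.norm(x_true))
print("posterior spread (mean per-coordinate std):", samples.std(axis=0).mean())
```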