In recent years, there has been a significant improvement in the quality
of samples produced by (deep) generative models such as variational
auto-encoders and generative adversarial networks. However, the representation
capabilities of these methods still do not capture the full distribution for
complex classes of images, such as human faces. This deficiency has been
clearly observed in previous works that use pre-trained generative models to
solve imaging inverse problems. In this paper, we propose mitigating the
limited representation capabilities of generators by making them image-adaptive
and enforcing compliance of the restoration with the observations via
back-projections. We empirically demonstrate the advantages of our proposed
approach for image super-resolution and compressed sensing.

Comment: Accepted to AAAI 2020. Code available at https://github.com/shadyabh/IAGA
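To illustrate the back-projection idea mentioned above, here is a minimal NumPy sketch, assuming a linear degradation model y = A x with a known measurement matrix A (the matrix, dimensions, and variable names are illustrative assumptions, not taken from the paper's code):

```python
import numpy as np

# Hypothetical setup: a random compressed-sensing measurement operator A
# (full row rank), a true signal x_true, and observations y = A x_true.
rng = np.random.default_rng(0)
n, m = 8, 4                      # signal and measurement dimensions
A = rng.standard_normal((m, n))  # measurement operator (assumed known)
x_true = rng.standard_normal(n)
y = A @ x_true                   # observations

# x_hat stands in for a generator's output G(z), which need not agree with y.
x_hat = rng.standard_normal(n)

# Back-projection: correct x_hat along the pseudo-inverse direction so the
# projected estimate is consistent with the observations.
A_pinv = np.linalg.pinv(A)
x_bp = x_hat + A_pinv @ (y - A @ x_hat)

print(np.allclose(A @ x_bp, y))  # → True: x_bp satisfies A x_bp = y
```

Since A here has full row rank, A A⁺ = I, so the back-projected estimate exactly reproduces the observations while staying as close as possible to the generator's output in the null space of A.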