13 research outputs found
Parameterizing uncertainty by deep invertible networks, an application to reservoir characterization
Uncertainty quantification for full-waveform inversion provides a
probabilistic characterization of the ill-conditioning of the problem,
comprising the sensitivity of the solution with respect to the starting model
and data noise. This analysis allows us to assess the confidence in the candidate
solution and how it is reflected in the tasks that are typically performed
after imaging (e.g., stratigraphic segmentation following reservoir
characterization). Classically, uncertainty comes in the form of a probability
distribution formulated from Bayesian principles, from which we seek to obtain
samples. A popular solution involves Monte Carlo sampling. Here, we propose
instead an approach characterized by training a deep network that "pushes
forward" Gaussian random inputs into the model space (representing, for
example, density or velocity) as if they were sampled from the actual posterior
distribution. Such a network is designed to solve a variational optimization
problem based on the Kullback-Leibler divergence between the posterior and the
network output distributions. This work is fundamentally rooted in recent
developments for invertible networks. Special invertible architectures, besides
being computationally advantageous with respect to traditional networks, also
enable analytic computation of the output density function. Therefore, after
training, these networks can be readily used as a new prior for a related
inversion problem. This stands in stark contrast with Monte Carlo methods,
which only produce samples. We validate these ideas with an application to
angle-versus-ray parameter analysis for reservoir characterization.
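The push-forward idea can be illustrated with a minimal sketch: an invertible 1-D affine map applied to standard Gaussian inputs, trained by gradient descent on the Kullback-Leibler divergence to a Gaussian "posterior". The 1-D setting, the Gaussian target, and all parameter names are illustrative assumptions, not the paper's actual deep invertible architecture.

```python
import numpy as np

# Toy push-forward sketch: an invertible 1-D affine map x = a*z + b applied to
# standard Gaussian inputs z. We minimize the reverse KL divergence between the
# push-forward density and an assumed Gaussian "posterior" N(mu, sigma^2).
# For this affine case the KL has the closed form
#   KL = log(sigma/|a|) + (a^2 + (b - mu)^2) / (2 sigma^2) - 1/2,
# so plain gradient descent suffices. (Stand-in for a deep invertible network.)

mu, sigma = 2.0, 0.5      # parameters of the assumed target posterior
a, b = 1.0, 0.0           # parameters of the invertible map
lr = 0.05

for _ in range(2000):
    grad_a = -1.0 / a + a / sigma**2        # d KL / d a
    grad_b = (b - mu) / sigma**2            # d KL / d b
    a -= lr * grad_a
    b -= lr * grad_b

def log_density(x, a, b):
    """Analytic density of the push-forward via change of variables:
    log q(x) = log N(z; 0, 1) - log|a|, with z = (x - b) / a."""
    z = (x - b) / a
    return -0.5 * z**2 - 0.5 * np.log(2 * np.pi) - np.log(abs(a))

# After training, pushing z ~ N(0, 1) through x = a*z + b produces samples that
# look drawn from N(mu, sigma^2).
samples = a * np.random.default_rng(0).standard_normal(10_000) + b
```

Because the map is invertible, `log_density` is exact, which is what would let a trained map be reused as a prior in a related inversion; this is the contrast with Monte Carlo methods drawn in the abstract.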
Improving GAN Training with Probability Ratio Clipping and Sample Reweighting
Despite success on a wide range of problems related to vision, generative
adversarial networks (GANs) often suffer from inferior performance due to
unstable training, especially for text generation. To solve this issue, we
propose a new variational GAN training framework which enjoys superior training
stability. Our approach is inspired by a connection of GANs and reinforcement
learning under a variational perspective. The connection leads to (1)
probability ratio clipping that regularizes generator training to prevent
excessively large updates, and (2) a sample re-weighting mechanism that
improves discriminator training by downplaying bad-quality fake samples.
Moreover, our variational GAN framework can provably overcome the training
issue, common to many GANs, whereby an optimal discriminator cannot provide any
informative gradient for training the generator. By plugging the training
approach into diverse
state-of-the-art GAN architectures, we obtain significantly improved
performance over a range of tasks, including text generation, text style
transfer, and image generation.

Comment: NeurIPS 2020 camera-ready version (citations updated).
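The two ingredients above can be sketched in a few lines. The clipping follows the standard PPO-style clipped surrogate that the reinforcement-learning connection suggests; the re-weighting shown here (a softmax over discriminator logits) is an assumed illustrative form, not the paper's exact scheme.

```python
import numpy as np

def clipped_surrogate(ratio, advantage, eps=0.2):
    """PPO-style probability ratio clipping: take the pessimistic minimum of
    the unclipped and clipped objectives, so excessively large generator
    updates earn no extra reward."""
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    return np.minimum(ratio * advantage, clipped * advantage)

def sample_weights(disc_logits):
    """Illustrative sample re-weighting (assumed form): a softmax over
    discriminator logits, so low-quality fake samples (low logits) are
    downplayed in the discriminator update."""
    z = disc_logits - disc_logits.max()     # stabilize the exponentials
    w = np.exp(z)
    return w / w.sum()

# A generator update that triples a sample's probability (ratio 3.0) is capped
# at 1 + eps = 1.2 times the advantage, preventing excessively large steps.
ratios = np.array([0.5, 1.0, 3.0])      # new/old generator probability ratios
advantages = np.ones(3)                  # positive advantages for illustration
obj = clipped_surrogate(ratios, advantages, eps=0.2)
```

With logits `[2.0, -1.0, 0.0]`, `sample_weights` puts most of the mass on the first (most realistic-looking) fake sample, which is the down-weighting behavior described in the abstract.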