Adversarial Variational Optimization of Non-Differentiable Simulators
Complex computer simulators are increasingly used across fields of science as
generative models tying parameters of an underlying theory to experimental
observations. Inference in this setup is often difficult, as simulators rarely
admit a tractable density or likelihood function. We introduce Adversarial
Variational Optimization (AVO), a likelihood-free inference algorithm for
fitting a non-differentiable generative model, incorporating ideas from
generative adversarial networks, variational optimization, and empirical Bayes.
We adapt the training procedure of generative adversarial networks by replacing
the differentiable generative network with a domain-specific simulator. We
solve the resulting non-differentiable minimax problem by minimizing
variational upper bounds of the two adversarial objectives. Effectively, the
procedure results in learning a proposal distribution over simulator
parameters, such that the Jensen-Shannon (JS) divergence between the marginal distribution of
the synthetic data and the empirical distribution of observed data is
minimized. We evaluate and compare the method with simulators producing both
discrete and continuous data.
Comment: v4: Final version published at AISTATS 2019; v5: Fixed typo in Eqn 1
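The core trick is that although the simulator itself is non-differentiable, the expectation of any loss under a parameterized proposal q_psi(theta) is differentiable in psi via the score-function (REINFORCE) estimator: grad_psi E_q[f(theta)] = E_q[f(theta) grad_psi log q_psi(theta)]. The sketch below illustrates this on a toy Poisson simulator with a Gaussian proposal and a small logistic discriminator; the simulator, dimensions, and hyperparameters are illustrative assumptions, not the paper's reference implementation.

```python
# Minimal AVO-style sketch (assumptions: toy Poisson simulator, Gaussian
# proposal over theta, tiny MLP discriminator; not the authors' code).
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)

def simulator(theta, n=64):
    # Non-differentiable black box: Poisson counts with rate exp(theta).
    return torch.tensor(
        rng.poisson(np.exp(theta), size=n), dtype=torch.float32
    ).unsqueeze(1)

x_obs = simulator(1.0, n=256)                    # "observed" data, true theta = 1.0

disc = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
mu = torch.zeros(1, requires_grad=True)          # proposal mean
log_sigma = torch.zeros(1, requires_grad=True)   # proposal log std
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-2)
opt_g = torch.optim.Adam([mu, log_sigma], lr=5e-2)
bce = nn.BCEWithLogitsLoss()

for step in range(500):
    q = torch.distributions.Normal(mu, log_sigma.exp())
    thetas = q.sample((8,))                      # simulator params from proposal
    x_sim = torch.cat([simulator(t.item(), n=32) for t in thetas])

    # Discriminator step: real vs. simulated (standard GAN objective).
    d_loss = bce(disc(x_obs), torch.ones(len(x_obs), 1)) + \
             bce(disc(x_sim), torch.zeros(len(x_sim), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Proposal step: REINFORCE gradient of the variational upper bound,
    # grad_psi E_q[f(theta)] = E_q[f(theta) grad_psi log q(theta)].
    with torch.no_grad():
        # Per-theta generator loss: how "fake" its samples look to disc.
        f = torch.stack([
            bce(disc(simulator(t.item(), n=32)), torch.ones(32, 1))
            for t in thetas
        ])
    g_loss = (f * q.log_prob(thetas).squeeze()).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("proposal mean:", mu.item(), "(true theta = 1.0)")
```

A variance-reduction baseline subtracted from f, as is standard for score-function estimators, would make this more practical; it is omitted here for brevity.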
Wasserstein Introspective Neural Networks
We present Wasserstein introspective neural networks (WINN) that are both a
generator and a discriminator within a single model. WINN provides a
significant improvement over the recent introspective neural networks (INN)
method by enhancing INN's generative modeling capability. WINN has three
interesting properties: (1) A mathematical connection between the formulation
of the INN algorithm and that of Wasserstein generative adversarial networks
(WGAN) is made. (2) The explicit adoption of the Wasserstein distance into INN
results in a large enhancement to INN, achieving compelling results even with a
single classifier, e.g., providing a nearly 20-fold reduction in model size
over INN for unsupervised generative modeling. (3) When applied to supervised
classification, WINN also improves robustness against adversarial
examples, as measured by error reduction. In the experiments, we report
encouraging results on unsupervised learning problems including texture, face,
and object modeling, as well as a supervised classification task against
adversarial attacks.
Comment: Accepted to CVPR 2018 (Oral)
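The introspective idea is that one network plays both roles: samples are synthesized by gradient ascent on the network's own score, and the network is then retrained to separate real data from these pseudo-negatives under a Wasserstein-style objective. Below is a minimal toy sketch of that loop; the 2-D data, MLP architecture, weight-clipping Lipschitz control, and all hyperparameters are assumptions for illustration, not the paper's setup.

```python
# Minimal introspective/WGAN-style sketch (toy 2-D data; not the WINN code).
import torch
import torch.nn as nn

d = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(d.parameters(), lr=1e-3)

def synthesize(n=128, steps=30, lr=0.1):
    # Introspective sampling: the classifier acts as its own generator,
    # moving noise toward regions it currently scores as "real".
    x = torch.randn(n, 2, requires_grad=True)
    for _ in range(steps):
        score = d(x).sum()
        (g,) = torch.autograd.grad(score, x)
        with torch.no_grad():
            x += lr * g
    return x.detach()

x_real = torch.randn(512, 2) * 0.5 + torch.tensor([2.0, -1.0])  # toy data

for step in range(200):
    x_fake = synthesize()
    # Wasserstein critic loss: push real scores up, pseudo-negatives down.
    loss = d(x_fake).mean() - d(x_real).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    for p in d.parameters():          # crude Lipschitz control via clipping
        p.data.clamp_(-0.1, 0.1)
```

In the full method the pseudo-negatives accumulate across rounds and the networks are CNNs; this sketch only shows the alternation between introspective synthesis and critic updates.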
Bidirectional Conditional Generative Adversarial Networks
Conditional Generative Adversarial Networks (cGANs) are generative models
that can produce data samples (x) conditioned on both latent variables (z)
and known auxiliary information (c). We propose the Bidirectional cGAN
(BiCoGAN), which effectively disentangles z and c in the generation process
and provides an encoder that learns inverse mappings from x to both z and
c, trained jointly with the generator and the discriminator. We present
crucial techniques for training BiCoGANs, which involve an extrinsic factor
loss along with an associated dynamically-tuned importance weight. As compared
to other encoder-based cGANs, BiCoGANs encode c more accurately, and utilize
z and c more effectively and in a more disentangled way to generate samples.
Comment: To appear in Proceedings of ACCV 2018
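One way to read the training objective: a BiGAN/ALI-style discriminator scores joint tuples (x, z, c), the generator and encoder are trained jointly to fool it, and an extrinsic factor loss with a tunable importance weight forces the encoder's estimate of c to match the known conditioning information. The sketch below spells this out with toy MLPs; the dimensions, the MSE form of the extrinsic factor loss, and the simple linear warm-up for its weight are my assumptions, not the paper's choices.

```python
# Minimal BiCoGAN-style loss sketch (toy dimensions and MLPs; the weight
# schedule gamma is a placeholder, not the paper's dynamic tuning rule).
import torch
import torch.nn as nn

z_dim, c_dim, x_dim = 8, 4, 16
G = nn.Sequential(nn.Linear(z_dim + c_dim, 64), nn.ReLU(), nn.Linear(64, x_dim))
E = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(), nn.Linear(64, z_dim + c_dim))
# BiGAN/ALI-style discriminator scores joint tuples (x, z, c).
D = nn.Sequential(nn.Linear(x_dim + z_dim + c_dim, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()

def bicogan_losses(x_real, c_real, step, total_steps):
    n = x_real.size(0)
    z = torch.randn(n, z_dim)
    x_fake = G(torch.cat([z, c_real], dim=1))        # generate from (z, c)
    zc_hat = E(x_real)                               # encode x -> (z_hat, c_hat)
    z_hat, c_hat = zc_hat[:, :z_dim], zc_hat[:, z_dim:]

    d_real = D(torch.cat([x_real, z_hat, c_hat], dim=1))
    d_fake = D(torch.cat([x_fake, z, c_real], dim=1))
    d_loss = bce(d_real, torch.ones(n, 1)) + bce(d_fake, torch.zeros(n, 1))
    g_loss = bce(d_fake, torch.ones(n, 1)) + bce(d_real, torch.zeros(n, 1))

    # Extrinsic factor loss: the encoder must recover the known c, with an
    # importance weight that warms up over training (placeholder schedule).
    gamma = min(1.0, step / (0.1 * total_steps))
    efl = nn.functional.mse_loss(c_hat, c_real)
    return d_loss, g_loss + gamma * efl
```

In a training loop, d_loss would update D and the second returned loss would jointly update G and E, so that encoding and generation stay consistent while c remains explicitly supervised.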
