Phase space sampling and operator confidence with generative adversarial networks
We demonstrate that a generative adversarial network can be trained to
produce Ising model configurations in distinct regions of phase space. In
training a generative adversarial network, the discriminator neural network
becomes very good at distinguishing between examples from the training set
and examples from the testing set. We demonstrate that this ability can be
used as an anomaly detector, producing estimates of operator values along
with a confidence in the prediction.
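As a rough illustration of that idea, the sketch below scores spin
configurations with a trained discriminator and attaches the score to an
operator estimate as a confidence. It is a minimal PyTorch sketch only: the
architecture, the lattice size L, and the choice of magnetization as the
operator are assumptions, not details from the paper.
```python
# Minimal sketch: a GAN discriminator reused as an anomaly detector
# that attaches a confidence score to an operator estimate.
import torch
import torch.nn as nn

L = 32  # lattice side length (assumption)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(32 * (L // 4) ** 2, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

def magnetization(config):
    # Example operator: mean spin per site.
    return config.mean(dim=(1, 2, 3))

def operator_with_confidence(disc, config):
    # D(x) in (0, 1) is read as confidence that `config` resembles
    # the training ensemble; low values flag anomalous samples.
    with torch.no_grad():
        conf = disc(config).squeeze(1)
    return magnetization(config), conf

disc = Discriminator()  # stands in for a trained discriminator
spins = torch.randint(0, 2, (4, 1, L, L)).float() * 2 - 1  # +/-1 spins
m, c = operator_with_confidence(disc, spins)
print(m, c)
```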
Text Generation Based on Generative Adversarial Nets with Latent Variable
In this paper, we propose a model that uses a generative adversarial net
(GAN) to generate realistic text. Instead of using a standard GAN, we combine
a variational autoencoder (VAE) with a generative adversarial net. The use of
high-level latent random variables helps the model learn the data
distribution and alleviates the tendency of GANs to emit very similar data.
We propose the VGAN model, in which the generative model is composed of a
recurrent neural network and a VAE, and the discriminative model is a
convolutional neural network. We train the model via policy gradient. We
apply the proposed model to the task of text generation and compare it to
other recent neural-network-based models, such as a recurrent neural network
language model and SeqGAN. We evaluate the performance of the model by
computing the negative log-likelihood and the BLEU score. We conduct
experiments on three benchmark datasets, and the results show that our model
outperforms the previous models.
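A minimal sketch of the policy-gradient step this abstract describes: the
generator samples a token sequence, a convolutional discriminator scores it,
and that score serves as a REINFORCE reward. All sizes and architectures
below are placeholders rather than the paper's exact VGAN configuration, and
the VAE latent variable is omitted for brevity.
```python
import torch
import torch.nn as nn

VOCAB, EMB, HID, T = 1000, 64, 128, 20  # placeholder sizes

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def sample(self, batch):
        # Autoregressively sample T tokens, keeping per-step log-probs.
        tok = torch.zeros(batch, 1, dtype=torch.long)  # start token
        h, logps, toks = None, [], []
        for _ in range(T):
            o, h = self.rnn(self.emb(tok), h)
            dist = torch.distributions.Categorical(logits=self.out(o[:, -1]))
            nxt = dist.sample()
            logps.append(dist.log_prob(nxt))
            tok = nxt.unsqueeze(1)
            toks.append(tok)
        return torch.cat(toks, 1), torch.stack(logps, 1)

class Discriminator(nn.Module):
    # CNN over token embeddings, as in the abstract.
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.conv = nn.Conv1d(EMB, 32, kernel_size=3, padding=1)
        self.fc = nn.Linear(32, 1)

    def forward(self, toks):
        x = self.emb(toks).transpose(1, 2)               # (B, EMB, T)
        x = torch.relu(self.conv(x)).max(dim=2).values   # global max pool
        return torch.sigmoid(self.fc(x)).squeeze(1)      # D(x) in (0, 1)

gen, disc = Generator(), Discriminator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)

toks, logps = gen.sample(8)
reward = disc(toks).detach()                 # discriminator score as reward
loss = -(logps.sum(dim=1) * reward).mean()   # REINFORCE / policy gradient
opt.zero_grad(); loss.backward(); opt.step()
```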
PHom-GeM: Persistent Homology for Generative Models
Generative neural network models, including Generative Adversarial Network
(GAN) and Auto-Encoders (AE), are among the most popular neural network models
to generate adversarial data. The GAN model is composed of a generator that
produces synthetic data and of a discriminator that discriminates between the
generator's output and the true data. AEs consist of an encoder, which maps
the model distribution to a latent manifold, and a decoder, which maps the
latent manifold back to a reconstructed distribution. However, generative
models are known to produce chaotically scattered reconstructed distributions
during training and, consequently, incomplete generated adversarial
distributions.
Current distance measures fail to address this problem because they are not
able to acknowledge the shape of the data manifold, i.e. its topological
features, and the scale at which the manifold should be analyzed. We propose
Persistent Homology for Generative Models, PHom-GeM, a new methodology to
assess and measure the distribution of a generative model. PHom-GeM minimizes
an objective function between the true and the reconstructed distributions and
uses persistent homology, the study of the topological features of a space at
different spatial resolutions, to compare the nature of the true and the
generated distributions. Our experiments underline the potential of persistent
homology for Wasserstein GAN in comparison to Wasserstein AE and Variational
AE. The experiments are conducted on a real-world data set particularly
challenging for traditional distance measures and generative neural network
models. PHom-GeM is the first methodology to propose a topological distance
measure, the bottleneck distance, for generative models used to compare
adversarial samples in the context of credit card transactions.
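The comparison PHom-GeM performs can be sketched with off-the-shelf
persistent homology tools: compute persistence diagrams for the true and
generated point clouds, then report the bottleneck distance between them. The
sketch below assumes the ripser and persim Python packages as one possible
toolchain; the paper's generative-model training and exact pipeline are not
shown, and the Gaussian point clouds are stand-ins for real data.
```python
import numpy as np
from ripser import ripser       # Vietoris-Rips persistence diagrams
from persim import bottleneck   # bottleneck distance between diagrams

rng = np.random.default_rng(0)
true_samples = rng.normal(size=(200, 2))        # stand-in for true data
generated = rng.normal(size=(200, 2)) + 0.1     # stand-in for model output

# Persistence diagrams up to H1 (connected components and loops).
dgm_true = ripser(true_samples, maxdim=1)['dgms'][1]
dgm_gen = ripser(generated, maxdim=1)['dgms'][1]

# Bottleneck distance: the topological distance measure PHom-GeM
# proposes for comparing the shapes of the two distributions.
d = bottleneck(dgm_true, dgm_gen)
print(f"bottleneck distance (H1): {d:.4f}")
```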
Adversarial Variational Optimization of Non-Differentiable Simulators
Complex computer simulators are increasingly used across fields of science as
generative models tying parameters of an underlying theory to experimental
observations. Inference in this setup is often difficult, as simulators rarely
admit a tractable density or likelihood function. We introduce Adversarial
Variational Optimization (AVO), a likelihood-free inference algorithm for
fitting a non-differentiable generative model incorporating ideas from
generative adversarial networks, variational optimization and empirical Bayes.
We adapt the training procedure of generative adversarial networks by replacing
the differentiable generative network with a domain-specific simulator. We
solve the resulting non-differentiable minimax problem by minimizing
variational upper bounds of the two adversarial objectives. Effectively, the
procedure results in learning a proposal distribution over simulator
parameters, such that the JS divergence between the marginal distribution of
the synthetic data and the empirical distribution of observed data is
minimized. We evaluate and compare the method with simulators producing both
discrete and continuous data.
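A minimal sketch of the AVO idea under simplifying assumptions: the simulator
is a black box, so the generator objective is differentiated through a
Gaussian proposal over the simulator parameter via the score-function
(REINFORCE) trick, while the discriminator is trained as in a standard GAN.
The Poisson simulator, network sizes, and objective form below are
illustrative, not the paper's.
```python
import torch
import torch.nn as nn

def simulator(theta):
    # Non-differentiable black box: here, Poisson counts (assumption).
    return torch.poisson(theta.detach().abs().expand(64)).unsqueeze(1)

disc = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
mu = torch.tensor(2.0, requires_grad=True)         # proposal mean
log_sigma = torch.tensor(0.0, requires_grad=True)  # proposal log-std
opt_g = torch.optim.Adam([mu, log_sigma], lr=1e-2)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

x_obs = torch.poisson(torch.full((64, 1), 5.0))    # "observed" data

for step in range(200):
    q = torch.distributions.Normal(mu, log_sigma.exp())
    theta = q.sample()
    x_gen = simulator(theta)

    # Discriminator update: standard GAN step on observed vs synthetic.
    d_loss = bce(disc(x_obs), torch.ones(64, 1)) + \
             bce(disc(x_gen), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Proposal update: REINFORCE through q, since the simulator
    # admits no gradient with respect to theta.
    reward = -bce(disc(x_gen), torch.ones(64, 1)).detach()
    g_loss = -(q.log_prob(theta) * reward)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```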
Generative Adversarial Trainer: Defense to Adversarial Perturbations with GAN
We propose a novel technique to make neural networks robust to adversarial
examples using a generative adversarial network. We alternately train both
classifier and generator networks. The generator network generates an
adversarial perturbation that can easily fool the classifier network using
the gradient of each image. Simultaneously, the classifier network is trained
to
classify correctly both original and adversarial images generated by the
generator. These procedures help the classifier network to become more robust
to adversarial perturbations. Furthermore, our adversarial training framework
efficiently reduces overfitting and outperforms other regularization methods
such as Dropout. We applied our method to supervised learning on the CIFAR
datasets, and experimental results show that our method significantly lowers
the generalization error of the network. To the best of our knowledge, this
is the first method that uses a GAN to improve supervised learning.
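A minimal sketch of the alternating scheme described above, under assumed
shapes and hyperparameters: the perturbation generator consumes the
classifier's input gradient and emits a bounded perturbation, the generator
is trained to raise the classifier's loss, and the classifier is trained on
both clean and perturbed images. The epsilon budget, architectures, and
stand-in batch are assumptions, not the paper's configuration.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

eps = 0.1  # perturbation budget (assumption)

clf = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(),
                    nn.Linear(128, 10))
gen = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(),
                    nn.Linear(128, 784), nn.Tanh())
opt_c = torch.optim.Adam(clf.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)

x = torch.rand(32, 1, 28, 28)            # stand-in image batch
y = torch.randint(0, 10, (32,))

# Input gradient of the classifier loss, fed to the generator.
x_req = x.clone().requires_grad_(True)
loss = F.cross_entropy(clf(x_req), y)
grad = torch.autograd.grad(loss, x_req)[0]

# Generator step: produce a perturbation that raises the classifier
# loss, i.e. fools the classifier.
delta = eps * gen(grad).view_as(x)
g_loss = -F.cross_entropy(clf(x + delta), y)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Classifier step: fit both clean and adversarial images.
delta = eps * gen(grad).view_as(x).detach()
c_loss = F.cross_entropy(clf(x), y) + F.cross_entropy(clf(x + delta), y)
opt_c.zero_grad(); c_loss.backward(); opt_c.step()
```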
