On the regularization of Wasserstein GANs
Since their invention, generative adversarial networks (GANs) have become a
popular approach for learning to model a distribution of real (unlabeled) data.
Wasserstein GANs overcome convergence problems during training by minimizing
the distance between the model and the empirical distribution in terms of a
different metric, but thereby introduce a Lipschitz constraint into the
optimization problem. A simple way to enforce the Lipschitz constraint on the
class of functions that can be modeled by the neural network is weight
clipping.
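As a concrete illustration, here is a minimal sketch of weight clipping in PyTorch; the critic architecture and the clip threshold c = 0.01 are illustrative assumptions, not values taken from the paper:

```python
import torch
import torch.nn as nn

# Hypothetical critic network; the paper does not prescribe an architecture.
critic = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

def clip_critic_weights(model: nn.Module, c: float = 0.01) -> None:
    """Clamp every parameter to [-c, c] after each critic update,
    crudely restricting the class of functions the network can model."""
    with torch.no_grad():
        for p in model.parameters():
            p.clamp_(-c, c)

clip_critic_weights(critic)  # call after every optimizer step on the critic
```

Clipping is simple to implement, but it constrains all weights uniformly, which is a blunt way of bounding the critic's Lipschitz constant.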
It was proposed that training can be improved by instead augmenting the loss
with a regularization term that penalizes the deviation of the norm of the
critic's gradient (taken with respect to the network's input) from one.
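A minimal sketch of such a penalty term in the style of WGAN-GP, evaluated at random interpolates between real and generated samples; the names (critic, real, fake) and the coefficient lambda_gp = 10.0 are assumptions for illustration:

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """Two-sided penalty: squared deviation of the critic's input-gradient
    norm from one, at random interpolates of real and generated samples."""
    eps = torch.rand(real.size(0), 1)  # per-sample mixing weight
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grads, = torch.autograd.grad(
        outputs=critic(x_hat).sum(), inputs=x_hat, create_graph=True
    )
    grad_norm = grads.norm(2, dim=1)
    # Penalize any deviation of the gradient norm from one, in either direction.
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()
```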
We present theoretical arguments for why using a weaker regularization term
that merely enforces the Lipschitz constraint is preferable.
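The weaker penalty is one-sided: gradient norms below one go unpenalized, so only actual violations of the Lipschitz constraint contribute. A sketch under the same illustrative assumptions as above:

```python
import torch

def lipschitz_penalty(critic, real, fake, lambda_lp=10.0):
    """One-sided penalty: only gradient norms above one are penalized,
    so the term enforces (rather than exactly pins) the Lipschitz bound."""
    eps = torch.rand(real.size(0), 1)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grads, = torch.autograd.grad(critic(x_hat).sum(), x_hat, create_graph=True)
    grad_norm = grads.norm(2, dim=1)
    # max(0, ||grad|| - 1)^2: gradients with norm at most one incur no cost.
    return lambda_lp * torch.clamp(grad_norm - 1.0, min=0.0).pow(2).mean()
```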
These arguments are supported by experimental results on toy data sets.

Comment: Published as a conference paper at ICLR 2018. * Henning Petzka and
Asja Fischer contributed equally to this work (11 pages + 13 pages appendix …