Selective Sampling and Mixture Models in Generative Adversarial Networks
In this paper, we propose a multi-generator extension to the adversarial
training framework, in which the objective of each generator is to represent a
unique component of a target mixture distribution. In the training phase, the
generators cooperate to represent the target distribution as a mixture while
maintaining distinct manifolds. As opposed to traditional generative models,
inference from a particular generator after training resembles selective
sampling from a unique component in the target distribution. We demonstrate the
feasibility of the proposed architecture both analytically and with basic
Multi-Layer Perceptron (MLP) models trained on the MNIST dataset.

Comment: 5 pages, 3 figures
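The multi-generator setup described in the abstract can be sketched as follows. This is a hypothetical minimal illustration, not the authors' implementation: K small MLP generators share one discriminator and are trained adversarially so that their pooled samples cover a 1-D two-mode Gaussian mixture (a stand-in for the target mixture distribution). The network sizes, learning rates, and toy target are assumptions, and the sketch omits whatever term the paper uses to keep the generators on distinct manifolds.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

K, Z_DIM, BATCH = 2, 4, 64  # generators, latent dim, batch size (assumed)

def mlp(inp, out):
    return nn.Sequential(nn.Linear(inp, 16), nn.ReLU(), nn.Linear(16, out))

generators = [mlp(Z_DIM, 1) for _ in range(K)]
discriminator = nn.Sequential(mlp(1, 1), nn.Sigmoid())

opt_g = torch.optim.Adam([p for g in generators for p in g.parameters()], lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

def sample_real(n):
    # Toy two-component target mixture: N(-2, 0.5) and N(+2, 0.5)
    comp = torch.randint(0, 2, (n, 1)).float()
    return (comp * 4 - 2) + 0.5 * torch.randn(n, 1)

for step in range(200):
    # Discriminator update: real samples vs. the pooled generator mixture.
    real = sample_real(BATCH)
    fake = torch.cat([g(torch.randn(BATCH // K, Z_DIM)) for g in generators])
    d_loss = (bce(discriminator(real), torch.ones_like(real))
              + bce(discriminator(fake.detach()), torch.zeros_like(fake)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: all generators cooperate to fool the discriminator.
    fake = torch.cat([g(torch.randn(BATCH // K, Z_DIM)) for g in generators])
    g_loss = bce(discriminator(fake), torch.ones_like(fake))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# "Selective sampling": after training, drawing from one generator alone
# targets only that generator's learned component of the mixture.
with torch.no_grad():
    samples_g0 = generators[0](torch.randn(5, Z_DIM))
print(samples_g0.shape)  # torch.Size([5, 1])
```

Sampling from `generators[0]` alone is what the abstract calls inference from a particular generator; the full method would additionally enforce that each generator occupies a distinct component.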