Generative Adversarial Positive-Unlabelled Learning
In this work, we consider the task of classifying binary positive-unlabeled
(PU) data. Existing discriminative PU models seek an optimal reweighting
strategy for the unlabeled (U) data so that a good decision boundary can be
found. However, given limited positive (P) data, conventional PU models tend to
overfit when adapted to very flexible deep neural networks. In contrast, we are
the first to attack the binary PU task from the perspective of generative
learning, leveraging the power of generative adversarial networks (GANs). Our generative
positive-unlabeled (GenPU) framework incorporates an array of discriminators
and generators that are endowed with different roles in simultaneously
producing positive and negative realistic samples. We provide theoretical
analysis to justify that, at equilibrium, GenPU is capable of recovering both
positive and negative data distributions. Moreover, we show that GenPU is
generalizable and closely related to semi-supervised classification. Given
rather limited P data, experiments on both synthetic and real-world datasets
demonstrate the effectiveness of the proposed framework. With the infinite
stream of realistic and diverse samples generated by GenPU, a very flexible
classifier can then be trained using deep neural networks.
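The abstract's closing idea can be sketched concretely: once GenPU's generators (approximately) recover the positive and negative distributions, an ordinary supervised classifier is trained on their generated streams. In this illustrative sketch the two "trained generators" are stand-in 1-D Gaussians, not the paper's networks, and the classifier is a simple logistic regression fit by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for GenPU's trained generators: at equilibrium the paper argues
# they recover the positive and negative data distributions. Two Gaussians
# play that role here, purely for illustration.
def sample_positive(n):
    return rng.normal(+2.0, 1.0, n)

def sample_negative(n):
    return rng.normal(-2.0, 1.0, n)

# Draw a fresh labeled stream from the generators and fit a logistic
# regression classifier with plain gradient descent.
x = np.concatenate([sample_positive(500), sample_negative(500)])
y = np.concatenate([np.ones(500), np.zeros(500)])

w, b = 0.0, 0.0
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))   # predicted P(y=1 | x)
    w -= 0.5 * np.mean((p - y) * x)           # gradient of the log loss
    b -= 0.5 * np.mean(p - y)

pred = 1.0 / (1.0 + np.exp(-(w * x + b))) > 0.5
accuracy = np.mean(pred == (y == 1))
```

Because the generated stream is unlimited, the downstream classifier is not bound by the scarce original P data; any sufficiently flexible model could replace the logistic regression here.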
Spatial Evolutionary Generative Adversarial Networks
Generative adversarial networks (GANs) suffer from training pathologies such as
instability and mode collapse. These pathologies mainly arise from a lack of
diversity in their adversarial interactions. Evolutionary generative
adversarial networks apply the principles of evolutionary computation to
mitigate these problems. We hybridize two of these approaches that promote
training diversity. One, E-GAN, injects mutation diversity at each batch by
training the (replicated) generator with three independent objective functions
and then selecting the best-performing resulting generator for the next batch. The
other, Lipizzaner, injects population diversity by training a two-dimensional
grid of GANs with a distributed evolutionary algorithm that includes neighbor
exchanges of additional training adversaries, performance-based selection, and
population-based hyper-parameter tuning. We propose to combine mutation and
population approaches to diversity improvement. We contribute a superior
evolutionary GANs training method, Mustangs, that eliminates the single loss
function used across Lipizzaner's grid. Instead, in each training round a loss
function is selected with equal probability from among the three that E-GAN
uses. Experimental analyses on standard benchmarks, MNIST and CelebA,
demonstrate that Mustangs provides a statistically faster training method that
results in more accurate networks.
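The core Mustangs mechanism described above — drawing one of E-GAN's three generator objectives uniformly at random each round — can be sketched as follows. The scalar loss forms below are simplified illustrations of the E-GAN mutation set (minimax, heuristic/non-saturating, least-squares), not the authors' implementation; `d` stands for the discriminator's score on a generated sample.

```python
import math
import random

def minimax_loss(d):
    """Minimax objective: log(1 - D(G(z)))."""
    return math.log(1.0 - d)

def heuristic_loss(d):
    """Non-saturating (heuristic) objective: -log(D(G(z)))."""
    return -math.log(d)

def least_squares_loss(d):
    """Least-squares objective: (D(G(z)) - 1)^2."""
    return (d - 1.0) ** 2

LOSSES = [minimax_loss, heuristic_loss, least_squares_loss]

def pick_loss(rng):
    """Mustangs-style selection: each training round, every GAN in the
    grid draws one of the three objectives with equal probability."""
    return rng.choice(LOSSES)

rng = random.Random(0)
chosen = pick_loss(rng)
value = chosen(0.8)  # evaluate on a hypothetical discriminator score
```

In the full method this draw replaces Lipizzaner's single fixed loss, so neighboring cells in the grid typically train against different objectives in the same round, which is the source of the added mutation diversity.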
Pose Manipulation with Identity Preservation
This paper describes a new model that generates images of a human subject in novel poses, e.g. by altering facial expression and orientation, from just a few instances of that subject. Unlike previous approaches, which require large datasets of a specific person for training, our approach may start from a scarce set of images, even from a single image. To this end, we introduce the Character Adaptive Identity Normalization GAN (CainGAN), which uses spatial characteristic features extracted by an embedder and combined across source images. The identity information is propagated throughout the network by applying conditional normalization. After extensive adversarial training, CainGAN receives face images of a given individual and produces new ones while preserving the person’s identity. Experimental results show that the quality of generated images scales with the size of the input set used during inference. Furthermore, quantitative measurements indicate that CainGAN performs better than other methods when training data is limited.
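The conditional-normalization step mentioned in the abstract can be sketched in a minimal form: normalize each feature channel, then modulate it with a scale and shift predicted from the identity embedding. The projection matrices `W_gamma`/`W_beta` and the tensor shapes below are hypothetical illustrations, not CainGAN's actual parameterization.

```python
import numpy as np

def conditional_norm(features, identity_emb,
                     W_gamma, b_gamma, W_beta, b_beta, eps=1e-5):
    """Normalize each channel of `features` (C, H, W), then apply a
    per-channel scale (gamma) and shift (beta) computed linearly from
    the identity embedding. This is how conditional normalization
    propagates identity information into a feature map."""
    mean = features.mean(axis=(1, 2), keepdims=True)
    var = features.var(axis=(1, 2), keepdims=True)
    normed = (features - mean) / np.sqrt(var + eps)
    gamma = W_gamma @ identity_emb + b_gamma  # shape (C,)
    beta = W_beta @ identity_emb + b_beta     # shape (C,)
    return gamma[:, None, None] * normed + beta[:, None, None]

rng = np.random.default_rng(1)
feats = rng.normal(size=(2, 4, 4))       # toy feature map, C=2
emb = rng.normal(size=(3,))              # toy identity embedding
W_g = rng.normal(size=(2, 3)) * 0.1
b_g = np.ones(2)
W_b = rng.normal(size=(2, 3)) * 0.1
b_b = np.zeros(2)
out = conditional_norm(feats, emb, W_g, b_g, W_b, b_b)
```

Because the normalization statistics are recomputed from the features while gamma and beta come from the embedding, every layer that applies this operation re-injects the subject's identity, which is the propagation mechanism the abstract refers to.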