Spatial Evolutionary Generative Adversarial Networks
Generative adversarial networks (GANs) suffer from training pathologies such as
instability and mode collapse. These pathologies mainly arise from a lack of
diversity in their adversarial interactions. Evolutionary generative
adversarial networks apply the principles of evolutionary computation to
mitigate these problems. We hybridize two of these approaches that promote
training diversity. The first, E-GAN, injects mutation diversity: at each batch it trains
(replicated copies of) the generator with three independent objective functions and then
selects the best-performing resulting generator for the next batch. The
other, Lipizzaner, injects population diversity by training a two-dimensional
grid of GANs with a distributed evolutionary algorithm that includes neighbor
exchanges of additional training adversaries, performance-based selection, and
population-based hyperparameter tuning. We propose combining the mutation and
population approaches to improving diversity. We contribute a superior
evolutionary GAN training method, Mustangs, that eliminates the single loss
function used across Lipizzaner's grid. Instead, in each training round, a loss
function is selected with equal probability from among the three that E-GAN uses.
Experimental analyses on standard benchmarks, MNIST and CelebA, demonstrate
that Mustangs provides a statistically significantly faster training method,
resulting in more accurate networks.
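The per-round loss selection at the heart of Mustangs can be sketched as follows. The three generator objectives below are the standard minimax, heuristic (non-saturating), and least-squares GAN losses that E-GAN employs, written here as plain functions of the discriminator's outputs on fake samples; this is a simplified sketch, not the authors' implementation:

```python
import math
import random

# The three E-GAN generator objectives (minimax, heuristic, least-squares),
# written as plain functions of the discriminator's outputs D(G(z)) on a
# batch of fake samples. Real training would compute these on tensors.
def minimax_loss(d_fake):
    return sum(math.log(1.0 - d) for d in d_fake) / len(d_fake)

def heuristic_loss(d_fake):
    return -sum(math.log(d) for d in d_fake) / len(d_fake)

def least_squares_loss(d_fake):
    return sum((d - 1.0) ** 2 for d in d_fake) / len(d_fake)

LOSSES = [minimax_loss, heuristic_loss, least_squares_loss]

def mustangs_pick_loss(rng):
    """Each training round, pick one of the three objectives uniformly."""
    return rng.choice(LOSSES)

# Hypothetical discriminator outputs on one batch of generated samples.
rng = random.Random(0)
d_fake = [0.3, 0.6, 0.4]
loss_fn = mustangs_pick_loss(rng)
value = loss_fn(d_fake)
```

In the full method, each cell of Lipizzaner's grid would draw its objective independently every round, so the population trains against a mixture of losses rather than one shared objective.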
Semi-supervised generative adversarial networks with spatial coevolution for enhanced image generation and classification
Funding for open access charge: Universidad de Málaga / CBU
COEGAN: Evaluating the Coevolution Effect in Generative Adversarial Networks
Generative adversarial networks (GANs) achieve state-of-the-art results in
generating samples that follow the distribution of the input dataset. However,
GANs are difficult to train, and several aspects of the model must be designed
by hand in advance. Neuroevolution is a well-known technique for automating the
design of network architectures and has recently been extended to deep neural
networks. COEGAN is a model that uses neuroevolution
and coevolution in the GAN training algorithm to provide a more stable training
method and the automatic design of neural network architectures. COEGAN makes
use of the adversarial aspect of the GAN components to implement coevolutionary
strategies in the training algorithm. We evaluated our proposal on the MNIST
and Fashion-MNIST datasets. We compare our results with a baseline based
on DCGAN and also with results from a random search algorithm. We show that our
method is able to discover efficient architectures in the Fashion-MNIST and
MNIST datasets. The results also suggest that COEGAN can be used as a training
algorithm for GANs to avoid common issues, such as the mode collapse problem.
Published in GECCO 2019.
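A minimal sketch of the coevolutionary selection step described above: generators and discriminators form two populations, each individual is evaluated against the opposing population, and the fittest of each survive. The pairwise score and fitness definition here are illustrative placeholders, not COEGAN's actual FID-based fitness:

```python
# Minimal coevolution sketch in the spirit of COEGAN: two populations
# (generators and discriminators) are evaluated against each other and
# the fittest individuals of each survive. The pairwise score here is a
# placeholder for COEGAN's real fitness (e.g. FID / discriminator loss).
def coevolve_step(generators, discriminators, eval_pair, keep=2):
    # Generator fitness: mean score against every discriminator
    # (lower is better); discriminators are scored symmetrically.
    g_fit = {g: sum(eval_pair(g, d) for d in discriminators) / len(discriminators)
             for g in generators}
    d_fit = {d: sum(-eval_pair(g, d) for g in generators) / len(generators)
             for d in discriminators}
    return (sorted(generators, key=g_fit.get)[:keep],
            sorted(discriminators, key=d_fit.get)[:keep])

# Toy usage with labelled individuals and a made-up score table.
strength = {"g0": 3.0, "g1": 1.0, "g2": 2.0}
eval_pair = lambda g, d: strength[g]          # lower = fitter generator
gs, ds = coevolve_step(["g0", "g1", "g2"], ["d0", "d1"], eval_pair)
```

The adversarial pairing itself provides the selection pressure: a generator that fools every discriminator, or a discriminator that catches every generator, is what survives into the next generation.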
Neuroevolutionary Training of Deep Convolutional Generative Adversarial Networks
Recent developments in deep learning have made it possible to learn the probability distribution of data points with neural networks, and Generative Adversarial Networks (GANs) are a key driver of this progress. In GANs, two neural networks, a generator and a discriminator, compete with each other to learn the distribution of points in images. Much research has addressed the challenges of GAN training, including instability, mode collapse, and vanishing gradients. However, there is no conclusive evidence that modern techniques consistently outperform vanilla GANs; different advanced techniques perform differently on different datasets. In this thesis, we propose two neuroevolutionary training techniques for deep convolutional GANs. We evolve the deep GAN architecture in a low-data regime. Using the Fréchet Inception Distance (FID) as the fitness function, we select the best deep convolutional topology produced by the evolutionary algorithm. The parameters of the best-selected individuals are retained across generations, and we continue training the population until individuals converge. We compare our approach with vanilla GANs, Deep Convolutional GANs, and COEGAN. Our experiments show that the evolutionary training technique achieves lower FID scores than the benchmark models; a lower FID score indicates better quality and diversity in the generated images.
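The selection loop described above (score each candidate topology by FID, carry the best individual and its parameters forward, refill the population with mutants) might be sketched roughly as below; `fid_score` and `mutate` are hypothetical stand-ins for the real FID evaluation and architecture-mutation operators:

```python
import random

# Sketch of the FID-driven selection loop: rank candidate topologies by
# a fitness function (FID in the thesis; `fid_score` is a stand-in),
# carry the best individual (and, in the original method, its trained
# parameters) into the next generation, and refill with its mutants.
def evolve_architectures(population, fid_score, mutate, pop_size, rng):
    best = min(population, key=fid_score)        # lower FID = better
    next_gen = [best] + [mutate(best, rng) for _ in range(pop_size - 1)]
    return best, next_gen

# Toy usage: genomes are dicts; pretend 32 filters minimises the "FID".
population = [{"filters": 64}, {"filters": 16}, {"filters": 32}]
fid_score = lambda g: abs(g["filters"] - 32)
mutate = lambda g, rng: {"filters": g["filters"] + rng.choice([-16, 16])}
best, next_gen = evolve_architectures(population, fid_score, mutate, 4,
                                      random.Random(0))
```

Keeping the elite individual unchanged is what lets the method retain trained parameters across generations while still exploring nearby topologies.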
- …