
    Neuroevolutionary Training of Deep Convolutional Generative Adversarial Networks

    Recent developments in deep learning are noteworthy when it comes to learning the probability distribution of data points with neural networks, and a crucial driver of this progress is Generative Adversarial Networks (GANs). In a GAN, two neural networks, a Generator and a Discriminator, compete against each other to learn the probability distribution of points in visual images. A great deal of research has been conducted to overcome the challenges of GANs, which include training instability, mode collapse, and vanishing gradients. However, no significant evidence shows that modern techniques consistently outperform vanilla GANs, and different advanced techniques perform distinctly on different datasets. In this thesis, we propose two neuroevolutionary training techniques for deep convolutional GANs. We evolve the deep GAN architecture in a low-data regime. Using the Fréchet Inception Distance (FID) score as the fitness function, we select the best deep convolutional topology generated by the evolutionary algorithm. The parameters of the best-selected individuals are maintained across generations, and we continue to train the population until the individuals converge. We compare our approach with vanilla GANs, deep convolutional GANs, and COEGAN. Our experiments show that the evolutionary training technique yields a lower FID score than the benchmark models; a lower FID score indicates better quality and diversity in the generated images.
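    The abstract describes an evolutionary loop that selects GAN topologies by FID and carries the best individuals' parameters across generations. Below is a minimal sketch of such a loop, not the thesis code: the genome encoding, mutation scheme, and the numeric stand-in for fitness are all illustrative assumptions. In the actual method, evaluating a genome would mean building a DCGAN from the encoded topology, training it briefly on the low-data regime, and returning its FID score (lower is better).

    import random

    # A genome encodes a convolutional generator topology as hyperparameters.
    GENOME_KEYS = {
        "num_layers": [3, 4, 5],
        "base_filters": [32, 64, 128],
        "kernel_size": [3, 5],
    }

    def random_genome():
        return {k: random.choice(v) for k, v in GENOME_KEYS.items()}

    def mutate(genome):
        # Offspring copy a parent and perturb one architectural choice.
        child = dict(genome)
        key = random.choice(list(GENOME_KEYS))
        child[key] = random.choice(GENOME_KEYS[key])
        return child

    def fitness(genome):
        # Stand-in for "train the GAN built from this genome, return FID".
        # Purely synthetic so the sketch runs; lower is better, as with FID.
        return abs(genome["num_layers"] - 4) + abs(genome["base_filters"] - 64) / 32

    def evolve(pop_size=8, generations=10):
        population = [random_genome() for _ in range(pop_size)]
        for gen in range(generations):
            scored = sorted(population, key=fitness)   # rank by FID-like score
            elite = scored[: pop_size // 2]            # keep the best topologies;
            # their trained parameters would persist across generations
            population = elite + [mutate(random.choice(elite))
                                  for _ in range(pop_size - len(elite))]
            print(f"gen {gen}: best fitness {fitness(scored[0]):.3f}")
        return min(population, key=fitness)

    if __name__ == "__main__":
        print("best genome:", evolve())

    The design choice worth noting is elitism with parameter inheritance: because surviving individuals keep their weights, the population accumulates training rather than restarting each architecture from scratch, which matters most in the low-data regime the thesis targets.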

    Evolutionary Game Theory Squared: Evolving Agents in Endogenously Evolving Zero-Sum Games

    The predominant paradigm in evolutionary game theory and, more generally, online learning in games is based on a clear distinction between a population of dynamic agents that interact in a fixed, static game. In this paper, we move away from this artificial divide between dynamic agents and static games to introduce and analyze a large class of competitive settings where both the agents and the games they play evolve strategically over time. We focus on arguably the most archetypal game-theoretic setting -- zero-sum games (as well as network generalizations) -- and the most studied evolutionary learning dynamic -- replicator dynamics, the continuous-time analogue of multiplicative weights. Populations of agents compete against each other in a zero-sum competition that itself evolves adversarially to the current population mixture. Remarkably, despite the chaotic coevolution of agents and games, we prove that the system exhibits a number of regularities. First, the system has conservation laws of an information-theoretic flavor that couple the behavior of all agents and games. Second, the system is Poincaré recurrent, with effectively all possible initializations of agents and games lying on recurrent orbits that come arbitrarily close to their initial conditions infinitely often. Third, the time-average agent behavior and utility converge to the Nash equilibrium values of the time-average game. Finally, we provide a polynomial-time algorithm to efficiently predict this time-average behavior for any such coevolving network game.
    Comment: To appear in AAAI 2021
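    To ground the dynamics named in the abstract, here is a minimal numerical sketch of replicator dynamics in a fixed zero-sum game (matching pennies), i.e., the classic baseline the paper generalizes. The evolving-game component, conservation laws, and prediction algorithm are the paper's contributions and are not reproduced here; this forward-Euler discretization only illustrates the two behaviors the abstract references: orbits that cycle rather than converge pointwise, and time-average play approaching the Nash equilibrium.

    import numpy as np

    A = np.array([[1.0, -1.0],
                  [-1.0, 1.0]])   # matching pennies, row player's payoff matrix

    def replicator_step(x, y, dt=1e-3):
        # Continuous-time replicator dynamics, Euler-discretized:
        #   x_i' = x_i * ((A y)_i - x^T A y), and symmetrically for y.
        ux = A @ y                   # row player's payoff to each pure strategy
        uy = -A.T @ x                # column player's payoffs (zero-sum)
        x = x + dt * x * (ux - x @ ux)
        y = y + dt * y * (uy - y @ uy)
        return x / x.sum(), y / y.sum()   # renormalize against numerical drift

    x = np.array([0.9, 0.1])   # start far from equilibrium
    y = np.array([0.2, 0.8])
    avg_x = np.zeros(2)
    steps = 200_000
    for _ in range(steps):
        x, y = replicator_step(x, y)
        avg_x += x

    print("final x:", x)                      # keeps cycling around the orbit
    print("time-average x:", avg_x / steps)   # approaches the Nash mix (0.5, 0.5)

    The cycling of the final iterate alongside the convergence of the running average is exactly the recurrence-versus-time-average distinction the abstract draws; the paper shows analogues of both phenomena survive even when the payoff matrix itself evolves adversarially.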