Data-efficient Neuroevolution with Kernel-Based Surrogate Models
Surrogate-assistance approaches have long been used in computationally
expensive domains to improve the data-efficiency of optimization algorithms.
Neuroevolution, however, has so far resisted the application of these
techniques because it requires the surrogate model to make fitness predictions
based on variable topologies, instead of a vector of parameters. Our main
insight is that we can sidestep this problem by using kernel-based surrogate
models, which require only the definition of a distance measure between
individuals. Our second insight is that the well-established Neuroevolution of
Augmenting Topologies (NEAT) algorithm provides a computationally efficient
distance measure between dissimilar networks in the form of "compatibility
distance", initially designed to maintain topological diversity. Combining
these two ideas, we introduce a surrogate-assisted neuroevolution algorithm
that pairs NEAT with a surrogate model built using a compatibility-distance
kernel. We demonstrate the data-efficiency of this new algorithm on the
low-dimensional cart-pole swing-up problem, as well as on the higher-dimensional
half-cheetah running task. In both tasks the surrogate-assisted variant
achieves the same or better results with several times fewer function
evaluations than the original NEAT.
Comment: In GECCO 201
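As a hedged illustration of the idea (not the paper's exact model), NEAT's compatibility distance between two genomes can feed an RBF kernel, and a predicted fitness can come from simple kernel regression over already-evaluated individuals. The genome representation (dicts from innovation number to connection weight), the coefficients, and the kernel-regression form below are illustrative assumptions:

```python
import math

def compatibility_distance(g1, g2, c1=1.0, c3=0.4):
    """NEAT-style compatibility distance between two genomes, each a dict
    mapping innovation number -> connection weight. Excess and disjoint
    genes are merged into one term here for brevity (weighted by c1);
    weight differences of matching genes are weighted by c3."""
    keys1, keys2 = set(g1), set(g2)
    matching = keys1 & keys2
    mismatching = len(keys1 ^ keys2)      # excess + disjoint genes
    n = max(len(g1), len(g2), 1)          # normalising genome size
    w_bar = (sum(abs(g1[k] - g2[k]) for k in matching) / len(matching)
             if matching else 0.0)
    return c1 * mismatching / n + c3 * w_bar

def surrogate_fitness(candidate, archive, sigma=1.0):
    """Kernel-regression (Nadaraya-Watson) fitness prediction from an
    archive of (genome, true_fitness) pairs, using an RBF kernel over
    the compatibility distance."""
    weights = [math.exp(-compatibility_distance(candidate, g) ** 2
                        / (2 * sigma ** 2)) for g, _ in archive]
    total = sum(weights)
    if total == 0:
        return 0.0
    return sum(w * f for w, (_, f) in zip(weights, archive)) / total

# Usage: predict the fitness of a new genome from two evaluated ones.
archive = [({1: 0.5, 2: -1.0}, 10.0), ({1: 0.4, 3: 2.0}, 4.0)]
print(surrogate_fitness({1: 0.5, 2: -0.9}, archive))
```

The prediction is pulled toward the fitness of the archived genome closest in compatibility distance, which is exactly what lets the surrogate rank unevaluated networks cheaply.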
Gene Regulatory Network Evolution Through Augmenting Topologies
Artificial gene regulatory networks (GRNs) are biologically inspired dynamical systems used to control various kinds of agents, from the cells in developmental models to embodied robot swarms. Most recent work uses a genetic algorithm (GA) or an evolution strategy to optimize the network for a specific task. However, the empirical performances of these algorithms are unsatisfactory. This paper presents an algorithm that primarily exploits a network distance metric, which allows genetic similarity to be used for speciation and variation of GRNs. This algorithm, inspired by the successful use of the neuroevolution of augmenting topologies algorithm in evolving neural networks and compositional pattern-producing networks, is based on a specific initialization method, a crossover operator based on gene alignment, and speciation based upon GRN structures. We demonstrate the effectiveness of this new algorithm by comparing our approach both to a standard GA and to evolutionary programming on four different experiments from three distinct problem domains, where the proposed algorithm excels on all experiments.
Optimizing Convolutional Neural Networks for Embedded Systems by Means of Neuroevolution
Automated design methods for convolutional neural networks (CNNs) have
recently been developed to increase design productivity. We
propose a neuroevolution method capable of evolving and optimizing CNNs with
respect to the classification error and CNN complexity (expressed as the number
of tunable CNN parameters), in which the inference phase can partly be executed
using fixed point operations to further reduce power consumption. Experimental
results are obtained with the TinyDNN framework and presented using two common
image classification benchmark problems -- MNIST and CIFAR-10.
Comment: TPNC 2019, LNCS 11934, pp. 1-13, 201
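The fixed-point inference mentioned above can be sketched as follows — a minimal illustration assuming signed 8.8 fixed-point weights and activations, not the paper's actual quantization scheme:

```python
def to_fixed(x, frac_bits=8):
    """Quantise a float to a signed fixed-point integer with
    `frac_bits` fractional bits (8.8 format by default)."""
    return int(round(x * (1 << frac_bits)))

def fixed_dot(ws, xs, frac_bits=8):
    """Dot product in pure integer arithmetic: each product carries
    2*frac_bits fractional bits, so one right shift returns the
    accumulator to the 8.8 format."""
    acc = sum(w * x for w, x in zip(ws, xs))
    return acc >> frac_bits

# Usage: a tiny neuron evaluated without any floating-point multiply.
weights = [to_fixed(w) for w in [0.5, -0.25, 1.0]]
inputs = [to_fixed(x) for x in [1.0, 2.0, 0.5]]
y = fixed_dot(weights, inputs)
print(y / (1 << 8))   # convert back to float for inspection
```

Replacing floating-point multiply-accumulates with shifts and integer ops is the usual route to lower power on embedded targets, at the cost of quantization error that the evolutionary search must tolerate.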
Discovering gated recurrent neural network architectures
Reinforcement learning agents with memory are a key component in solving POMDP tasks.
Gated recurrent networks such as those composed of Long Short-Term
Memory (LSTM) nodes have recently been used to improve the
state of the art in many supervised sequential processing tasks such as speech
recognition and machine translation. However, scaling them to deep
memory tasks in the reinforcement learning domain is challenging because of
sparse and deceptive reward functions. To address this challenge, first a new
secondary optimization objective is introduced
that maximizes the information (Info-max) stored in
the LSTM network. Results indicate that when combined with neuroevolution, Info-max can discover powerful
LSTM-based memory solutions that outperform traditional
RNNs. Next, for the supervised learning tasks, neuroevolution techniques are employed
to design new LSTM architectures. Such architectural variations include
discovering new pathways between the recurrent layers as well as designing new gated
recurrent nodes. This dissertation proposes evolution of a tree-based
encoding of the gated memory nodes, and shows that it makes
it possible to explore new variations more effectively than other
methods. The method discovers nodes with multiple recurrent paths
and multiple memory cells, which lead to significant improvement
in the standard language modeling benchmark task. The dissertation also
shows how the search process can be sped up by training an
LSTM network to estimate performance of candidate structures, and
by encouraging exploration of novel solutions. Thus, evolutionary
design of complex neural network structures promises to improve
performance of deep learning architectures beyond the human ability
to do so.
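The tree-based encoding of gated memory nodes can be pictured with a minimal sketch; the grammar, primitives, and mutation operator below are illustrative assumptions, not the dissertation's exact encoding:

```python
import random

# A gated recurrent node sketched as an expression tree: internal nodes
# are operations or activations, leaves are the node's input signals.
PRIMITIVES = ['add', 'mul']
ACTIVATIONS = ['sigmoid', 'tanh']
LEAVES = ['x_t', 'h_prev', 'c_prev']

def random_tree(depth=2, rng=random):
    """Generate a random expression tree of bounded depth."""
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(LEAVES)
    if rng.random() < 0.5:
        return (rng.choice(ACTIVATIONS), random_tree(depth - 1, rng))
    return (rng.choice(PRIMITIVES),
            random_tree(depth - 1, rng), random_tree(depth - 1, rng))

def mutate(tree, rng=random):
    """Structural mutation: replace a randomly chosen subtree with a
    freshly generated one, otherwise recurse into a child."""
    if not isinstance(tree, tuple) or rng.random() < 0.25:
        return random_tree(2, rng)
    op, *children = tree
    idx = rng.randrange(len(children))
    children[idx] = mutate(children[idx], rng)
    return (op, *children)

# Usage: an LSTM-like gating expression and one mutated variant of it.
rng = random.Random(0)
node = ('mul', ('sigmoid', 'x_t'), ('tanh', ('add', 'h_prev', 'c_prev')))
print(mutate(node, rng))
```

Because subtrees are self-contained expressions, this kind of encoding lets search swap whole gating pathways in one move — the property that makes exploring novel gated-node variants tractable.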
Coevolution of Generative Adversarial Networks
Generative adversarial networks (GANs) have become a hot topic, presenting
impressive results in the field of computer vision. However, there are still
open problems with the GAN model, such as the training stability and the
hand-design of architectures. Neuroevolution is a technique that can be used to
provide the automatic design of network architectures even in large search
spaces as in deep neural networks. Therefore, this project proposes COEGAN, a
model that combines neuroevolution and coevolution in the coordination of the
GAN training algorithm. The proposal uses the adversarial characteristic
between the generator and discriminator components to design an algorithm using
coevolution techniques. Our proposal was evaluated on the MNIST dataset. The
results suggest improved training stability and the automatic discovery of
efficient network architectures for GANs. Our model also partially solves the
mode collapse problem.
Comment: Published in EvoApplications 201