Neuroevolution in Deep Neural Networks: Current Trends and Future Challenges
A variety of methods have been applied to the architectural configuration and
learning or training of artificial deep neural networks (DNNs). These methods
play a crucial role in the success or failure of a DNN for most problems and
applications. Evolutionary Algorithms (EAs) are gaining momentum as a
computationally feasible method for the automated optimisation and training of
DNNs. Neuroevolution is a term which describes these processes of automated
configuration and training of DNNs using EAs. While many works exist in the
literature, no comprehensive surveys currently exist focusing exclusively on
the strengths and limitations of using neuroevolution approaches in DNNs.
The prolonged absence of such surveys can lead to a disjointed and fragmented
field, preventing DNN researchers from adopting neuroevolutionary methods in
their own research and resulting in lost opportunities for improved performance
and wider application to real-world deep learning problems. This paper
presents a comprehensive survey, discussion and evaluation of the
state-of-the-art works on using EAs for architectural configuration and
training of DNNs. Based on this survey, the paper highlights the most pertinent
current issues and challenges in neuroevolution and identifies multiple
promising future research directions.
Comment: 20 pages (double column), 2 figures, 3 tables, 157 references
Evolutionary Design of Convolutional Neural Networks for Human Activity Recognition in Sensor-Rich Environments
Human activity recognition is a challenging problem for context-aware systems and applications. It is gaining interest due to the ubiquity of different sensor sources, wearable smart objects, ambient sensors, etc. This task is usually approached as a supervised machine learning problem, where a label is to be predicted given some input data, such as the signals retrieved from different sensors. For tackling the human activity recognition problem in sensor network environments, in this paper we propose the use of deep learning (convolutional neural networks) to perform activity recognition using the publicly available OPPORTUNITY dataset. Instead of manually choosing a suitable topology, we let an evolutionary algorithm design the optimal topology in order to maximize the classification F1 score. After that, we also explore the performance of committees of the models resulting from the evolutionary process. Analysis of the results indicates that the proposed model was able to perform activity recognition within a heterogeneous sensor network environment, achieving very high accuracies when tested with new sensor data. Based on all conducted experiments, the proposed neuroevolutionary system has proved able to systematically find a classification model capable of outperforming previous results reported in the state of the art, showing that this approach is useful and improves upon previously manually designed architectures.
This research is partially supported by the Spanish Ministry of Education, Culture and Sports under FPU fellowship with identifier FPU13/03917.
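The evolutionary topology design described above can be illustrated with a rough, hedged sketch. The genome encoding (a list of convolutional-layer specifications), the mutation operator, and the stand-in fitness below are all illustrative assumptions: the actual system trains a CNN on the OPPORTUNITY data and uses the validation F1 score as fitness, which is far too expensive to reproduce here.

```python
import random

# Illustrative genome: a list of (num_filters, kernel_size) tuples,
# one tuple per convolutional layer. A real fitness function would
# train the corresponding CNN and return its validation F1 score;
# this stand-in simply rewards moderate depth so the loop is runnable.

def random_genome(rng, max_layers=5):
    depth = rng.randint(1, max_layers)
    return [(rng.choice([16, 32, 64]), rng.choice([3, 5])) for _ in range(depth)]

def fitness(genome):
    # Placeholder for "train CNN, return validation F1".
    return 1.0 - abs(len(genome) - 3) * 0.2

def mutate(genome, rng):
    g = list(genome)
    if rng.random() < 0.5 and len(g) < 5:
        # Insert a random layer at a random position.
        g.insert(rng.randrange(len(g) + 1), (rng.choice([16, 32, 64]), rng.choice([3, 5])))
    elif len(g) > 1:
        # Remove a random layer.
        g.pop(rng.randrange(len(g)))
    return g

def evolve(pop_size=10, generations=20, seed=0):
    rng = random.Random(seed)
    pop = [random_genome(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]   # elitist truncation selection
        pop = survivors + [mutate(rng.choice(survivors), rng)
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
```

In the real system each fitness evaluation is a full training run, which is why such searches are typically distributed across many workers.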
Safe Mutations for Deep and Recurrent Neural Networks through Output Gradients
While neuroevolution (evolving neural networks) has a successful track record
across a variety of domains from reinforcement learning to artificial life, it
is rarely applied to large, deep neural networks. A central reason is that
while random mutation generally works in low dimensions, a random perturbation
of thousands or millions of weights is likely to break existing functionality,
providing no learning signal even if some individual weight changes were
beneficial. This paper proposes a solution by introducing a family of safe
mutation (SM) operators that aim within the mutation operator itself to find a
degree of change that does not alter network behavior too much, but still
facilitates exploration. Importantly, these SM operators do not require any
additional interactions with the environment. The most effective SM variant
capitalizes on the intriguing opportunity to scale the degree of mutation of
each individual weight according to the sensitivity of the network's outputs to
that weight, which requires computing the gradient of outputs with respect to
the weights (instead of the gradient of error, as in conventional deep
learning). This safe mutation through gradients (SM-G) operator dramatically
increases the ability of a simple genetic algorithm-based neuroevolution method
to find solutions in high-dimensional domains that require deep and/or
recurrent neural networks (which tend to be particularly brittle to mutation),
including domains that require processing raw pixels. By improving our ability
to evolve deep neural networks, this new safer approach to mutation expands the
scope of domains amenable to neuroevolution.
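As a hedged illustration of the SM-G idea (not the authors' implementation), the sketch below scales per-weight Gaussian noise inversely to each weight's output sensitivity, estimated here by finite differences on a tiny hand-made linear unit; the network, batch, and scaling constants are all toy assumptions.

```python
import random

# Sketch of safe mutation through gradients (SM-G): weights to which
# the output is highly sensitive receive small perturbations, while
# insensitive weights may change more. Sensitivity is estimated by
# finite differences here; the paper computes exact output gradients.

def forward(weights, x):
    # Single linear unit: output = sum_j w_j * x_j
    return sum(w * xi for w, xi in zip(weights, x))

def output_sensitivity(weights, batch, eps=1e-4):
    # Mean absolute d(output)/d(w_j) over the batch, per weight.
    sens = []
    for j in range(len(weights)):
        total = 0.0
        for x in batch:
            bumped = list(weights)
            bumped[j] += eps
            total += abs(forward(bumped, x) - forward(weights, x)) / eps
        sens.append(total / len(batch))
    return sens

def safe_mutate(weights, batch, sigma=0.1, rng=None, floor=1e-3):
    rng = rng or random.Random(0)
    sens = output_sensitivity(weights, batch)
    # Divide each noise sample by that weight's sensitivity (floored
    # to avoid blow-up for weights the output barely depends on).
    return [w + sigma * rng.gauss(0.0, 1.0) / max(s, floor)
            for w, s in zip(weights, sens)]

weights = [0.5, -0.3, 1.2]
batch = [[1.0, 0.1, 0.0], [0.9, 0.2, 0.1]]
mutated = safe_mutate(weights, batch)
```

For a linear unit the sensitivity of the output to weight j is simply the mean |x_j| over the batch, which makes the finite-difference estimate exact up to rounding.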
Evolving developmental, recurrent and convolutional neural networks for deliberate motion planning in sparse reward tasks
Motion planning algorithms have seen a diverse set of approaches in a variety of disciplines. In the domain of artificial evolutionary systems, motion planning has been included in models to achieve sophisticated deliberate behaviours. These algorithms rely on fixed rules or allow little evolutionary influence, which compels behaviours to conform to those specific policies rather than allowing the model to establish its own specialised behaviour. In order to further these models, the constraints imposed by planning algorithms must be removed to grant greater evolutionary control over behaviours. That is the focus of this thesis.
An examination of prevailing neuroevolution methods led to the use of two distinct approaches, NEAT and HyperNEAT. Both were used to gain an understanding of the components necessary to create neuroevolution planning. The findings culminated in a novel convolutional neural network architecture with a recurrent convolution process. The architecture’s goal was to iteratively disperse local activations to greater regions of the feature space. Experimentation showed significantly improved robustness over contemporary neuroevolution techniques as well as an efficiency increase over a static rule set. Greater evolutionary responsibility is given to the model with multiple network combinations, all of which continually demonstrated the necessary behaviours. In comparison, these behaviours were shown to be difficult to achieve in a state-of-the-art deep convolutional network.
Finally, the unique use of recurrent convolution is relocated to a larger convolutional architecture on an established benchmarking platform. Performance improvements are seen on a number of domains, which illustrates that this recurrent mechanism can be exploited in alternative areas outside of planning. By presenting a viable neuroevolution method for motion planning, a potential emerges for further systems to adopt and examine the capability of this work in prospective domains, as well as further avenues of experimentation in convolutional architectures.
Evolutionary design of deep neural networks
Mención Internacional en el título de doctor (International Doctoral Mention).
For three decades, neuroevolution has applied evolutionary computation to the optimization of
the topology of artificial neural networks, with most works focusing on very simple architectures.
However, times have changed, and nowadays convolutional neural networks are the industry and
academia standard for solving a variety of problems, many of which remained unsolved before the
advent of this kind of network.
Convolutional neural networks involve complex topologies, and the manual design of these
topologies for solving a problem at hand is expensive and inefficient. In this thesis, our aim is to
use neuroevolution in order to evolve the architecture of convolutional neural networks.
To do so, we have decided to try two different techniques: genetic algorithms and grammatical
evolution. We have implemented a niching scheme for preserving the genetic diversity, in order
to ease the construction of ensembles of neural networks. These techniques have been validated
against the MNIST database for handwritten digit recognition, achieving a test error rate of 0.28%,
and the OPPORTUNITY data set for human activity recognition, attaining an F1 score of 0.9275.
Both results have proven very competitive when compared with the state of the art. Also, in all
cases, ensembles have proven to perform better than individual models.
Later, the topologies learned for MNIST were tested on EMNIST, a database recently introduced
in 2017, which includes more samples and a set of letters for character recognition. Results have
shown that the topologies optimized for MNIST perform well on EMNIST, proving that architectures
can be reused across domains with similar characteristics.
In summary, neuroevolution is an effective approach for automatically designing topologies for
convolutional neural networks. However, it remains a largely unexplored field due to hardware
limitations. Current advances should provide the fuel that empowers the emergence of this field,
and further research should start today.
This Ph.D. dissertation has been partially supported by the Spanish Ministry of Education, Culture and Sports under FPU fellowship with identifier FPU13/03917.
This research stay has been partially co-funded by the Spanish Ministry of Education, Culture and Sports under FPU short stay grant with identifier EST15/00260.
Programa Oficial de Doctorado en Ciencia y Tecnología Informática. President: María Araceli Sanchís de Miguel.- Secretary: Francisco Javier Segovia Pérez.- Member: Simon Luca
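The niching scheme in this thesis is described only at a high level; as a hedged illustration, classic fitness sharing (a standard niching technique, not necessarily the thesis's exact mechanism) discounts the fitness of individuals crowded by genotypic neighbours, so distinct niches survive and can later be combined into an ensemble:

```python
# Fitness sharing: each individual's raw fitness is divided by a niche
# count that grows with the number of nearby individuals, where
# "nearby" means within a sharing radius sigma_share in genotype space.

def distance(a, b):
    # Illustrative genotype distance (Manhattan on real-valued genomes).
    return sum(abs(x - y) for x, y in zip(a, b))

def shared_fitness(population, raw_fitness, sigma_share=1.0, alpha=1.0):
    shared = []
    for i, ind in enumerate(population):
        niche_count = 0.0
        for other in population:
            d = distance(ind, other)
            if d < sigma_share:
                # Triangular sharing kernel; an individual always
                # contributes 1.0 to its own niche count.
                niche_count += 1.0 - (d / sigma_share) ** alpha
        shared.append(raw_fitness[i] / niche_count)
    return shared

pop = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]]
raw = [1.0, 1.0, 1.0]
print(shared_fitness(pop, raw))
```

The isolated third individual keeps its full fitness, while the two crowded individuals split credit, which is what keeps multiple diverse models alive for ensembling.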
Neuroevolution trajectory networks : illuminating the evolution of artificial neural networks
Neuroevolution is the discipline whereby artificial neural networks (ANNs) are automatically generated using evolutionary computation (EC). This field began with the evolution of dense (shallow) neural networks for reinforcement learning tasks; neurocontrollers capable of evolving specific behaviours as required.
Since then, neuroevolution has been used to discover architectures and hyperparameters of Deep Neural Networks, in ways never before conceived by human experts, with many achieving state-of-the-art results. Similar to other types of EAs, there is a wide variety of neuroevolution algorithms constantly being introduced. However, there is a lack of effective tools to examine these systems and assess whether they share underlying principles.
This thesis proposes Neuroevolution Trajectory Networks (NTNs), an advanced visualisation tool that leverages complex networks to explore the intrinsic mechanisms inherent in the evolution of neural networks. In this research the tool was developed as a specialised version of Search Trajectory Networks, and it was particularly instantiated to illuminate the behaviour of algorithms navigating neuroevolution search spaces.
This technique has been progressively applied from systems of shallow network evolution to deep neural networks. The examination has focused on explicit characteristics of neuroevolution systems. Specifically, the findings highlighted the importance of understanding the role of recombination in neuroevolution, revealing critical inefficiencies that hinder overall algorithm performance. A relation between neurocontrollers' diversity and exploration exists, as topological structures can influence the behavioural characterisations and the diversity generation of different search strategies. Furthermore, our analytical tool has offered insights into the favoured dynamics of the transfer learning paradigm in the deep neuroevolution of Convolutional Neural Networks, shedding light on promising avenues for further research and development.
All of the above has offered substantial evidence that this advanced tool can be regarded as a specialised observational technique for better understanding the inner mechanics of neuroevolution and its specific components, beyond the assessment of accuracy and performance alone. This is done so that collective efforts can be concentrated on aspects that can further enhance the evolution of neural networks.
Illuminating their search spaces can be seen as a first step to analysing neural network compositions.
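As a loose sketch of the data structure behind trajectory networks (the actual NTN construction, node coarsening, and network metrics are considerably more involved), a minimal version records weighted directed edges between consecutively visited solutions across runs; the solution identifiers below are made up for illustration:

```python
from collections import defaultdict

# Minimal trajectory-network builder: nodes are (coarse-grained)
# solutions visited during search runs, and each directed edge's
# weight counts how often that transition occurred across runs.

def build_trajectory_network(trajectories):
    edges = defaultdict(int)
    for run in trajectories:
        for src, dst in zip(run, run[1:]):
            edges[(src, dst)] += 1   # edge weight = transition count
    return dict(edges)

# Two illustrative runs over hypothetical coarse-grained solution ids.
runs = [["A", "B", "C", "C2"], ["A", "B", "D"]]
network = build_trajectory_network(runs)
```

Shared, heavily weighted edges (here A→B) are exactly the kind of structure such visualisations make apparent, e.g. attractors that many runs pass through before diverging.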
Born to learn: The inspiration, progress, and future of evolved plastic artificial neural networks
Biological plastic neural networks are systems of extraordinary computational
capabilities shaped by evolution, development, and lifetime learning. The
interplay of these elements leads to the emergence of adaptive behavior and
intelligence. Inspired by such intricate natural phenomena, Evolved Plastic
Artificial Neural Networks (EPANNs) use simulated evolution in-silico to breed
plastic neural networks with a large variety of dynamics, architectures, and
plasticity rules: these artificial systems are composed of inputs, outputs, and
plastic components that change in response to experiences in an environment.
These systems may autonomously discover novel adaptive algorithms, and lead to
hypotheses on the emergence of biological adaptation. EPANNs have seen
considerable progress over the last two decades. Current scientific and
technological advances in artificial neural networks are now setting the
conditions for radically new approaches and results. In particular, the
limitations of hand-designed networks could be overcome by more flexible and
innovative solutions. This paper brings together a variety of inspiring ideas
that define the field of EPANNs. The main methods and results are reviewed.
Finally, new opportunities and developments are presented.
Algebraic Neural Architecture Representation, Evolutionary Neural Architecture Search, and Novelty Search in Deep Reinforcement Learning
Evolutionary algorithms have recently re-emerged as powerful tools for machine learning and artificial intelligence, especially when combined with advances in deep learning developed over the last decade. In contrast to the use of fixed architectures and rigid learning algorithms, we leveraged the open-endedness of evolutionary algorithms to make both theoretical and methodological contributions to deep reinforcement learning. This thesis explores and develops two major areas at the intersection of evolutionary algorithms and deep reinforcement learning: generative network architectures and behaviour-based optimization. Over three distinct contributions, both theoretical and experimental methods were applied to deliver a novel mathematical framework and experimental method for generative, modular neural network architecture search for reinforcement learning, and a generalized formulation of a behaviour-based optimization framework for reinforcement learning called novelty search. Experimental results indicate that both alternative, behaviour-based optimization and neural architecture search can each be used to improve learning in the popular Atari 2600 benchmark compared to DQN, a popular gradient-based method. These results are in line with related work demonstrating that strictly gradient-free methods are competitive with gradient-based reinforcement learning. These contributions, together with other successful combinations of evolutionary algorithms and deep learning, demonstrate that alternative architectures and learning algorithms to those conventionally used in deep learning should be seriously investigated in an effort to drive progress in artificial intelligence.
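The novelty search formulation mentioned above can be sketched in its standard form (a generic version, not this thesis's specific generalization): fitness is replaced by a novelty score, the mean distance from an individual's behaviour descriptor to its k nearest neighbours in the current population plus an archive. The 2-D descriptors below are made-up stand-ins for, e.g., an agent's final state.

```python
# Generic novelty score: mean distance to the k nearest behaviour
# descriptors among the rest of the population and the archive.

def novelty(behaviour, population, archive, k=3):
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    # Compare against everyone except the individual itself.
    pool = [b for b in population + archive if b is not behaviour]
    distances = sorted(dist(behaviour, b) for b in pool)
    nearest = distances[:k]
    return sum(nearest) / len(nearest)

# Illustrative behaviour descriptors: three clustered, one outlier.
population = [(0.0, 0.0), (0.1, 0.1), (0.2, 0.0), (3.0, 3.0)]
archive = [(0.05, 0.0)]
scores = [novelty(b, population, archive) for b in population]
```

The outlying behaviour receives the highest novelty, so selection pressure pushes the search toward unexplored regions of behaviour space rather than toward a fixed objective.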