Born to learn: The inspiration, progress, and future of evolved plastic artificial neural networks
Biological plastic neural networks are systems of extraordinary computational
capabilities shaped by evolution, development, and lifetime learning. The
interplay of these elements leads to the emergence of adaptive behavior and
intelligence. Inspired by such intricate natural phenomena, Evolved Plastic
Artificial Neural Networks (EPANNs) use simulated evolution in silico to breed
plastic neural networks with a large variety of dynamics, architectures, and
plasticity rules: these artificial systems are composed of inputs, outputs, and
plastic components that change in response to experiences in an environment.
These systems may autonomously discover novel adaptive algorithms, and lead to
hypotheses on the emergence of biological adaptation. EPANNs have seen
considerable progress over the last two decades. Current scientific and
technological advances in artificial neural networks are now setting the
conditions for radically new approaches and results. In particular, the
limitations of hand-designed networks could be overcome by more flexible and
innovative solutions. This paper brings together a variety of inspiring ideas
that define the field of EPANNs. The main methods and results are reviewed.
Finally, new opportunities and developments are presented.
Safe Mutations for Deep and Recurrent Neural Networks through Output Gradients
While neuroevolution (evolving neural networks) has a successful track record
across a variety of domains from reinforcement learning to artificial life, it
is rarely applied to large, deep neural networks. A central reason is that
while random mutation generally works in low dimensions, a random perturbation
of thousands or millions of weights is likely to break existing functionality,
providing no learning signal even if some individual weight changes were
beneficial. This paper proposes a solution by introducing a family of safe
mutation (SM) operators that aim within the mutation operator itself to find a
degree of change that does not alter network behavior too much, but still
facilitates exploration. Importantly, these SM operators do not require any
additional interactions with the environment. The most effective SM variant
capitalizes on the intriguing opportunity to scale the degree of mutation of
each individual weight according to the sensitivity of the network's outputs to
that weight, which requires computing the gradient of outputs with respect to
the weights (instead of the gradient of error, as in conventional deep
learning). This safe mutation through gradients (SM-G) operator dramatically
increases the ability of a simple genetic algorithm-based neuroevolution method
to find solutions in high-dimensional domains that require deep and/or
recurrent neural networks (which tend to be particularly brittle to mutation),
including domains that require processing raw pixels. By improving our ability
to evolve deep neural networks, this new safer approach to mutation expands the
scope of domains amenable to neuroevolution.
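The per-weight scaling described above can be sketched in a minimal form: on a single linear layer y = W·x, the gradient of output y_i with respect to weight W[i, j] is simply x[j], so the sensitivity of the outputs to each weight can be computed in closed form and used to damp the mutation of sensitive weights. This is an illustrative sketch of the idea, not the paper's implementation; the names `safe_mutate`, `sigma`, and `eps` are assumptions.

```python
import numpy as np

def safe_mutate(W, X, sigma=0.1, eps=1e-8, rng=None):
    """Perturb W, scaling each weight's mutation inversely to the
    sensitivity of the outputs to that weight (estimated over batch X).
    For y = W @ x, d(y_i)/d(W[i, j]) = x[j], so sensitivity is |x_j|."""
    rng = np.random.default_rng(0) if rng is None else rng
    # Average |x_j| over the batch; same sensitivity for every output row i.
    sens = np.tile(np.abs(X).mean(axis=0), (W.shape[0], 1))
    noise = rng.standard_normal(W.shape)
    # Weights the outputs are very sensitive to receive smaller perturbations.
    return W + sigma * noise / (sens + eps)

W = np.ones((2, 3))
X = np.array([[1.0, 10.0, 0.1],
              [1.0, 10.0, 0.1]])
W_new = safe_mutate(W, X)
delta = np.abs(W_new - W)
# The column driven by the large input (x = 10, high sensitivity) moves least;
# the column driven by the small input (x = 0.1) moves most.
print(delta.mean(axis=0))
```

In the paper's full setting the sensitivity is obtained by backpropagating the outputs (not the error) through a deep or recurrent network; the closed-form linear case above only illustrates the scaling rule itself.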
Biological learning and artificial intelligence
It was once taken for granted that learning in animals and man could be explained with a simple set of general learning rules, but over the last hundred years, a substantial amount of evidence has been accumulated that points in a quite different direction. In animal learning theory, the laws of learning are no longer considered general. Instead, it has been necessary to explain behaviour in terms of a large set of interacting learning mechanisms and innate behaviours. Artificial intelligence is now on the edge of making the transition from general theories to a view of intelligence that is based on an amalgam of interacting systems. In the light of the evidence from animal learning theory, such a transition is to be highly desired.
Discovering gated recurrent neural network architectures
Networks with memory are a key component for reinforcement learning agents solving POMDP tasks.
Gated recurrent networks such as those composed of Long Short-Term
Memory (LSTM) nodes have recently been used to improve
the state of the art in many supervised sequential processing tasks such as speech
recognition and machine translation. However, scaling them to deep
memory tasks in the reinforcement learning domain is challenging because of sparse and deceptive
reward functions. To address this challenge, a new secondary optimization objective is first introduced
that maximizes the information (Info-max) stored in
the LSTM network. Results indicate that when combined with neuroevolution, Info-max can discover powerful
LSTM-based memory solutions that outperform traditional
RNNs. Next, for supervised learning tasks, neuroevolution techniques are employed
to design new LSTM architectures. Such architectural variations include
discovering new pathways between the recurrent layers as well as designing new gated
recurrent nodes. This dissertation proposes evolution of a tree-based
encoding of the gated memory nodes, and shows that it makes
it possible to explore new variations more effectively than other
methods. The method discovers nodes with multiple recurrent paths
and multiple memory cells, which lead to significant improvement
in the standard language modeling benchmark task. The dissertation also
shows how the search process can be sped up by training an
LSTM network to estimate performance of candidate structures, and
by encouraging exploration of novel solutions. Thus, evolutionary
design of complex neural network structures promises to improve
performance of deep learning architectures beyond human ability
to do so.
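A tree-based encoding of a gated memory node, as the dissertation describes, can be illustrated with a small sketch: the node's computation is an expression tree over gating primitives, and subtree mutation swaps a random subtree for a freshly grown one, which is how such an encoding explores new node variants. Everything here (the primitive set, the `grow`/`mutate` functions, the nested-list representation) is an illustrative assumption, not the dissertation's actual encoding.

```python
import math
import random

# Primitive operations available to a gated node: (arity, function).
OPS = {
    "add": (2, lambda a, b: a + b),
    "mul": (2, lambda a, b: a * b),          # gating via multiplication
    "tanh": (1, math.tanh),
    "sig": (1, lambda a: 1.0 / (1.0 + math.exp(-a))),
}
LEAVES = ["x", "h", "c"]  # input, previous hidden state, memory cell

def grow(depth, rng):
    """Randomly grow an expression tree, encoded as nested lists [op, child...]."""
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(LEAVES)
    op = rng.choice(sorted(OPS))
    arity = OPS[op][0]
    return [op] + [grow(depth - 1, rng) for _ in range(arity)]

def evaluate(tree, env):
    """Evaluate a tree given values for the leaf variables."""
    if isinstance(tree, str):
        return env[tree]
    arity, fn = OPS[tree[0]]
    return fn(*(evaluate(c, env) for c in tree[1:]))

def mutate(tree, rng, depth=2):
    """Replace one randomly chosen subtree with a freshly grown one."""
    if isinstance(tree, str) or rng.random() < 0.3:
        return grow(depth, rng)
    new = list(tree)
    i = rng.randrange(1, len(tree))
    new[i] = mutate(tree[i], rng, depth)
    return new

rng = random.Random(1)
parent = grow(3, rng)
child = mutate(parent, rng)
env = {"x": 0.5, "h": -0.2, "c": 0.1}
print(evaluate(parent, env), evaluate(child, env))
```

In an actual neuroevolution run, each tree would be instantiated with trainable weights on its edges and selected by fitness on the target task; the sketch only shows the genotype and its mutation operator.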