Diffusion-based neuromodulation can eliminate catastrophic forgetting in simple neural networks
A long-term goal of AI is to produce agents that can learn a diversity of
skills throughout their lifetimes and continuously improve those skills via
experience. A longstanding obstacle towards that goal is catastrophic
forgetting, which is when learning new information erases previously learned
information. Catastrophic forgetting occurs in artificial neural networks
(ANNs), which have fueled most recent advances in AI. A recent paper proposed
that catastrophic forgetting in ANNs can be reduced by promoting modularity,
which can limit forgetting by isolating task information to specific clusters
of nodes and connections (functional modules). While the prior work did show
that modular ANNs suffered less from catastrophic forgetting, it was not able
to produce ANNs that possessed task-specific functional modules, thereby
leaving the main theory regarding modularity and forgetting untested. We
introduce diffusion-based neuromodulation, which simulates the release of
diffusing, neuromodulatory chemicals within an ANN that can modulate (i.e. up-
or down-regulate) learning in a spatial region. On the simple diagnostic
problem from the prior work, diffusion-based neuromodulation 1) induces
task-specific learning in groups of nodes and connections (task-specific
localized learning), which 2) produces functional modules for each subtask, and
3) yields higher performance by eliminating catastrophic forgetting. Overall,
our results suggest that diffusion-based neuromodulation promotes task-specific
localized learning and functional modularity, which can help solve the
challenging but important problem of catastrophic forgetting.
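
To make the mechanism concrete, here is a minimal sketch, assuming a simple linear layer with a delta-rule update rather than the paper's actual model or code: each connection is given a 2D position, a neuromodulatory point source emits a chemical with Gaussian falloff, and the local concentration scales that connection's learning rate, so releasing the chemical at different locations confines learning for different tasks to different spatial regions. The spatial layout, the falloff function, and the function names are illustrative assumptions.

```python
# Minimal, hypothetical sketch (not the paper's actual model): a neuromodulatory
# point source whose concentration falls off with distance scales per-connection
# learning rates, so updates are confined to the spatial region around the source.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 4, 2
conn_pos = rng.uniform(0.0, 1.0, size=(n_out, n_in, 2))   # assumed 2D position per connection
W = rng.normal(0.0, 0.1, size=(n_out, n_in))

def concentration(source_xy, positions, sigma=0.2):
    """Gaussian falloff of the diffusing neuromodulatory chemical (illustrative choice)."""
    d2 = np.sum((positions - source_xy) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def modulated_update(W, x, target, source_xy, base_lr=0.5):
    """Delta-rule step whose per-connection learning rate is scaled by local concentration."""
    err = target - W @ x                                   # (n_out,)
    lr = base_lr * concentration(source_xy, conn_pos)      # (n_out, n_in)
    return W + lr * np.outer(err, x)

# Releasing the chemical at different locations for different tasks localizes learning:
x = np.array([1.0, 0.0, 1.0, 0.0])
W = modulated_update(W, x, target=np.array([1.0, 0.0]), source_xy=np.array([0.2, 0.5]))  # "task A" region
W = modulated_update(W, x, target=np.array([0.0, 1.0]), source_xy=np.array([0.8, 0.5]))  # "task B" region
```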
Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images
Deep neural networks (DNNs) have recently been achieving state-of-the-art
performance on a variety of pattern-recognition tasks, most notably visual
classification problems. Given that DNNs are now able to classify objects in
images with near-human-level performance, questions naturally arise as to what
differences remain between computer and human vision. A recent study revealed
that changing an image (e.g. of a lion) in a way imperceptible to humans can
cause a DNN to label the image as something else entirely (e.g. mislabeling a
lion as a library). Here we show a related result: it is easy to produce images
that are completely unrecognizable to humans, but that state-of-the-art DNNs
believe to be recognizable objects with 99.99% confidence (e.g. labeling with
certainty that white noise static is a lion). Specifically, we take
convolutional neural networks trained to perform well on either the ImageNet or
MNIST datasets and then find images with evolutionary algorithms or gradient
ascent that DNNs label with high confidence as belonging to each dataset class.
It is possible to produce images totally unrecognizable to human eyes that DNNs
believe with near certainty are familiar objects, which we call "fooling
images" (more generally, fooling examples). Our results shed light on
interesting differences between human vision and current DNNs, and raise
questions about the generality of DNN computer vision.
Comment: To appear at CVPR 2015
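
To illustrate one of the two search procedures mentioned, here is a minimal sketch of the gradient-ascent variant (the evolutionary-algorithm variant is not shown), under assumed choices rather than the authors' exact setup: a stand-in pretrained network (resnet18), an assumed ImageNet class index, and plain Adam ascent on the raw pixels of a noise image.

```python
# Minimal, hypothetical sketch of the gradient-ascent approach (not the authors'
# exact setup): starting from random noise, maximize a target class's confidence
# by gradient ascent on the input pixels of a pretrained CNN.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)          # only the image is optimized, not the network

target_class = 291                   # assumed ImageNet index for "lion"
img = torch.randn(1, 3, 224, 224, requires_grad=True)   # noise, unrecognizable to humans
optimizer = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    confidence = F.softmax(model(img), dim=1)[0, target_class]
    (-confidence).backward()         # ascend the target class's confidence
    optimizer.step()

with torch.no_grad():
    final = F.softmax(model(img), dim=1)[0, target_class]
print(f"DNN confidence that the noise image is the target class: {final.item():.4f}")
```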
The Emergence of Canalization and Evolvability in an Open-Ended, Interactive Evolutionary System
Natural evolution has produced a tremendous diversity of functional
organisms. Many believe an essential component of this process was the
evolution of evolvability, whereby evolution speeds up its ability to innovate
by generating a more adaptive pool of offspring. One hypothesized mechanism for
evolvability is developmental canalization, wherein certain dimensions of
variation become more likely to be traversed and others are prevented from
being explored (e.g. offspring tend to have similarly sized legs, and mutations
affect the length of both legs, not each leg individually). While ubiquitous in
nature, canalization almost never evolves in computational simulations of
evolution. Not only does that deprive us of in silico models in which to study
the evolution of evolvability, but it also raises the question of which
conditions give rise to this form of evolvability. Answering this question
would shed light on why such evolvability emerged naturally and could
accelerate engineering efforts to harness evolution to solve important
engineering challenges. In this paper we reveal a unique system in which
canalization did emerge in computational evolution. We document that genomes
entrench certain dimensions of variation that were frequently explored during
their evolutionary history. The genetic representation of these organisms also
evolved to be highly modular and hierarchical, and we show that these
organizational properties correlate with increased fitness. Interestingly, the
type of computational evolutionary experiment that produced this evolvability
was very different from traditional digital evolution in that there was no
objective, suggesting that open-ended, divergent evolutionary processes may be
necessary for the evolution of evolvability.
Comment: SI can be found at: http://www.evolvingai.org/files/SI_0.zi