3,443 research outputs found
An Artificial Synaptic Plasticity Mechanism for Classical Conditioning with Neural Networks
We present an artificial synaptic plasticity (ASP) mechanism that allows artificial systems to form associations between environmental stimuli and learn new skills at runtime. ASP builds on a classical neural network to simulate associative learning, which is induced through a conditioning-like procedure. Experiments with a simulated mobile robot demonstrate that ASP successfully generates conditioned responses: during environmental exploration, the robot learned to use sensors added after training, improving its object-avoidance capabilities.
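The conditioning-like procedure described above can be illustrated with a minimal sketch. This is not the paper's ASP mechanism; it uses a Rescorla-Wagner-style update, the textbook model of classical conditioning, and the learning rate and number of pairings are assumed values.

```python
# Illustrative sketch (not the paper's ASP mechanism): a
# Rescorla-Wagner-style update modeling classical conditioning.
# lr, pairings, and lam are assumed, illustrative parameters.

def condition(pairings=50, lr=0.2, lam=1.0):
    """Associative strength of a conditioned stimulus (CS) after
    repeated pairing with an unconditioned stimulus (US)."""
    v_cs = 0.0                     # the CS is initially ineffective
    for _ in range(pairings):
        v_cs += lr * (lam - v_cs)  # prediction error drives learning
    return v_cs

print(condition())  # approaches lam = 1.0: the CS now predicts the US
```

The update converges because each step reduces the prediction error `lam - v_cs` by a constant fraction, which mirrors the gradual acquisition curve seen in conditioning experiments.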
Correlation-based model of artificially induced plasticity in motor cortex by a bidirectional brain-computer interface
Experiments show that spike-triggered stimulation performed with
Bidirectional Brain-Computer-Interfaces (BBCI) can artificially strengthen
connections between separate neural sites in motor cortex (MC). What are the
neuronal mechanisms responsible for these changes and how does targeted
stimulation by a BBCI shape population-level synaptic connectivity? The present
work describes a recurrent neural network model with probabilistic spiking
mechanisms and plastic synapses capable of capturing both neural and synaptic
activity statistics relevant to BBCI conditioning protocols. When spikes from a
neuron recorded at one MC site trigger stimuli at a second target site after a
fixed delay, the connections between sites are strengthened for spike-stimulus
delays consistent with experimentally derived spike-timing-dependent plasticity
(STDP) rules. However, the relationship between STDP mechanisms at the level of
networks and their modification by neural implants remains poorly understood.
Using our model, we successfully reproduce key experimental results, supported
by analytical derivations and novel experimental data. We
then derive optimal operational regimes for BBCIs, and formulate predictions
concerning the efficacy of spike-triggered stimulation in different regimes of
cortical activity.
Comment: 35 pages, 9 figures
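The spike-triggered stimulation effect described in this abstract can be sketched with a standard pair-based STDP rule. This is not the paper's recurrent network model; the amplitudes `a_plus`, `a_minus` and time constant `tau` are illustrative assumptions.

```python
import math

# Hedged sketch of a standard pair-based STDP rule (not the paper's
# model). a_plus, a_minus, and tau are assumed parameter values.

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pair.
    dt = t_post - t_pre (ms); dt > 0 (pre before post) potentiates,
    dt < 0 depresses."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

# Spike-triggered stimulation: each recorded spike (pre) evokes a
# stimulus at the target site (post) after a fixed delay, so every
# spike pair falls in the potentiation window of the STDP curve.
w = 0.5
for _ in range(100):
    w += stdp_dw(10.0)  # 10 ms spike-stimulus delay
print(w)  # the connection is strengthened
```

The key point the abstract makes falls out directly: a fixed spike-stimulus delay shorter than the potentiation time constant places every pairing on the causal side of the STDP window, so repeated stimulation strengthens the connection between the two sites.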
Born to learn: The inspiration, progress, and future of evolved plastic artificial neural networks
Biological plastic neural networks are systems of extraordinary computational
capabilities shaped by evolution, development, and lifetime learning. The
interplay of these elements leads to the emergence of adaptive behavior and
intelligence. Inspired by such intricate natural phenomena, Evolved Plastic
Artificial Neural Networks (EPANNs) use simulated evolution to breed
plastic neural networks with a large variety of dynamics, architectures, and
plasticity rules: these artificial systems are composed of inputs, outputs, and
plastic components that change in response to experiences in an environment.
These systems may autonomously discover novel adaptive algorithms, and lead to
hypotheses on the emergence of biological adaptation. EPANNs have seen
considerable progress over the last two decades. Current scientific and
technological advances in artificial neural networks are now setting the
conditions for radically new approaches and results. In particular, the
limitations of hand-designed networks could be overcome by more flexible and
innovative solutions. This paper brings together a variety of inspiring ideas
that define the field of EPANNs. The main methods and results are reviewed.
Finally, new opportunities and developments are presented.
Fast and robust learning by reinforcement signals: explorations in the insect brain
We propose a model for pattern recognition in the insect brain. Starting from a well-known body of knowledge about the insect brain, we investigate which of its potentially present features may be useful for learning input patterns rapidly and stably. The plasticity underlying pattern recognition is situated in the insect mushroom bodies and requires an error signal to associate a stimulus with the proper response. As a proof of concept, we used our model insect brain to classify the well-known MNIST database of handwritten digits, a popular benchmark for classifiers. We show that the structural organization of the insect brain appears to be suitable both for fast learning of new stimuli and for reasonable performance in stationary conditions. Furthermore, the model is extremely robust to damage to the brain structures involved in sensory processing. Finally, we suggest that spatiotemporal dynamics can improve the level of confidence in a classification decision. The proposed approach allows us to test the effects of hypothesized mechanisms rather than speculate about their benefits for system performance or classification confidence.
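The architecture this abstract describes, a fixed expansion layer (the Kenyon cells of the mushroom body) feeding plastic output weights gated by an error signal, can be sketched on a toy task. This is not the paper's model and does not use MNIST; all layer sizes, rates, and the task itself are illustrative assumptions.

```python
import random

# Illustrative mushroom-body-like classifier on toy data (not the
# paper's model, not MNIST). A fixed random expansion layer stands in
# for Kenyon cells; only the output weights learn, gated by an error
# (reinforcement) signal. All sizes and rates are assumptions.

random.seed(0)
N_IN, N_KC = 4, 50

# Fixed random projection: each "Kenyon cell" mixes the input channels.
proj = [[random.gauss(0, 1) for _ in range(N_IN)] for _ in range(N_KC)]

def kenyon(x):
    """Rectified random expansion of the input pattern."""
    return [max(0.0, sum(p[i] * x[i] for i in range(N_IN))) for p in proj]

w = [0.0] * N_KC  # plastic weights onto a mushroom-body output neuron

def predict(x):
    k = kenyon(x)
    return 1 if sum(wi * ki for wi, ki in zip(w, k)) > 0 else 0

# Toy task: respond when the first input channel dominates the second.
data = [[random.random() for _ in range(N_IN)] for _ in range(200)]
data = [(x, 1 if x[0] > x[1] else 0) for x in data]

lr = 0.01
for _ in range(20):                  # a few fast passes suffice
    for x, y in data:
        err = y - predict(x)         # error signal gates plasticity
        if err:
            k = kenyon(x)
            for j in range(N_KC):
                w[j] += lr * err * k[j]

acc = sum(predict(x) == y for x, y in data) / len(data)
print(acc)  # typically well above chance on this toy task
```

Because only the readout weights are plastic while the expansion layer stays fixed, learning is fast, and damage to part of the expansion (deleting some Kenyon cells) degrades performance only gradually, which is one intuition behind the robustness claim in the abstract.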