Memory and information processing in neuromorphic systems
A striking difference between brain-inspired neuromorphic processors and
current von Neumann processor architectures is the way in which memory and
processing are organized. As Information and Communication Technologies continue
to address the need for increased computational power by increasing the number
of cores within a digital processor, neuromorphic engineers and scientists can
complement this approach by building processor architectures where memory is
distributed alongside the processing. In this paper, we present a survey of
brain-inspired processor architectures that support models of cortical networks
and deep neural networks. These architectures range from serial, clocked
implementations of multi-neuron systems to massively parallel asynchronous ones,
and from purely digital systems to mixed analog/digital systems that implement
more biologically realistic models of neurons and synapses, together with a suite
of adaptation and learning mechanisms analogous to those found in biological
nervous systems. We describe the advantages of the different approaches being
pursued and present the challenges that need to be addressed for building
artificial neural processing systems that can display the richness of behaviors
seen in biological systems.
Comment: Submitted to Proceedings of the IEEE; a review of recently proposed neuromorphic computing platforms and systems.
Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems
Neuromorphic chips embody, in microelectronic devices, computational principles
operating in the nervous system. In this domain, it is important to
identify computational primitives that theory and experiments suggest as
generic and reusable cognitive elements. One such element is provided by
attractor dynamics in recurrent networks. Point attractors are equilibrium
states of the dynamics (up to fluctuations), determined by the synaptic
structure of the network; a `basin' of attraction comprises all initial states
leading to a given attractor upon relaxation, making attractor dynamics
well suited to implementing robust associative memory. The initial network state is
dictated by the stimulus, and relaxation to the attractor state implements the
retrieval of the corresponding memorized prototypical pattern. In previous
work, we demonstrated that a neuromorphic recurrent network of spiking neurons
with suitably chosen, fixed synapses supports attractor dynamics. Here we focus
on learning: activating on-chip synaptic plasticity and using a theory-driven
strategy for choosing network parameters, we show that autonomous learning,
following repeated presentation of simple visual stimuli, shapes a synaptic
connectivity supporting stimulus-selective attractors. Associative memory
develops on chip as the result of the coupled stimulus-driven neural activity
and ensuing synaptic dynamics, with no artificial separation between learning
and retrieval phases.
Comment: Submitted to Scientific Reports.
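The attractor-based retrieval described above can be illustrated in software with a classic Hopfield network. This is a minimal sketch only: the chip in the paper uses spiking neurons and on-chip plasticity, not this binary, rate-free model, and the network size, number of patterns, and seed below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store P random binary patterns in an N-neuron Hopfield network.
N, P = 100, 5
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian weights; zero diagonal so neurons have no self-coupling.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def relax(state, sweeps=20):
    """Asynchronous updates until the state settles into an attractor."""
    s = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Corrupt a stored pattern (the 'stimulus'), then retrieve it by relaxation.
cue = patterns[0].copy()
flip = rng.choice(N, size=15, replace=False)
cue[flip] *= -1

recalled = relax(cue)
print("overlap with stored pattern:", (recalled @ patterns[0]) / N)
```

An overlap near 1.0 indicates that relaxation pulled the corrupted cue back into the basin of the stored attractor, which is the retrieval mechanism the abstract describes.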
Experimental demonstration of associative memory with memristive neural networks
When someone mentions the name of a known person, we immediately recall her face and possibly many other traits. This is because we possess the so-called associative memory: the ability to correlate different memories to the same fact or event. Associative memory is such a fundamental and encompassing human ability (and not just a human one) that the network of neurons in our brain must perform it quite easily. The question is then whether electronic neural networks - electronic schemes that act somewhat similarly to human brains - can be built to perform this type of function. Although the field of neural networks has developed for many years, a key element, namely the synapse between adjacent neurons, has been lacking a satisfactory electronic representation. The reason for this is that a passive circuit element able to reproduce synapse behaviour needs to remember its past dynamical history, store a continuous set of states, and be "plastic" according to the pre-synaptic and post-synaptic neuronal activity. Here we show that all this can be accomplished by a memory-resistor (memristor for short). In particular, using simple and inexpensive off-the-shelf components, we have built a memristor emulator which realizes all required synaptic properties. Most importantly, we have demonstrated experimentally the formation of associative memory in a simple neural network consisting of three electronic neurons connected by two memristor-emulator synapses. This experimental demonstration opens up new possibilities for understanding neural processes using memory devices, an important step toward reproducing complex learning, adaptive and spontaneous behaviour with electronic neural networks.
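The Pavlovian-style association in this experiment (three neurons, two memristive synapses) can be caricatured in a few lines of code. This is a toy model, not the authors' circuit: the MemristiveSynapse class, the threshold, and the plasticity rate are invented here purely for illustration.

```python
# Toy memristive synapse: the weight drifts up when pre- and post-synaptic
# pulses overlap (a Hebbian-like rule), loosely inspired by the
# three-neuron / two-synapse experiment. All parameters are illustrative.
class MemristiveSynapse:
    def __init__(self, w=0.05, rate=0.25):
        self.w = w          # normalized conductance in [0, 1]
        self.rate = rate    # plasticity rate

    def transmit(self, pre_spike):
        return self.w if pre_spike else 0.0

    def update(self, pre_spike, post_spike):
        if pre_spike and post_spike:
            self.w = min(1.0, self.w + self.rate * (1.0 - self.w))

THRESHOLD = 0.5
s_sight = MemristiveSynapse(w=0.9)    # strong 'unconditioned' pathway
s_sound = MemristiveSynapse(w=0.05)   # weak pathway to be associated

def output_fires(sight, sound):
    drive = s_sight.transmit(sight) + s_sound.transmit(sound)
    post = drive >= THRESHOLD
    s_sight.update(sight, post)
    s_sound.update(sound, post)
    return post

print("sound alone before pairing:", output_fires(False, True))
for _ in range(10):                    # paired presentations
    output_fires(True, True)
print("sound alone after pairing: ", output_fires(False, True))
```

After a few paired presentations, the weak synapse alone drives the output neuron above threshold: the hallmark of the associative memory the abstract demonstrates in hardware.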
Learning as a phenomenon occurring in a critical state
Recent physiological measurements have provided clear evidence about
scale-free avalanche brain activity and EEG spectra, feeding the classical
enigma of how such a chaotic system can ever learn or respond in a controlled
and reproducible way. Models for learning, like neural networks or perceptrons,
have traditionally avoided strong fluctuations. Conversely, we propose that
brain activity with features typical of systems at a critical point
represents a crucial ingredient for learning. Here we present a study that
provides novel insights into this problem. Our model is
able to reproduce quantitatively the experimentally observed critical state of
the brain and, at the same time, learns and remembers logical rules including
the exclusive OR (XOR), which has posed difficulties to several previous
attempts. We implement the model on a network with topological properties close
to those of functional networks in real brains. Learning occurs via plastic
adaptation of synaptic strengths and exhibits universal features. We find that
the learning performance and the average time required to learn are controlled
by the strength of plastic adaptation, in a way independent of the specific
task assigned to the system. Even complex rules can be learned provided that
the plastic adaptation is sufficiently slow.
Comment: 5 pages, 5 figures.
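As a generic illustration of two points in this abstract, that XOR cannot be solved without a hidden layer and that the strength of plastic adaptation controls learning, the sketch below trains a tiny network by random plastic perturbation of synaptic strengths. It is not the authors' critical-state model; the network size and the adaptation strength eta are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])  # XOR targets

def forward(params, x):
    W1, b1, W2, b2 = params
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def loss(params):
    return np.mean((forward(params, X) - y) ** 2)

# 2-4-1 network; eta plays the role of the plastic-adaptation strength.
params = [rng.normal(0, 1, (2, 4)), np.zeros(4), rng.normal(0, 1, 4), 0.0]
eta = 0.1
best = loss(params)
for step in range(20000):
    trial = [p + eta * rng.normal(0, 1, np.shape(p)) for p in params]
    trial_loss = loss(trial)
    if trial_loss < best:              # keep only changes that improve behavior
        params, best = trial_loss and trial, trial_loss
        params = trial

print("XOR outputs:", np.round(forward(params, X), 2))
```

Raising eta makes individual adaptations larger and learning less stable, echoing (in a much cruder setting) the abstract's finding that slow plastic adaptation is what lets complex rules be learned.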
Born to learn: The inspiration, progress, and future of evolved plastic artificial neural networks
Biological plastic neural networks are systems of extraordinary computational
capabilities shaped by evolution, development, and lifetime learning. The
interplay of these elements leads to the emergence of adaptive behavior and
intelligence. Inspired by such intricate natural phenomena, Evolved Plastic
Artificial Neural Networks (EPANNs) use simulated evolution in silico to breed
plastic neural networks with a large variety of dynamics, architectures, and
plasticity rules: these artificial systems are composed of inputs, outputs, and
plastic components that change in response to experiences in an environment.
These systems may autonomously discover novel adaptive algorithms, and lead to
hypotheses on the emergence of biological adaptation. EPANNs have seen
considerable progress over the last two decades. Current scientific and
technological advances in artificial neural networks are now setting the
conditions for radically new approaches and results. In particular, the
limitations of hand-designed networks could be overcome by more flexible and
innovative solutions. This paper brings together a variety of inspiring ideas
that define the field of EPANNs. The main methods and results are reviewed.
Finally, new opportunities and developments are presented.
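A minimal flavor of the EPANN idea, evolving the coefficients of a plasticity rule so that a network can adapt within its lifetime, might look like the sketch below. It is not taken from the paper: the rule form, the fitness function, and all parameters (population size, mutation scale, trial counts) are assumptions made for illustration.

```python
import numpy as np

# Evolve the coefficients of a generalized Hebbian rule
#   dw = eta * (A*pre*teach + B*pre + C*teach + D)
# so a single plastic synapse can discover, within each lifetime, a target
# association whose sign is randomized per lifetime (so a fixed weight cannot
# solve the task; only an evolved plasticity rule can).
rng = np.random.default_rng(2)

def lifetime_fitness(genome, trials=50):
    A, B, C, D, eta = genome
    score = 0.0
    for _ in range(5):                    # several lifetimes per evaluation
        target = rng.choice([-1.0, 1.0])  # association to discover
        w = 0.0
        for _ in range(trials):
            pre = rng.choice([-1.0, 1.0])
            post = np.tanh(w * pre)
            score += post * (target * pre)  # reward for matching the target
            teach = target * pre            # teaching signal
            w += eta * (A * pre * teach + B * pre + C * teach + D)
    return score

# Simple (mu + lambda) evolution over rule genomes.
pop = [rng.normal(0, 1, 5) for _ in range(40)]
for gen in range(30):
    pop.sort(key=lifetime_fitness, reverse=True)
    elite = pop[:10]
    pop = elite + [e + rng.normal(0, 0.2, 5) for e in elite for _ in range(3)]

print("best evolved rule (A, B, C, D, eta):", np.round(pop[0], 2))
```

The point of the exercise is the one the review makes at scale: evolution searches over learning rules rather than over weights, and the discovered rule is what produces lifetime adaptation.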
The chronotron: a neuron that learns to fire temporally-precise spike patterns
In many cases, neurons process information carried by the precise timing of spikes. Here we show how neurons can learn to generate specific, temporally precise output spikes in response to input spike patterns, thus processing and memorizing information that is fully temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons): one that is analytically derived and highly efficient, and one that has a high degree of biological plausibility. We show how chronotrons can learn to classify their inputs, and we study their memory capacity.
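A rough software analogue of supervised temporal learning, not the chronotron's E-learning or I-learning rules themselves but a generic ReSuMe-flavored scheme, could look like the following. The neuron model, the eligibility traces, and every parameter below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
T, dt = 100, 1.0             # simulation window (ms) and time step
tau_m, tau_s = 10.0, 5.0     # membrane and synaptic time constants
n_in, v_thresh, lr = 50, 1.0, 0.01

pre_times = rng.integers(0, T, size=n_in)   # one input spike per afferent
t_desired = 60                               # desired output spike time
w = rng.uniform(0.0, 0.05, n_in)

def run(w):
    """Simulate a leaky integrate-and-fire neuron; return output spike times."""
    v, spikes, trace = 0.0, [], np.zeros(n_in)
    for t in range(T):
        trace *= np.exp(-dt / tau_s)
        trace[pre_times == t] += 1.0
        v = v * np.exp(-dt / tau_m) + (w @ trace) * dt
        if v >= v_thresh:
            spikes.append(t)
            v = 0.0                          # reset after the spike
    return spikes

def trace_at(t):
    """Each input's synaptic eligibility trace evaluated at time t."""
    d = t - pre_times
    return np.where(d >= 0, np.exp(-d / tau_s), 0.0)

for epoch in range(200):
    spikes = run(w)
    w += lr * trace_at(t_desired)            # potentiate toward the desired spike
    for t in spikes:
        w -= lr * trace_at(t)                # depress at actual (erroneous) spikes
    w = np.clip(w, 0.0, None)

print("output spike times after training:", run(w))
```

The update reaches a fixed point when the neuron spikes exactly at the desired time, since the potentiation and depression terms then cancel: a crude version of learning a fully temporally coded output.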