6,595 research outputs found
Spike-timing dependent plasticity and the cognitive map
Since the discovery of place cells – single pyramidal neurons that encode spatial location – it has been hypothesized that the hippocampus may act as a cognitive map of known environments. This putative function has been extensively modeled using auto-associative networks, which use rate-coded synaptic plasticity rules to generate strong bi-directional connections between concurrently active place cells that encode neighboring place fields. However, empirical studies using hippocampal cultures have demonstrated that the magnitude and direction of changes in synaptic strength can also be dictated by the relative timing of pre- and post-synaptic firing, according to a spike-timing dependent plasticity (STDP) rule. Furthermore, electrophysiology studies have identified persistent “theta-coded” temporal correlations in place cell activity in vivo, characterized by phase precession of firing as the corresponding place field is traversed. It is not yet clear whether STDP and theta-coded neural dynamics are compatible with cognitive map theory and previous rate-coded models of spatial learning in the hippocampus. Here, we demonstrate that an STDP rule based on empirical data obtained from the hippocampus can mediate rate-coded Hebbian learning when pre- and post-synaptic activity is stochastic and has no persistent sequence bias. We subsequently demonstrate that a spiking recurrent neural network that uses this STDP rule, alongside theta-coded neural activity, allows the rapid development of a cognitive map during directed or random exploration of an environment of overlapping place fields. Hence, we establish that STDP and phase precession are compatible with rate-coded models of cognitive map development.
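The pair-based exponential STDP window that such models fit to hippocampal data can be sketched as follows; the amplitudes and time constants below are illustrative assumptions, not the values fitted in the paper:

```python
import math

# Pair-based STDP window: potentiation when the presynaptic spike precedes
# the postsynaptic one, depression otherwise, with exponentially decaying
# magnitude. All parameter values are assumed for illustration.
A_PLUS, A_MINUS = 0.01, 0.012     # assumed potentiation/depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # assumed decay time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post: potentiation
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    if dt < 0:    # post before pre: depression
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0    # simultaneous spikes: no change in this sketch
```

Averaged over stochastic pre/post spike pairings with no persistent sequence bias, the potentiation and depression branches of such a window can balance so that the net weight change tracks firing-rate correlations, which is the rate-coded Hebbian regime the abstract refers to.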
A neuro-inspired system for online learning and recognition of parallel spike trains, based on spike latency and heterosynaptic STDP
Humans perform remarkably well in many cognitive tasks, including pattern
recognition. However, the neuronal mechanisms underlying this process are not
well understood. Nevertheless, artificial neural networks inspired by brain
circuits have been designed and used to tackle spatio-temporal pattern
recognition tasks. In this paper we present a multineuronal spike pattern
detection structure able to autonomously implement online learning and
recognition of parallel spike sequences (i.e., sequences of pulses belonging to
different neurons/neural ensembles). The operating principle of this structure
is based on two spiking/synaptic neurocomputational characteristics: spike
latency, which enables neurons to fire spikes with a certain delay, and
heterosynaptic plasticity, which allows synaptic weights to regulate
themselves. From the perspective of information representation, the structure
maps a spatio-temporal stimulus into a multidimensional, temporal feature
space. In this space, the parameter coordinate and the time at which a neuron
fires represent one specific feature; in this sense, each feature can be
considered to span a single temporal axis. We applied our proposed scheme to
experimental data obtained from a motor inhibitory cognitive task. The test
exhibits good classification performance, indicating the adequacy of our
approach. In addition to its effectiveness, its simplicity and low
computational cost make it suitable for large-scale implementation in
real-time recognition applications in several areas, such as brain-computer
interfaces, personal biometric authentication, or early detection of diseases.
Comment: Submitted to Frontiers in Neuroscience
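The spike-latency mechanism the abstract relies on is often summarized as intensity-to-latency conversion: stronger inputs fire earlier, so a spatial pattern becomes a temporal one. A minimal sketch, with an assumed linear mapping that is not the paper's exact neuron model:

```python
# Intensity-to-latency coding sketch: each input intensity in (0, 1] is
# mapped to a firing latency, with stronger inputs firing earlier.
# The linear mapping and T_MAX are assumptions for illustration.
T_MAX = 100.0  # assumed maximum latency (ms) for the weakest input

def latency_code(intensities):
    """Map input intensities in (0, 1] to firing latencies in ms."""
    return [T_MAX * (1.0 - x) for x in intensities]
```

Under this reading, the time at which each neuron fires is the coordinate of one feature along its own temporal axis, matching the feature-space description above.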
Memory and information processing in neuromorphic systems
A striking difference between brain-inspired neuromorphic processors and
current von Neumann processor architectures is the way in which memory and
processing are organized. As Information and Communication Technologies
continue to address the need for increased computational power through the
increase of cores within a digital processor, neuromorphic engineers and
scientists can complement this need by building processor architectures in
which memory is distributed with the processing. In this paper we present a
survey of brain-inspired processor architectures that support models of
cortical networks and deep neural networks. These architectures range from
serial clocked implementations of multi-neuron systems to massively parallel
asynchronous ones, and from purely digital systems to mixed analog/digital
systems that implement more biologically realistic models of neurons and
synapses, together with a suite of adaptation and learning mechanisms
analogous to those found in biological nervous systems. We describe the
advantages of the different approaches being pursued and present the
challenges that need to be addressed for building artificial neural processing
systems that can display the richness of behaviors seen in biological systems.
Comment: Submitted to Proceedings of the IEEE; review of recently proposed
neuromorphic computing platforms and systems
Attractor networks and memory replay of phase coded spike patterns
We analyse the storage and retrieval capacity of a recurrent neural network
of spiking integrate-and-fire neurons. In the model we distinguish between a
learning mode, during which the synaptic connections change according to a
Spike-Timing Dependent Plasticity (STDP) rule, and a recall mode, in which
connection strengths are no longer plastic. Our findings show the ability of
the network to store and recall periodic phase-coded patterns when a small
number of neurons has been stimulated. The self-sustained dynamics selectively
gives rise to oscillating spiking activity that matches one of the stored
patterns, depending on the initialization of the network.
Comment: arXiv admin note: text overlap with arXiv:1210.678
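The integrate-and-fire dynamics named above can be sketched with a single Euler-discretized leaky neuron; the parameter values are assumptions for illustration, and the learning/recall distinction only determines whether the synaptic weights feeding `i_syn` are plastic or frozen:

```python
# Minimal leaky integrate-and-fire (LIF) update, Euler-discretized.
# All parameter values are assumed for illustration.
V_REST, V_THRESH, V_RESET = 0.0, 1.0, 0.0
TAU_M, DT = 10.0, 1.0  # membrane time constant and time step (ms)

def lif_step(v, i_syn):
    """Advance the membrane potential one step; return (new_v, spiked)."""
    v = v + DT * (-(v - V_REST) + i_syn) / TAU_M  # leak toward rest + input
    if v >= V_THRESH:
        return V_RESET, True   # fire and reset
    return v, False
```

A network of such units, driven by a brief stimulus to a small subset of neurons, is the setting in which the abstract's self-sustained, pattern-matching activity arises.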
Associative memory of phase-coded spatiotemporal patterns in leaky Integrate and Fire networks
We study the collective dynamics of a leaky integrate-and-fire network in
which precise relative phase relationships of spikes among neurons are stored
as attractors of the dynamics and selectively replayed at different time
scales. Using an STDP-based learning process, we store in the connectivity
several phase-coded spike patterns, and we find that, depending on the
excitability of the network, different working regimes are possible, with
transient or persistent replay activity induced by a brief signal. We
introduce an order parameter to evaluate the similarity between stored and
recalled phase-coded patterns, and measure the storage capacity. Modulation of
spiking thresholds during replay changes the frequency of the collective
oscillation or the number of spikes per cycle while preserving the phase
relationships. This allows a coding scheme in which phase, rate and frequency
are dissociable. Robustness with respect to noise and heterogeneity of neuron
parameters is studied, showing that, since the dynamics is a retrieval
process, neurons preserve stable, precise phase relationships among units,
keeping a unique frequency of oscillation, even in noisy conditions and with
heterogeneity of the internal parameters of the units.
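A common way to quantify the similarity between a stored and a recalled phase-coded pattern is the magnitude of the population-averaged phase difference; the definition below is an assumption in that spirit, not necessarily the paper's exact order parameter:

```python
import cmath

# Phase-overlap order parameter: 1.0 for perfect recall (up to a global
# phase shift), near 0 for unrelated phase patterns.
def phase_overlap(stored, recalled):
    """stored, recalled: equal-length lists of phases in radians."""
    z = sum(cmath.exp(1j * (r - s)) for s, r in zip(stored, recalled))
    return abs(z) / len(stored)
```

Because only phase differences enter, the measure is invariant under a global phase shift, which matches a replay that preserves relative phases while the collective oscillation frequency or spike count per cycle changes.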
Early Turn-taking Prediction with Spiking Neural Networks for Human Robot Collaboration
Turn-taking is essential to the structure of human teamwork. Humans are
typically aware of team members' intention to keep or relinquish their turn
before a turn switch, where the responsibility of working on a shared task is
shifted. Future co-robots are also expected to provide such competence. To that
end, this paper proposes the Cognitive Turn-taking Model (CTTM), which
leverages cognitive models (i.e., Spiking Neural Network) to achieve early
turn-taking prediction. The CTTM framework can process multimodal human
communication cues (both implicit and explicit) and predict human turn-taking
intentions at an early stage. The proposed framework is tested on a simulated
surgical procedure, where a robotic scrub nurse predicts the surgeon's
turn-taking intention. It was found that the proposed CTTM framework
outperforms state-of-the-art turn-taking prediction algorithms by a large
margin. It also outperforms humans when presented with partial observations of
communication cues (i.e., less than 40% of full actions). This early
prediction capability enables robots to initiate turn-taking actions at an
early stage, which facilitates collaboration and increases overall efficiency.
Comment: Submitted to the IEEE International Conference on Robotics and
Automation (ICRA) 201
Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems
Neuromorphic chips embody, in microelectronic devices, computational
principles operating in the nervous system. In this domain it is important to
identify computational primitives that theory and experiments suggest as
generic and reusable cognitive elements. One such element is provided by
attractor dynamics in recurrent networks. Point attractors are equilibrium
states of the dynamics (up to fluctuations), determined by the synaptic
structure of the network; a `basin' of attraction comprises all initial states
leading to a given attractor upon relaxation, making attractor dynamics
suitable for implementing robust associative memory. The initial network state
is dictated by the stimulus, and relaxation to the attractor state implements
retrieval of the corresponding memorized prototypical pattern. In a previous
work we demonstrated that a neuromorphic recurrent network of spiking neurons
and suitably chosen, fixed synapses supports attractor dynamics. Here we focus
on learning: activating on-chip synaptic plasticity and using a theory-driven
strategy for choosing network parameters, we show that autonomous learning,
following repeated presentation of simple visual stimuli, shapes a synaptic
connectivity supporting stimulus-selective attractors. Associative memory
develops on chip as the result of the coupled stimulus-driven neural activity
and ensuing synaptic dynamics, with no artificial separation between learning
and retrieval phases.
Comment: Submitted to Scientific Reports
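The point-attractor retrieval described above can be illustrated, in a deliberately simplified non-spiking form, with a classic Hopfield network; this sketch stands in for the spiking chip only to show pattern completion from within a basin of attraction:

```python
# Hopfield-style associative memory: Hebbian storage plus relaxation
# of a corrupted initial state toward the nearest stored attractor.
def train(patterns, n):
    """Hebbian outer-product rule over ±1 patterns; zero self-coupling."""
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def relax(w, state, steps=20):
    """Synchronous updates; the state settles into an attractor."""
    for _ in range(steps):
        state = [1 if sum(row[j] * state[j] for j in range(len(state))) >= 0
                 else -1
                 for row in w]
    return state
```

Here the stimulus corresponds to the initial state, and relaxation implements retrieval of the memorized prototype, as in the attractor picture above.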