Spike-Timing Theory of Working Memory
Working memory (WM) is the part of the brain's memory system that provides temporary storage and manipulation of information necessary for cognition. Although WM has limited capacity at any given time, it has vast memory content in the sense that it acts on the brain's nearly infinite repertoire of lifetime long-term memories. Using simulations, we show that large memory content and WM functionality emerge spontaneously if we take the spike-timing nature of neuronal processing into account. Here, memories are represented by extensively overlapping groups of neurons that exhibit stereotypical time-locked spatiotemporal spike-timing patterns, called polychronous patterns; and synapses forming such polychronous neuronal groups (PNGs) are subject to associative synaptic plasticity in the form of both long-term and short-term spike-timing-dependent plasticity. While long-term potentiation is essential in PNG formation, we show how short-term plasticity can temporarily strengthen the synapses of selected PNGs and lead to an increase in the spontaneous reactivation rate of these PNGs. This increased reactivation rate, consistent with in vivo recordings during WM tasks, results in high interspike interval variability and irregular, yet systematically changing, elevated firing rate profiles within the neurons of the selected PNGs. Additionally, our theory explains the relationship between such slowly changing firing rates and precisely timed spikes, and it reveals a novel relationship between WM and the perception of time on the order of seconds.
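The long-term plasticity invoked above can be sketched as a standard pair-based spike-timing-dependent plasticity (STDP) window; the parameter names and values below are illustrative choices of ours, not taken from the paper:

```python
import math

# Pair-based STDP: delta_t = t_post - t_pre (ms).
# Amplitudes and time constants are illustrative, not the paper's.
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # exponential window time constants (ms)

def stdp_dw(delta_t: float) -> float:
    """Weight change for a single pre/post spike pair."""
    if delta_t > 0:    # pre fires before post -> potentiation
        return A_PLUS * math.exp(-delta_t / TAU_PLUS)
    if delta_t < 0:    # post fires before pre -> depression
        return -A_MINUS * math.exp(delta_t / TAU_MINUS)
    return 0.0
```

A short-term variant would apply a similar but transient multiplicative boost that decays back to baseline, which is what temporarily raises a selected PNG's reactivation rate.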
An Efficient Method for Online Detection of Polychronous Patterns in Spiking Neural Networks
Polychronous neural groups are effective structures for the recognition of
precise spike-timing patterns but the detection method is an inefficient
multi-stage brute force process that works off-line on pre-recorded simulation
data. This work presents a new model of polychronous patterns that can capture
precise sequences of spikes directly in the neural simulation. In this scheme,
each neuron is assigned a randomized code that is used to tag the post-synaptic
neurons whenever a spike is transmitted. This creates a polychronous code that
preserves the order of pre-synaptic activity and can be registered in a hash
table when the post-synaptic neuron spikes. A polychronous code is a
sub-component of a polychronous group that will occur, along with others, when
the group is active. We demonstrate the representational and pattern
recognition ability of polychronous codes on a direction selective visual task
involving moving bars that is typical of a computation performed by simple
cells in the cortex. The computational efficiency of the proposed algorithm far
exceeds existing polychronous group detection methods and is well suited for
online detection.
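The tagging scheme described above can be sketched in a few lines: each neuron carries a random code, spikes propagate that code to post-synaptic buffers in order, and a spiking neuron registers its accumulated buffer in a hash table. All class and variable names here are ours, a minimal illustration rather than the paper's implementation:

```python
import random
from collections import defaultdict

random.seed(0)

class PolychronousCoder:
    """Minimal sketch of online polychronous-code registration."""

    def __init__(self, n_neurons, synapses):
        # synapses: pre-synaptic neuron id -> list of post-synaptic neuron ids
        self.tags = {i: random.getrandbits(32) for i in range(n_neurons)}
        self.synapses = synapses
        self.buffers = defaultdict(list)   # post neuron -> ordered pre-synaptic tags
        self.registry = defaultdict(int)   # polychronous code -> occurrence count

    def on_spike(self, neuron):
        # A spiking neuron first registers the code accumulated on it ...
        if self.buffers[neuron]:
            code = hash(tuple(self.buffers[neuron]))  # order-preserving hash
            self.registry[code] += 1
            self.buffers[neuron].clear()
        # ... then tags all of its post-synaptic targets.
        for post in self.synapses.get(neuron, []):
            self.buffers[post].append(self.tags[neuron])

# Neurons 0 and 1 both project to neuron 2; the spike order 0->1 versus
# 1->0 yields two distinct polychronous codes on neuron 2.
net = PolychronousCoder(3, {0: [2], 1: [2]})
for spike in (0, 1, 2, 1, 0, 2):
    net.on_spike(spike)
```

Because the code is computed incrementally as spikes are transmitted, no off-line pass over recorded simulation data is needed.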
SIMPEL: Circuit model for photonic spike processing laser neurons
We propose an equivalent circuit model for photonic spike processing laser
neurons with an embedded saturable absorber---a simulation model for photonic
excitable lasers (SIMPEL). We show that by mapping the laser neuron rate
equations into a circuit model, SPICE analysis can be used as an efficient and
accurate engine for numerical calculations, capable of generalization to a
variety of different laser neuron types found in literature. The development of
this model parallels the Hodgkin--Huxley model of neuron biophysics, a circuit
framework which brought efficiency, modularity, and generalizability to the
study of neural dynamics. We employ the model to study various
signal-processing effects such as excitability with excitatory and inhibitory
pulses, binary all-or-nothing response, and bistable dynamics.
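The laser neuron rate equations can also be time-stepped directly. Below is a minimal forward-Euler sketch of a Yamada-type gain/absorber/intensity model of an excitable laser with a saturable absorber; the parameter values are illustrative, not SIMPEL's:

```python
def yamada_step(G, Q, I, dt=1e-3, A=6.5, B=5.8, a=1.8,
                gG=0.05, gQ=0.04, gI=1.0):
    """One forward-Euler step: gain G, absorption Q, intensity I."""
    dG = gG * (A - G - G * I)        # gain pumped toward A, depleted by I
    dQ = gQ * (B - Q - a * Q * I)    # absorber recovers, saturated by I
    dI = gI * (G - Q - 1.0) * I      # intensity grows when gain exceeds loss
    return G + dt * dG, Q + dt * dQ, I + dt * dI

# Let the gain build up from rest with a tiny seed intensity.
G, Q, I = 0.0, 0.0, 1e-6
for _ in range(1000):
    G, Q, I = yamada_step(G, Q, I)
```

A SPICE netlist plays the same role as this integration loop, but with the efficiency and modularity of a mature circuit-simulation engine.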
Short-Term Memory Through Persistent Activity: Evolution of Self-Stopping and Self-Sustaining Activity in Spiking Neural Networks
Memories in the brain fall into two categories: short-term and
long-term memories. Long-term memories remain for a lifetime, while short-term
ones persist from a few milliseconds to a few minutes. Within short-term memory
studies, there is debate about what neural structure could implement it.
Indeed, mechanisms responsible for long-term memories appear inadequate for the
task. Instead, it has been proposed that short-term memories could be sustained
by the persistent activity of a group of neurons. In this work, we explore what
topology could sustain short-term memories, not by designing a model from
specific hypotheses, but through Darwinian evolution in order to obtain new
insights into its implementation. We evolved 10 networks capable of retaining
information for a fixed duration between 2 and 11 s. Our main finding is that
evolution naturally created two functional modules in the network: one,
composed primarily of excitatory neurons, sustains the information, while the
other, composed mainly of inhibitory neurons, is responsible for forgetting.
This demonstrates how the balance between inhibition and excitation plays an
important role in cognition.
Hebbian fast plasticity and working memory
Theories and models of working memory (WM) have been dominated since at least
the mid-1990s by the persistent-activity hypothesis. The past decade has seen
rising concerns about the shortcomings of sustained activity as the mechanism
for short-term maintenance of WM information in the light of accumulating
experimental evidence for so-called activity-silent WM and the fundamental
difficulty in explaining robust multi-item WM. In consequence, alternative
theories are now explored mostly in the direction of fast synaptic plasticity
as the underlying mechanism. The question of non-Hebbian vs. Hebbian synaptic
plasticity emerges naturally in this context. In this review we focus on fast
Hebbian plasticity and trace the origins of WM theories and models building on
this form of associative learning.
Information Capacity of a Neural Network with Redundant Connections Between Neurons
© 2017 IEEE. This work studies a model of a spiking recurrent neural network in which any pair of neurons can form several connection lines (axons) with different spike-propagation times. Simulation modeling shows that a neural network with redundant connections between neurons, in the form of delay lines, provides storage and playback of a significant number of independent temporal sequences of neural pulses. It is suggested that the multiple synaptic inputs from a single neuron found in natural neural networks provide some of the network's information-processing properties.
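The core idea can be illustrated in a few lines: with several delay lines per neuron pair, a target inter-spike interval is stored by selecting the line whose delay matches it. The delay values and the selection rule below are our illustration, not the paper's model:

```python
# Several axonal delay lines (ms) between one pre/post neuron pair.
DELAYS_MS = [1, 3, 5, 8, 13]

def select_delay(pre_spike_t: float, post_spike_t: float) -> int:
    """Store an interval: pick the delay closest to post - pre."""
    interval = post_spike_t - pre_spike_t
    return min(DELAYS_MS, key=lambda d: abs(d - interval))

def replay(pre_spike_t: float, delay: int) -> float:
    """Play back: arrival time of the pre spike via the chosen line."""
    return pre_spike_t + delay
```

With many pairs and many lines each, different temporal sequences recruit different subsets of delay lines, which is what gives the redundant network its capacity for independent sequences.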
On the information in spike timing: neural codes derived from polychronous groups
There is growing evidence regarding the importance of spike timing in neural
information processing, with even a small number of spikes carrying
information, but computational models lag significantly behind those for rate
coding. Experimental evidence on neuronal behavior is consistent with the
dynamical and state dependent behavior provided by recurrent connections. This
motivates the minimalistic abstraction investigated in this paper, aimed at
providing insight into information encoding in spike timing via recurrent
connections. We employ information-theoretic techniques for a simple reservoir
model which encodes input spatiotemporal patterns into a sparse neural code,
translating the polychronous groups introduced by Izhikevich into codewords on
which we can perform standard vector operations. We show that the distance
properties of the code are similar to those for (optimal) random codes. In
particular, the code meets benchmarks associated with both linear
classification and capacity, with the latter scaling exponentially with
reservoir size.
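The distance behaviour of such sparse codes is easy to sanity-check: for random codewords with k active units out of n, expected pairwise overlap is about k²/n, so the expected Hamming distance is close to 2k. The sizes and sparsity below are arbitrary choices of ours, not the paper's reservoir parameters:

```python
import random

random.seed(1)

def random_sparse_codeword(n: int, k: int) -> frozenset:
    """Codeword = set of k active neurons out of n (sparse binary vector)."""
    return frozenset(random.sample(range(n), k))

def hamming(a: frozenset, b: frozenset) -> int:
    """Hamming distance between two sparse binary codewords."""
    return len(a ^ b)

n, k = 1000, 20
codes = [random_sparse_codeword(n, k) for _ in range(50)]
dists = [hamming(a, b) for i, a in enumerate(codes) for b in codes[i + 1:]]
mean_d = sum(dists) / len(dists)
# With k**2 / n = 0.4 expected overlap, mean distance should be near 39.2.
```

Codes derived from polychronous groups matching this random-code benchmark is what underlies the capacity claim: well-separated codewords scale in number roughly exponentially with reservoir size.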