Sleep-like slow oscillations improve visual classification through synaptic homeostasis and memory association in a thalamo-cortical model
The occurrence of sleep passed through the evolutionary sieve and is
widespread in animal species. Sleep is known to be beneficial to cognitive and
mnemonic tasks, while chronic sleep deprivation is detrimental. Despite the
importance of the phenomenon, a complete understanding of its functions and
underlying mechanisms is still lacking. In this paper, we show beneficial
effects of deep-sleep-like slow oscillation activity on a simplified
thalamo-cortical model that is trained to encode, retrieve, and classify images
of handwritten digits. During slow oscillations,
spike-timing-dependent plasticity (STDP) produces a differential homeostatic
process. It is characterized by both a specific unsupervised enhancement of
connections among groups of neurons associated with instances of the same class
(digit) and a simultaneous down-regulation of the stronger synapses created
during training. This hierarchical organization of post-sleep internal
representations favours higher performance in retrieval and classification tasks. The
mechanism is based on the interaction between top-down cortico-thalamic
predictions and bottom-up thalamo-cortical projections during deep-sleep-like
slow oscillations. Indeed, when learned patterns are replayed during sleep,
cortico-thalamo-cortical connections favour the activation of other neurons
coding for similar thalamic inputs, promoting their association. Such a
mechanism hints at possible applications to artificial learning systems.
Comment: 11 pages, 5 figures; v5 is the final version published in Scientific Reports
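To make the plasticity mechanism concrete, here is a minimal Python sketch of a pair-based STDP rule of the kind the abstract invokes. The function, its parameters, and the depression-dominated setting are illustrative assumptions, not the rule used in the paper's thalamo-cortical model.

```python
import numpy as np

def stdp_update(w, pre_trace, post_trace, pre_spikes, post_spikes,
                a_plus=0.010, a_minus=0.012, tau=20.0, dt=1.0, w_max=1.0):
    """One discrete time step of pair-based STDP on weights w[post, pre]."""
    # Exponentially decaying traces of recent pre- and postsynaptic spikes.
    pre_trace += -dt * pre_trace / tau + pre_spikes
    post_trace += -dt * post_trace / tau + post_spikes
    # Potentiation: a post spike arriving shortly after a pre spike.
    w += a_plus * np.outer(post_spikes, pre_trace)
    # Depression: a pre spike arriving shortly after a post spike. With
    # a_minus slightly larger than a_plus (an assumption here), weakly
    # correlated replay activity is depressed on balance while consistently
    # co-active groups still potentiate -- one way to obtain the differential
    # homeostatic effect described above.
    w -= a_minus * np.outer(post_trace, pre_spikes)
    np.clip(w, 0.0, w_max, out=w)
    return w
```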
Towards biologically plausible Dreaming and Planning in recurrent spiking networks
Humans and animals can learn new skills after practicing for a few hours,
while current reinforcement learning algorithms require large amounts of data
to achieve good performance. Recent model-based approaches show promising
results by reducing the number of necessary interactions with the environment
to learn a desirable policy. However, these methods require biologically
implausible ingredients, such as the detailed storage of older experiences and
long periods of offline learning. The optimal way to learn and exploit
world-models is still an open question. Taking inspiration from biology, we
suggest that dreaming might be an efficient expedient to use an inner model. We
propose a two-module (agent and model) spiking neural network in which
"dreaming" (living new experiences in a model-based simulated environment)
significantly boosts learning. We also explore "planning", an online
alternative to dreaming that achieves comparable performance. Importantly, our
model does not require the detailed storage of experiences, and learns online
the world-model and the policy. Moreover, we stress that our network is
composed of spiking neurons, further increasing the biological plausibility and
implementability in neuromorphic hardware.
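As a concrete illustration of the dreaming loop, here is a toy rate-based sketch. The `Agent` and `WorldModel` classes, their linear dynamics, and the learning rules are stand-in assumptions; the paper's two modules are recurrent spiking networks trained online.

```python
import numpy as np

rng = np.random.default_rng(0)

class Agent:
    """Toy linear policy standing in for the spiking agent module."""
    def __init__(self, n_state, n_action, lr=0.05):
        self.W = rng.normal(0.0, 0.1, (n_action, n_state))
        self.lr = lr
    def act(self, s):
        return np.tanh(self.W @ s)
    def learn(self, s, a, r):
        # Reward-modulated update; a placeholder for the spiking rule.
        self.W += self.lr * r * np.outer(a, s)

class WorldModel:
    """Learns online to predict (next state, reward) from (state, action)."""
    def __init__(self, n_state, n_action, lr=0.01):
        self.W = rng.normal(0.0, 0.1, (n_state + 1, n_state + n_action))
        self.lr = lr
    def predict(self, s, a):
        out = self.W @ np.concatenate([s, a])
        return out[:-1], out[-1]              # predicted next state and reward
    def learn(self, s, a, s_next, r):
        x = np.concatenate([s, a])
        err = np.concatenate([s_next, [r]]) - self.W @ x
        self.W += self.lr * np.outer(err, x)  # one online delta-rule step

def dream(agent, model, s, horizon=10):
    """Improve the policy on imagined rollouts; no stored experiences."""
    for _ in range(horizon):
        a = agent.act(s)
        s, r = model.predict(s, a)
        agent.learn(s, a, r)
```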
Beyond spiking networks: the computational advantages of dendritic amplification and input segregation
The brain can efficiently learn a wide range of tasks, motivating the search
for biologically inspired learning rules for improving current artificial
intelligence technology. Most biological models are composed of point neurons
and cannot achieve state-of-the-art performance in machine learning.
Recent works have proposed that segregation of dendritic input (neurons receive
sensory information and higher-order feedback in segregated compartments) and
generation of high-frequency bursts of spikes would support error
backpropagation in biological neurons. However, these approaches require
propagating errors with a fine spatio-temporal structure to the neurons, which
is unlikely to be feasible in a biological network.
To relax this assumption, we suggest that bursts and dendritic input
segregation provide a natural support for biologically plausible target-based
learning, which does not require error propagation. We propose a pyramidal
neuron model composed of three separate compartments. A coincidence mechanism
between the basal and the apical compartments allows for generating
high-frequency bursts of spikes. This architecture supports a burst-dependent
learning rule, based on the comparison between the target bursting activity
triggered by the teaching signal and the activity caused by the recurrent
connections, providing support for target-based learning. We show that this
framework can be used to efficiently solve spatio-temporal tasks, such as the
storage and recall of 3D trajectories (a sketch of the rule follows below).
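As referenced above, here is a toy rate-based sketch of such a burst-dependent, target-based update for a population of three-compartment units. The thresholding, coincidence test, and constants are illustrative assumptions, not the paper's neuron model.

```python
import numpy as np

def burst_target_step(w_rec, x_rec, teach, theta=0.5, lr=0.01):
    """One update for n three-compartment units driven by m presynaptic rates.

    w_rec: recurrent weights onto the basal compartments, shape (n, m)
    x_rec: presynaptic activity, shape (m,)
    teach: top-down teaching signal to the apical compartments, shape (n,)
    """
    basal = w_rec @ x_rec                         # recurrent drive to basal
    somatic_spike = (basal > theta).astype(float)
    # Coincidence mechanism: a somatic spike plus apical depolarization
    # turns single spikes into a high-frequency burst.
    burst = somatic_spike * (teach > theta)
    target_burst = (teach > theta).astype(float)
    # Target-based rule: a purely local comparison between teacher-driven
    # and recurrently driven bursting; no backpropagated error signal.
    w_rec += lr * np.outer(target_burst - burst, x_rec)
    return w_rec, burst
```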
Finally, we suggest that this neuronal architecture naturally allows for
orchestrating ``hierarchical imitation learning'', enabling the decomposition
of challenging long-horizon decision-making tasks into simpler subtasks. This
can be implemented in a two-level network, where the high-level network acts as a
``manager'' and produces the contextual signal for the low-level network, the
``worker'' (sketched below).
Comment: arXiv admin note: substantial text overlap with arXiv:2201.1171
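A toy sketch of the two-level decomposition, with linear maps standing in for the paper's spiking networks; all shapes and names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_state, n_ctx, n_action = 8, 3, 2
W_manager = rng.normal(0.0, 0.1, (n_ctx, n_state))            # "manager"
W_worker = rng.normal(0.0, 0.1, (n_action, n_state + n_ctx))  # "worker"

def hierarchical_step(state):
    context = np.tanh(W_manager @ state)   # high-level contextual signal
    # The worker acts on the state conditioned on the manager's context.
    return np.tanh(W_worker @ np.concatenate([state, context]))

action = hierarchical_step(rng.normal(size=n_state))
```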
Scaling of a large-scale simulation of synchronous slow-wave and asynchronous awake-like activity of a cortical model with long-range interconnections
Cortical synapse organization supports a range of dynamic states on multiple
spatial and temporal scales, from synchronous slow wave activity (SWA),
characteristic of deep sleep or anesthesia, to fluctuating, asynchronous
activity during wakefulness (AW). Such dynamic diversity poses a challenge for
producing efficient large-scale simulations that embody realistic metaphors of
short- and long-range synaptic connectivity. In fact, during SWA and AW
different spatial extents of the cortical tissue are active in a given timespan
and at different firing rates, which implies widely varying loads of local
computation and communication. A balanced evaluation of simulation performance
and robustness should therefore include tests of a variety of cortical dynamic
states. Here, we demonstrate performance scaling of our proprietary Distributed
and Plastic Spiking Neural Networks (DPSNN) simulation engine in both SWA and
AW for two-dimensional grids of neural populations, reflecting the modular
organization of the cortex. We explored networks of up to 192x192 modules, each
composed of 1250 integrate-and-fire neurons with spike-frequency adaptation,
and exponentially decaying inter-modular synaptic connectivity with a varying
spatial decay constant. For the largest networks, the total number of synapses
was over 70 billion. The execution platform included up to 64 dual-socket
nodes, each socket carrying 8 Intel Xeon Haswell processor cores clocked at
2.40 GHz. Network initialization time, memory usage, and execution time
showed good scaling from 1 to 1024 processes, implemented using
the standard Message Passing Interface (MPI) protocol. We achieved simulation
speeds between 2.3x10^9 and 4.1x10^9 synaptic events per second for both
cortical states in the explored range of inter-modular interconnections.
Comment: 22 pages, 9 figures, 4 tables
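The quoted figures imply a few derived quantities worth noting; the synapses-per-neuron count and per-process event rates below are computed from the abstract's numbers, not reported values.

```python
# Back-of-the-envelope check of the figures quoted above.
modules = 192 * 192                  # largest two-dimensional grid of modules
neurons = modules * 1250             # 46,080,000 neurons in total
synapses = 70e9                      # "over 70 billion" synapses
print(f"{neurons:,} neurons, ~{synapses / neurons:,.0f} synapses per neuron")

for rate in (2.3e9, 4.1e9):          # reported synaptic events per second
    print(f"~{rate / 1024:,.0f} events/s per process at 1024 MPI processes")
```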