SuperSpike: Supervised learning in multi-layer spiking neural networks
A vast majority of computation in the brain is performed by spiking neural
networks. Despite the ubiquity of such spiking, we currently lack an
understanding of how biological spiking neural circuits learn and compute
in-vivo, as well as how we can instantiate such capabilities in artificial
spiking circuits in-silico. Here we revisit the problem of supervised learning
in temporally coding multi-layer spiking neural networks. First, by using a
surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based
three factor learning rule capable of training multi-layer networks of
deterministic integrate-and-fire neurons to perform nonlinear computations on
spatiotemporal spike patterns. Second, inspired by recent results on feedback
alignment, we compare the performance of our learning rule under different
credit assignment strategies for propagating output errors to hidden units.
Specifically, we test uniform, symmetric and random feedback, finding that
simpler tasks can be solved with any type of feedback, while more complex tasks
require symmetric feedback. In summary, our results open the door to obtaining
a better scientific understanding of learning and computation in spiking neural
networks by advancing our ability to train them to solve nonlinear problems
involving transformations between different spatiotemporal spike-time patterns.
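The surrogate-gradient idea behind this abstract can be sketched in a few lines: the non-differentiable spike threshold is kept in the forward pass, while its derivative is replaced by a smooth surrogate (here a fast sigmoid) when computing updates. This is an illustrative reconstruction, not the paper's reference implementation; the parameter names (`beta`, `v_th`) and the simplified three-factor-style update are assumptions.

```python
import numpy as np

def surrogate_grad(u, beta=10.0):
    """Fast-sigmoid surrogate for the derivative of the hard spike
    nonlinearity: smooth, peaked at the threshold crossing u = 0."""
    return 1.0 / (1.0 + beta * np.abs(u)) ** 2

def lif_forward(w, spikes_in, tau_mem=20.0, dt=1.0, v_th=1.0):
    """Forward pass of a leaky integrate-and-fire layer over time.
    spikes_in: (T, n_in) binary array; returns membrane traces and spikes."""
    T, n_in = spikes_in.shape
    n_out = w.shape[1]
    v = np.zeros(n_out)
    volts, spikes = np.zeros((T, n_out)), np.zeros((T, n_out))
    for t in range(T):
        v += dt / tau_mem * (-v) + spikes_in[t] @ w  # leak + synaptic input
        s = (v >= v_th).astype(float)                # hard threshold
        volts[t], spikes[t] = v, s                   # record pre-reset voltage
        v *= 1.0 - s                                 # reset after a spike
    return volts, spikes

def toy_update(w, spikes_in, volts, error, v_th=1.0, lr=1e-3):
    """Toy three-factor-style update: presynaptic activity x surrogate
    derivative of the postsynaptic voltage x error signal, summed over time
    (a stand-in for the full SuperSpike rule with its eligibility traces)."""
    dw = np.zeros_like(w)
    for t in range(spikes_in.shape[0]):
        dw += np.outer(spikes_in[t],
                       error[t] * surrogate_grad(volts[t] - v_th))
    return w + lr * dw
```

The surrogate is only used for credit assignment; the forward dynamics remain those of a deterministic integrate-and-fire neuron, as in the abstract.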
Beta-rhythm oscillations and synchronization transition in network models of Izhikevich neurons: effect of topology and synaptic type
Despite their significant functional roles, beta-band oscillations are among
the least understood. Synchronization in neuronal networks has attracted much
attention in recent years, with the main focus on the type of transition.
Whether one obtains an explosive or a continuous transition is an important
feature of a neuronal network, and it can depend on the network structure as
well as on the synaptic types. In this study we consider the effect of
synaptic interactions (electrical
and chemical) as well as structural connectivity on synchronization transition
in network models of Izhikevich neurons which spike regularly with beta
rhythms. We find a wide range of behaviors, including continuous transitions,
explosive transitions, and a lack of global order. Stronger electrical
synapses are more conducive to synchronization and can even lead to explosive
synchronization. The key network element that determines the order of the
transition is found to be the clustering coefficient, not the small-world
effect or the existence of hubs in the network. These results are in contrast to
previous results which use phase oscillator models such as the Kuramoto model.
Furthermore, we show that the pattern of synchronization changes when one
moves to the gamma band. We attribute this change to the refractory period of
Izhikevich neurons, which varies significantly with firing frequency.
Comment: 7 figures, 1 table
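The Izhikevich model underlying this study is compact enough to reproduce directly. Below is a minimal sketch for a single regularly spiking neuron, using the standard RS parameters (a=0.02, b=0.2, c=-65, d=8); the drive current `I` is an assumption, to be tuned so the firing rate falls in the beta band (roughly 13-30 Hz):

```python
def izhikevich_spike_times(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0,
                           T=1000.0, dt=0.25):
    """Euler integration of the Izhikevich model; returns spike times in ms.
    dv/dt = 0.04 v^2 + 5 v + 140 - u + I,   du/dt = a (b v - u),
    with reset v -> c, u -> u + d whenever v reaches 30 mV."""
    v, u = -65.0, b * (-65.0)
    spikes = []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:               # spike cutoff
            spikes.append(step * dt)
            v, u = c, u + d         # membrane reset and recovery jump
    return spikes

# Firing rate over 1 s of simulated time; adjust I to target the beta band.
rate_hz = len(izhikevich_spike_times())
```

The same two-variable dynamics, with different (a, b, c, d), covers the bursting and fast-spiking regimes the network study draws on.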
Channel noise effects on neural synchronization
Synchronization in neural networks is strongly tied to the implementation of
cognitive processes, but abnormal neuronal synchronization has been linked to a
number of brain disorders such as epilepsy and schizophrenia. Here we examine
the effects of channel noise on the synchronization of small Hodgkin-Huxley
neuronal networks. The principal feature of a Hodgkin-Huxley neuron is the
existence of protein channels that transition between open and closed states
with voltage dependent rate constants. The Hodgkin-Huxley model assumes
infinitely many channels, so fluctuations in the number of open channels do not
affect the voltage. However, real neurons have finitely many channels, and the
resulting fluctuations in the number of open channels perturb the membrane
voltage and modify the timing of the spikes, which may in turn lead to large
changes in the degree of
synchronization. We demonstrate that under mild conditions, neurons in the
network reach a steady state synchronization level that depends only on the
number of neurons in the network. The channel noise only affects the time it
takes to reach the steady-state synchronization level.
Comment: 7 figures
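The finite-channel effect described above can be illustrated with a generic two-state (open/closed) channel ensemble: the open fraction fluctuates around its equilibrium value with a standard deviation that shrinks roughly as 1/sqrt(N), recovering the deterministic Hodgkin-Huxley limit as N grows. The rates `alpha` and `beta` below are placeholders, not actual voltage-dependent HH gating rates:

```python
import numpy as np

def open_fraction_trace(n_channels, alpha=0.1, beta=0.1,
                        steps=4000, dt=0.1, seed=0):
    """Markov simulation of n_channels two-state channels.
    alpha: closed->open rate, beta: open->closed rate (per unit time).
    Returns the time series of the fraction of open channels."""
    rng = np.random.default_rng(seed)
    n_open = n_channels // 2
    trace = np.empty(steps)
    for t in range(steps):
        opened = rng.binomial(n_channels - n_open, alpha * dt)
        closed = rng.binomial(n_open, beta * dt)
        n_open += opened - closed
        trace[t] = n_open / n_channels
    return trace

# Channel noise shrinks with channel count; the infinite-channel limit is the
# deterministic open fraction alpha / (alpha + beta).
noisy = open_fraction_trace(100)
smooth = open_fraction_trace(10000)
```

In a full stochastic Hodgkin-Huxley simulation the same fluctuations enter the membrane equation through the potassium and sodium conductances, jittering spike times as the abstract describes.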
Storage of phase-coded patterns via STDP in fully-connected and sparse network: a study of the network capacity
We study the storage and retrieval of phase-coded patterns as stable
dynamical attractors in recurrent neural networks, for both an analog and an
integrate-and-fire spiking model. The synaptic strength is determined by a
learning rule based on spike-time-dependent plasticity, with an asymmetric time
window depending on the relative timing between pre- and post-synaptic
activity. We store multiple patterns and study the network capacity.
For the analog model, we find that the network capacity scales linearly with
the network size, and that both capacity and the oscillation frequency of the
retrieval state depend on the asymmetry of the learning time window. In
addition to fully-connected networks, we study sparse networks, where each
neuron is connected only to a small number z << N of other neurons. Connections
can be short range, between neighboring neurons placed on a regular lattice, or
long range, between randomly chosen pairs of neurons. We find that a small
fraction of long range connections is able to amplify the capacity of the
network. This implies that a small-world network topology is optimal, as a
compromise between the cost of long-range connections and the capacity
increase.
The crucial result of storage and retrieval of multiple phase-coded patterns
is also observed in the spiking integrate-and-fire model. The capacity of the
fully-connected spiking network is investigated, together with the relation
between the oscillation frequency of the retrieval state and the window
asymmetry.
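The learning rule described here pairs an asymmetric STDP time window with phase-coded spike times. A minimal sketch of how such a window turns one phase pattern into a coupling matrix is shown below; the kernel shape and all parameter values are illustrative assumptions, not the paper's exact rule:

```python
import numpy as np

def stdp_window(lag, a_plus=1.0, a_minus=0.85, tau_plus=20.0, tau_minus=20.0):
    """Asymmetric STDP kernel over the post-minus-pre spike-time lag:
    potentiation for pre-before-post (lag > 0), depression otherwise."""
    return np.where(lag > 0,
                    a_plus * np.exp(-lag / tau_plus),
                    -a_minus * np.exp(lag / tau_minus))

def store_phase_pattern(phases, period=100.0):
    """Couplings J_ij from one phase-coded pattern: neuron j spikes once per
    cycle at phase phases[j]; J_ij integrates the STDP kernel over the lag
    between the nearest pre- and post-synaptic spikes."""
    t = phases * period / (2.0 * np.pi)        # spike times within a cycle
    lag = t[:, None] - t[None, :]              # post minus pre
    # fold the lag into (-period/2, period/2] so nearest spikes dominate
    lag = (lag + period / 2.0) % period - period / 2.0
    J = stdp_window(lag)
    np.fill_diagonal(J, 0.0)                   # no self-coupling
    return J
```

Storing several patterns would sum such matrices; the asymmetry ratio `a_minus / a_plus` and the two time constants play the role of the window asymmetry whose effect on capacity and retrieval frequency the abstract reports.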
Intrinsic adaptation in autonomous recurrent neural networks
A massively recurrent neural network responds to input stimuli on the one
hand and, on the other, is autonomously active in the absence of sensory
inputs. Stimulus and information processing depend crucially on the qualia of
the autonomous-state dynamics of the ongoing neural activity. This default
neural activity may be dynamically structured in time and space, showing
regular, synchronized, bursting or chaotic activity patterns.
We study the influence of non-synaptic plasticity on the default dynamical
state of recurrent neural networks. The non-synaptic adaptation considered
acts on intrinsic neural parameters, such as the threshold and the gain, and
is driven by the optimization of the information entropy. In the presence of
the intrinsic adaptation processes, we observe three distinct and globally
attracting dynamical regimes: a regular synchronized, an overall chaotic, and
an intermittent bursting regime. The intermittent bursting regime is
characterized by intervals of regular flow, which are quite insensitive to
external stimuli, interspersed with chaotic bursts that respond sensitively to
input signals. We discuss these findings in the context of self-organized
information processing and critical brain dynamics.
Comment: 24 pages, 8 figures
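A concrete example of entropy-driven adaptation of gain and threshold is Triesch's intrinsic-plasticity rule for a sigmoidal firing-rate neuron, which performs stochastic gradient descent toward the maximum-entropy (exponential) output distribution at a fixed mean rate `mu`. This is a standard rule of the kind the abstract describes, not necessarily the authors' exact adaptation dynamics:

```python
import numpy as np

def ip_step(x, a, b, mu=0.2, eta=0.01):
    """One intrinsic-plasticity update for y = 1 / (1 + exp(-(a*x + b))).
    a is the gain, b the (negative) threshold; the update drives the output
    distribution toward an exponential with mean firing rate mu."""
    y = 1.0 / (1.0 + np.exp(-(a * x + b)))
    db = eta * (1.0 - (2.0 + 1.0 / mu) * y + y * y / mu)
    da = eta / a + db * x            # gain and threshold co-adapt
    return a + da, b + db, y

# Drive the neuron with Gaussian input; gain and threshold adapt so that the
# mean output rate settles near mu while the output entropy is maximized.
rng = np.random.default_rng(1)
a, b = 1.0, 0.0
rates = []
for x in rng.normal(0.0, 1.0, 20000):
    a, b, y = ip_step(x, a, b)
    rates.append(y)
mean_rate = float(np.mean(rates[-5000:]))
```

Embedding such per-neuron updates in a recurrent network is what produces the interplay between intrinsic adaptation and collective dynamics that the abstract studies.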