17 research outputs found
Sparse and Dense Encoding in Layered Associative Network of Spiking Neurons
A synfire chain is a simple and widely studied neural network model that can
propagate stable synchronous spike volleys called pulse packets. However, how
multiple synfire chains coexist in one network remains to be elucidated. We have
studied the activity of a layered associative network of Leaky
Integrate-and-Fire neurons, in whose connections we embed memory patterns by
Hebbian learning.
We analyzed its activity by the Fokker-Planck method. In our previous report,
when half of the neurons belong to each memory pattern (a memory pattern rate
of one half), the temporal profile of the network activity splits into
temporally clustered groups called sublattices under certain input conditions.
In this study, we show that when the network is sparsely connected,
synchronous firing of the memory pattern is promoted. On the contrary, a
densely connected network inhibits synchronous firing. Sparseness and
denseness also affect the basin of attraction and the storage capacity of the
embedded memory patterns. We show that sparsely (densely) connected networks
enlarge (shrink) the basin of attraction and increase (decrease) the storage
capacity.
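To make the embedding concrete, here is a minimal sketch of storing binary memory patterns in a weight matrix with a Hebbian covariance rule and retrieving one pattern from a corrupted cue. This is not the paper's actual model (which uses spiking Leaky Integrate-and-Fire dynamics and a Fokker-Planck analysis); the sizes N, P and the pattern rate a are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100   # neurons per layer (illustrative)
P = 3     # number of embedded memory patterns (illustrative)
a = 0.5   # memory pattern rate: fraction of neurons in each pattern

# Binary memory patterns: each neuron joins each pattern with probability a.
patterns = (rng.random((P, N)) < a).astype(float)

# Hebbian covariance rule for the layer-to-layer connection weights.
W = np.zeros((N, N))
for xi in patterns:
    W += np.outer(xi - a, xi - a)
W /= N * a * (1.0 - a)

# Retrieve pattern 0 from a cue with roughly 10% of its entries flipped.
cue = patterns[0].copy()
flip = rng.random(N) < 0.1
cue[flip] = 1.0 - cue[flip]
retrieved = (W @ (cue - a) > 0).astype(float)   # one feedforward step
accuracy = np.mean(retrieved == patterns[0])
```

With these few patterns, the corrupted cue is cleaned up in a single layer-to-layer step; loading many more patterns, approaching the storage capacity, makes such retrieval fail.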
Signal Propagation in Feedforward Neuronal Networks with Unreliable Synapses
In this paper, we systematically investigate both the synfire propagation and
firing rate propagation in feedforward neuronal networks coupled in an
all-to-all fashion. In contrast to most earlier work, where only reliable
synaptic connections are considered, we mainly examine the effects of
unreliable synapses on both types of neural activity propagation in this work.
We first study networks composed of purely excitatory neurons. Our results show
that both the successful transmission probability and excitatory synaptic
strength largely influence the propagation of these two types of neural
activities, and better tuning of these synaptic parameters makes the considered
network support stable signal propagation. It is also found that noise has
significant but different impacts on these two types of propagation. The
additive Gaussian white noise has the tendency to reduce the precision of the
synfire activity, whereas noise with appropriate intensity can enhance the
performance of firing rate propagation. Further simulations indicate that the
propagation dynamics of the considered neuronal network is not simply
determined by the average amount of neurotransmitter received by each neuron
at a given time instant, but is also largely influenced by the stochastic effect of
neurotransmitter release. Second, we compare our results with those obtained in
corresponding feedforward neuronal networks connected with reliable synapses
but in a random coupling fashion. We confirm that some differences can be
observed in these two different feedforward neuronal network models. Finally,
we study the signal propagation in feedforward neuronal networks consisting of
both excitatory and inhibitory neurons, and demonstrate that inhibition also
plays an important role in signal propagation in the considered networks.
Comment: 33 pages, 16 figures; Journal of Computational Neuroscience (published)
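As a toy illustration of the role of release probability, consider an all-to-all feedforward chain in which every presynaptic spike reaches each postsynaptic neuron only with probability p_release. This is a sketch with made-up parameter values and simple threshold units, not the paper's neuron model:

```python
import numpy as np

def propagate(p_release, w=0.1, theta=2.0, n_layers=10, n=100, seed=1):
    """Fraction of neurons active in the last layer of an all-to-all
    feedforward chain with unreliable (probabilistic) synapses."""
    rng = np.random.default_rng(seed)
    active = np.ones(n, dtype=bool)              # first layer fully active
    for _ in range(n_layers - 1):
        n_spikes = int(active.sum())
        # Each presynaptic spike is transmitted to a given postsynaptic
        # neuron independently with probability p_release.
        transmitted = rng.binomial(n_spikes, p_release, size=n)
        active = w * transmitted > theta         # simple threshold neurons
    return active.mean()

reliable = propagate(p_release=0.5)   # mean drive 5.0 >> theta: stable
fragile  = propagate(p_release=0.1)   # mean drive 1.0 <  theta: activity dies
```

Raising either p_release or the synaptic strength w shifts the chain from the failing to the stable regime, mirroring the abstract's point that tuning these synaptic parameters determines whether the network supports stable signal propagation.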
Memory replay in balanced recurrent networks
Complex patterns of neural activity appear during up-states in the neocortex and sharp waves in the hippocampus, including sequences that resemble those during prior behavioral experience. The mechanisms underlying this replay are not well understood. How can small synaptic footprints engraved by experience control large-scale network activity during memory retrieval and consolidation? We hypothesize that sparse and weak synaptic connectivity between Hebbian assemblies is boosted by pre-existing recurrent connectivity within them. To investigate this idea, we connect sequences of assemblies in randomly connected spiking neuronal networks with a balance of excitation and inhibition. Simulations and analytical calculations show that recurrent connections within assemblies allow for a fast amplification of signals that indeed reduces the required number of inter-assembly connections. Replay can be evoked by small sensory-like cues or emerge spontaneously by activity fluctuations. Global (potentially neuromodulatory) alterations of neuronal excitability can switch between network states that favor retrieval and consolidation.
Funding: BMBF, 01GQ1001A, Verbundprojekt: Bernstein Zentrum für Computational Neuroscience, Berlin - "Präzision und Variabilität" - Teilprojekt A2, A3, A4, A8, B6, Zentralprojekt und Professur; BMBF, 01GQ0972, Verbundprojekt: Bernstein Fokus Lernen - Zustandsabhängigkeit des Lernens, TP 2 und 3; BMBF, 01GQ1201, Lernen und Gedächtnis in balancierten Systemen; DFG, 103586207, GRK 1589: Verarbeitung sensorischer Informationen in neuronalen Systeme
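The amplification argument can be caricatured in a few lines of rate-model code. All gains and the saturation at rate 1.0 below are illustrative assumptions, not the paper's balanced spiking network:

```python
import numpy as np

def replay(w_ff, w_rec, n_assemblies=5, cue=0.2):
    """Rate-model sketch: a weak cue ignites assembly 0; each assembly
    amplifies its input through recurrence (linear gain 1/(1 - w_rec),
    capped at a saturation rate of 1.0) before driving the next assembly."""
    rates = np.zeros(n_assemblies)
    rates[0] = cue
    for k in range(n_assemblies):
        rates[k] = min(1.0, rates[k] / (1.0 - w_rec))  # within-assembly boost
        if k + 1 < n_assemblies:
            rates[k + 1] = w_ff * rates[k]             # weak feedforward link
    return rates

with_recurrence    = replay(w_ff=0.3, w_rec=0.8)  # sequence replays fully
without_recurrence = replay(w_ff=0.3, w_rec=0.0)  # sequence dies out
```

With the same weak inter-assembly coupling, the sequence propagates only when within-assembly recurrence amplifies each stage, which is the sense in which recurrent connectivity reduces the required number of inter-assembly connections.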
Functional relevance of inhibitory and disinhibitory circuits in signal propagation in recurrent neuronal networks
Cell assemblies are considered to be physiological as well as functional units in
the brain. Repetitive and stereotypical sequential activation of many neurons has been
observed, but the mechanisms underlying it are not well understood. Feedforward networks,
such as synfire chains, with pools of excitatory neurons unidirectionally
connected and facilitating signal transmission in a cascade-like fashion, were proposed
to model such sequential activity. When embedded in a recurrent network, these were
shown to destabilise the whole network’s activity, challenging the suitability of the
model. Here, we investigate a feedforward chain of excitatory pools enriched by inhibitory
pools that provide disynaptic feedforward inhibition. We show that when
embedded in a recurrent network of spiking neurons, such an augmented chain is capable
of robust signal propagation. We then investigate the influence of overlap between
two chains on the signal transmission as well as on the stability of the host network. While
shared excitatory pools turn out to be detrimental to global stability, inhibitory overlap
implicitly realises the motif of lateral inhibition, which, if moderate, maintains
stability but, if substantial, silences the whole network activity, including the signal.
Addition of a disinhibitory pathway along the chain rescues the signal
transmission by transforming a strong inhibitory wave into a disinhibitory one, which
specifically guards the excitatory pools from receiving excessive inhibition and thereby
allows them to remain responsive to the forthcoming activation. Disinhibitory circuits
not only improve the signal transmission, but can also control it via a gating mechanism.
We demonstrate that by manipulating the firing threshold of the disinhibitory neurons,
the signal transmission can be enabled or completely blocked. This mechanism
corresponds to cholinergic modulation, which was shown to be signalled by volume
as well as phasic transmission and to variably target classes of neurons. Furthermore,
we show that modulation of the feedforward inhibition circuit can promote
spontaneous replay in the absence of external inputs. This mechanism, however, tends
to also cause global instabilities.
Overall, these results underscore the importance of inhibitory neuron populations
in controlling signal propagation in cell assemblies as well as global stability. Specific
inhibitory circuits, when controlled by neuromodulatory systems, can robustly guide or
block the signals and invoke replay. This adds to the evidence that the population of
interneurons is diverse and is best categorised by neurons' specific circuit functions
as well as by their responsiveness to neuromodulators.
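The gating idea can be sketched with a toy rate model in which all connection gains and the ReLU nonlinearity are illustrative assumptions, not the thesis's spiking network. A disinhibitory unit with a low threshold cancels the feedforward inhibition and lets the signal through; a high threshold leaves the inhibition intact and the signal is blocked:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def chain_response(theta_d, n_pools=5, signal=1.0):
    """Rate of the last excitatory pool in a feedforward chain with
    feedforward inhibition and a disinhibitory gate (toy rate model)."""
    r = signal
    for _ in range(n_pools):
        d = relu(r - theta_d)        # disinhibitory unit, threshold theta_d
        i = relu(1.5 * r - 2.0 * d)  # feedforward inhibition, suppressed by d
        r = relu(2.0 * r - i)        # next excitatory pool
    return r

gate_open   = chain_response(theta_d=0.0)  # disinhibition active: signal survives
gate_closed = chain_response(theta_d=5.0)  # disinhibition silent: signal decays
```

Changing a single parameter, the disinhibitory firing threshold, switches the chain between transmitting and blocking, which is the gating mechanism that the text attributes to cholinergic modulation.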
Noise Suppression and Surplus Synchrony by Coincidence Detection
The functional significance of correlations between action potentials of
neurons is still a matter of vivid debate. In particular, it is presently
unclear how much synchrony is caused by afferent synchronized events and how
much is intrinsic, due to the connectivity structure of cortex. The available
analytical approaches based on the diffusion approximation do not allow
modeling of spike synchrony, preventing a thorough analysis. Here we theoretically
investigate to what extent common synaptic afferents and synchronized inputs
each contribute to closely time-locked spiking activity of pairs of neurons. We
employ direct simulation and extend earlier analytical methods based on the
diffusion approximation to pulse-coupling, allowing us to introduce precisely
timed correlations in the spiking activity of the synaptic afferents. We
investigate the transmission of correlated synaptic input currents by pairs of
integrate-and-fire model neurons, so that the same input covariance can be
realized by common inputs or by spiking synchrony. We identify two distinct
regimes: In the limit of low correlation, linear perturbation theory accurately
determines the correlation transmission coefficient, which is typically smaller
than unity but increases sensitively even for weakly synchronous inputs. In
the limit of high afferent correlation, in the presence of synchrony, a
qualitatively new picture arises. As the non-linear neuronal response becomes
dominant, the output correlation becomes higher than the total correlation in
the input. This transmission coefficient larger than unity is a direct consequence
of non-linear neural processing in the presence of noise, elucidating how
synchrony-coded signals benefit from these generic properties present in
cortical networks.
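The sub-unity transmission in the low-correlation regime can be reproduced with a drastically simplified stand-in for the integrate-and-fire pair: two static threshold units driven by Gaussian inputs that share a fraction c_in of their variance. The threshold value and bin count below are illustrative assumptions:

```python
import numpy as np

def output_correlation(c_in, threshold=1.0, n_bins=200_000, seed=2):
    """Correlation between the binary 'spike' outputs of two threshold
    units whose Gaussian inputs share a fraction c_in of their variance."""
    rng = np.random.default_rng(seed)
    common = rng.standard_normal(n_bins)
    x1 = np.sqrt(c_in) * common + np.sqrt(1 - c_in) * rng.standard_normal(n_bins)
    x2 = np.sqrt(c_in) * common + np.sqrt(1 - c_in) * rng.standard_normal(n_bins)
    s1 = (x1 > threshold).astype(float)   # one binary spike decision per bin
    s2 = (x2 > threshold).astype(float)
    return np.corrcoef(s1, s2)[0, 1]

c_in = 0.2
c_out = output_correlation(c_in)
gain = c_out / c_in   # correlation transmission coefficient, here below unity
```

For common-input correlations of this kind the thresholding loses part of the input covariance, so the transmission coefficient stays below one; the paper's result is that precisely timed input synchrony, which this static sketch cannot represent, can push the coefficient above one.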
Form vs. Function: Theory and Models for Neuronal Substrates
The quest for endowing form with function represents the fundamental motivation behind all neural network modeling. In this thesis, we discuss various functional neuronal architectures and their implementation in silico, both on conventional computer systems and on neuromorphic devices. Necessarily, such casting to a particular substrate will constrain their form, either by requiring a simplified description of neuronal dynamics and interactions or by imposing physical limitations on important characteristics such as network connectivity or parameter precision. While our main focus lies on the computational properties of the studied models, we augment our discussion with rigorous mathematical formalism. We start by investigating the behavior of point neurons under synaptic bombardment and provide analytical predictions of single-unit and ensemble statistics. These considerations later become useful when moving to the functional network level, where we study the effects of an imperfect physical substrate on the computational properties of several cortical networks. Finally, we return to the single-neuron level to discuss a novel interpretation of spiking activity in the context of probabilistic inference through sampling. We provide analytical derivations for the translation of this ``neural sampling'' framework to networks of biologically plausible and hardware-compatible neurons, and later take this concept beyond the realm of brain science when we discuss applications in machine learning and analogies to solid-state systems.
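The ``neural sampling'' idea mentioned at the end can be illustrated with a minimal Gibbs sampler over two coupled binary units. The bias and weight values are illustrative assumptions, and the thesis concerns biologically plausible spiking dynamics rather than this abstract update rule: each unit switches on with a logistic probability of its summed input, and the long-run state frequencies match the Boltzmann distribution the network encodes:

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Two symmetrically coupled binary units with biases b and coupling W;
# the target distribution is p(z1, z2) ∝ exp(b1*z1 + b2*z2 + W*z1*z2).
b = np.array([0.5, -0.5])
W = 1.0

z = np.zeros(2)
counts = np.zeros((2, 2))
n_sweeps = 100_000
for _ in range(n_sweeps):
    for i in (0, 1):
        u = b[i] + W * z[1 - i]          # membrane-potential-like input
        z[i] = float(rng.random() < sigmoid(u))
    counts[int(z[0]), int(z[1])] += 1
empirical = counts / n_sweeps            # sampled state frequencies

# Exact Boltzmann probabilities for comparison.
logp = np.array([[b[0]*z1 + b[1]*z2 + W*z1*z2 for z2 in (0, 1)]
                 for z1 in (0, 1)])
exact = np.exp(logp) / np.exp(logp).sum()
```

The empirical frequencies converge to the exact Boltzmann probabilities; the thesis's contribution is to show how such sampling dynamics can be realized by networks of spiking, hardware-compatible neurons.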