Signal Propagation in Feedforward Neuronal Networks with Unreliable Synapses
In this paper, we systematically investigate both synfire propagation and
firing rate propagation in feedforward neuronal networks coupled in an
all-to-all fashion. In contrast to most earlier work, which considered only
reliable synaptic connections, we mainly examine the effects of
unreliable synapses on both types of neural activity propagation.
We first study networks composed of purely excitatory neurons. Our results show
that both the successful-transmission probability and the excitatory synaptic
strength strongly influence the propagation of these two types of neural
activity, and that appropriate tuning of these synaptic parameters enables the
network to support stable signal propagation. It is also found that noise has
significant but different impacts on these two types of propagation. Additive
Gaussian white noise tends to reduce the precision of the synfire
activity, whereas noise of appropriate intensity can enhance the
performance of firing rate propagation. Further simulations indicate that the
propagation dynamics of the considered neuronal network are determined not
simply by the average amount of neurotransmitter each neuron receives at a
given instant, but also, to a large extent, by the stochastic nature of
neurotransmitter release. Second, we compare our results with those obtained in
corresponding feedforward neuronal networks connected with reliable synapses
but in a random coupling fashion. We confirm that some differences can be
observed between these two feedforward neuronal network models. Finally,
we study the signal propagation in feedforward neuronal networks consisting of
both excitatory and inhibitory neurons, and demonstrate that inhibition also
plays an important role in signal propagation in the considered networks.
Comment: 33 pages, 16 figures; Journal of Computational Neuroscience (published)
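The unreliable-synapse mechanism described above can be sketched as a Bernoulli release model: each presynaptic spike is transmitted only with a fixed probability, so the postsynaptic drive fluctuates binomially even when the mean input is fixed. A minimal leaky integrate-and-fire sketch of one all-to-all feedforward layer (all parameter values are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not taken from the paper)
n_pre = 100           # presynaptic neurons, all-to-all onto one target cell
p_release = 0.5       # successful-transmission probability per spike
w = 0.3               # excitatory synaptic strength (mV per release)
tau_m, v_th, v_reset = 20.0, 15.0, 0.0   # membrane time constant (ms), mV
dt, steps = 0.1, 5000                    # 0.1 ms resolution, 500 ms total

# Presynaptic spikes: independent Poisson trains at 50 Hz
rate_hz = 50.0
pre_spikes = rng.random((steps, n_pre)) < rate_hz * dt * 1e-3

v, out_spikes = 0.0, 0
for t in range(steps):
    # Each arriving spike triggers transmitter release only with probability
    # p_release, so the drive fluctuates around its mean value
    released = rng.binomial(int(pre_spikes[t].sum()), p_release)
    v += dt * (-v / tau_m) + w * released
    if v >= v_th:
        out_spikes += 1
        v = v_reset

print("postsynaptic spikes in 500 ms:", out_spikes)
```

Sweeping `p_release` and `w` in such a sketch reproduces the qualitative point of the abstract: the mean drive alone does not fix the output, because release stochasticity changes the fluctuation statistics.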
Decorrelation of neural-network activity by inhibitory feedback
Correlations in spike-train ensembles can seriously impair the encoding of
information by their spatio-temporal structure. An inevitable source of
correlation in finite neural networks is common presynaptic input to pairs of
neurons. Recent theoretical and experimental studies demonstrate that spike
correlations in recurrent neural networks are considerably smaller than
expected based on the amount of shared presynaptic input. By means of a linear
network model and simulations of networks of leaky integrate-and-fire neurons,
we show that shared-input correlations are efficiently suppressed by inhibitory
feedback. To elucidate the effect of feedback, we compare the responses of the
intact recurrent network and systems in which the statistics of the feedback
channel are perturbed. The suppression of spike-train correlations and
population-rate fluctuations by inhibitory feedback can be observed both in
purely inhibitory and in excitatory-inhibitory networks. The effect is fully
captured by a linear theory and is already apparent at the macroscopic
level of the population-averaged activity. At the microscopic level,
shared-input correlations are suppressed by spike-train correlations: In purely
inhibitory networks, they are canceled by negative spike-train correlations. In
excitatory-inhibitory networks, spike-train correlations are typically
positive. Here, the suppression of input correlations is not a result of the
mere existence of correlations between excitatory (E) and inhibitory (I)
neurons, but a consequence of a particular structure of correlations among the
three possible pairings (EE, EI, II).
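The suppression mechanism can be sketched with a two-unit linear rate model: both units receive the same common drive plus private noise, and an inhibitory feedback term tracks their summed (population) activity, damping the shared fluctuations. A toy example, not the paper's model; all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

steps, tau, dt = 20000, 10.0, 1.0   # illustrative units
g_fb = 4.0                          # strength of shared inhibitory feedback

def simulate(feedback):
    """Two linear rate units with common input, private noise, and
    optional inhibitory feedback from their summed activity."""
    x = np.zeros(2)
    trace = np.zeros((steps, 2))
    for t in range(steps):
        common = rng.normal()            # shared presynaptic drive
        private = rng.normal(size=2)     # independent noise per unit
        inhib = feedback * x.sum()       # feedback tracks population activity
        x += dt / tau * (-x + common + private - inhib)
        trace[t] = x
    return trace

c_open = np.corrcoef(simulate(0.0).T)[0, 1]
c_closed = np.corrcoef(simulate(g_fb).T)[0, 1]
print(f"correlation without feedback: {c_open:.2f}, with feedback: {c_closed:.2f}")
```

In this sketch the feedback selectively damps the common mode (the summed activity), so the output correlation drops well below the shared-input level, and with strong feedback it can even turn negative, echoing the cancellation by negative spike-train correlations described above.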
Regulation of spike timing in visual cortical circuits
A train of action potentials (a spike train) can carry information in both the average firing rate and the pattern of spikes in the train. But can such a spike-pattern code be supported by cortical circuits? Neurons in vitro produce a spike pattern in response to the injection of a fluctuating current. However, cortical neurons in vivo are modulated by local oscillatory neuronal activity and by top-down inputs. In a cortical circuit, precise spike patterns thus reflect the interaction between internally generated activity and sensory information encoded by input spike trains. We review the evidence for precise and reliable spike timing in the cortex and discuss its computational role.
Memory replay in balanced recurrent networks
Complex patterns of neural activity appear during up-states in the neocortex and sharp waves in the hippocampus, including sequences that resemble those during prior behavioral experience. The mechanisms underlying this replay are not well understood. How can small synaptic footprints engraved by experience control large-scale network activity during memory retrieval and consolidation? We hypothesize that sparse and weak synaptic connectivity between Hebbian assemblies is boosted by pre-existing recurrent connectivity within them. To investigate this idea, we connect sequences of assemblies in randomly connected spiking neuronal networks with a balance of excitation and inhibition. Simulations and analytical calculations show that recurrent connections within assemblies allow for a fast amplification of signals that indeed reduces the required number of inter-assembly connections. Replay can be evoked by small sensory-like cues or emerge spontaneously from activity fluctuations. Global (potentially neuromodulatory) alterations of neuronal excitability can switch between network states that favor retrieval and consolidation.
Funding: BMBF 01GQ1001A (Bernstein Center for Computational Neuroscience Berlin, "Precision and Variability", subprojects A2, A3, A4, A8, B6, central project and professorship); BMBF 01GQ0972 (Bernstein Focus Learning, state dependence of learning, subprojects 2 and 3); BMBF 01GQ1201 (Learning and Memory in Balanced Systems); DFG 103586207 (GRK 1589: Processing of Sensory Information in Neuronal Systems).
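The amplification argument above can be illustrated with a one-variable linear rate model: an assembly with recurrent gain g settles at r = I / (1 - g), so recurrence multiplies a weak inter-assembly drive by 1 / (1 - g) and fewer feedforward connections suffice. A minimal sketch (all values illustrative, not from the paper):

```python
# Sketch of within-assembly amplification: a linear assembly with recurrent
# gain g_rec obeys tau * dr/dt = -r + g_rec * r + ff_input, whose fixed
# point is r = ff_input / (1 - g_rec) for g_rec < 1.
def steady_rate(ff_input, g_rec, tau=10.0, dt=0.1, steps=5000):
    r = 0.0
    for _ in range(steps):
        r += dt / tau * (-r + g_rec * r + ff_input)
    return r

weak_drive = 0.2   # drive arriving through sparse inter-assembly connections
print(steady_rate(weak_drive, g_rec=0.0))   # no recurrence: ~0.2
print(steady_rate(weak_drive, g_rec=0.8))   # with recurrence: ~1.0, a 5x boost
```

The same weak drive that barely moves an unconnected population is amplified fivefold by recurrence at g = 0.8, which is the sense in which pre-existing within-assembly connectivity can compensate for sparse inter-assembly wiring.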
Timing in the cerebellum : a matter of network inhibition
The motor functions of an animal require precisely timed and coordinated sequences of movements. The cerebellum is crucial for performing these functions with precision. To investigate cerebellar computations involved in precise motor movements, behavioral paradigms such as delay eyelid conditioning have been used. Delay eyelid conditioning trains an animal to close its eye in response to a previously neutral stimulus. The timing of the eyelid-closure responses suggests that the cerebellum is capable of keeping track of the elapsed time since the onset of the stimulus. This dissertation proposes a network mechanism for cerebellar timing based on biologically informed simulations of the cerebellum. In chapter 2, a simulation with over a million cells is described. This simulation approximates the observed cerebellar connectivity in several well-studied mammals. Graphics processing units (GPUs) provide the computational power necessary to perform this simulation at a practical speed. This chapter describes simulation algorithms that efficiently utilize GPUs. In
chapter 3, the simulation is used to explore cerebellar timing mechanisms. The lateral inhibition among cerebellar Golgi cells is observed to be a potential mechanism for robust timing. Lateral Golgi inhibition enables the simulation to better replicate animal eyelid-conditioning behavior for longer inter-stimulus intervals. In chapter 4, the emergent network mechanisms of lateral Golgi inhibition are analyzed by decomposing the network into its individual components. This component analysis demonstrates that nonreciprocal connectivity (where one Golgi cell inhibits another but does not receive inhibition in return) is useful for timing. Specifically, removing nonreciprocal connectivity greatly degrades the simulation's ability to keep track of time. This implies that the aforementioned component analyses are relevant to the emergent timing mechanisms of the network. Finally, in chapter 5, this dissertation discusses the relevance and limitations of the computational approach, biological predictions, and component analysis presented in previous chapters.
Oscillatory mechanisms for controlling information flow in neural circuits
Mammalian brains generate complex, dynamic structures of oscillatory activity, in which
distributed regions transiently engage in coherent oscillation, often at specific stages in behavioural
or cognitive tasks. Much is now known about the dynamics underlying local circuit
synchronisation and the phenomenology of where and when such activity occurs. While
oscillations have been implicated in many high-level processes, for most such phenomena we
cannot say with confidence precisely what they are doing at an algorithmic or implementational
level. This thesis presents work towards understanding the dynamics and possible function of
large-scale oscillatory network activity. We first address the question of how coherent oscillatory activity
emerges between local networks by measuring phase response curves of an oscillating network in
vitro. The network phase response curves provide mechanistic insight into inter-region
synchronisation of local network oscillators. Highly simplified firing models are shown to
reproduce the experimental data with remarkable accuracy. We then focus on one hypothesised
computational function of network oscillations: flexibly controlling the gain of signal flow between
anatomically connected networks. We investigate coding strategies and algorithmic operations that
support flexible control of signal flow by oscillations, and their implementation by network
dynamics. We identify two readout algorithms that selectively recover population-rate-coded
signals with specific oscillatory modulations while ignoring other distracting inputs. By designing a
spiking network model that implements one of these mechanisms, we demonstrate oscillatory
control of signal flow in convergent pathways. We then investigate constraints on the structures of
oscillatory activity that can be used to accurately and selectively control signal flow. Our results
suggest that for inputs to be accurately distinguished from one another their oscillatory modulations
must be close to orthogonal. This has implications for interpreting in vivo oscillatory activity, and
may be an organising principle for the spatio-temporal structure of brain oscillations.
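The near-orthogonality requirement can be illustrated with a toy demodulation readout: two rate-coded signals share a convergent pathway, each multiplied by its own oscillatory carrier, and correlating the mixture with a matched carrier recovers one signal while the other averages out. A numpy sketch under illustrative assumptions (constant signals, 20 Hz sinusoidal carriers 90 degrees apart; none of this is taken from the thesis itself):

```python
import numpy as np

# Two population-rate signals share one pathway, each modulated by its own
# oscillatory carrier. The carriers are 90 degrees apart, hence orthogonal.
t = np.linspace(0, 1, 10000)
carrier_a = 1 + np.sin(2 * np.pi * 20 * t)               # 20 Hz modulation
carrier_b = 1 + np.sin(2 * np.pi * 20 * t + np.pi / 2)   # 90-degree shift

sig_a, sig_b = 0.7, 0.3                         # constant rate-coded signals
mixed = sig_a * carrier_a + sig_b * carrier_b   # convergent pathway

def demodulate(mixture, carrier):
    # Correlate with the zero-mean part of the matched carrier; any signal
    # riding on an orthogonal modulation averages out.
    ref = carrier - carrier.mean()
    return (mixture * ref).mean() / (ref ** 2).mean()

print(demodulate(mixed, carrier_a))   # ~0.7: recovers signal a
print(demodulate(mixed, carrier_b))   # ~0.3: recovers signal b
```

If the two carriers are made nearly parallel instead (a small phase offset), the two estimates bleed into each other, which is the sketch-level version of the claim that accurate selective routing requires near-orthogonal oscillatory modulations.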