Membrane resonance enables stable and robust gamma oscillations
Neuronal mechanisms underlying beta/gamma oscillations (20-80 Hz) are not completely understood. Here, we show that in vivo beta/gamma oscillations in the cat visual cortex sometimes exhibit remarkably stable frequency even when inputs fluctuate dramatically. Enhanced frequency stability is associated with stronger oscillations measured in individual units and larger power in the local field potential. Simulations of neuronal circuitry demonstrate that membrane properties of inhibitory interneurons strongly determine the characteristics of emergent oscillations. Exploration of networks containing either integrator or resonator inhibitory interneurons revealed that: (i) Resonance, as opposed to integration, promotes robust oscillations with large power and stable frequency via a mechanism called RING (Resonance INduced Gamma); resonance favors synchronization by reducing phase delays between interneurons and imposes bounds on oscillation cycle duration; (ii) Stability of frequency and robustness of the oscillation also depend on the relative timing of excitatory and inhibitory volleys within the oscillation cycle; (iii) RING can reproduce characteristics of both Pyramidal INterneuron Gamma (PING) and INterneuron Gamma (ING), transcending such classifications; (iv) In RING, robust gamma oscillations are promoted by slow but are impaired by fast inputs. Results suggest that interneuronal membrane resonance can be an important ingredient for generation of robust gamma oscillations having stable frequency
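The band-pass behavior of a resonator membrane, as opposed to the low-pass behavior of an integrator, can be illustrated with a minimal linear membrane model. The sketch below is illustrative only; the parameter values are arbitrary and not taken from the paper. Adding a single slow current that opposes voltage deflections moves the peak of the subthreshold impedance away from zero frequency:

```python
import numpy as np

def impedance(freqs, g_res, C=1.0, g_leak=1.0, tau_res=10.0):
    """|Z(f)| of a linear membrane:  C dV/dt = -g_leak*V - g_res*w + I(t),
    with a slow resonant variable  tau_res dw/dt = V - w.
    Setting g_res = 0 recovers a plain leaky integrator.  Units arbitrary."""
    omega = 2.0*np.pi*np.asarray(freqs)
    Z = 1.0/(g_leak + 1j*omega*C + g_res/(1.0 + 1j*omega*tau_res))
    return np.abs(Z)

freqs = np.linspace(0.0, 1.0, 2001)
z_res = impedance(freqs, g_res=5.0)   # resonator: slow current opposes V
z_int = impedance(freqs, g_res=0.0)   # integrator: passive leak only

f_peak_res = freqs[np.argmax(z_res)]  # band-pass: peak at nonzero frequency
f_peak_int = freqs[np.argmax(z_int)]  # low-pass: peak at zero frequency
```

The nonzero impedance peak is what biases spiking toward a preferred cycle duration, which is the ingredient the abstract links to stable oscillation frequency.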
Decorrelation of neural-network activity by inhibitory feedback
Correlations in spike-train ensembles can seriously impair the encoding of
information by their spatio-temporal structure. An inevitable source of
correlation in finite neural networks is common presynaptic input to pairs of
neurons. Recent theoretical and experimental studies demonstrate that spike
correlations in recurrent neural networks are considerably smaller than
expected based on the amount of shared presynaptic input. By means of a linear
network model and simulations of networks of leaky integrate-and-fire neurons,
we show that shared-input correlations are efficiently suppressed by inhibitory
feedback. To elucidate the effect of feedback, we compare the responses of the
intact recurrent network and systems where the statistics of the feedback
channel is perturbed. The suppression of spike-train correlations and
population-rate fluctuations by inhibitory feedback can be observed both in
purely inhibitory and in excitatory-inhibitory networks. The effect is fully
understood by a linear theory and becomes already apparent at the macroscopic
level of the population averaged activity. At the microscopic level,
shared-input correlations are suppressed by spike-train correlations: In purely
inhibitory networks, they are canceled by negative spike-train correlations. In
excitatory-inhibitory networks, spike-train correlations are typically
positive. Here, the suppression of input correlations is not a result of the
mere existence of correlations between excitatory (E) and inhibitory (I)
neurons, but a consequence of a particular structure of correlations among the
three possible pairings (EE, EI, II)
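The macroscopic part of this effect can be sketched with a one-dimensional linearized population-rate model (a toy stand-in for the paper's linear network theory; parameter values are assumptions, not the paper's). Inhibitory feedback adds an extra restoring force on the population rate, which shrinks its fluctuations relative to the open-loop system:

```python
import numpy as np

def rate_fluctuations(g_feedback, tau=10.0, sigma=1.0, dt=0.1,
                      steps=200_000, seed=0):
    """Variance of a linearized population rate a(t) driven by white noise,
    with negative feedback of gain g_feedback:
        da/dt = -(1/tau + g_feedback)*a + sigma*xi(t)."""
    rng = np.random.default_rng(seed)
    noise = np.sqrt(dt)*sigma*rng.normal(0.0, 1.0, steps)
    decay = 1.0 - dt*(1.0/tau + g_feedback)   # Euler decay factor per step
    a, trace = 0.0, np.empty(steps)
    for t in range(steps):
        a = decay*a + noise[t]
        trace[t] = a
    return trace.var()

var_open     = rate_fluctuations(0.0, seed=1)   # feedback channel cut
var_feedback = rate_fluctuations(0.9, seed=2)   # inhibitory feedback intact
```

Analytically the stationary variance is sigma^2 / (2*(1/tau + g_feedback)), so any positive feedback gain suppresses population-rate fluctuations, mirroring the comparison between the intact network and the perturbed feedback channel.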
State-Dependent Computation Using Coupled Recurrent Networks
Although conditional branching between possible behavioral states is a hallmark of intelligent behavior, very little is known about the neuronal mechanisms that support this processing. In a step toward solving this problem, we demonstrate by theoretical analysis and simulation how
networks of richly interconnected neurons, such as those observed in the superficial layers of the neocortex, can embed reliable, robust finite state machines. We show how a multistable neuronal network containing a number of states can be created very simply by coupling two recurrent
networks whose synaptic weights have been configured for soft winner-take-all (sWTA) performance. These two sWTAs have simple, homogeneous, locally recurrent connectivity except for a small fraction of recurrent cross-connections between them, which are used to embed the required states. This coupling between the maps allows the network to continue to express the current state even after the input that elicited that state is withdrawn. In addition, a small number of transition neurons implement the necessary input-driven transitions between the embedded states. We provide simple rules to systematically design and construct neuronal state machines of this kind. The significance of our finding is that it offers a method whereby the cortex could construct networks supporting a broad range of sophisticated processing by applying only small specializations to the same generic neuronal circuit
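The state-holding ingredient can be sketched with a two-unit rate-based WTA (a deliberately minimal caricature of the paper's coupled sWTA maps; weights and the saturating nonlinearity are assumptions). Self-excitation stronger than shared inhibition lets the winning unit sustain itself after its input is removed:

```python
import numpy as np

def step(x, inputs, w_exc=2.0, w_inh=0.5, dt=0.1):
    """One Euler step of a 2-unit winner-take-all rate network: each unit
    excites itself, all units share a common inhibitory signal, and rates
    saturate at 1 (the saturation makes the winner state self-sustaining)."""
    drive = w_exc*x - w_inh*x.sum() + inputs
    return x + dt*(-x + np.clip(drive, 0.0, 1.0))

x = np.zeros(2)
# Phase 1: external input selects unit 0 as the winner.
for _ in range(500):
    x = step(x, np.array([1.0, 0.0]))
state_with_input = x.copy()
# Phase 2: input withdrawn -- recurrent excitation holds the selected state.
for _ in range(2000):
    x = step(x, np.zeros(2))
state_after = x.copy()
```

A finite state machine in the paper's sense then needs several such attractors plus transition neurons that push activity between them; this sketch shows only the persistence of a single embedded state.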
Emergence of Functional Specificity in Balanced Networks with Synaptic Plasticity
In rodent visual cortex, synaptic connections between orientation-selective neurons are unspecific at the time of eye opening, and become functionally specific, to some degree, only later during development. An explanation for this two-stage process was proposed in terms of Hebbian plasticity based on visual experience that would eventually enhance connections between neurons with similar response features. For this to work, however, two conditions must be satisfied: First, orientation-selective neuronal responses must exist before specific recurrent synaptic connections can be established. Second, Hebbian learning must be compatible with the recurrent network dynamics contributing to orientation selectivity, and the resulting specific connectivity must remain stable under unspecific background activity. Previous studies have mainly focused on very simple models, where the receptive fields of neurons were essentially determined by feedforward mechanisms, and where the recurrent network was small, lacking the complex recurrent dynamics of large-scale networks of excitatory and inhibitory neurons. Here we studied the emergence of functionally specific connectivity in large-scale recurrent networks with synaptic plasticity. Our results show that balanced random networks, which already exhibit highly selective responses at eye opening, can develop feature-specific connectivity if appropriate rules of synaptic plasticity are invoked within and between excitatory and inhibitory populations. If these conditions are met, the initial orientation selectivity guides the process of Hebbian learning and, as a result, functionally specific connectivity and a surplus of bidirectional connections emerge.
Our results thus demonstrate the cooperation of synaptic plasticity and recurrent dynamics in large-scale functional networks with realistic receptive fields, highlight the role of inhibition as a critical element in this process, and pave the way for further computational studies of sensory processing in neocortical network models equipped with synaptic plasticity
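The core logic, initial selectivity guiding Hebbian learning toward feature-specific weights, can be sketched with a toy rate model (tuning curves, learning rate, and the plain correlation-based rule below are all assumptions, far simpler than the paper's balanced spiking network). Units that prefer similar orientations respond together and therefore accumulate stronger mutual weights:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 40
pref = np.linspace(0.0, np.pi, n, endpoint=False)   # preferred orientations
w = np.zeros((n, n))                                # start unspecific

eta = 1e-3
for _ in range(2000):
    theta = rng.uniform(0.0, np.pi)                 # random grating stimulus
    r = np.exp(2.0*(np.cos(2.0*(pref - theta)) - 1.0))  # tuned responses
    w += eta*np.outer(r, r)                         # Hebbian: co-active pairs grow
np.fill_diagonal(w, 0.0)

dpref = np.abs(pref[:, None] - pref[None, :])
dpref = np.minimum(dpref, np.pi - dpref)            # circular difference
w_similar    = w[(dpref > 0) & (dpref < np.pi/8)].mean()
w_dissimilar = w[dpref > 3*np.pi/8].mean()
```

What the toy omits is exactly the paper's contribution: recurrent excitatory-inhibitory dynamics, stability under background activity, and plasticity rules for the inhibitory population.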
A mean-field model for conductance-based networks of adaptive exponential integrate-and-fire neurons
Voltage-sensitive dye imaging (VSDi) has revealed fundamental properties of
neocortical processing at mesoscopic scales. Since VSDi signals report the
average membrane potential, it seems natural to use a mean-field formalism to
model such signals. Here, we investigate a mean-field model of networks of
Adaptive Exponential (AdEx) integrate-and-fire neurons, with conductance-based
synaptic interactions. The AdEx model can capture the spiking response of
different cell types, such as regular-spiking (RS) excitatory neurons and
fast-spiking (FS) inhibitory neurons. We use a Master Equation formalism,
together with a semi-analytic approach to the transfer function of AdEx
neurons. We compare the predictions of this mean-field model to simulated
networks of RS-FS cells, first at the level of the spontaneous activity of the
network, which is well predicted by the mean-field model. Second, we
investigate the response of the network to time-varying external input, and
show that the mean-field model accurately predicts the response time course of
the population. One notable exception was that the "tail" of the response at
long times was not well predicted, because the mean-field does not include
adaptation mechanisms. We conclude that the Master Equation formalism can yield
mean-field models that predict well the behavior of nonlinear networks with
conductance-based interactions and various electrophysiological properties, and
should be a good candidate to model VSDi signals where both excitatory and
inhibitory neurons contribute.
Comment: 21 pages, 7 figures
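The AdEx model underlying this mean-field description is itself compact. The sketch below integrates a single AdEx neuron with the standard Brette & Gerstner (2005) parameter set; the step-current amplitude and simulation choices are my own assumptions, not values from this paper. The adaptation variable w grows with each spike, so inter-spike intervals lengthen over time (the same adaptation mechanism the abstract notes is missing from the mean-field "tail" of the response):

```python
import numpy as np

# Single AdEx neuron, forward-Euler integration.
C, gL, EL   = 281.0, 30.0, -70.6   # pF, nS, mV
VT, DT      = -50.4, 2.0           # mV: exponential threshold and slope factor
tau_w, a, b = 144.0, 4.0, 80.5     # ms, nS, pA: adaptation dynamics
V_reset, V_spike = -70.6, -40.0    # mV (detect spikes before exp overflow)

dt, T, I = 0.01, 500.0, 800.0      # ms, ms, pA (assumed step current)
V, w = EL, 0.0
spike_times = []
for i in range(int(T/dt)):
    dV = (-gL*(V - EL) + gL*DT*np.exp((V - VT)/DT) - w + I)/C
    dw = (a*(V - EL) - w)/tau_w
    V += dt*dV
    w += dt*dw
    if V >= V_spike:               # spike: reset voltage, bump adaptation
        V = V_reset
        w += b
        spike_times.append(i*dt)

isis = np.diff(spike_times)        # intervals lengthen: spike-frequency adaptation
```

The two-variable structure (fast V with an exponential spike-initiation term, slow w combining subthreshold coupling a and spike-triggered jumps b) is what lets AdEx capture both RS and FS cell types by changing only parameters.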
How adaptation currents change threshold, gain and variability of neuronal spiking
Many types of neurons exhibit spike rate adaptation, mediated by intrinsic
slow K+ currents, which effectively inhibit neuronal responses. How
these adaptation currents change the relationship between in-vivo like
fluctuating synaptic input, spike rate output and the spike train statistics,
however, is not well understood. In this computational study we show that an
adaptation current which primarily depends on the subthreshold membrane voltage
changes the neuronal input-output relationship (I-O curve) subtractively,
thereby increasing the response threshold. A spike-dependent adaptation current
alters the I-O curve divisively, thus reducing the response gain. Both types of
adaptation currents naturally increase the mean inter-spike interval (ISI), but
they can affect ISI variability in opposite ways. A subthreshold current always
causes an increase of variability while a spike-triggered current decreases
high variability caused by fluctuation-dominated inputs and increases low
variability when the average input is large. The effects on I-O curves match
those caused by synaptic inhibition in networks with asynchronous irregular
activity, for which we find subtractive and divisive changes caused by external
and recurrent inhibition, respectively. Synaptic inhibition, however, always
increases the ISI variability. We analytically derive expressions for the I-O
curve and ISI variability, which demonstrate the robustness of our results.
Furthermore, we show how the biophysical parameters of slow
K+ conductances contribute to the two different types of adaptation
currents and find that Ca2+-activated K+ currents are
effectively captured by a simple spike-dependent description, while
muscarine-sensitive or Na+-activated K+ currents show a
dominant subthreshold component.
Comment: 20 pages, 8 figures; Journal of Neurophysiology (in press)
Transient Information Flow in a Network of Excitatory and Inhibitory Model Neurons: Role of Noise and Signal Autocorrelation
We investigate the performance of sparsely-connected networks of
integrate-and-fire neurons for ultra-short term information processing. We
exploit the fact that the population activity of networks with balanced
excitation and inhibition can switch from an oscillatory firing regime to a
state of asynchronous irregular firing or quiescence depending on the rate of
external background spikes.
We find that in terms of information buffering the network performs best for
a moderate, non-zero amount of noise. Analogous to the phenomenon of
stochastic resonance, the performance decreases for higher and lower noise
levels. The optimal amount of noise corresponds to the transition zone between
a quiescent state and a regime of stochastic dynamics. This provides a
potential explanation of the role of non-oscillatory population activity in a
simplified model of cortical micro-circuits.
Comment: 27 pages, 7 figures, to appear in J. Physiology (Paris) Vol. 9
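The stochastic-resonance analogy can be demonstrated with the classic toy case of a threshold detector driven by a subthreshold signal (this is a generic textbook illustration, not the paper's network model; all numbers are assumptions). Too little noise and the threshold is never crossed; too much noise and crossings become signal-independent; in between, the output tracks the signal best:

```python
import numpy as np

n = 200_000
t = np.arange(n)
signal = 0.5*np.sin(2*np.pi*t/1000.0)   # subthreshold periodic signal
threshold = 1.0                          # the signal alone never crosses it

def detection_correlation(sigma, seed):
    """Correlation between a binary threshold detector's output and the
    signal, at background-noise level sigma."""
    rng = np.random.default_rng(seed)
    out = (signal + rng.normal(0.0, sigma, n) > threshold).astype(float)
    return np.corrcoef(out, signal)[0, 1]

c_low  = detection_correlation(0.2, seed=1)   # too little noise
c_mid  = detection_correlation(0.7, seed=2)   # intermediate: best tracking
c_high = detection_correlation(5.0, seed=3)   # too much noise
```

The non-monotonic dependence of c on sigma is the signature the abstract points to: optimal information buffering at a moderate, non-zero noise level.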
Computational paradigm for dynamic logic-gates in neuronal activity
In 1943 McCulloch and Pitts suggested that the brain is composed of reliable
logic-gates similar to the logic at the core of today's computers. This
framework had a limited impact on neuroscience, since neurons exhibit far
richer dynamics. Here we propose a new experimentally corroborated paradigm in
which the truth tables of the brain's logic-gates are time dependent, i.e.
dynamic logic-gates (DLGs). The truth tables of the DLGs depend on the history
of their activity and the stimulation frequencies of their input neurons. Our
experimental results are based on a procedure where conditioned stimulations
were enforced on circuits of neurons embedded within a large-scale network of
cortical cells in-vitro. We demonstrate that the underlying biological
mechanism is the unavoidable increase of neuronal response latencies to ongoing
stimulations, which imposes a nonuniform gradual stretching of network delays.
The limited experimental results are confirmed and extended by simulations and
theoretical arguments based on identical neurons with a fixed increase of the
neuronal response latency per evoked spike. We anticipate that our results will
lead to a better understanding of how well this computational paradigm accounts
for the brain's functionalities, and that they will require the development of
new systematic mathematical methods beyond those developed for traditional
Boolean algebra.
Comment: 32 pages, 14 figures, 1 table
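The latency-stretching mechanism can be caricatured with a coincidence-detector gate (all numbers below are hypothetical; the gate abstraction, the initial latencies, and the per-spike increments are my assumptions, not measured values). If the gate fires only when its two input spikes arrive within a fixed window, and each neuron's latency grows by a fixed amount per evoked spike at a neuron-specific rate, then the arrival-time gap drifts and the gate's response to the same input pattern eventually flips:

```python
import numpy as np

window = 1.0                  # ms: coincidence window of the gate neuron
lat = np.array([5.0, 5.2])    # ms: initial response latencies of the inputs
inc = np.array([0.01, 0.03])  # ms: latency increase per evoked spike
                              # (unequal rates mimic different stimulation
                              #  frequencies of the input neurons)

outputs = []
for trial in range(200):      # repeated (1,1) stimulation of both inputs
    outputs.append(abs(lat[0] - lat[1]) < window)  # gate output for (1,1)
    lat += inc                # nonuniform stretching of network delays

first_output, last_output = outputs[0], outputs[-1]
```

The truth-table entry for input (1,1) is True early on and False after many stimulations, which is the history-dependent, time-dependent behavior the abstract calls a dynamic logic-gate.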