How adaptation currents change threshold, gain and variability of neuronal spiking
Many types of neurons exhibit spike rate adaptation, mediated by intrinsic
slow K+-currents, which effectively inhibit neuronal responses. How
these adaptation currents change the relationship between in-vivo-like
fluctuating synaptic input, spike rate output and the spike train statistics,
however, is not well understood. In this computational study we show that an
adaptation current which primarily depends on the subthreshold membrane voltage
changes the neuronal input-output relationship (I-O curve) subtractively,
thereby increasing the response threshold. A spike-dependent adaptation current
alters the I-O curve divisively, thus reducing the response gain. Both types of
adaptation currents naturally increase the mean inter-spike interval (ISI), but
they can affect ISI variability in opposite ways. A subthreshold current always
causes an increase of variability while a spike-triggered current decreases
high variability caused by fluctuation-dominated inputs and increases low
variability when the average input is large. The effects on I-O curves match
those caused by synaptic inhibition in networks with asynchronous irregular
activity, for which we find subtractive and divisive changes caused by external
and recurrent inhibition, respectively. Synaptic inhibition, however, always
increases the ISI variability. We analytically derive expressions for the I-O
curve and ISI variability, which demonstrate the robustness of our results.
Furthermore, we show how the biophysical parameters of slow
K+-conductances contribute to the two different types of adaptation
currents and find that Ca2+-activated K+-currents are
effectively captured by a simple spike-dependent description, while
muscarine-sensitive or Na+-activated K+-currents show a
dominant subthreshold component.
Comment: 20 pages, 8 figures; Journal of Neurophysiology (in press)
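
To make the two mechanisms concrete, here is a minimal sketch of a leaky integrate-and-fire neuron with a subthreshold (voltage-coupled) and a spike-triggered adaptation current. All parameter values and the noise model are illustrative choices, not taken from the paper.

```python
import numpy as np

def simulate_alif(I_mean, I_sigma, a=0.0, b=0.0, T=20.0, dt=0.0005, seed=0):
    """Leaky integrate-and-fire neuron with an adaptation current w.
    a: subthreshold adaptation (couples w to the membrane voltage V);
    b: spike-triggered adaptation (w jumps by b at each spike).
    All parameter values are illustrative, not taken from the paper."""
    rng = np.random.default_rng(seed)
    tau_m, tau_w = 0.02, 0.2            # membrane / adaptation time constants (s)
    V_th, V_reset, E_L = 1.0, 0.0, 0.0  # threshold, reset, resting level
    V, w, n_spikes = E_L, 0.0, 0
    for step in range(int(T / dt)):
        I = I_mean + I_sigma * rng.standard_normal() / np.sqrt(dt)  # fluctuating drive
        V += dt * (-(V - E_L) - w + I) / tau_m
        w += dt * (a * (V - E_L) - w) / tau_w
        if V >= V_th:
            V = V_reset
            w += b                      # spike-triggered increment
            n_spikes += 1
    return n_spikes / T                 # mean spike rate (Hz)

# Per the abstract: subthreshold adaptation (a > 0) shifts the I-O curve
# subtractively, spike-triggered adaptation (b > 0) reduces its gain.
for label, kw in [("no adaptation  ", {}), ("subthreshold   ", {"a": 0.5}),
                  ("spike-triggered", {"b": 0.1})]:
    rates = [simulate_alif(I, 0.5, **kw) for I in (0.8, 1.2, 1.6)]
    print(label, " ".join(f"{r:6.1f}" for r in rates))
```
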
Decorrelation of neural-network activity by inhibitory feedback
Correlations in spike-train ensembles can seriously impair the encoding of
information by their spatio-temporal structure. An inevitable source of
correlation in finite neural networks is common presynaptic input to pairs of
neurons. Recent theoretical and experimental studies demonstrate that spike
correlations in recurrent neural networks are considerably smaller than
expected based on the amount of shared presynaptic input. By means of a linear
network model and simulations of networks of leaky integrate-and-fire neurons,
we show that shared-input correlations are efficiently suppressed by inhibitory
feedback. To elucidate the effect of feedback, we compare the responses of the
intact recurrent network and systems where the statistics of the feedback
channel are perturbed. The suppression of spike-train correlations and
population-rate fluctuations by inhibitory feedback can be observed both in
purely inhibitory and in excitatory-inhibitory networks. The effect is fully
understood by a linear theory and is already apparent at the macroscopic
level of the population-averaged activity. At the microscopic level,
shared-input correlations are suppressed by spike-train correlations: In purely
inhibitory networks, they are canceled by negative spike-train correlations. In
excitatory-inhibitory networks, spike-train correlations are typically
positive. Here, the suppression of input correlations is not a result of the
mere existence of correlations between excitatory (E) and inhibitory (I)
neurons, but a consequence of a particular structure of correlations among the
three possible pairings (EE, EI, II).
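
The core effect can be caricatured with a linear rate network in the spirit of the paper's linear theory: units driven by a common input stay strongly correlated when the feedback channel is cut, but decorrelate once inhibitory feedback is restored. The toy network below is a hedged sketch; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, p, g = 200, 0.1, 8.0      # network size, connection probability, inhibition strength
J = -(g / (p * N)) * (rng.random((N, N)) < p)   # random purely inhibitory coupling
np.fill_diagonal(J, 0.0)

dt, tau, T = 0.001, 0.01, 4000

def simulate(feedback):
    """Linear rate dynamics tau*dx/dt = -x + J@x + shared + private input."""
    x = np.zeros(N)
    xs = np.empty((T, N))
    for t in range(T):
        shared = rng.standard_normal()       # common presynaptic input to all units
        private = rng.standard_normal(N)     # independent input to each unit
        rec = J @ x if feedback else 0.0     # intact vs. cut feedback channel
        x = x + dt * (-x + rec + shared + private) / tau
        xs[t] = x
    return xs

for feedback in (False, True):
    xs = simulate(feedback)
    C = np.corrcoef(xs[T // 2:].T)               # pairwise activity correlations
    mean_corr = (C.sum() - N) / (N * (N - 1))    # average off-diagonal entry
    print(f"feedback {'on ' if feedback else 'off'}: mean correlation = {mean_corr:.3f}")
```
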
Maximum likelihood decoding of neuronal inputs from an interspike interval distribution
An expression for the probability distribution of the interspike interval
of a leaky integrate-and-fire (LIF) model neuron is rigorously derived,
based on recent theoretical developments in the theory of stochastic processes.
This enables us, for the first time, to develop
maximum likelihood estimates (MLEs) of the input information (e.g., afferent
rate and variance) of an LIF neuron from a set of recorded spike
trains. Dynamic inputs to pools of LIF neurons both with and without
interactions are efficiently and reliably decoded by applying the MLE,
even within time windows as short as 25 msec.
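
As a hedged illustration of the decoding idea: if the leak is neglected, the ISI density reduces to the inverse Gaussian first-passage-time density of drifted Brownian motion, and the input drift and noise can be recovered from recorded ISIs by maximizing that likelihood. The exact LIF density derived in the paper would replace the stand-in density used here.

```python
import numpy as np
from scipy.optimize import minimize

theta = 1.0   # distance from reset to threshold (assumed known)

def neg_log_lik(params, isis):
    """Negative log-likelihood of the ISIs under the first-passage-time
    (inverse Gaussian) density of drifted Brownian motion -- a simpler
    stand-in for the exact LIF ISI density derived in the paper."""
    mu, sigma = params
    if mu <= 0.0 or sigma <= 0.0:
        return np.inf
    t = isis
    log_pdf = (np.log(theta) - 0.5 * np.log(2.0 * np.pi * sigma**2 * t**3)
               - (theta - mu * t) ** 2 / (2.0 * sigma**2 * t))
    return -log_pdf.sum()

# Generate synthetic spike trains by simulating the diffusion to threshold,
# then recover the input parameters (mu = mean drive, sigma = input noise).
rng = np.random.default_rng(0)
mu_true, sigma_true = 2.0, 0.8

def sample_isi(dt=1e-3):
    x, t = 0.0, 0.0
    while x < theta:
        x += mu_true * dt + sigma_true * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t

isis = np.array([sample_isi() for _ in range(300)])
fit = minimize(neg_log_lik, x0=[1.0, 1.0], args=(isis,), method="Nelder-Mead")
print("estimated (mu, sigma):", fit.x.round(2), " true:", (mu_true, sigma_true))
```
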
Intrinsically-generated fluctuating activity in excitatory-inhibitory networks
Recurrent networks of non-linear units display a variety of dynamical regimes
depending on the structure of their synaptic connectivity. A particularly
remarkable phenomenon is the appearance of strongly fluctuating, chaotic
activity in networks of deterministic, but randomly connected rate units. How
this type of intrinsically generated fluctuation appears in more realistic
networks of spiking neurons has been a long-standing question. To ease the
comparison between rate and spiking networks, recent works investigated the
dynamical regimes of randomly connected rate networks with segregated
excitatory and inhibitory populations, and firing rates constrained to be
positive. These works derived general dynamical mean field (DMF) equations
describing the fluctuating dynamics, but solved these equations only in the
case of purely inhibitory networks. Using a simplified excitatory-inhibitory
architecture in which DMF equations are more easily tractable, here we show
that the presence of excitation qualitatively modifies the fluctuating activity
compared to purely inhibitory networks. In the presence of excitation,
intrinsically generated fluctuations induce a strong increase in mean firing
rates, a phenomenon that is much weaker in purely inhibitory networks.
Excitation moreover induces two different fluctuating regimes: for moderate
overall coupling, recurrent inhibition is sufficient to stabilize fluctuations;
for strong coupling, firing rates are stabilized solely by the upper bound
imposed on activity, even if inhibition is stronger than excitation. These
results extend to more general network architectures, and to rate networks
receiving noisy inputs mimicking spiking activity. Finally, we show that
signatures of the second dynamical regime appear in networks of
integrate-and-fire neurons.
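
A minimal sketch of the kind of rate model described above: randomly connected excitatory and inhibitory populations with a positive, saturating transfer function. Parameters are illustrative and the architecture is a simplification of the one analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
N_E, N_I = 160, 40                      # segregated excitatory / inhibitory units
N = N_E + N_I
J0, g = 4.0, 5.0                        # coupling strength, relative inhibition
J = np.zeros((N, N))
J[:, :N_E] = (J0 / np.sqrt(N)) * rng.random((N, N_E))        # E columns, positive
J[:, N_E:] = -(g * J0 / np.sqrt(N)) * rng.random((N, N_I))   # I columns, negative

phi_max = 2.0
def phi(x):
    """Positive, saturating rate transfer function (rectified tanh)."""
    return phi_max * np.clip(np.tanh(x), 0.0, None)

dt, tau, T = 0.01, 1.0, 5000
x = 0.1 * rng.standard_normal(N)
rates = np.empty((T, N))
for t in range(T):
    x += dt * (-x + J @ phi(x)) / tau   # deterministic dynamics, no external noise
    rates[t] = phi(x)

# Strong temporal fluctuations of single-unit rates together with a raised
# population mean are the signature of the E-I fluctuating regime above.
print("population mean rate:       ", rates[T // 2:].mean().round(3))
print("temporal std, example unit: ", rates[T // 2:, 0].std().round(3))
```
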
Transient Information Flow in a Network of Excitatory and Inhibitory Model Neurons: Role of Noise and Signal Autocorrelation
We investigate the performance of sparsely-connected networks of
integrate-and-fire neurons for ultra-short term information processing. We
exploit the fact that the population activity of networks with balanced
excitation and inhibition can switch from an oscillatory firing regime to a
state of asynchronous irregular firing or quiescence depending on the rate of
external background spikes.
We find that in terms of information buffering the network performs best for
a moderate, non-zero amount of noise. Analogous to the phenomenon of
stochastic resonance, the performance decreases for higher and lower noise
levels. The optimal amount of noise corresponds to the transition zone between
a quiescent state and a regime of stochastic dynamics. This provides a
potential explanation of the role of non-oscillatory population activity in a
simplified model of cortical micro-circuits.
Comment: 27 pages, 7 figures, to appear in J. Physiology (Paris) Vol. 9
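
The noise dependence described above is analogous to classical stochastic resonance, which the following generic sketch illustrates with a bare threshold unit rather than the paper's network model: detection of a subthreshold signal peaks at a moderate noise level.

```python
import numpy as np

rng = np.random.default_rng(3)
dt = 0.001
t = np.arange(0.0, 20.0, dt)
signal = 0.8 * np.sin(2.0 * np.pi * 1.0 * t)   # subthreshold 1 Hz signal

def buffering_quality(noise_std):
    """Correlation between the slow signal and the smoothed output of a simple
    threshold unit: too little noise yields no events, too much drowns the
    signal, so performance peaks at a moderate noise level."""
    x = signal + noise_std * rng.standard_normal(t.size)
    events = (x > 1.0).astype(float)             # threshold above signal amplitude
    if events.sum() < 2:
        return 0.0
    rate = np.convolve(events, np.ones(200) / 200.0, mode="same")  # 200 ms smoothing
    return np.corrcoef(rate, signal)[0, 1]

for sigma in (0.02, 0.1, 0.3, 1.0, 3.0):
    print(f"noise std {sigma:4.2f}: signal correlation = {buffering_quality(sigma):.2f}")
```
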
A computational study on altered theta-gamma coupling during learning and phase coding
There is considerable interest in the role of coupling between theta and gamma oscillations in the brain in the context of learning and memory. Here we have used a neural network model capable of producing coupling of theta phase to gamma amplitude, first to explore its ability to reproduce reported learning-related changes and second to investigate memory-span and phase-coding effects. The spiking neural network incorporates two kinetically different GABA-A receptor-mediated currents to generate both theta and gamma rhythms, and we have found that by selective alteration of both NMDA receptors and GABA-A,slow receptors it can reproduce learning-related changes in the strength of coupling between theta and gamma, either with or without coincident changes in theta amplitude. When the model was used to explore the relationship between theta and gamma oscillations, working memory capacity and phase coding, it showed that the potential storage capacity of short-term memories, in terms of nested gamma subcycles, coincides with the maximal theta power. Increasing theta power is also related to the precision of theta phase, which functions as a potential timing clock for neuronal firing in the cortex or hippocampus.
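
One standard way to quantify such coupling (not necessarily the measure used in this study) is the Tort-style modulation index, computed from the theta-phase distribution of gamma amplitude; here is a sketch on synthetic data.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0
t = np.arange(0.0, 10.0, 1.0 / fs)
theta_wave = np.sin(2.0 * np.pi * 6.0 * t)                 # 6 Hz theta
gamma_amp = 0.5 * (1.0 + np.cos(2.0 * np.pi * 6.0 * t))    # amplitude locked to theta
noise = 0.1 * np.random.default_rng(4).standard_normal(t.size)
lfp = theta_wave + 0.3 * gamma_amp * np.sin(2.0 * np.pi * 40.0 * t) + noise

def bandpass(x, lo, hi):
    b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

phase = np.angle(hilbert(bandpass(lfp, 4, 8)))             # theta phase
amp = np.abs(hilbert(bandpass(lfp, 30, 50)))               # gamma amplitude envelope

# Modulation index: normalized KL divergence of the phase-binned amplitude
# distribution from the uniform distribution (Tort et al. style).
bins = np.linspace(-np.pi, np.pi, 19)
mean_amp = np.array([amp[(phase >= lo) & (phase < hi)].mean()
                     for lo, hi in zip(bins[:-1], bins[1:])])
p = mean_amp / mean_amp.sum()
mi = (np.log(len(p)) + (p * np.log(p)).sum()) / np.log(len(p))
print(f"theta-gamma modulation index: {mi:.3f}")
```
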
Learning to Recognize Actions from Limited Training Examples Using a Recurrent Spiking Neural Model
A fundamental challenge in machine learning today is to build a model that
can learn from few examples. Here, we describe a reservoir based spiking neural
model for learning to recognize actions with a limited number of labeled
videos. First, we propose a novel encoding, inspired by how microsaccades
influence visual perception, to extract spike information from raw video data
while preserving the temporal correlation across different frames. Using this
encoding, we show that the reservoir generalizes its rich dynamical activity
toward signature actions/movements, enabling it to learn from few training
examples. We evaluate our approach on the UCF-101 dataset. Our experiments
demonstrate that our proposed reservoir achieves 81.3%/87% Top-1/Top-5
accuracy, respectively, on the 101-class data while requiring just 8 video
examples per class for training. Our results establish a new benchmark for
action recognition from limited video examples for spiking neural models while
yielding competitive accuracy with respect to state-of-the-art non-spiking
neural models.
Comment: 13 figures (includes supplementary information)
Six networks on a universal neuromorphic computing substrate
In this study, we present a highly configurable neuromorphic computing substrate and use it for emulating several types of neural networks. At the heart of this system lies a mixed-signal chip, with analog implementations of neurons and synapses and digital transmission of action potentials. Major advantages of this emulation device, which has been explicitly designed as a universal neural network emulator, are its inherent parallelism and high acceleration factor compared to conventional computers. Its configurability allows the realization of almost arbitrary network topologies and the use of widely varied neuronal and synaptic parameters. Fixed-pattern noise inherent to analog circuitry is reduced by calibration routines. An integrated development environment allows neuroscientists to operate the device without any prior knowledge of neuromorphic circuit design. As a showcase for the capabilities of the system, we describe the successful emulation of six different neural networks that cover a broad spectrum of both structure and functionality.
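
Substrates of this kind are commonly driven through the simulator-agnostic PyNN interface; the following is a minimal sketch of such a workflow, shown here with the NEST software backend as a stand-in for a hardware backend, with all parameter values illustrative.

```python
import pyNN.nest as sim   # swap in a hardware backend module to run on-chip

sim.setup(timestep=0.1)

# Small excitatory-inhibitory network with conductance-based neurons.
exc = sim.Population(80, sim.IF_cond_exp(tau_m=20.0, v_thresh=-55.0))
inh = sim.Population(20, sim.IF_cond_exp(tau_m=20.0, v_thresh=-55.0))
noise = sim.Population(80, sim.SpikeSourcePoisson(rate=3000.0))

conn = sim.FixedProbabilityConnector(p_connect=0.1)
sim.Projection(exc, inh, conn, sim.StaticSynapse(weight=0.004),
               receptor_type="excitatory")
sim.Projection(inh, exc, conn, sim.StaticSynapse(weight=0.02),
               receptor_type="inhibitory")
sim.Projection(noise, exc, sim.OneToOneConnector(),
               sim.StaticSynapse(weight=0.002), receptor_type="excitatory")

exc.record("spikes")
sim.run(1000.0)   # biological milliseconds; the hardware runs highly accelerated
spiketrains = exc.get_data().segments[0].spiketrains
print("mean rate (Hz):", sum(len(st) for st in spiketrains) / len(spiketrains))
sim.end()
```
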
Stochasticity from function -- why the Bayesian brain may need no noise
An increasing body of evidence suggests that the trial-to-trial variability
of spiking activity in the brain is not mere noise, but rather the reflection
of a sampling-based encoding scheme for probabilistic computing. Since the
precise statistical properties of neural activity are important in this
context, many models assume an ad-hoc source of well-behaved, explicit noise,
either on the input or on the output side of single neuron dynamics, most often
assuming an independent Poisson process in either case. However, these
assumptions are somewhat problematic: neighboring neurons tend to share
receptive fields, rendering both their input and their output correlated; at
the same time, neurons are known to behave largely deterministically, as a
function of their membrane potential and conductance. We suggest that spiking
neural networks may, in fact, have no need for noise to perform sampling-based
Bayesian inference. We study analytically the effect of auto- and
cross-correlations in functionally Bayesian spiking networks and demonstrate
how their effect translates to synaptic interaction strengths, rendering them
controllable through synaptic plasticity. This allows even small ensembles of
interconnected deterministic spiking networks to simultaneously and
co-dependently shape their output activity through learning, enabling them to
perform complex Bayesian computation without any need for noise, which we
demonstrate in silico, both in classical simulation and in neuromorphic
emulation. These results close a gap between the abstract models and the
biology of functionally Bayesian spiking networks, effectively reducing the
architectural constraints imposed on physical neural substrates required to
perform probabilistic computing, be they biological or artificial.
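
For context, the explicit-noise baseline that the paper argues can be dispensed with looks like this: binary "neurons" sampling from a Boltzmann distribution via logistic firing probabilities, with the randomness supplied by an explicit RNG rather than by deterministic network interactions.

```python
import numpy as np

rng = np.random.default_rng(6)
K = 5
W = 0.5 * rng.standard_normal((K, K))
W = (W + W.T) / 2.0            # symmetric couplings
np.fill_diagonal(W, 0.0)
b = 0.2 * rng.standard_normal(K)

def gibbs_sample(n_steps):
    """Neural sampling from p(z) ~ exp(z'Wz/2 + b'z): each binary 'neuron'
    fires with a logistic probability of its membrane potential -- the
    explicit-noise scheme many sampling models assume."""
    z = rng.integers(0, 2, K)
    samples = np.empty((n_steps, K))
    for t in range(n_steps):
        for k in range(K):
            u = W[k] @ z + b[k]                        # 'membrane potential' of unit k
            z[k] = rng.random() < 1.0 / (1.0 + np.exp(-u))
        samples[t] = z
    return samples

samples = gibbs_sample(20000)
print("sampled marginals:", samples.mean(axis=0).round(2))
```
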