Transient Information Flow in a Network of Excitatory and Inhibitory Model Neurons: Role of Noise and Signal Autocorrelation
We investigate the performance of sparsely connected networks of
integrate-and-fire neurons for ultra-short-term information processing. We
exploit the fact that the population activity of networks with balanced
excitation and inhibition can switch from an oscillatory firing regime to a
state of asynchronous irregular firing or quiescence depending on the rate of
external background spikes.
We find that, in terms of information buffering, the network performs best for
a moderate, non-zero amount of noise. Analogous to the phenomenon of
stochastic resonance, the performance decreases for higher and lower noise
levels. The optimal amount of noise corresponds to the transition zone between
a quiescent state and a regime of stochastic dynamics. This provides a
potential explanation for the role of non-oscillatory population activity in a
simplified model of cortical micro-circuits.
Comment: 27 pages, 7 figures, to appear in J. Physiology (Paris) Vol. 9
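As a rough illustration of the kind of model described above, the following sketch simulates a sparsely connected integrate-and-fire network with balanced excitation and inhibition, driven by an external Poisson background whose rate plays the role of the noise level. All parameter values and the simplified Euler update are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def simulate_lif_network(n=1000, frac_inh=0.2, p_conn=0.1, ext_rate=5.0,
                         j_exc=0.1, g=5.0, tau_m=20.0, v_th=20.0, v_reset=0.0,
                         dt=0.1, t_sim=500.0, seed=0):
    """Population activity of a sparse, balanced E/I integrate-and-fire network."""
    rng = np.random.default_rng(seed)
    n_inh = int(frac_inh * n)
    # Random sparse connectivity: excitatory weight j_exc, inhibitory weight -g*j_exc
    conn = (rng.random((n, n)) < p_conn).astype(float) * j_exc
    conn[:, -n_inh:] *= -g                        # last n_inh columns are inhibitory
    v = np.zeros(n)                               # membrane potentials (mV)
    steps = int(t_sim / dt)
    pop_activity = np.zeros(steps, dtype=int)
    for t in range(steps):
        spiked = v >= v_th
        pop_activity[t] = spiked.sum()
        recurrent = conn @ spiked.astype(float)   # input from neurons that just spiked
        v[spiked] = v_reset
        # External background: Poissonian spikes, rate in spikes per ms per neuron
        ext_input = j_exc * rng.poisson(ext_rate * dt, size=n)
        v += dt * (-v / tau_m) + recurrent + ext_input
    return pop_activity

# Sweeping the background rate moves the network between quiescence and
# irregular firing; the abstract reports that information buffering is best
# in the transition zone between these regimes.
for rate in (5.0, 10.0, 20.0):
    activity = simulate_lif_network(ext_rate=rate)
    print(f"background rate {rate:4.1f} spikes/ms -> mean spikes/step {activity.mean():.2f}")
```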
Noise-enhanced computation in a model of a cortical column
Varied sensory systems use noise in order to enhance detection of weak
signals. It has been conjectured in the literature that this effect, known as
stochastic resonance, may take place in central cognitive processes such as the
memory retrieval of arithmetical multiplication. We show in a simplified model
of cortical tissue, that complex arithmetical calculations can be carried out
and are enhanced in the presence of a stochastic background. The performance is
shown to be positively correlated to the susceptibility of the network, defined
as its sensitivity to a variation of the mean of its inputs. For nontrivial
arithmetic tasks such as multiplication, stochastic resonance is an emergent
property of the microcircuitry of the model network
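The cortical-column model itself is not reproduced here, but the basic stochastic-resonance effect the abstract builds on can be sketched with a bare threshold unit: a subthreshold periodic signal plus Gaussian noise is thresholded, and the correlation between the signal and the binary output peaks at an intermediate noise level. Signal amplitude, threshold, and noise levels below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 10000)
signal = 0.8 * np.sin(2 * np.pi * 1.0 * t)      # subthreshold: amplitude < threshold
threshold = 1.0

for sigma in (0.1, 0.3, 1.0, 3.0):
    noise = sigma * rng.standard_normal(t.size)
    output = (signal + noise > threshold).astype(float)
    # Correlation of the thresholded output with the weak input signal:
    # a simple proxy for how well the signal is transmitted.
    corr = np.corrcoef(signal, output)[0, 1]
    print(f"noise sigma {sigma:4.2f} -> signal/output correlation {corr:.3f}")
```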
Rescaling, thinning or complementing? On goodness-of-fit procedures for point process models and Generalized Linear Models
Generalized Linear Models (GLMs) are an increasingly popular framework for
modeling neural spike trains. They have been linked to the theory of stochastic
point processes, and researchers have used this relation to assess
goodness-of-fit using methods from point-process theory, e.g., the
time-rescaling theorem. However, high neural firing rates or coarse
discretization lead to a breakdown of the assumptions necessary for this
connection. Here, we show how goodness-of-fit tests from point-process theory
can still be applied to GLMs by constructing equivalent surrogate point
processes out of time-series observations. Furthermore, two additional tests
based on thinning and complementing point processes are introduced. They
augment the instruments available for checking model adequacy of point
processes as well as discretized models.
Comment: 9 pages, to appear in NIPS 2010 (Neural Information Processing
Systems), corrected missing reference
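As context for the point-process tests discussed above, here is a hedged sketch of the standard time-rescaling check (not the paper's surrogate, thinning, or complementing constructions): spikes are drawn from a known intensity, inter-spike intervals are rescaled by the integrated intensity, and the result is compared to the expected uniform distribution with a Kolmogorov-Smirnov statistic. All rates and durations are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 0.001                                        # 1 ms bins
t = np.arange(0.0, 60.0, dt)                      # 60 s of simulated time
lam = 20.0 + 15.0 * np.sin(2 * np.pi * 0.5 * t)   # true intensity (spikes/s)

# Bernoulli approximation of an inhomogeneous Poisson process
spikes = rng.random(t.size) < lam * dt
spike_idx = np.flatnonzero(spikes)

# Time rescaling with the (correctly specified) intensity: rescaled
# inter-spike intervals should be Exp(1), hence u should be Uniform(0, 1).
cum_intensity = np.cumsum(lam * dt)
tau = np.diff(cum_intensity[spike_idx])
u = np.sort(1.0 - np.exp(-tau))

# One-sample Kolmogorov-Smirnov statistic against the uniform CDF
n = u.size
ks_stat = max(np.max(np.arange(1, n + 1) / n - u), np.max(u - np.arange(n) / n))
print(f"{n} rescaled intervals, KS statistic {ks_stat:.4f} "
      f"(approx. 95% acceptance bound {1.36 / np.sqrt(n):.4f})")
```

With coarse bins or high firing rates, this Bernoulli discretization is exactly where the assumptions behind the time-rescaling theorem degrade, which is the breakdown the abstract addresses by constructing equivalent surrogate point processes.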
Nonlinear Hebbian learning as a unifying principle in receptive field formation
The development of sensory receptive fields has been modeled in the past by a
variety of approaches, including normative models such as sparse coding or
independent component analysis and bottom-up models such as spike-timing-dependent
plasticity or the Bienenstock-Cooper-Munro model of synaptic
plasticity. Here we show that the above variety of approaches can all be
unified into a single common principle, namely nonlinear Hebbian learning. When
nonlinear Hebbian learning is applied to natural images, receptive field shapes
are strongly constrained by the input statistics and preprocessing, but
exhibit only modest variation across different choices of nonlinearities in
neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse
network activity is necessary for the development of localized receptive
fields. The analysis of alternative sensory modalities such as auditory models
or V2 development leads to the same conclusions. In all examples, receptive
fields can be predicted a priori by reformulating an abstract model as
nonlinear Hebbian learning. Thus nonlinear Hebbian learning and natural
statistics can account for many aspects of receptive field formation across
models and sensory modalities.
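The rule itself can be written as Δw ∝ f(w·x)·x with a norm constraint on w. The sketch below applies this update with an assumed cubic nonlinearity to whitened synthetic data rather than natural images; it is only meant to show the mechanics of the update, not to reproduce the paper's receptive-field results.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: a sparse (Laplacian) source and a Gaussian source, linearly mixed
# and then whitened, standing in for preprocessed natural-image patches.
n_samples = 5000
sources = np.stack([rng.laplace(size=n_samples), rng.standard_normal(n_samples)])
mixing = np.array([[1.0, 0.6], [0.4, 1.0]])
x = mixing @ sources
x -= x.mean(axis=1, keepdims=True)
eigval, eigvec = np.linalg.eigh(np.cov(x))
x_white = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T @ x    # whitened input

# Nonlinear Hebbian learning: dw ∝ f(y) * x with f(y) = y**3, norm-constrained
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
eta = 1e-3
for epoch in range(10):
    for i in rng.permutation(n_samples):
        y = w @ x_white[:, i]
        w += eta * (y ** 3) * x_white[:, i]
        w /= np.linalg.norm(w)                   # keep ||w|| = 1

print("learned weight direction:", np.round(w, 3))
# With a sparse source present, the weight vector drifts toward the direction
# that an equivalent ICA/sparse-coding formulation would pick out.
```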
Biologically plausible deep learning -- but how far can we go with shallow networks?
Training deep neural networks with the error backpropagation algorithm is
considered implausible from a biological perspective. Numerous recent
publications suggest elaborate models for biologically plausible variants of
deep learning, typically defining success as reaching around 98% test accuracy
on the MNIST data set. Here, we investigate how far we can go on digit (MNIST)
and object (CIFAR10) classification with biologically plausible, local learning
rules in a network with one hidden layer and a single readout layer. The hidden
layer weights are either fixed (random or random Gabor filters) or trained with
unsupervised methods (PCA, ICA or Sparse Coding) that can be implemented by
local learning rules. The readout layer is trained with a supervised, local
learning rule. We first implement these models with rate neurons. This
comparison reveals, first, that unsupervised learning does not lead to better
performance than fixed random projections or Gabor filters for large hidden
layers. Second, networks with localized receptive fields perform significantly
better than networks with all-to-all connectivity and can reach backpropagation
performance on MNIST. We then implement two of the networks (with fixed,
localized random filters or random Gabor filters in the hidden layer) with
spiking leaky integrate-and-fire neurons and spike-timing-dependent plasticity
to train the readout layer. These spiking models achieve > 98.2% test accuracy on MNIST,
which is close to the performance of rate networks with one hidden layer
trained with backpropagation. The performance of our shallow network models is
comparable to most current biologically plausible models of deep learning.
Furthermore, our results with a shallow spiking network provide an important
reference and suggest the use of datasets other than MNIST for testing the
performance of future models of biologically plausible deep learning.
Comment: 14 pages, 4 figures
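A toy version of the "fixed random hidden layer plus local supervised readout" architecture can be sketched as follows. The synthetic two-class data, layer sizes, and the logistic delta-rule readout are assumptions for illustration; the paper's experiments use MNIST/CIFAR10 and specific local learning rules.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic two-class data: Gaussian clusters in 50 dimensions
n_per_class, dim = 500, 50
x = np.vstack([rng.standard_normal((n_per_class, dim)) + 1.0,
               rng.standard_normal((n_per_class, dim)) - 1.0])
labels = np.concatenate([np.zeros(n_per_class), np.ones(n_per_class)])

# Hidden layer: a fixed random projection followed by rectification (never trained)
n_hidden = 200
w_hidden = rng.standard_normal((dim, n_hidden)) / np.sqrt(dim)
hidden = np.maximum(x @ w_hidden, 0.0)

# Readout trained with a local delta rule: Δw ∝ (target - output) * presynaptic rate
w_out, b_out, eta = np.zeros(n_hidden), 0.0, 0.01
for epoch in range(20):
    for i in rng.permutation(len(labels)):
        z = np.clip(hidden[i] @ w_out + b_out, -30.0, 30.0)
        y = 1.0 / (1.0 + np.exp(-z))
        err = labels[i] - y
        w_out += eta * err * hidden[i]           # only pre- and post-synaptic terms
        b_out += eta * err

z = np.clip(hidden @ w_out + b_out, -30.0, 30.0)
pred = (1.0 / (1.0 + np.exp(-z)) > 0.5).astype(float)
print(f"training accuracy with a fixed random hidden layer: {(pred == labels).mean():.3f}")
```

The point of the sketch is the division of labor described in the abstract: the hidden weights are never updated, and the only learning signal is the local error at the readout.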
Synaptic shot noise and conductance fluctuations affect the membrane voltage with equal significance
The subthreshold membrane voltage of a neuron in active cortical tissue is
a fluctuating quantity with a distribution that reflects the firing statistics
of the presynaptic population. It was recently found that conductance-based
synaptic drive can lead to distributions with a significant skew.
Here it is demonstrated that the underlying shot noise caused by Poissonian
spike arrival also skews the membrane distribution, but in the opposite
sense. Using a perturbative method, we analyze the effects of shot
noise on the distribution of synaptic conductances and calculate the consequent
voltage distribution. To first order in the perturbation theory, the
voltage distribution is a Gaussian modulated by a prefactor that captures
the skew. The Gaussian component is identical to distributions derived
using current-based models with an effective membrane time constant.
The well-known effective-time-constant approximation can therefore be
identified as the leading-order solution to the full conductance-based
model. The higher-order modulatory prefactor containing the skew comprises
terms due to both shot noise and conductance fluctuations. The
diffusion approximation misses these shot-noise effects, implying that
analytical approaches such as the Fokker-Planck equation or simulation
with filtered white noise cannot be used to improve on the Gaussian approximation.
It is further demonstrated that quantities used for fitting
theory to experiment, such as the voltage mean and variance, are robust
against these non-Gaussian effects. The effective-time-constant approximation
is therefore relevant to experiment and provides a simple analytic
base on which other pertinent biological details may be added.
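To make the contrast concrete, the following sketch simulates a single conductance-based membrane driven by Poissonian synaptic shot noise and reports the skewness of the subthreshold voltage distribution, which the Gaussian effective-time-constant/diffusion picture would put at zero. All parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, t_sim = 0.05, 20000.0                  # time step and duration (ms)
steps = int(t_sim / dt)
tau_m, e_leak = 20.0, -70.0                # membrane time constant (ms), leak reversal (mV)
e_exc, e_inh = 0.0, -80.0                  # synaptic reversal potentials (mV)
tau_s = 5.0                                # synaptic time constant (ms)
a_e, a_i = 0.01, 0.03                      # conductance jump per spike (in units of leak)
spk_e = rng.poisson(2.0 * dt, size=steps)  # pooled excitatory arrivals (2 spikes/ms)
spk_i = rng.poisson(1.0 * dt, size=steps)  # pooled inhibitory arrivals (1 spike/ms)

v = np.full(steps, e_leak)
g_e = g_i = 0.0
for k in range(1, steps):
    # Shot-noise conductances: jump on each Poisson arrival, then decay
    g_e += a_e * spk_e[k] - g_e * dt / tau_s
    g_i += a_i * spk_i[k] - g_i * dt / tau_s
    dv = (-(v[k-1] - e_leak) - g_e * (v[k-1] - e_exc) - g_i * (v[k-1] - e_inh)) / tau_m
    v[k] = v[k-1] + dt * dv

v_ss = v[steps // 10:]                     # discard the initial transient
mean, std = v_ss.mean(), v_ss.std()
skew = np.mean(((v_ss - mean) / std) ** 3)
print(f"voltage mean {mean:.2f} mV, std {std:.2f} mV, skewness {skew:.3f}")
# A non-zero third moment here reflects the shot-noise and conductance-fluctuation
# corrections discussed in the abstract; the effective-time-constant Gaussian
# would reproduce the mean and variance but not the skew.
```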