Order-Based Representation in Random Networks of Cortical Neurons
The wide range of time scales involved in neural excitability and synaptic transmission might lead to ongoing change in the temporal structure of responses to recurring stimulus presentations on a trial-to-trial basis. This is probably the most severe biophysical constraint on putative time-based primitives of stimulus representation in neuronal networks. Here we show that in spontaneously developing large-scale random networks of cortical neurons in vitro, the order in which neurons are recruited following each stimulus is a naturally emerging representation primitive that is invariant to significant temporal changes in spike times. With a relatively small number of randomly sampled neurons, the information about stimulus position is fully retrievable from the recruitment order. The effective connectivity that makes order-based representation invariant to time warping is characterized by the existence of stations through which activity is required to pass in order to propagate further into the network. This study uncovers a simple invariant in a noisy biological network in vitro; its applicability under in vivo constraints remains to be seen.
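The basic invariance can be illustrated in a few lines. Below is a minimal sketch, not the paper's analysis: the recruitment order, i.e. the rank order of first-spike latencies across neurons, is unchanged under any monotonic "time warp" of the response, even though the individual spike times change substantially. All latencies and the warp function are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: recruitment order is invariant to monotonic time warping.
rng = np.random.default_rng(5)
n_neurons = 20
latencies = rng.uniform(5.0, 50.0, n_neurons)   # hypothetical first-spike times (ms)

def warp(t):
    """A monotonic stretch of time, e.g. slower activity propagation on some trials."""
    return 1.7 * t + 3.0 * np.sqrt(t)

warped = warp(latencies)

order_original = np.argsort(latencies)           # recruitment order on the original trial
order_warped = np.argsort(warped)                # recruitment order on the warped trial

ranks = lambda v: np.argsort(np.argsort(v))
rho = np.corrcoef(ranks(latencies), ranks(warped))[0, 1]   # Spearman correlation by hand

print("recruitment order preserved:", np.array_equal(order_original, order_warped))
print("rank correlation between original and warped latencies:", rho)
```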
A feedback model of perceptual learning and categorisation
Top-down (feedback) influences are known to have significant effects on visual information processing. Such influences are also likely to affect perceptual learning. This article employs a computational model of the cortical region interactions underlying visual perception to investigate possible influences of top-down information on learning. The results suggest that feedback could bias the way in which perceptual stimuli are categorised and could also facilitate the learning of subordinate-level representations suitable for object identification and perceptual expertise.
Dynamical Entropy Production in Spiking Neuron Networks in the Balanced State
We demonstrate deterministic extensive chaos in the dynamics of large sparse networks of theta neurons in the balanced state. The analysis is based on numerically exact calculations of the full spectrum of Lyapunov exponents, the entropy production rate and the attractor dimension. Extensive chaos is found in inhibitory networks and becomes more intense when an excitatory population is included. We find a strikingly high rate of entropy production that would limit information representation in cortical spike patterns to the immediate stimulus response.
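As a rough illustration of the kind of dynamics involved, here is a small sketch, assuming standard theta-neuron dynamics dθ/dt = (1 − cos θ) + (1 + cos θ)·I with pulse-like inhibitory coupling, that crudely estimates the largest Lyapunov exponent by tracking the divergence of two nearby trajectories (a Benettin-style estimate). Network size, coupling, drive, and the Euler step are illustrative assumptions; the paper itself uses numerically exact, event-based calculations of the full Lyapunov spectrum.

```python
import numpy as np

# Crude sketch of a pulse-coupled inhibitory theta-neuron network in the
# balanced state, with a two-trajectory estimate of the largest Lyapunov exponent.
rng = np.random.default_rng(0)
N, K = 200, 20                   # neurons, in-degree (assumed)
J = -1.0 / np.sqrt(K)            # inhibitory coupling with balanced-state scaling
I_ext = np.sqrt(K) * 1.0         # external drive balancing recurrent inhibition
dt = 1e-3                        # Euler step (the paper integrates exactly)

A = np.zeros((N, N))             # random sparse inhibitory connectivity
for i in range(N):
    A[i, rng.choice(N, K, replace=False)] = J

def step(theta):
    """One Euler step of dtheta/dt = (1 - cos th) + (1 + cos th) * I."""
    spikes = theta > np.pi                    # neurons crossing threshold this step
    I = I_ext + (A @ spikes) / dt             # pulse-like recurrent input
    theta = theta + dt * ((1 - np.cos(theta)) + (1 + np.cos(theta)) * I)
    theta[spikes] -= 2 * np.pi                # reset after a spike
    return theta

theta = rng.uniform(-np.pi, np.pi, N)
pert = theta + 1e-8 * rng.standard_normal(N)  # nearby perturbed trajectory
d0 = np.linalg.norm(pert - theta)
lyap_sum, T = 0.0, 20000
for _ in range(T):
    theta, pert = step(theta), step(pert)
    d = np.linalg.norm(pert - theta)
    lyap_sum += np.log(d / d0)
    pert = theta + (pert - theta) * d0 / d    # renormalize the separation
print("largest Lyapunov exponent ~", lyap_sum / (T * dt), "per unit time")
```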
Decorrelation of neural-network activity by inhibitory feedback
Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent theoretical and experimental studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. By means of a linear network model and simulations of networks of leaky integrate-and-fire neurons, we show that shared-input correlations are efficiently suppressed by inhibitory feedback. To elucidate the effect of feedback, we compare the responses of the intact recurrent network and systems where the statistics of the feedback channel are perturbed. The suppression of spike-train correlations and population-rate fluctuations by inhibitory feedback can be observed both in purely inhibitory and in excitatory-inhibitory networks. The effect is fully understood by a linear theory and is already apparent at the macroscopic level of the population-averaged activity. At the microscopic level, shared-input correlations are suppressed by spike-train correlations: in purely inhibitory networks, they are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, II).
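A toy linear model, in the spirit of the linear-theory argument above but not taken from the paper, makes the mechanism concrete: N units integrate a shared input plus private noise, and an inhibitory feedback term proportional to the population average tracks and cancels the common fluctuations, suppressing the pairwise correlations that shared input would otherwise induce. All parameters are illustrative assumptions.

```python
import numpy as np

# Toy linear model: inhibitory feedback of the population mean suppresses
# the correlations caused by a shared input (illustrative parameters).
rng = np.random.default_rng(1)
N, T = 100, 20000                 # units, time steps
leak = 0.95                       # recurrence of each unit
sigma_shared = sigma_priv = 1.0   # shared and private input amplitudes

def mean_pairwise_corr(gain):
    x = np.zeros(N)
    traces = np.empty((T, N))
    for t in range(T):
        shared = sigma_shared * rng.standard_normal()    # common input to all units
        priv = sigma_priv * rng.standard_normal(N)       # private input per unit
        x = leak * x - gain * x.mean() + shared + priv   # feedback acts on the mean
        traces[t] = x
    c = np.corrcoef(traces.T)
    return c[np.triu_indices(N, k=1)].mean()

print("mean pairwise correlation, open loop (gain = 0)  :", mean_pairwise_corr(0.0))
print("mean pairwise correlation, inhibitory feedback   :", mean_pairwise_corr(0.95))
```

With these assumed numbers the open-loop correlation is roughly 0.5, while the feedback run suppresses it to around 0.1, because only the common mode, not the private fluctuations, is fed back and canceled.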
A generative spike train model with time-structured higher order correlations
Emerging technologies are revealing the spiking activity in ever larger neural ensembles. Frequently, this spiking is far from independent, with correlations in the spike times of different cells. Understanding how such correlations impact the dynamics and function of neural ensembles remains an important open problem. Here we describe a new generative model for correlated spike trains that can exhibit many of the features observed in data. Extending prior work in mathematical finance, this generalized thinning and shift (GTaS) model creates marginally Poisson spike trains with diverse temporal correlation structures. We give several examples which highlight the model's flexibility and utility. For instance, we use it to examine how a neural network responds to highly structured patterns of inputs. We then show that the GTaS model is analytically tractable, and derive cumulant densities of all orders in terms of model parameters. The GTaS framework can therefore be an important tool in the experimental and theoretical exploration of neural dynamics.
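A minimal thinning-and-shift construction in the spirit of the GTaS description above (not the paper's exact parameterization): events of a "mother" Poisson process are copied to subsets of neurons with chosen probabilities (thinning) and then displaced by subset-specific delays (shift), producing Poisson marginals with structured higher-order temporal correlations. Rates, subset probabilities, and shifts below are illustrative assumptions.

```python
import numpy as np

# Illustrative thinning-and-shift construction for correlated spike trains.
rng = np.random.default_rng(2)
n, rate_mother, T = 3, 20.0, 100.0        # neurons, mother rate (Hz), duration (s)

# Probability that a mother event is copied to each subset of neurons:
# mostly independent single-neuron events, plus occasional triplets.
subsets = [(0,), (1,), (2,), (0, 1, 2)]
probs = np.array([0.3, 0.3, 0.3, 0.1])

# Subset-specific mean shifts (s): triplet events arrive in a fixed temporal order.
shift_mean = {(0,): [0.0], (1,): [0.0], (2,): [0.0],
              (0, 1, 2): [0.0, 0.005, 0.010]}
jitter = 0.001

# Mother Poisson process.
mother = np.sort(rng.uniform(0, T, rng.poisson(rate_mother * T)))

trains = [[] for _ in range(n)]
for t in mother:
    s = subsets[rng.choice(len(subsets), p=probs)]   # thinning: pick a subset
    for k, i in enumerate(s):                        # shift: per-neuron delay + jitter
        trains[i].append(t + shift_mean[s][k] + jitter * rng.standard_normal())
trains = [np.sort(tr) for tr in trains]

for i, tr in enumerate(trains):
    print(f"neuron {i}: {len(tr)} spikes, rate ~ {len(tr) / T:.1f} Hz")
```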
Nonlinear Hebbian learning as a unifying principle in receptive field formation
The development of sensory receptive fields has been modeled in the past by a variety of models, including normative models such as sparse coding or independent component analysis and bottom-up models such as spike-timing-dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic plasticity. Here we show that the above variety of approaches can all be unified into a single common principle, namely Nonlinear Hebbian Learning. When Nonlinear Hebbian Learning is applied to natural images, receptive field shapes are strongly constrained by the input statistics and preprocessing, but exhibit only modest variation across different choices of nonlinearities in neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse network activity is necessary for the development of localized receptive fields. The analysis of alternative sensory modalities, such as auditory models or V2 development, leads to the same conclusions. In all examples, receptive fields can be predicted a priori by reformulating an abstract model as nonlinear Hebbian learning. Thus nonlinear Hebbian learning and natural statistics can account for many aspects of receptive field formation across models and sensory modalities.
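A minimal sketch of a nonlinear Hebbian update, Δw ∝ f(w·x)·x with a norm constraint, applied to synthetic heavy-tailed inputs as a stand-in for whitened natural image patches; the nonlinearity, input statistics, and learning rate are illustrative assumptions, not the paper's. With kurtotic inputs, the weight vector localizes onto a single input component, loosely analogous to the emergence of localized receptive fields.

```python
import numpy as np

# Nonlinear Hebbian learning sketch: dw ∝ x * f(w·x), with weight normalization.
rng = np.random.default_rng(3)
D, eta, steps = 16, 0.01, 50000

def f(y):
    return y ** 3                 # an expansive nonlinearity (others behave similarly)

w = rng.standard_normal(D)
w /= np.linalg.norm(w)
for _ in range(steps):
    x = rng.laplace(size=D)       # heavy-tailed, whitened-like synthetic input
    y = w @ x                     # postsynaptic activation
    w += eta * f(y) * x           # nonlinear Hebbian update
    w /= np.linalg.norm(w)        # norm constraint keeps the weights bounded

idx = int(np.argmax(np.abs(w)))
print("weight concentrates on input dimension", idx,
      "with |w_i| =", round(float(abs(w[idx])), 3))
```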
Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in Neocortex
Neocortical neurons have thousands of excitatory synapses. It is a mystery how neurons integrate the input from so many synapses and what kind of large-scale network behavior this enables. It has been previously proposed that non-linear properties of dendrites enable neurons to recognize multiple patterns. In this paper we extend this idea by showing that a neuron with several thousand synapses arranged along active dendrites can learn to accurately and robustly recognize hundreds of unique patterns of cellular activity, even in the presence of large amounts of noise and pattern variation. We then propose a neuron model where some of the patterns recognized by a neuron lead to action potentials and define the classic receptive field of the neuron, whereas the majority of the patterns recognized by a neuron act as predictions by slightly depolarizing the neuron without immediately generating an action potential. We then present a network model based on neurons with these properties and show that the network learns a robust model of time-based sequences. Given the similarity of excitatory neurons throughout the neocortex and the importance of sequence memory in inference and behavior, we propose that this form of sequence memory is a universal property of neocortical tissue. We further propose that cellular layers in the neocortex implement variations of the same sequence memory algorithm to achieve different aspects of inference and behavior. The neuron and network models we introduce are robust over a wide range of parameters as long as the network uses a sparse distributed code of cellular activations. The sequence capacity of the network scales linearly with the number of synapses on each neuron. Thus neurons need thousands of synapses to learn the many temporal patterns in sensory stimuli and motor sequences.
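A minimal sketch of the dendritic pattern-recognition idea described above (thresholds and sizes are illustrative assumptions, not the paper's exact parameters): each dendritic segment stores a small sample of presynaptic cells from one sparse activity pattern, a segment whose coincidence count exceeds a threshold drives the cell, and a near-threshold match only "predicts" (depolarizes) it.

```python
import numpy as np

# Sketch of a neuron with many dendritic segments recognizing sparse patterns.
rng = np.random.default_rng(4)
n_cells = 2048            # size of the sparse distributed code
sparsity = 40             # active cells per pattern
seg_size = 20             # synapses sampled per dendritic segment
theta = 12                # coincidences needed to trigger a dendritic spike

# Learning stand-in: one segment per stored pattern, sampling a subset of its active cells.
patterns = [rng.choice(n_cells, sparsity, replace=False) for _ in range(200)]
segments = [set(rng.choice(p, seg_size, replace=False)) for p in patterns]

def neuron_response(active_cells):
    """Return (fires, predicted): a segment over threshold fires the cell;
    a near-threshold match only depolarizes it (a 'prediction')."""
    best = max(len(seg & active_cells) for seg in segments)
    return best >= theta, theta > best >= theta // 2

# A stored pattern with noise (10 of 40 active cells replaced) is still recognized,
# while an unrelated random pattern is not.
noisy = set(patterns[0][:30]) | set(rng.choice(n_cells, 10, replace=False))
random_pattern = set(rng.choice(n_cells, sparsity, replace=False))
print("noisy stored pattern ->", neuron_response(noisy))
print("random pattern       ->", neuron_response(random_pattern))
```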