Independent Component Analysis in Spiking Neurons
Although models based on independent component analysis (ICA) have been successful in explaining various properties of sensory coding in the cortex, it remains unclear how networks of spiking neurons using realistic plasticity rules can realize such computation. Here, we propose a biologically plausible mechanism for ICA-like learning with spiking neurons. Our model combines spike-timing dependent plasticity and synaptic scaling with an intrinsic plasticity rule that regulates neuronal excitability to maximize information transmission. We show that a stochastically spiking neuron learns one independent component for inputs encoded either as rates or using spike-spike correlations. Furthermore, different independent components can be recovered when the activity of different neurons is decorrelated by adaptive lateral inhibition.
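As a rough, rate-based illustration of the single-neuron case, the sketch below uses a kurtosis-seeking nonlinear Hebbian update with weight normalization standing in for synaptic scaling; this is a caricature under stated assumptions, not the paper's spiking rule, and the data, nonlinearity, and parameters are all invented here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: two independent, super-Gaussian sources mixed linearly.
S = rng.laplace(size=(50_000, 2))
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = S @ A.T
X -= X.mean(axis=0)

# Whiten the mixtures; nonlinear Hebbian rules find ICs in whitened coordinates.
d, E = np.linalg.eigh(np.cov(X.T))
X = X @ E @ np.diag(d ** -0.5) @ E.T

w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 1e-4

for _ in range(3):                    # a few passes over the data
    for x in X:
        y = w @ x                     # "firing rate" of the single unit
        w += eta * x * y ** 3         # Hebbian term with cubic nonlinearity
        w /= np.linalg.norm(w)        # normalization stands in for synaptic scaling

print("recovered direction:", w)
```

With the cubic nonlinearity, the update performs projected gradient ascent on the kurtosis of the output, whose maxima over the unit sphere lie at the independent-component directions for whitened super-Gaussian mixtures.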
Biologically plausible deep learning -- but how far can we go with shallow networks?
Training deep neural networks with the error backpropagation algorithm is
considered implausible from a biological perspective. Numerous recent
publications suggest elaborate models for biologically plausible variants of
deep learning, typically defining success as reaching around 98% test accuracy
on the MNIST data set. Here, we investigate how far we can go on digit (MNIST)
and object (CIFAR10) classification with biologically plausible, local learning
rules in a network with one hidden layer and a single readout layer. The hidden
layer weights are either fixed (random or random Gabor filters) or trained with
unsupervised methods (PCA, ICA or Sparse Coding) that can be implemented by
local learning rules. The readout layer is trained with a supervised, local
learning rule. We first implement these models with rate neurons. This
comparison reveals, first, that unsupervised learning does not lead to better
performance than fixed random projections or Gabor filters for large hidden
layers. Second, networks with localized receptive fields perform significantly
better than networks with all-to-all connectivity and can reach backpropagation
performance on MNIST. We then implement two of the networks (fixed, localized
random filters and random Gabor filters in the hidden layer) with spiking leaky
integrate-and-fire neurons and spike timing dependent plasticity to train the
readout layer. These spiking models achieve > 98.2% test accuracy on MNIST,
which is close to the performance of rate networks with one hidden layer
trained with backpropagation. The performance of our shallow network models is
comparable to most current biologically plausible models of deep learning.
Furthermore, our results with a shallow spiking network provide an important
reference and suggest the use of datasets other than MNIST for testing the
performance of future models of biologically plausible deep learning.
Comment: 14 pages, 4 figures
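For concreteness, here is a minimal rate-based sketch of one of the configurations described above: a fixed random hidden layer followed by a readout trained with a local delta rule. Random data stands in for MNIST, and all sizes and learning rates are placeholders, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_hidden, n_classes = 784, 1000, 10      # MNIST-like dimensions (assumed)
X = rng.random((5_000, n_in))                  # stand-in for digit images
labels = rng.integers(0, n_classes, size=5_000)

# Fixed random projection: the hidden weights are never trained.
W_hid = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_in, n_hidden))
W_out = np.zeros((n_hidden, n_classes))
eta = 0.01

for x, label in zip(X, labels):
    h = np.maximum(0.0, x @ W_hid)             # rectified hidden activity
    out = h @ W_out
    target = np.eye(n_classes)[label]
    # Local delta rule: each synapse only uses its presynaptic activity and
    # the error at its own postsynaptic output unit.
    W_out += eta * np.outer(h, target - out)
```

The point of the sketch is the locality of the readout update: no error signal is propagated back through the hidden layer, which is what makes the scheme biologically plausible in the sense used above.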
Multiplicative Auditory Spatial Receptive Fields Created by a Hierarchy of Population Codes
A multiplicative combination of tuning to interaural time difference (ITD) and interaural level difference (ILD) contributes to the generation of spatially selective auditory neurons in the owl's midbrain. Previous analyses of multiplicative responses in the owl have not taken into consideration the frequency-dependence of ITD and ILD cues that occur under natural listening conditions. Here, we present a model for the responses of ITD- and ILD-sensitive neurons in the barn owl's inferior colliculus which satisfies constraints raised by experimental data on frequency convergence, multiplicative interaction of ITD and ILD, and response properties of afferent neurons. We propose that multiplication between ITD- and ILD-dependent signals occurs only within frequency channels and that frequency integration occurs using a linear-threshold mechanism. The model reproduces the experimentally observed nonlinear responses to ITD and ILD in the inferior colliculus, with greater accuracy than previous models. We show that linear-threshold frequency integration allows the system to represent multiple sound sources with natural sound localization cues, whereas multiplicative frequency integration does not. Nonlinear responses in the owl's inferior colliculus can thus be generated using a combination of cellular and network mechanisms, showing that multiple elements of previous theories can be combined in a single system.
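A schematic of the proposed combination, with invented tuning curves and parameters (the paper's actual curves are fit to data): ITD and ILD signals are multiplied within each frequency channel, and the products are summed across channels and passed through a threshold.

```python
import numpy as np

freqs = np.linspace(2e3, 8e3, 7)                 # frequency channels in Hz (assumed)

def itd_tuning(itd_us, f):
    # Cosine ITD tuning around a preferred delay of 50 microseconds (made up).
    return 0.5 * (1.0 + np.cos(2 * np.pi * f * (itd_us - 50.0) * 1e-6))

def ild_tuning(ild_db):
    # Sigmoidal ILD tuning (made up).
    return 1.0 / (1.0 + np.exp(-(ild_db - 5.0)))

def response(itd_us, ild_db, theta=2.0):
    # Multiplication within each channel, then linear-threshold integration.
    per_channel = itd_tuning(itd_us, freqs) * ild_tuning(ild_db)
    return max(0.0, per_channel.sum() - theta)

print(response(50.0, 10.0))     # matched ITD and ILD: strong response
print(response(200.0, -10.0))   # mismatched cues: silenced by the threshold
```

The linear-threshold stage is what lets several sources coexist: channels driven by different sources add linearly before the threshold, whereas a product across channels would be zeroed by any single mismatched channel.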
Spiking Neural Networks for Inference and Learning: A Memristor-based Design Perspective
On metrics of density and power efficiency, neuromorphic technologies have
the potential to surpass mainstream computing technologies in tasks where
real-time functionality, adaptability, and autonomy are essential. While
algorithmic advances in neuromorphic computing are proceeding successfully, the
potential of memristors to improve neuromorphic computing has not yet borne
fruit, primarily because they are often used as a drop-in replacement for
conventional memory. However, interdisciplinary approaches anchored in machine
learning theory suggest that multifactor plasticity rules matching neural and
synaptic dynamics to the device capabilities can take better advantage of
memristor dynamics and their stochasticity. Furthermore, such plasticity rules
generally show much higher performance than classical spike-timing-dependent
plasticity (STDP) rules. This chapter reviews recent developments in learning
with spiking neural network models and their possible implementation with
memristor-based hardware.
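For reference, the classical pair-based STDP rule mentioned above can be written in a few lines; the amplitudes and time constants below are illustrative values, not taken from the chapter.

```python
import numpy as np

A_plus, A_minus = 0.01, 0.012       # potentiation / depression amplitudes
tau_plus, tau_minus = 20.0, 20.0    # time constants in ms

def stdp_dw(delta_t_ms):
    """Weight change for one pre/post spike pair; delta_t = t_post - t_pre."""
    if delta_t_ms > 0:              # pre fires before post: potentiation
        return A_plus * np.exp(-delta_t_ms / tau_plus)
    return -A_minus * np.exp(delta_t_ms / tau_minus)   # post before pre: depression

for dt in (-40, -10, 10, 40):
    print(f"dt = {dt:+d} ms -> dw = {stdp_dw(dt):+.5f}")
```

Multifactor rules of the kind the chapter advocates would add further terms to this pairwise kernel, e.g. a third factor for reward or error, or a dependence on the device state.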
Feature detection using spikes: the greedy approach
A goal of low-level neural processes is to build an efficient code extracting
the relevant information from the sensory input. It is believed that this is
implemented in cortical areas by elementary inferential computations
dynamically extracting the most likely parameters corresponding to the sensory
signal. We explore here a neuro-mimetic feed-forward model of the primary
visual area (V1) solving this problem in the case where the signal may be
described by a robust linear generative model. This model uses an over-complete
dictionary of primitives which provides a distributed probabilistic
representation of input features. Relying on an efficiency criterion, we derive
an algorithm as an approximate solution which uses incremental greedy inference
processes. This algorithm is similar to 'Matching Pursuit' and mimics the
parallel architecture of neural computations. We propose here a simple
implementation using a network of spiking integrate-and-fire neurons which
communicate using lateral interactions. Numerical simulations show that this
Sparse Spike Coding strategy provides an efficient model for representing
visual data from a set of natural images. Even though it is simplistic, this
transformation of spatial data into a spatio-temporal pattern of binary events
provides an accurate description of some complex neural patterns observed in
the spiking activity of biological neural networks.
Comment: This work links Matching Pursuit with Bayesian inference by providing
the underlying hypotheses (linear model, uniform prior, Gaussian noise model).
A parallel with the parallel, event-based nature of neural computations is
explored, and we show applications to modelling the primary visual cortex and
image processing.
http://incm.cnrs-mrs.fr/perrinet/dynn/LaurentPerrinet/Publications/Perrinet04tau
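A minimal sketch of the greedy inference loop, with a random dictionary standing in for the learned over-complete set of primitives; the mapping of each greedy step to one spike follows the abstract's description, while the sizes and stopping rule are assumptions.

```python
import numpy as np

def matching_pursuit(signal, D, n_spikes=10):
    """Greedy sparse decomposition of `signal` over unit-norm dictionary columns D.

    Each iteration picks the primitive best correlated with the residual
    (a winner-take-all step, playing the role of lateral interactions) and
    subtracts its contribution: one 'spike' per iteration.
    """
    residual = signal.astype(float).copy()
    events = []                                  # (neuron index, coefficient) pairs
    for _ in range(n_spikes):
        corr = D.T @ residual                    # all neurons matched in parallel
        best = int(np.argmax(np.abs(corr)))
        events.append((best, corr[best]))
        residual -= corr[best] * D[:, best]      # explain away the chosen feature
    return events, residual

rng = np.random.default_rng(2)
D = rng.normal(size=(64, 256))                   # 4x over-complete dictionary
D /= np.linalg.norm(D, axis=0)
signal = 2.0 * D[:, 3] - 1.5 * D[:, 100]
events, residual = matching_pursuit(signal, D, n_spikes=5)
print(events[:2], float(np.linalg.norm(residual)))
```

The ordered list of events is the spatio-temporal pattern of binary events referred to above: which neuron fired, and in what order.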
Intrinsically-generated fluctuating activity in excitatory-inhibitory networks
Recurrent networks of non-linear units display a variety of dynamical regimes
depending on the structure of their synaptic connectivity. A particularly
remarkable phenomenon is the appearance of strongly fluctuating, chaotic
activity in networks of deterministic, but randomly connected rate units. How
this type of intrinsically generated fluctuations appears in more realistic
networks of spiking neurons has been a long-standing question. To ease the
comparison between rate and spiking networks, recent works investigated the
dynamical regimes of randomly connected rate networks with segregated
excitatory and inhibitory populations, and firing rates constrained to be
positive. These works derived general dynamical mean field (DMF) equations
describing the fluctuating dynamics, but solved these equations only in the
case of purely inhibitory networks. Using a simplified excitatory-inhibitory
architecture in which the DMF equations are more tractable, here we show
that the presence of excitation qualitatively modifies the fluctuating activity
compared to purely inhibitory networks. In the presence of excitation,
intrinsically generated fluctuations induce a strong increase in mean firing
rates, a phenomenon that is much weaker in purely inhibitory networks.
Excitation moreover induces two different fluctuating regimes: for moderate
overall coupling, recurrent inhibition is sufficient to stabilize fluctuations;
for strong coupling, firing rates are stabilized solely by the upper bound
imposed on activity, even if inhibition is stronger than excitation. These
results extend to more general network architectures, and to rate networks
receiving noisy inputs mimicking spiking activity. Finally, we show that
signatures of the second dynamical regime appear in networks of
integrate-and-fire neurons.
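Schematically, the dynamical mean-field reduction referred to above replaces the recurrent input to each unit by a self-consistent Gaussian process. In one common convention (K_E and K_I excitatory and inhibitory inputs per neuron with weights J_E and -J_I; this notation is assumed here, not taken from the paper):

```latex
% Single-unit reduction of \dot{x}_i = -x_i + \sum_j J_{ij}\,\phi(x_j):
\begin{align}
  \dot{x}(t) &= -x(t) + \eta(t), \qquad \eta \ \text{a Gaussian process}, \\
  \mu \equiv \langle \eta \rangle
      &= \left(K_E J_E - K_I J_I\right) \langle \phi(x) \rangle, \\
  \Delta(\tau) \equiv \langle \eta(t)\,\eta(t+\tau) \rangle - \mu^2
      &= \left(K_E J_E^2 + K_I J_I^2\right)
        \left[ \langle \phi(x(t))\,\phi(x(t+\tau)) \rangle
             - \langle \phi(x) \rangle^2 \right],
\end{align}
% where the averages on the right are taken over the statistics of x itself,
% closing the self-consistency loop.
```

The two regimes described in the abstract correspond to two ways this self-consistency can be satisfied: with fluctuations tamed by recurrent inhibition, or with firing rates pinned at the upper bound of the transfer function.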
Nonlinear Hebbian learning as a unifying principle in receptive field formation
The development of sensory receptive fields has been modeled in the past by a
variety of models including normative models such as sparse coding or
independent component analysis and bottom-up models such as spike-timing
dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic
plasticity. Here we show that the above variety of approaches can all be
unified into a single common principle, namely Nonlinear Hebbian Learning. When
Nonlinear Hebbian Learning is applied to natural images, receptive field shapes
are strongly constrained by the input statistics and preprocessing, but exhibit
only modest variation across different choices of nonlinearities in neuron
models or synaptic plasticity rules. Neither overcompleteness nor sparse
network activity is necessary for the development of localized receptive
fields. The analysis of alternative sensory modalities, such as auditory models
or V2 development, leads to the same conclusions. In all examples, receptive
fields can be predicted a priori by reformulating an abstract model as
nonlinear Hebbian learning. Thus nonlinear Hebbian learning and natural
statistics can account for many aspects of receptive field formation across
models and sensory modalities.
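Written out, the common principle is compact. In the generic form (notation assumed here: x the presynaptic input vector, w the synaptic weights, f the nonlinearity):

```latex
\begin{align}
  y &= w^{\top} x, \\
  \Delta w &\propto x\, f(y),
  \qquad \text{with } \lVert w \rVert \text{ kept bounded, e.g. }
  w \leftarrow w / \lVert w \rVert.
\end{align}
```

Different choices of f recover the different models listed above, e.g. a cubic f gives kurtosis-seeking, ICA-like learning, while a rectified, threshold-adjusted quadratic yields BCM-like behaviour; this correspondence is the standard reading, sketched here rather than quoted from the paper.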
Synchronous Behavior of Two Coupled Electronic Neurons
We report on experimental studies of synchronization phenomena in a pair of
analog electronic neurons (ENs). The ENs were designed to reproduce the
observed membrane voltage oscillations of isolated biological neurons from the
stomatogastric ganglion of the California spiny lobster Panulirus interruptus.
The ENs are simple analog circuits which integrate four-dimensional
differential equations representing fast and slow subcellular mechanisms that
produce the characteristic regular/chaotic spiking-bursting behavior of these
cells. In this paper we study their dynamical behavior as we couple them in the
same configurations as we have done for their counterpart biological neurons.
The interconnections we use for these neural oscillators are both direct
electrical connections and excitatory and inhibitory chemical connections, each
realized by analog circuitry and suggested by biological examples. We provide
here quantitative evidence that the ENs and the biological neurons behave
similarly when coupled in the same manner. They each display well-defined
bifurcations in their mutual synchronization and regularization. We report
briefly on an experiment on coupled biological neurons and four-dimensional ENs
which provides further ground for testing the validity of our numerical and
electronic models of individual neural behavior. Our experiments as a whole
present interesting new examples of regularization and synchronization in
coupled nonlinear oscillators.
Comment: 26 pages, 10 figures
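A compact numerical analogue of the electrical-coupling experiments can be written with a standard bursting model. The sketch below couples two Hindmarsh-Rose neurons diffusively; note that this three-dimensional model is only a stand-in, since the ENs integrate a four-dimensional system, and the parameter values here are conventional textbook choices.

```python
import numpy as np

def hr_rhs(state, i_ext, g_el, v_other):
    """Hindmarsh-Rose right-hand side with diffusive electrical coupling."""
    v, w, z = state
    dv = w - v**3 + 3.0 * v**2 - z + i_ext + g_el * (v_other - v)
    dw = 1.0 - 5.0 * v**2 - w
    dz = 0.006 * (4.0 * (v + 1.6) - z)       # slow adaptation variable
    return np.array([dv, dw, dz])

dt, steps, g_el, i_ext = 0.01, 200_000, 0.4, 3.0
s1 = np.array([-1.0, 0.0, 2.0])
s2 = np.array([-0.5, 0.1, 2.2])              # distinct initial conditions

for _ in range(steps):                        # forward Euler integration
    s1_next = s1 + dt * hr_rhs(s1, i_ext, g_el, s2[0])
    s2 = s2 + dt * hr_rhs(s2, i_ext, g_el, s1[0])
    s1 = s1_next

print("final membrane-voltage difference:", abs(s1[0] - s2[0]))
```

Sweeping g_el upward from zero reproduces, in this toy setting, the qualitative transition from independent to synchronized bursting that the paper documents in coupled ENs and biological neurons.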