Nonlinear Hebbian learning as a unifying principle in receptive field formation
The development of sensory receptive fields has been modeled by a variety of
approaches, including normative models such as sparse coding or independent
component analysis, and bottom-up models such as spike-timing-dependent
plasticity or the Bienenstock-Cooper-Munro rule of synaptic plasticity. Here
we show that this variety of approaches can be unified under a single common
principle, namely nonlinear Hebbian learning. When nonlinear Hebbian learning
is applied to natural images, receptive field shapes are strongly constrained
by the input statistics and preprocessing, but exhibit only modest variation
across different choices of nonlinearity in the neuron model or synaptic
plasticity rule. Neither overcompleteness nor sparse network activity is
necessary for the development of localized receptive fields. Analysis of
alternative sensory modalities, such as auditory models or V2 development,
leads to the same conclusions. In all examples, receptive fields can be
predicted a priori by reformulating an abstract model as nonlinear Hebbian
learning. Thus nonlinear Hebbian learning and natural input statistics can
account for many aspects of receptive field formation across models and
sensory modalities.
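The unifying rule can be sketched numerically. The following toy is our own construction, not the paper's experiments: the data, learning rate, and cubic nonlinearity are illustrative choices. A single unit is trained with the update Δw ∝ f(w·x)·x plus explicit weight normalization, and it recovers the highest-variance input direction:

```python
import numpy as np

rng = np.random.default_rng(0)

def nonlinear_hebbian(X, f, eta=5e-4, epochs=20):
    """Single-unit nonlinear Hebbian rule: dw ~ f(w.x) x,
    with explicit weight normalization for stability."""
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x                # linear response
            w = w + eta * f(y) * x   # Hebbian update through nonlinearity f
            w /= np.linalg.norm(w)   # keep |w| = 1
    return w

# Toy input whose first coordinate has the largest variance
X = rng.standard_normal((500, 8)) * np.array([3.0] + [1.0] * 7)
w = nonlinear_hebbian(X, f=lambda y: y ** 3)  # cubic nonlinearity (one choice of f)
print(int(np.argmax(np.abs(w))))              # index of the dominant learned weight
```

Swapping in a different f (e.g. tanh, or a rectified power) changes the learned weights only modestly, which is the paper's central observation.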
Slowness: An Objective for Spike-Timing-Dependent Plasticity?
Slow Feature Analysis (SFA) is an efficient algorithm for learning input-output functions that extract the most slowly varying features from a quickly varying signal. It has been successfully applied to the unsupervised learning of translation, rotation, and other invariances in a model of the visual system, to the learning of complex-cell receptive fields, and, combined with a sparseness objective, to the self-organized formation of place cells in a model of the hippocampus.

To arrive at a biologically more plausible implementation of this learning rule, we consider analytically how SFA could be realized in simple linear continuous and spiking model neurons. It turns out that for the continuous model neuron, SFA can be implemented by means of a modified version of standard Hebbian learning; in this framework we provide a connection to the trace learning rule for invariance learning. We then show that for Poisson neurons, spike-timing-dependent plasticity (STDP) with a specific learning window can learn the same weight distribution as SFA. Surprisingly, we find that the appropriate learning rule reproduces the typical STDP learning window: both its shape and its timescale are in good agreement with experimental measurements. This offers a novel interpretation of the functional role of spike-timing-dependent plasticity in physiological neurons.
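For the linear case, the slowness objective has a closed form that makes the Hebbian connection concrete: after whitening, the slowest feature is the direction that minimizes the variance of temporal differences. A minimal numerical sketch (our own toy signals and mixing matrix; the paper's spiking/STDP derivation is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 5000
t = np.arange(T)
slow = np.sin(2 * np.pi * t / 1000)       # slowly varying source
fast = np.sin(2 * np.pi * t / 7)          # quickly varying source
A = np.array([[1.0, 0.5], [0.3, 1.0]])    # arbitrary mixing
X = np.stack([slow, fast], axis=1) @ A.T

# Whiten the observed signal
Xc = X - X.mean(0)
cov = Xc.T @ Xc / T
evals, evecs = np.linalg.eigh(cov)
W = evecs @ np.diag(evals ** -0.5) @ evecs.T
Z = Xc @ W

# Slowest feature: minor eigenvector of the temporal-difference covariance
dZ = np.diff(Z, axis=0)
dcov = dZ.T @ dZ / len(dZ)
dvals, dvecs = np.linalg.eigh(dcov)
w_slow = dvecs[:, 0]          # eigh returns ascending eigenvalues
y = Z @ w_slow                # extracted slow feature

# Should correlate near +/-1 with the true slow source
r = np.corrcoef(y, slow)[0, 1]
print(round(abs(r), 2))
```

An online Hebbian version would replace the batch eigendecomposition with an anti-Hebbian update driven by the derivative signal dZ, which is the link the paper develops.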
Dynamical and Statistical Criticality in a Model of Neural Tissue
For the nervous system to work at all, a delicate balance of excitation and
inhibition must be achieved. However, when such a balance is sought by global
strategies, only a few modes remain balanced close to instability, while all
other modes are strongly stable. Here we present a simple model of neural
tissue in which this balance is sought locally, by neurons following
'anti-Hebbian' behavior: all degrees of freedom achieve a close balance of
excitation and inhibition and become "critical" in the dynamical sense. At
long timescales, the modes of our model oscillate around the instability
line, so an extremely complex "breakout" dynamics ensues, in which different
modes of the system oscillate between prominence and extinction. We show
that the system develops various anomalous statistical behaviours and hence
becomes self-organized critical in the statistical sense.
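The local balancing idea can be illustrated with a deliberately minimal toy, which is our own construction and far simpler than the tissue model in the paper: a single unit adapts an inhibitory gain with an anti-Hebbian-style rule until inhibition cancels excitation, i.e. until the net drive sits at the balance point:

```python
import numpy as np

rng = np.random.default_rng(2)
n_steps = 20000
eta = 0.01      # learning rate (illustrative)
g = 0.0         # inhibitory gain, learned locally

for _ in range(n_steps):
    s = rng.standard_normal() + 2.0  # shared presynaptic activity
    e = 1.5 * s                      # excitatory drive (fixed gain 1.5)
    i = g * s                        # inhibitory drive (plastic gain g)
    v = e - i                        # net input to the unit
    # Anti-Hebbian-style update: correlated net drive and input
    # strengthen inhibition, pushing v toward zero
    g += eta * v * s

print(round(g, 2))  # → 1.5, inhibition has learned to cancel excitation
```

The update is stochastic gradient descent on the mean squared net drive, so g converges to the excitatory gain and the unit sits at the excitation-inhibition balance point; in the full model every degree of freedom does this simultaneously.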
Towards a Unified Model of Language Acquisition
In this theoretical paper, we first review and rebut standard criticisms of distributional approaches to language acquisition. We then present two closely related models that use distributional analysis: the first deals with the acquisition of vocabulary, the second with grammatical development. We show how these two models can be combined with a semantic network grown using Hebbian learning, and briefly illustrate the advantages of this combination. An important feature of this hybrid system is that it combines two different types of distributional learning, the first based on word order, the second based on co-occurrence within a context.
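The co-occurrence-based Hebbian component can be sketched as follows; the toy corpus and unit learning rate are illustrative, not the paper's implementation. Words that are active in the same context strengthen their mutual connection:

```python
from collections import defaultdict
from itertools import combinations

# Toy corpus (illustrative)
corpus = [
    "the cat chased the mouse".split(),
    "the dog chased the cat".split(),
    "the mouse ate the cheese".split(),
]

w = defaultdict(float)  # symmetric connection strengths, default 0
eta = 1.0               # learning rate

for sentence in corpus:
    # Hebbian update: every pair of words co-active in this context
    # strengthens its link by eta
    for a, b in combinations(set(sentence), 2):
        w[tuple(sorted((a, b)))] += eta

print(w[("cat", "chased")] > w[("cat", "cheese")])  # True
```

An order-based variant would instead update only ordered adjacent pairs, which is the second type of distributional learning the abstract mentions.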
Hybrid Neural Networks for Frequency Estimation of Unevenly Sampled Data
In this paper we present a hybrid system composed of a neural-network-based
estimator and genetic algorithms. It uses an unsupervised nonlinear Hebbian
neural algorithm to extract the principal components, which are then used by
the MUSIC frequency estimator to extract the frequencies. We generalize this
method to avoid an interpolation preprocessing step and improve its
performance with a new stopping criterion that avoids overfitting.
Furthermore, genetic algorithms are used to optimize the initialization of
the neural network weights. The experimental results are obtained by
comparing our methodology with others known in the literature on a Cepheid
star light curve.

Comment: 5 pages, to appear in the proceedings of IJCNN 99, IEEE Press, 1999
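A Hebbian principal-component extractor of the kind used in such pipelines can be sketched with Sanger's generalized Hebbian algorithm. This is a standard choice for illustration; the paper's specific nonlinear algorithm, its stopping criterion, and the MUSIC stage are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)

def sanger(X, k, eta=0.001, epochs=30):
    """Sanger's rule: dW = eta * (y x^T - LT(y y^T) W),
    where LT is the lower-triangular part; rows of W converge
    to the leading principal components in order."""
    W = rng.standard_normal((k, X.shape[1])) * 0.1
    for _ in range(epochs):
        for x in X:
            y = W @ x
            W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

# Toy data with a dominant variance direction along axis 0
X = rng.standard_normal((400, 5)) * np.array([3.0, 1.0, 0.5, 0.5, 0.5])
W = sanger(X, k=2)
print(int(np.argmax(np.abs(W[0]))))  # first component aligns with axis 0
```

The learned components would then be passed to a subspace frequency estimator such as MUSIC; here only the unsupervised Hebbian extraction step is shown.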