Mathematical Formulations of Hebbian Learning
Several formulations of correlation-based Hebbian learning are reviewed. On the presynaptic side, activity is described either by a firing rate or by presynaptic spike arrival. The state of the postsynaptic neuron can be described by its membrane potential, its firing rate, or the timing of backpropagating action potentials (BPAPs). It is shown that all of these formulations can be derived from the point of view of an expansion. In the absence of BPAPs, it is natural to correlate presynaptic spikes with the postsynaptic membrane potential. Time windows of spike-timing-dependent plasticity arise naturally if the timing of postsynaptic spikes is available at the site of the synapse, as is the case in the presence of BPAPs. With an appropriate choice of parameters, Hebbian synaptic plasticity has intrinsic normalization properties that stabilize postsynaptic firing rates and lead to subtractive weight normalization.
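A minimal NumPy sketch of two of the reviewed formulations may help fix ideas; the function names and parameter values are illustrative assumptions, not taken from the paper:

    import numpy as np

    def hebbian_subtractive(w, pre_rates, post_rate, eta=0.01):
        """Rate-based Hebbian update with subtractive normalization:
        subtracting the mean change conserves the summed weight."""
        dw = eta * post_rate * pre_rates      # plain Hebbian correlation term
        dw -= dw.mean()                       # subtractive weight normalization
        return np.clip(w + dw, 0.0, 1.0)      # hard bounds keep weights finite

    def stdp_window(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
        """Exponential STDP time window, dt = t_post - t_pre in ms:
        pre-before-post (dt > 0) potentiates, post-before-pre depresses."""
        return np.where(dt > 0, a_plus * np.exp(-dt / tau),
                        -a_minus * np.exp(dt / tau))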
Extranoematic artifacts: neural systems in space and topology
During the past several decades, the evolution of architecture and engineering has gone through several stages in the exploration of form. While the procedures for generating form have varied, from physical analog form-finding to computation that engages the form with simulated dynamic forces in a digital environment, the self-generation and self-organization of form has always been the goal. This thesis intends to further contribute to self-organizational capacity in architecture.
The subject of investigation is the rationalization of geometry from an unorganized point cloud by means of learning neural networks, with a particular focus on the efficient construction of the generated topology. The neural network is coupled with constraining properties that adjust the members of the topology to a predefined number of sizes while minimizing the deviation from the original form. The resulting algorithm is applied in several different construction scenarios, highlighting the possibilities and versatility of the method.
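The abstract does not spell out the network type or the constraining procedure; as one plausible reading, the sketch below fits a self-organizing map (SOM) grid to an unorganized point cloud and then snaps member lengths to a predefined number of standard sizes with a 1-D k-means step. Grid size, learning schedule, and all names are assumptions.

    import numpy as np

    def fit_som(points, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0):
        """Fit a 2-D SOM grid to a point cloud of shape (N, d)."""
        rows, cols = grid
        rng = np.random.default_rng(0)
        nodes = points[rng.choice(len(points), rows * cols)]
        coords = np.array([(r, c) for r in range(rows) for c in range(cols)])
        for e in range(epochs):
            lr = lr0 * (1 - e / epochs)               # decaying learning rate
            sigma = sigma0 * (1 - e / epochs) + 0.5   # shrinking neighborhood
            for p in points:
                bmu = np.argmin(((nodes - p) ** 2).sum(axis=1))
                d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
                h = np.exp(-d2 / (2 * sigma ** 2))    # neighborhood kernel
                nodes += lr * h[:, None] * (p - nodes)
        return nodes.reshape(rows, cols, -1)

    def snap_lengths(lengths, n_sizes=4, iters=50):
        """Quantize member lengths to n_sizes standard sizes (1-D k-means),
        minimizing the deviation from the original lengths."""
        centers = np.quantile(lengths, np.linspace(0, 1, n_sizes))
        for _ in range(iters):
            labels = np.argmin(np.abs(lengths[:, None] - centers[None, :]), axis=1)
            for k in range(n_sizes):
                if (labels == k).any():
                    centers[k] = lengths[labels == k].mean()
        return centers[labels]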
Distributed synaptic weights in a LIF neural network and learning rules
Leaky integrate-and-fire (LIF) models are mean-field limits, with a large number of neurons, used to describe neural networks. We consider inhomogeneous networks structured by a connectivity parameter (the strengths of the synaptic weights), which has the effect of processing the input current with different intensities. We first study the properties of the network activity as they depend on the distribution of synaptic weights, and in particular its discrimination capacity. We then consider simple learning rules and determine the synaptic weight distributions they generate. We outline the role of noise as a selection principle and the capacity to memorize a learned signal.
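A minimal sketch of the basic mechanism, a population of uncoupled LIF neurons whose individual synaptic weights process a common input current with different intensities; all parameters are illustrative and make no claim to match the paper's mean-field setting:

    import numpy as np

    def lif_rates(weights, I_ext=2.0, v_th=1.0, v_reset=0.0, tau=10.0,
                  dt=0.1, T=1000.0):
        """Firing rates (Hz) of uncoupled LIF neurons, each scaling a
        common input current by its own synaptic weight."""
        v = np.zeros_like(weights)
        spikes = np.zeros_like(weights)
        for _ in range(int(T / dt)):
            v += dt / tau * (-v + weights * I_ext)   # leaky integration
            fired = v >= v_th
            spikes += fired
            v[fired] = v_reset                       # reset after a spike
        return spikes / (T / 1000.0)

    # Heterogeneous weights turn one input intensity into distinct rates:
    w = np.linspace(0.4, 2.0, 5)
    print(lif_rates(w))   # weakest weight stays silent, the rest fire faster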
Eligibility Traces and Plasticity on Behavioral Time Scales: Experimental Support of neoHebbian Three-Factor Learning Rules
Most elementary behaviors, such as moving the arm to grasp an object or walking into the next room to explore a museum, evolve on the time scale of seconds; in contrast, neuronal action potentials occur on the time scale of a few milliseconds. Learning rules of the brain must therefore bridge the gap between these two different time scales.

Modern theories of synaptic plasticity have postulated that the co-activation of pre- and postsynaptic neurons sets a flag at the synapse, called an eligibility trace, that leads to a weight change only if an additional factor is present while the flag is set. This third factor, signaling reward, punishment, surprise, or novelty, could be implemented by the phasic activity of neuromodulators or by specific neuronal inputs signaling special events. While the theoretical framework has been developed over the last decades, experimental evidence in support of eligibility traces on the time scale of seconds has been collected only during the last few years.

Here we review, in the context of three-factor rules of synaptic plasticity, four key experiments that support the role of synaptic eligibility traces in combination with a third factor as a biological implementation of neoHebbian three-factor learning rules.
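A minimal sketch of a three-factor update of the kind described above; the time constants and gating form are illustrative assumptions, not taken from any specific experiment:

    def three_factor_step(w, e, pre, post, third_factor,
                          eta=0.01, tau_e=2.0, dt=0.1):
        """One step of a neoHebbian three-factor rule (illustrative form).
        Co-activation sets the eligibility trace e, which decays over
        seconds; the weight changes only while a third factor is present."""
        e += dt * (-e / tau_e + pre * post)   # Hebbian flag with slow decay
        w += eta * third_factor * e           # gated weight change
        return w, e

    w, e = 0.5, 0.0
    for step in range(30):                     # 3 s at dt = 0.1 s
        pre_post = 1.0 if step == 0 else 0.0   # brief co-activation at t = 0
        reward = 1.0 if step == 10 else 0.0    # third factor arrives 1 s later
        w, e = three_factor_step(w, e, pre_post, pre_post, reward)
    print(w)                                   # > 0.5: the trace bridged the gap

Co-activation at t = 0 sets the trace; because the trace decays over seconds rather than milliseconds, a reward arriving one second later can still convert it into a weight change.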
Logarithmic distributions prove that intrinsic learning is Hebbian
In this paper, we present data on the lognormal distributions of spike rates, synaptic weights, and intrinsic excitability (gain) for neurons in various brain areas, such as auditory and visual cortex, hippocampus, cerebellum, striatum, and midbrain nuclei. We find a remarkable consistency of heavy-tailed, specifically lognormal, distributions for rates, weights, and gains in all brain areas examined. Differences in connectivity (strongly recurrent cortex vs. feed-forward striatum and cerebellum), in neurotransmitter (GABA in the striatum vs. glutamate in the cortex), and in the level of activation (low in cortex, high in Purkinje cells and midbrain nuclei) turn out to be irrelevant for this feature. A logarithmic-scale distribution of weights and gains appears to be a general, functional property in all cases analyzed. We then created a generic neural model to investigate adaptive learning rules that create and maintain lognormal distributions. We conclusively demonstrate that not only weights but also intrinsic gains need to undergo strong Hebbian learning in order to produce and maintain the experimentally attested distributions. This provides a solution to the long-standing question about the type of plasticity exhibited by intrinsic excitability.
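A generic illustration of why multiplicative, activity-proportional updates produce lognormal distributions, independent of this paper's specific model: multiplicative steps act additively on log w, so the central limit theorem drives log w toward a normal distribution. The parameters below are arbitrary.

    import numpy as np

    rng = np.random.default_rng(1)
    w = np.full(10_000, 0.1)                        # identical initial weights
    for _ in range(500):
        hebb = rng.normal(0.0, 0.05, size=w.size)   # fluctuating Hebbian term
        w *= np.exp(hebb)                           # multiplicative update, w stays > 0
        w *= 0.1 / w.mean()                         # homeostatic rescaling of the mean
    log_w = np.log(w)
    print(log_w.mean(), log_w.std())                # log w is ~normal, so w is ~lognormal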
The Role of Constraints in Hebbian Learning
Models of unsupervised, correlation-based (Hebbian) synaptic plasticity are typically unstable: either all synapses grow until each reaches the maximum allowed strength, or all synapses decay to zero strength. A common method of avoiding these outcomes is to use a constraint that conserves or limits the total synaptic strength over a cell. We study the dynamic effects of such constraints.
Two methods of enforcing a constraint are distinguished, multiplicative and subtractive. For otherwise linear learning rules, multiplicative enforcement of a constraint results in dynamics that converge to the principal eigenvector of the operator determining unconstrained synaptic development. Subtractive enforcement, in contrast, typically leads to a final state in which almost all synaptic strengths reach either the maximum or minimum allowed value. This final state is often dominated by weight configurations other than the principal eigenvector of the unconstrained operator. Multiplicative enforcement yields a “graded” receptive field in which most mutually correlated inputs are represented, whereas subtractive enforcement yields a receptive field that is “sharpened” to a subset of maximally correlated inputs. If two equivalent input populations (e.g., two eyes) innervate a common target, multiplicative enforcement prevents their segregation (ocular dominance segregation) when the two populations are weakly correlated; whereas subtractive enforcement allows segregation under these circumstances.
These results may be used to understand constraints both over output cells and over input cells. A variety of rules that can implement constrained dynamics are discussed.
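A minimal sketch contrasting the two enforcement methods on a linear Hebbian rule dw/dt = eta * C @ w; the correlation matrix and parameters are illustrative, not from the paper:

    import numpy as np

    def constrained_hebb(C, kind, steps=2000, eta=0.01, w_max=1.0):
        """Linear Hebbian growth with the summed weight held at its
        initial value, enforced multiplicatively or subtractively."""
        rng = np.random.default_rng(0)
        w = rng.uniform(0.4, 0.6, size=len(C))
        total = w.sum()
        for _ in range(steps):
            w = w + eta * C @ w                   # unconstrained Hebbian growth
            if kind == "multiplicative":
                w *= total / w.sum()              # rescale all weights
            else:                                 # "subtractive"
                w -= (w.sum() - total) / len(w)   # subtract equally from all
            w = np.clip(w, 0.0, w_max)            # saturation limits
        return w

    # Two mutually correlated input groups (e.g., two "eyes"):
    C = np.array([[1.0, 0.8, 0.2, 0.2],
                  [0.8, 1.0, 0.2, 0.2],
                  [0.2, 0.2, 1.0, 0.8],
                  [0.2, 0.2, 0.8, 1.0]])
    print(constrained_hebb(C, "multiplicative"))  # graded, near the principal eigenvector
    print(constrained_hebb(C, "subtractive"))     # saturated: one group wins, one loses

With this two-group correlation matrix, multiplicative enforcement converges toward the graded principal eigenvector, while subtractive enforcement drives one group of weights to the maximum and the other to zero, the segregation described above.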
Generating functionals for computational intelligence: the Fisher information as an objective function for self-limiting Hebbian learning rules
Generating functionals may guide the evolution of a dynamical system and constitute a possible route for handling the complexity of neural networks relevant for computational intelligence. We propose and explore a new objective function that yields plasticity rules for the afferent synaptic weights. The adaptation rules are Hebbian and self-limiting, and result from the minimization of the Fisher information with respect to the synaptic flux. We perform a series of simulations examining the behavior of the new learning rules in various circumstances. The vector of synaptic weights aligns with the principal direction of the input activities whenever one is present. A linear discrimination is performed when there are two or more principal directions; directions with bimodal firing-rate distributions, characterized by a negative excess kurtosis, are preferred. We find robust performance, and full homeostatic adaptation of the synaptic weights results as a by-product of the synaptic flux minimization. This self-limiting behavior allows stable online learning for arbitrary durations. The neuron acquires new information when the statistics of the input activities change at a certain point in the simulation, showing, however, a distinct resilience against unlearning previously acquired knowledge. Learning is fast when starting from randomly drawn synaptic weights and substantially slower when the synaptic weights are already fully adapted.
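The paper's rule is derived from the Fisher-information objective; as a generic stand-in with the same qualitative behavior (self-limiting Hebbian learning whose weight vector aligns with the principal direction of the inputs), here is Oja's rule, explicitly not the authors' rule:

    import numpy as np

    def oja_step(w, x, eta=0.005):
        """Oja's rule: a classic self-limiting Hebbian update. The
        -y**2 * w decay term bounds the weight norm without hard limits."""
        y = w @ x                       # postsynaptic activity
        return w + eta * y * (x - y * w)

    # The weight vector aligns with the principal direction of the inputs:
    rng = np.random.default_rng(2)
    cov = np.array([[3.0, 1.0],
                    [1.0, 1.0]])
    L = np.linalg.cholesky(cov)
    w = rng.normal(size=2)
    for _ in range(5000):
        x = L @ rng.normal(size=2)      # inputs with a dominant direction
        w = oja_step(w, x)
    print(w / np.linalg.norm(w))        # ~leading eigenvector of cov, up to sign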