Storage Capacity of Extremely Diluted Hopfield Model
The storage capacity of the extremely diluted Hopfield Model is studied by
using Monte Carlo techniques. In this work, instead of diluting the synapses
according to a given distribution, the dilution of the synapses is obtained
systematically by retaining only the synapses with dominant contributions. It
is observed that by using the prescribed dilution method the critical storage
capacity of the system increases with decreasing number of synapses per neuron
reaching almost the value obtained from mean-field calculations. It is also
shown that the increase of the storage capacity of the diluted system depends
on the storage capacity of the fully connected Hopfield Model and the fraction
of the diluted synapses.
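The dilution procedure described in the abstract, retaining only the synapses with dominant contributions, can be sketched as follows. The network size, the load, and the number k of retained synapses per neuron are illustrative choices, not the paper's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

N, P = 200, 10                          # neurons, stored patterns (illustrative)
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian couplings of the fully connected Hopfield model
J = patterns.T @ patterns / N
np.fill_diagonal(J, 0.0)

def dilute(J, k):
    """Systematic dilution: per neuron, keep only the k synapses with the
    largest |J_ij| (the "dominant contributions"), zero out the rest."""
    Jd = np.zeros_like(J)
    for i in range(J.shape[0]):
        keep = np.argsort(np.abs(J[i]))[-k:]
        Jd[i, keep] = J[i, keep]
    return Jd

Jd = dilute(J, k=50)

# Zero-temperature Monte Carlo (asynchronous) dynamics started at a stored pattern
s = patterns[0].copy()
for _ in range(5 * N):
    i = rng.integers(N)
    s[i] = 1 if Jd[i] @ s >= 0 else -1

overlap = abs(s @ patterns[0]) / N       # recall quality of pattern 0
```

At this low load the diluted network still retrieves the stored pattern, consistent with the abstract's observation that dominant-synapse dilution preserves (and can even improve) storage.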
Spontaneous structure formation in a network of chaotic units with variable connection strengths
As a model of temporally evolving networks, we consider a globally coupled
logistic map with variable connection weights. The model exhibits
self-organization of network structure, reflected by the collective behavior of
units. Structural order emerges even without any inter-unit synchronization of
dynamics. Within this structure, units spontaneously separate into two groups
whose distinguishing feature is that the first group possesses many
outwardly-directed connections to the second group, while the second group
possesses only a few outwardly-directed connections to the first. The relevance
of the results to structure formation in neural networks is briefly discussed.
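A minimal sketch of a globally coupled logistic map with variable connection weights. The weight rule here, strengthening connections between units in similar states and then renormalizing each unit's incoming weights, is an assumption chosen to illustrate the mechanism; the paper's exact update rule may differ:

```python
import numpy as np

rng = np.random.default_rng(1)

N, a, c, delta = 50, 3.97, 0.2, 0.1      # units, map parameter, coupling, plasticity rate
f = lambda x: a * x * (1 - x)            # chaotic logistic map

x = rng.random(N)
w = rng.random((N, N))                   # connection weights w[i, j]: j -> i
np.fill_diagonal(w, 0.0)
w /= w.sum(axis=1, keepdims=True)        # each unit's incoming weights sum to 1

for _ in range(1000):
    fx = f(x)
    x = (1 - c) * fx + c * (w @ fx)      # globally coupled map update
    # Hebbian-like weight change: reinforce connections between units in
    # similar states, then renormalize rows (competition among inputs).
    sim = 1.0 - np.abs(x[:, None] - x[None, :])
    w *= 1.0 + delta * sim
    np.fill_diagonal(w, 0.0)
    w /= w.sum(axis=1, keepdims=True)
```

Because rows are renormalized, reinforcing some connections necessarily weakens others, which is the kind of competition that lets directed structure emerge without synchronization.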
Memory Aware Synapses: Learning what (not) to forget
Humans can learn in a continuous manner. Old rarely utilized knowledge can be
overwritten by new incoming information while important, frequently used
knowledge is prevented from being erased. In artificial learning systems,
lifelong learning so far has focused mainly on accumulating knowledge over
tasks and overcoming catastrophic forgetting. In this paper, we argue that,
given the limited model capacity and the unlimited new information to be
learned, knowledge has to be preserved or erased selectively. Inspired by
neuroplasticity, we propose a novel approach for lifelong learning, coined
Memory Aware Synapses (MAS). It computes the importance of the parameters of a
neural network in an unsupervised and online manner. Given a new sample which
is fed to the network, MAS accumulates an importance measure for each parameter
of the network, based on how sensitive the predicted output function is to a
change in this parameter. When learning a new task, changes to important
parameters can then be penalized, effectively preventing important knowledge
related to previous tasks from being overwritten. Further, we show an
interesting connection between a local version of our method and Hebb's
rule, which is a model for the learning process in the brain. We test our method
on a sequence of object recognition tasks and on the challenging problem of
learning an embedding for predicting triplets.
We show state-of-the-art performance and, for the first time, the ability to
adapt the importance of the parameters based on unlabeled data towards what the
network needs (not) to forget, which may vary depending on test conditions.
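The importance measure the abstract describes, sensitivity of the (squared L2 norm of the) predicted output to each parameter, accumulated over unlabeled samples, can be sketched for a toy two-layer network. Shapes and the penalty weight are illustrative, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(2)
relu = lambda z: np.maximum(z, 0.0)

# Tiny two-layer network y = W2 relu(W1 x)
W1 = rng.standard_normal((8, 4)) * 0.5
W2 = rng.standard_normal((3, 8)) * 0.5

def mas_importance(W1, W2, xs):
    """Accumulate |d ||y||^2 / d theta| over (unlabeled) samples xs."""
    O1, O2 = np.zeros_like(W1), np.zeros_like(W2)
    for x in xs:
        z = W1 @ x
        h = relu(z)
        y = W2 @ h
        dy = 2.0 * y                      # gradient of g = ||y||^2 w.r.t. y
        O2 += np.abs(np.outer(dy, h))     # dg/dW2 = dy h^T
        dz = (W2.T @ dy) * (z > 0)        # backprop through the ReLU
        O1 += np.abs(np.outer(dz, x))     # dg/dW1 = dz x^T
    return O1 / len(xs), O2 / len(xs)

xs = rng.standard_normal((100, 4))        # unlabeled data; no targets needed
Omega1, Omega2 = mas_importance(W1, W2, xs)

# When learning the next task, important parameters are anchored by a penalty
lam, W1_star = 1.0, W1.copy()
penalty = lam * np.sum(Omega1 * (W1 - W1_star) ** 2)   # zero before any change
```

Note that no labels appear anywhere in the importance computation, which is what lets MAS adapt the importance weights to whatever unlabeled data the deployed network actually sees.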
Effects of Synaptic and Myelin Plasticity on Learning in a Network of Kuramoto Phase Oscillators
Models of learning typically focus on synaptic plasticity. However, learning
is the result of both synaptic and myelin plasticity. Specifically, synaptic
changes often co-occur and interact with myelin changes, leading to complex
dynamic interactions between these processes. Here, we investigate the
implications of these interactions for the coupling behavior of a system of
Kuramoto oscillators. To that end, we construct a fully connected,
one-dimensional ring network of phase oscillators whose coupling strength
(reflecting synaptic strength) as well as conduction velocity (reflecting
myelination) are each regulated by a Hebbian learning rule. We evaluate the
behavior of the system in terms of structural (pairwise connection strength and
conduction velocity) and functional connectivity (local and global
synchronization behavior). We find that for conditions in which a system
limited to synaptic plasticity develops two distinct clusters both structurally
and functionally, additional adaptive myelination allows for functional
communication across these structural clusters. Hence, dynamic conduction
velocity permits the functional integration of structurally segregated
clusters. Our results confirm that network states following learning may be
different when myelin plasticity is considered in addition to synaptic
plasticity, pointing towards the relevance of integrating both factors in
computational models of learning.
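A rough sketch of phase oscillators with both adaptive coupling strengths and adaptive conduction delays. Two simplifications are assumptions made here for brevity: the delayed phase is approximated as theta_j minus omega_j times the delay (rather than storing phase history), and the two Hebbian rules (cosine rule for coupling, sine rule for delay) are plausible stand-ins, not the paper's exact equations:

```python
import numpy as np

rng = np.random.default_rng(3)

N, dt, T = 30, 0.01, 2000
omega = rng.normal(1.0, 0.1, N)          # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)
k = np.full((N, N), 0.5)                 # coupling strengths ("synapses")
tau = np.full((N, N), 1.0)               # conduction delays ("myelination")
eps_k, eps_tau = 0.05, 0.05              # plasticity rates

for _ in range(T):
    # delayed phase of j as seen by i, approximated via the natural frequency
    phase_j = theta[None, :] - omega[None, :] * tau
    diff = phase_j - theta[:, None]
    theta = (theta + dt * (omega + (k * np.sin(diff)).mean(axis=1))) % (2 * np.pi)
    # Hebbian coupling rule: strengthen near-synchronous pairs
    k = np.clip(k + dt * eps_k * np.cos(diff), 0.0, 1.0)
    # delay ("myelin") rule: adjust delays that leave a pair out of phase
    tau = np.clip(tau - dt * eps_tau * np.sin(diff), 0.1, 2.0)

R = abs(np.exp(1j * theta).mean())       # global Kuramoto order parameter
```

The point of the sketch is the coupling of the two plasticity mechanisms: the delay rule can bring effective phases into alignment even between oscillators whose coupling strengths have segregated into clusters.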
Supervised Learning in Multilayer Spiking Neural Networks
The current article introduces a supervised learning algorithm for multilayer
spiking neural networks. The algorithm presented here overcomes some
limitations of existing learning algorithms as it can be applied to neurons
firing multiple spikes and it can in principle be applied to any linearisable
neuron model. The algorithm is applied successfully to various benchmarks, such
as the XOR problem and the Iris data set, as well as complex classification
problems. The simulations also show the flexibility of this supervised learning
algorithm, which permits different encodings of the spike timing patterns,
including precise spike-train encoding.
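The abstract does not specify the learning rule itself, so no attempt is made to reproduce it here. What can be illustrated is the kind of neuron model the algorithm targets: a simple linearisable model that fires multiple spikes, such as a leaky integrate-and-fire neuron:

```python
import numpy as np

def lif_spike_times(input_current, dt=1.0, tau=10.0, v_th=1.0, v_reset=0.0):
    """Minimal leaky integrate-and-fire neuron (illustrative only).

    Integrates dv/dt = -v/tau + I with forward Euler; each threshold
    crossing emits a spike time and resets v, so the neuron can fire
    multiple spikes per trial.
    """
    v, spikes = 0.0, []
    for step, I in enumerate(input_current):
        v += dt * (-v / tau + I)          # leaky integration
        if v >= v_th:                     # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset
    return spikes

# constant drive produces a regular multi-spike train
times = lif_spike_times(np.full(100, 0.2))
```

Between spikes the subthreshold dynamics is linear in v, which is the property ("linearisable") that the abstract says the algorithm exploits.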
A Heterosynaptic Learning Rule for Neural Networks
In this article we introduce a novel stochastic Hebb-like learning rule for
neural networks that is neurobiologically motivated. This learning rule
combines features of unsupervised (Hebbian) and supervised (reinforcement)
learning and is stochastic with respect to the selection of the time points
when a synapse is modified. Moreover, the learning rule affects not only the
synapse between pre- and postsynaptic neuron, which is called homosynaptic
plasticity, but also more remote synapses of the pre- and postsynaptic
neurons. This more complex form of synaptic plasticity has recently come
under investigation in neurobiology and is called heterosynaptic plasticity.
We demonstrate that this learning rule is useful in training neural networks
by learning parity functions, including the exclusive-or (XOR) mapping, in a
multilayer feed-forward network. We find that our stochastic learning rule
works well, even in the presence of noise. Importantly, the mean learning
time increases only polynomially with the number of patterns to be learned,
indicating efficient learning.
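The rule's two defining ingredients, stochastic update timing and spillover to other synapses of the same pre- and postsynaptic neurons, can be pictured with a single hypothetical update step. The update function, its parameters, and the spillover fraction are illustrative inventions, not the paper's actual rule:

```python
import numpy as np

rng = np.random.default_rng(4)

# weight matrix w[i, j]: synapse from presynaptic neuron i to postsynaptic j
n_pre, n_post = 4, 3
w = rng.standard_normal((n_pre, n_post)) * 0.1

def hetero_update(w, i, j, reward, eta=0.1, spill=0.3, p_update=1.0):
    """One stochastic Hebb-like step for the active (i, j) synapse.

    With probability p_update the synapse is modified (stochastic timing);
    the reinforcement signal sets the sign; a fraction `spill` of the change
    leaks to the other synapses of neurons i and j (heterosynaptic part).
    """
    w = w.copy()
    if rng.random() < p_update:
        dw = eta * (1.0 if reward else -1.0)
        mask = np.zeros_like(w)
        mask[i, :] = spill                # heterosynaptic: presynaptic neighbours
        mask[:, j] = spill                # heterosynaptic: postsynaptic neighbours
        mask[i, j] = 1.0                  # homosynaptic: full change at (i, j)
        w += mask * dw
    return w

w2 = hetero_update(w, 1, 2, reward=True)
```

Synapses sharing neither neuron with the active pair (here, e.g., the (0, 0) synapse) are untouched, which separates heterosynaptic plasticity from a global weight change.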
Learning by message-passing in networks of discrete synapses
We show that a message-passing process makes it possible to store in binary
"material" synapses a number of random patterns which almost saturates the
information-theoretic bounds. We apply the learning algorithm to networks
characterized by a wide range of different connection topologies and of size
comparable with that of biological systems. The algorithm can be turned into
an on-line, fault-tolerant learning protocol of potential interest in
modeling aspects of synaptic plasticity and in building neuromorphic devices.
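The paper's message-passing algorithm itself is considerably more involved; what follows is a much simpler, related baseline that conveys the setting: an on-line clipped-perceptron rule where hidden integer states drive binary visible synapses. This baseline is an assumption for illustration, not the algorithm the paper proposes, and its capacity is far below the information-theoretic bound at large sizes:

```python
import numpy as np

rng = np.random.default_rng(5)

N, P = 101, 5                               # synapses, patterns (low load)
xi = rng.choice([-1, 1], size=(P, N))       # random input patterns
sigma = rng.choice([-1, 1], size=P)         # desired outputs

h = np.zeros(N, dtype=int)                  # hidden integer states

def w(h):
    """Visible binary ("material") synapses: the sign of the hidden state."""
    return np.where(h >= 0, 1, -1)

for _ in range(1000):                       # on-line sweeps over the patterns
    errors = 0
    for mu in range(P):
        if sigma[mu] * (w(h) @ xi[mu]) <= 0:    # misclassified pattern
            h += sigma[mu] * xi[mu]             # perceptron-style push on h
            errors += 1
    if errors == 0:
        break

train_errors = sum(sigma[mu] * (w(h) @ xi[mu]) <= 0 for mu in range(P))
```

The hidden-state trick is what makes on-line learning with strictly binary visible synapses possible at all; the message-passing construction in the paper achieves this near the theoretical capacity.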
Extending Feynman's Formalisms for Modelling Human Joint Action Coordination
The recently developed Life-Space-Foam approach to goal-directed human action
deals with individual actor dynamics. This paper applies the model to
characterize the dynamics of co-action by two or more actors. This dynamics
is modelled by: (i) a two-term joint action (including cognitive/motivational
potential and kinetic energy), and (ii) its associated adaptive path integral,
representing an infinite-dimensional neural network. Its feedback adaptation
loop has been derived from Bernstein's concept of a sensory-corrections loop
in human motor control and Brooks' subsumption architectures in robotics.
Potential applications of the proposed model in human-robot interaction
research are discussed.
Keywords: psychophysics, human joint action, path integrals
COMPLEXITY AND PREFERENCE IN ANIMALS AND MEN
Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/73010/1/j.1749-6632.1970.tb27005.x.pd
Unstable Dynamics, Nonequilibrium Phases and Criticality in Networked Excitable Media
Here we numerically study a model of excitable media, namely, a network with
occasionally quiet nodes and connection weights that vary with activity on a
short time scale. Even in the absence of stimuli, this exhibits unstable
dynamics and nonequilibrium phases, including one in which the global activity
wanders irregularly among attractors, as well as 1/f noise as the system falls
into its most irregular behavior. A net result is a resilience that yields an
efficient search of the model's attractor space, which can explain the origin
of certain phenomenology in neural, genetic, and ill-condensed-matter systems.
By extensive computer simulation we also address a previously conjectured
relation between observed power-law distributions and the occurrence of a
"critical state" during the functioning of (e.g.) cortical networks, and
describe the precise nature of such criticality in the model.
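A rough sketch of the two ingredients the abstract names: occasionally quiet nodes and connection weights that fluctuate with activity on a short time scale. Hopfield-like couplings with fast multiplicative synaptic noise are assumed here for concreteness; parameter values are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(6)

N, P = 100, 3
patterns = rng.choice([-1, 1], size=(P, N))
J = patterns.T @ patterns / N            # static Hopfield-like couplings
np.fill_diagonal(J, 0.0)

s = patterns[0].copy()
rho = 0.9        # probability a node is available ("occasionally quiet" nodes)
phi = -0.5       # fast-noise factor applied to transiently depressed synapses

overlaps = []
for _ in range(3000):
    i = rng.integers(N)
    # short-time synaptic variability: a random subset of weights is
    # transiently rescaled by phi at each step, the rest left unchanged
    noise = np.where(rng.random(N) < 0.4, phi, 1.0)
    if rng.random() < rho:               # quiet nodes skip their update
        s[i] = 1 if (noise * J[i]) @ s >= 0 else -1
    overlaps.append(patterns @ s / N)    # overlap with each stored pattern

m = np.array(overlaps)                   # trajectory through attractor space
hops = (np.abs(m).argmax(axis=1)[1:] != np.abs(m).argmax(axis=1)[:-1]).sum()
```

Tracking which pattern currently dominates the overlap vector is one way to observe the irregular wandering among attractors that the abstract describes.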