Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems
Neuromorphic chips embody computational principles operating in the nervous
system, into microelectronic devices. In this domain it is important to
identify computational primitives that theory and experiments suggest as
generic and reusable cognitive elements. One such element is provided by
attractor dynamics in recurrent networks. Point attractors are equilibrium
states of the dynamics (up to fluctuations), determined by the synaptic
structure of the network; a `basin' of attraction comprises all initial states
leading to a given attractor upon relaxation, hence making attractor dynamics
suitable to implement robust associative memory. The initial network state is
dictated by the stimulus, and relaxation to the attractor state implements the
retrieval of the corresponding memorized prototypical pattern. In a previous
work we demonstrated that a neuromorphic recurrent network of spiking neurons
and suitably chosen, fixed synapses supports attractor dynamics. Here we focus
on learning: activating on-chip synaptic plasticity and using a theory-driven
strategy for choosing network parameters, we show that autonomous learning,
following repeated presentation of simple visual stimuli, shapes a synaptic
connectivity supporting stimulus-selective attractors. Associative memory
develops on chip as the result of the coupled stimulus-driven neural activity
and ensuing synaptic dynamics, with no artificial separation between learning
and retrieval phases.
Comment: submitted to Scientific Reports
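The retrieval dynamics described above can be illustrated with a minimal Hopfield-style sketch: a software toy with binary units and a Hebbian synaptic matrix, not the neuromorphic spiking circuit of the paper. The network size, number of prototypes, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5                          # neurons, stored prototypes
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian synaptic matrix: each stored pattern becomes a point attractor
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

# The stimulus dictates the initial state: a corrupted copy of pattern 0
state = patterns[0].astype(float)
flip = rng.choice(N, size=30, replace=False)
state[flip] *= -1

# Relaxation: iterate until a fixed point (the attractor) is reached
for _ in range(50):
    new = np.sign(W @ state)
    new[new == 0] = 1.0
    if np.array_equal(new, state):
        break
    state = new

overlap = state @ patterns[0] / N      # 1.0 means perfect retrieval
```

All initial states in the basin of attraction of pattern 0 relax to (approximately) that prototype, which is the associative-memory retrieval the abstract describes.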
A balanced memory network
A fundamental problem in neuroscience is understanding how working memory, the ability to store information at intermediate timescales like tens of seconds, is implemented in realistic neuronal networks. The most likely candidate mechanism is the attractor network, and a great deal of effort has gone toward investigating it theoretically. Yet, despite almost a quarter century of intense work, attractor networks are not fully understood. In particular, there are still two unanswered questions. First, how is it that attractor networks exhibit irregular firing, as is observed experimentally during working memory tasks? And second, how many memories can be stored under biologically realistic conditions? Here we answer both questions by studying an attractor neural network in which inhibition and excitation balance each other. Using mean-field analysis, we derive a three-variable description of attractor networks. From this description it follows that irregular firing can exist only if the number of neurons involved in a memory is large. The same mean-field analysis also shows that the number of memories that can be stored in a network scales with the number of excitatory connections, a result that has been suggested for simple models but never shown for realistic ones. Both of these predictions are verified using simulations with large networks of spiking neurons.
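The scaling claim, that storage capacity tracks the number of connections per neuron rather than network size, can be caricatured in a toy diluted (non-balanced, non-spiking) Hopfield network. The diluted connectivity, the parameters, and the retrieval criterion here are assumptions of this sketch, not the paper's mean-field model.

```python
import numpy as np

rng = np.random.default_rng(1)

def retrieval_overlap(N, K, P):
    """Diluted Hopfield network: each neuron receives exactly K random
    inputs. Cue a corrupted pattern, relax, return the final overlap."""
    xi = rng.choice([-1, 1], size=(P, N))
    C = np.zeros((N, N))               # connectivity mask, K inputs/neuron
    for i in range(N):
        others = np.delete(np.arange(N), i)
        C[i, rng.choice(others, size=K, replace=False)] = 1
    W = C * (xi.T @ xi) / K
    s = xi[0].astype(float)
    flip = rng.choice(N, size=N // 10, replace=False)
    s[flip] *= -1                      # corrupt 10% of the units
    for _ in range(30):
        new = np.sign(W @ s)
        new[new == 0] = 1.0
        if np.array_equal(new, s):
            break
        s = new
    return s @ xi[0] / N

# At fixed load P/K, retrieval quality is comparable: the number of
# storable memories tracks connections per neuron, not network size.
m_sparse = retrieval_overlap(N=300, K=30, P=3)
m_dense = retrieval_overlap(N=300, K=60, P=6)
```

Doubling the in-degree K while doubling the number of stored patterns leaves the load per synapse, and hence retrieval quality, roughly unchanged.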
The Performance of Associative Memory Models with Biologically Inspired Connectivity
This thesis is concerned with one important question in artificial neural networks, that is, how biologically inspired connectivity of a network affects its associative memory performance.
In recent years, research on the mammalian cerebral cortex, which bears the main responsibility for associative memory function in the brain, suggests that this cortical network is far from fully connected, as is commonly assumed in traditional associative memory models. It is found to be a sparse network with interesting connectivity characteristics, such as "small world network" properties: short Mean Path Length, high Clustering Coefficient, and high Global and Local Efficiency. Most of the networks in this thesis are therefore sparsely connected.
There is, however, no conclusive evidence of how these different connectivity characteristics affect the associative memory performance of a network. This thesis addresses that question using networks with different types of connectivity, inspired by biological evidence.
The findings of this programme are unexpected and important. Results show that the performance of a non-spiking associative memory model can be predicted from its linear correlation with the Clustering Coefficient of the network, regardless of the detailed connectivity patterns. This is particularly important because the Clustering Coefficient is a static measure of one aspect of connectivity, whilst the associative memory performance reflects the result of a complex dynamic process.
On the other hand, this research reveals that improvements in the performance of a network do not necessarily rely on an increase in the network's wiring cost. It is therefore possible to construct networks with high associative memory performance but relatively low wiring cost. In particular, Gaussian distributed connectivity is found to achieve the best performance with the lowest wiring cost of all examined connectivity models.
Our results from this programme also suggest that a modular network with an
appropriate configuration of Gaussian distributed connectivity, both internal to
each module and across modules, can perform nearly as well as the Gaussian
distributed non-modular network.
Finally, a comparison between non-spiking and spiking associative memory models suggests that, in terms of associative memory performance, the implications of connectivity transcend the details of the actual neural models, that is, whether the neurons are spiking or non-spiking.
Binary Willshaw learning yields high synaptic capacity for long-term familiarity memory
We investigate from a computational perspective the efficiency of the
Willshaw synaptic update rule in the context of familiarity discrimination, a
binary-answer, memory-related task that has been linked through psychophysical
experiments with modified neural activity patterns in the prefrontal and
perirhinal cortex regions. Our motivation for revisiting this well-known learning prescription is twofold: first, the switch-like nature of the induced synaptic bonds, since there is evidence that biological synaptic transitions may occur in a discrete, stepwise fashion; and second, the possibility that in the mammalian brain, unused, silent synapses might be pruned in the long term.
Besides the usual pattern and network capacities, we calculate the synaptic
capacity of the model, a recently proposed measure where only the functional
subset of synapses is taken into account. We find that in terms of network
capacity, Willshaw learning is strongly affected by the pattern coding rates,
which have to be kept fixed and very low at any time to achieve a non-zero
capacity in the large network limit. The information carried per functional
synapse, however, diverges and is comparable to that of the pattern association
case, even for more realistic moderately low activity levels that are a
function of network size.
Comment: 20 pages, 4 figures
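A minimal sketch of the binary Willshaw rule, together with a simple familiarity score built on it. The score, the pattern statistics, and the discrimination criterion are illustrative assumptions of this sketch, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(2)
N, P, k = 400, 40, 8                   # units, stored patterns, active units

def sparse_pattern():
    x = np.zeros(N, dtype=int)
    x[rng.choice(N, size=k, replace=False)] = 1
    return x

stored = [sparse_pattern() for _ in range(P)]

# Willshaw rule: a synapse switches on the first time a pattern
# co-activates its two units, and never switches back off
W = np.zeros((N, N), dtype=int)
for x in stored:
    W |= np.outer(x, x)
np.fill_diagonal(W, 0)

def familiarity(x):
    """Fraction of the maximum possible recurrent support: a stored
    pattern scores exactly 1; a novel one almost surely scores less."""
    return (x @ W @ x) / (k * (k - 1))

f_old = familiarity(stored[0])         # familiar: exactly 1.0
f_new = familiarity(sparse_pattern())  # novel: near the matrix density
```

Familiarity discrimination then reduces to thresholding this score; the low coding rate k/N keeps the potentiated-synapse density, and hence a novel pattern's score, well below 1.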
Fast and robust learning by reinforcement signals: explorations in the insect brain
We propose a model for pattern recognition in the insect brain. Starting from a well-known body of knowledge about the insect brain, we investigate which of the potentially present features may be useful to learn input patterns rapidly and in a stable manner. The plasticity underlying pattern recognition is situated in the insect mushroom bodies and requires an error signal to associate the stimulus with a proper response. As a proof of concept, we used our model insect brain to classify the well-known MNIST database of handwritten digits, a popular benchmark for classifiers. We show that the structural organization of the insect brain appears to be suitable for both fast learning of new stimuli and reasonable performance in stationary conditions. Furthermore, it is extremely robust to damage to the brain structures involved in sensory processing. Finally, we suggest that spatiotemporal dynamics can improve the level of confidence in a classification decision. The proposed approach allows testing the effect of hypothesized mechanisms rather than speculating on their benefit for system performance or confidence in its responses.
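The two ingredients named above, a fixed sparse expansion followed by error-signal learning at the output, can be caricatured as follows. The random projection, the delta-rule update, and all parameters are assumptions of this sketch, not the authors' insect-brain model.

```python
import numpy as np

rng = np.random.default_rng(3)
n_in, n_kc, n_cls = 20, 400, 3         # inputs, expansion units, classes

# Fixed random projection with sparse winner-take-most activity, loosely
# analogous to a projection-neuron -> Kenyon-cell expansion (assumption)
proj = rng.normal(size=(n_kc, n_in))

def expand(x, frac=0.05):
    kc = np.zeros(n_kc)
    kc[np.argsort(proj @ x)[-int(frac * n_kc):]] = 1.0
    return kc

# Output weights trained by an explicit error signal (delta rule),
# standing in for the reinforcement signal discussed above
prototypes = rng.normal(size=(n_cls, n_in))
W = np.zeros((n_cls, n_kc))
for _ in range(200):
    c = int(rng.integers(n_cls))
    kc = expand(prototypes[c] + 0.1 * rng.normal(size=n_in))
    target = -np.ones(n_cls)
    target[c] = 1.0
    W += 0.05 * np.outer(target - W @ kc, kc)   # error-driven update

# Evaluate on fresh noisy versions of each prototype
correct = sum(
    int(np.argmax(W @ expand(prototypes[c] + 0.1 * rng.normal(size=n_in))) == c)
    for c in range(n_cls)
)
```

Because only the readout layer learns, a few hundred error-driven updates suffice, which is the fast, stable learning regime the abstract attributes to this architecture.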
Optimal Learning with Excitatory and Inhibitory synapses
Characterizing the relation between weight structure and input/output
statistics is fundamental for understanding the computational capabilities of
neural circuits. In this work, I study the problem of storing associations
between analog signals in the presence of correlations, using methods from
statistical mechanics. I characterize the typical learning performance in terms
of the power spectrum of random input and output processes. I show that optimal
synaptic weight configurations reach a capacity of 0.5 for any fraction of
excitatory to inhibitory weights and have a peculiar synaptic distribution with
a finite fraction of silent synapses. I further provide a link between typical
learning performance and principal components analysis in single cases. These
results may shed light on the synaptic profile of brain circuits, such as
cerebellar structures, that are thought to engage in processing time-dependent
signals and performing on-line prediction.
Comment: 16 pages, 5 figures
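The sign-constrained optimization behind these results can be caricatured with projected gradient descent on a toy analog regression. This is an illustrative substitute for the paper's statistical-mechanics treatment; the E/I split, learning rate, and problem sizes are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(4)
N, T = 100, 30                 # synapses, analog samples (load T/N < 0.5)
X = rng.normal(size=(T, N))    # presynaptic analog input signals
y = rng.normal(size=T)         # desired analog output

# Sign constraints: 80% excitatory (w >= 0), 20% inhibitory (w <= 0)
sign = np.ones(N)
sign[80:] = -1.0

# Projected gradient descent on the squared error; weights that hit the
# sign boundary are pinned at zero, i.e. they become silent synapses
w = np.zeros(N)
lr = 1e-3
for _ in range(5000):
    w -= lr * (X.T @ (X @ w - y))
    w = sign * np.maximum(sign * w, 0.0)   # project onto allowed sign

err = np.mean((X @ w - y) ** 2)
silent = np.mean(w == 0.0)     # fraction of silent synapses
```

Below capacity the constrained problem is still solvable with low error, and the projection step naturally produces the distribution feature the abstract highlights: a finite fraction of weights sitting exactly at zero.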