Sparse neural networks with large learning diversity
Coded recurrent neural networks with three levels of sparsity are introduced.
The first level is related to the size of messages, much smaller than the
number of available neurons. The second one is provided by a particular coding
rule, acting as a local constraint in the neural activity. The third one is a
characteristic of the low final connection density of the network after the
learning phase. Though the proposed network is very simple since it is based on
binary neurons and binary connections, it is able to learn a large number of
messages and recall them, even in the presence of strong erasures. The
performance of the network is assessed both as a classifier and as an
associative memory.
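The binary-neuron, binary-connection recall described above can be illustrated with a minimal sketch. This is a plain Willshaw-style binary associative memory, not the paper's coded network (which adds the cluster-based coding rule on top); the pattern layout and helper names are illustrative.

```python
import numpy as np

# Minimal Willshaw-style binary associative memory: binary neurons,
# binary connections, recall by winner-take-all. The coded network in
# the paper adds a cluster-based coding rule on top of this idea.

def store(patterns, n):
    """OR together the outer products of binary patterns."""
    W = np.zeros((n, n), dtype=bool)
    for p in patterns:
        W |= np.outer(p, p).astype(bool)
    np.fill_diagonal(W, False)
    return W

def recall(W, probe, k):
    """Recover a stored pattern from a partially erased probe by keeping
    the k units with the highest input sums."""
    scores = W @ probe
    out = np.zeros_like(probe)
    out[np.argsort(scores)[-k:]] = 1
    return out

n, k = 64, 4
# Five disjoint sparse patterns (disjoint to keep the sketch deterministic).
patterns = []
for i in range(5):
    p = np.zeros(n, dtype=int)
    p[i * k:(i + 1) * k] = 1
    patterns.append(p)

W = store(patterns, n)
probe = patterns[0].copy()
probe[0] = 0                      # erase one of the four active units
recalled = recall(W, probe, k)
print(recalled @ patterns[0])     # 4: the full pattern is recovered
```

Even with an erased unit, the erased neuron receives input from all surviving active units and wins the winner-take-all step, which is the mechanism behind the erasure robustness the abstract reports.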
Energy Relaxation For Hopfield Network With The New Learning Rule.
In this paper, the energy relaxation time of the Little-Hopfield neural network under the new learning rule is shown to be shorter than the relaxation time obtained with Hebbian learning.
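The baseline being compared against can be grounded with a generic sketch of energy relaxation in a Hopfield network trained with the Hebbian rule; the paper's new learning rule itself is not reproduced here, and all function names are illustrative.

```python
import numpy as np

# Sketch of energy relaxation in a Hopfield network with Hebbian weights
# (the baseline the paper compares its new learning rule against).

def hebbian_weights(patterns):
    X = np.array(patterns)              # rows are +/-1 patterns
    W = X.T @ X / len(patterns)
    np.fill_diagonal(W, 0.0)            # no self-coupling
    return W

def energy(W, s):
    return -0.5 * s @ W @ s

def relax(W, s, max_sweeps=50):
    """Asynchronous updates until a fixed point; energy never increases."""
    s = s.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(s)):
            new = 1 if W[i] @ s >= 0 else -1
            if new != s[i]:
                s[i] = new
                changed = True
        if not changed:
            break
    return s

rng = np.random.default_rng(1)
p = rng.choice([-1, 1], size=16)
W = hebbian_weights([p])
noisy = p.copy()
noisy[:3] *= -1                    # corrupt three bits
s = relax(W, noisy)
print(int(s @ p))                  # 16: the stored pattern is fully recovered
```

The number of sweeps until `relax` reaches its fixed point is the relaxation time in question; each asynchronous flip lowers (or preserves) the energy, which is why convergence is guaranteed.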
New Insights on Learning Rules for Hopfield Networks: Memory and Objective Function Minimisation
Hopfield neural networks are a possible basis for modelling associative
memory in living organisms. After summarising previous studies in the field, we
take a new look at learning rules, exhibiting them as descent-type algorithms
for various cost functions. We also propose several new cost functions suitable
for learning. We discuss the role of biases (the external inputs) in the
learning process in Hopfield networks. Furthermore, we apply Newton's method for
learning memories, and experimentally compare the performances of various
learning rules. Finally, to add to the debate whether allowing connections of a
neuron to itself enhances memory capacity, we numerically investigate the
effects of self-coupling.
Keywords: Hopfield Networks, associative memory, content addressable memory,
learning rules, gradient descent, attractor networks
Comment: 8 pages, IEEE-Xplore, 2020 International Joint Conference on Neural
Networks (IJCNN), Glasgow
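The view of learning rules as descent-type algorithms can be sketched with one well-known instance, the perceptron-style rule for Hopfield memories, which descends a per-unit stability cost; the specific cost functions proposed in the paper are not reproduced, and the parameters here are illustrative.

```python
import numpy as np

# Perceptron-style learning for a Hopfield network, viewed as descent on a
# per-unit stability cost: whenever unit i misclassifies memory x (i.e.
# x_i * h_i <= 0), take a step that increases the margin x_i * h_i.

def train_descent(memories, epochs=100, lr=0.1):
    X = np.asarray(memories, dtype=float)   # rows are +/-1 memories
    n = X.shape[1]
    W = np.zeros((n, n))
    for _ in range(epochs):
        for x in X:
            h = W @ x
            for i in range(n):
                if x[i] * h[i] <= 0:        # memory not yet stable at unit i
                    W[i] += lr * x[i] * x   # descent step on the local cost
                    W[i, i] = 0.0           # no self-coupling
    return W

rng = np.random.default_rng(0)
X = rng.choice([-1.0, 1.0], size=(3, 12))   # three random memories, 12 units
W = train_descent(X)
# Every stored memory is now a fixed point of the update s -> sign(W s).
print(all(np.array_equal(np.sign(W @ x), x) for x in X))   # True
```

Each row of `W` is trained as an independent perceptron, so the whole procedure is gradient-like descent on a sum of local margin costs, the general framing the abstract describes.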
Hierarchical Associative Memory Based on Oscillatory Neural Network
In this thesis we explore algorithms and develop architectures based on emerging nano-device technologies for cognitive computing tasks such as recognition, classification, and vision. In particular, we focus on pattern matching in high-dimensional vector spaces to address the nearest neighbor search problem. Recent progress in nanotechnology provides us with novel nano-devices whose special nonlinear response characteristics fit cognitive tasks better than general-purpose computing. We build an associative memory (AM) by weakly coupling nano-oscillators into an oscillatory neural network and design a hierarchical tree structure to organize groups of AM units.
For hierarchical recognition, we first examine an architecture in which image patterns are partitioned into different receptive fields and processed by individual AM units at the lower levels, then abstracted using sparse coding techniques for recognition at higher levels. A second tree-structured model is developed as a more scalable AM architecture for large data sets. In this model, patterns are classified by hierarchical k-means clustering and organized into hierarchical clusters. Recognition is then performed by comparing input patterns against the centroids identified during clustering. The tree is explored in a "depth-only" manner until the closest image pattern is output. We also extend this search technique to incorporate a branch-and-bound algorithm.
The models and corresponding algorithms are tested on two standard face recognition data sets. We show that the depth-only hierarchical model is highly data-set dependent, achieving 97% or 67% recognition relative to a single large associative memory, while the branch-and-bound search increases search time by only a factor of two compared to the depth-only search.
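The depth-only search over hierarchically clustered patterns can be sketched in software. Plain k-means and Euclidean matching stand in here for the oscillator-based AM units, and the two-cluster data set is illustrative.

```python
import numpy as np

# Depth-only hierarchical matching: descend to the nearest centroid, then
# match only within that centroid's cluster. Plain k-means stands in for
# the oscillator-based associative memory units described in the thesis.

def kmeans(X, k, iters=20):
    C = X[:k].copy()                # simple deterministic init for the sketch
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(0)
    return C, labels

def depth_only_search(X, C, labels, q):
    j = np.argmin(((C - q) ** 2).sum(-1))      # descend: nearest centroid
    idx = np.flatnonzero(labels == j)          # search only that branch
    return idx[np.argmin(((X[idx] - q) ** 2).sum(-1))]

rng = np.random.default_rng(2)
A = rng.normal([0.0, 0.0], 0.5, size=(20, 2))   # two well-separated groups
B = rng.normal([10.0, 10.0], 0.5, size=(20, 2))
X = np.empty((40, 2))
X[0::2], X[1::2] = A, B                         # interleave the groups
C, labels = kmeans(X, 2)
print(depth_only_search(X, C, labels, X[7]))    # 7: exact stored pattern found
```

The data-set dependence the thesis reports follows from this structure: if the query's true nearest neighbor sits in a different branch than the nearest centroid, depth-only search cannot recover it, which is what the branch-and-bound extension addresses.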
Analysing and enhancing the performance of associative memory architectures
This thesis investigates the way in which information about the structure of a set of
training data with 'natural' characteristics may be used to positively influence the design of
associative memory neural network models of the Hopfield type. This is done with a
view to reducing the level of connectivity in models of this type.
There are three strands to this work. Firstly, an empirical evaluation of the
implementation of existing theory is given. Secondly, a number of existing theories are
combined to produce novel network models and training regimes. Thirdly, new strategies
for constructing and training associative memories based on knowledge of the structure of
the training data are proposed.
The first conclusion of this work is that, under certain circumstances, performance benefits
may be gained by establishing the connectivity in a non-random fashion, guided by the
knowledge gained from the structure of the training data. These performance
improvements are measured relative to networks in which sparse connectivity is
established purely at random; in both cases, dilution occurs prior to the training of the network.
Secondly, it is verified that, as predicted by existing theory, targeted post-training dilution
of network connectivity provides greater performance when compared with networks in
which connections are removed at random.
Finally, an existing tool for the analysis of the attractor performance of neural networks of
this type has been modified and improved. Furthermore, a novel, comprehensive
performance analysis tool is proposed.
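The two dilution regimes compared above can be sketched generically; magnitude-based pruning stands in here for the thesis's knowledge-guided targeted dilution, and the weight matrix is illustrative.

```python
import numpy as np

# Two ways of diluting a trained weight matrix: removing connections at
# random versus targeted post-training dilution. Magnitude-based pruning
# is used here as a stand-in for the thesis's data-guided targeting.

def dilute_random(W, frac, seed=0):
    """Zero each connection independently with probability `frac`."""
    rng = np.random.default_rng(seed)
    return W * (rng.random(W.shape) >= frac)

def dilute_targeted(W, frac):
    """Zero the `frac` weakest connections by absolute weight."""
    thresh = np.quantile(np.abs(W), frac)
    return W * (np.abs(W) > thresh)

rng = np.random.default_rng(3)
W = rng.normal(size=(20, 20))
W = (W + W.T) / 2                   # symmetric, Hopfield-style
np.fill_diagonal(W, 0.0)

Wt = dilute_targeted(W, 0.5)
Wr = dilute_random(W, 0.5)
# Targeted dilution keeps only the strongest surviving connections.
print(np.abs(Wt[Wt != 0]).min() > np.abs(Wr[Wr != 0]).min())   # True
```

Random dilution inevitably retains some near-zero connections while discarding strong ones, which is why targeted post-training dilution outperforms it at the same connection density.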
AHA! an 'Artificial Hippocampal Algorithm' for Episodic Machine Learning
The majority of ML research concerns slow, statistical learning of i.i.d.
samples from large, labelled datasets. Animals do not learn this way. An
enviable characteristic of animal learning is 'episodic' learning - the ability
to memorise a specific experience as a composition of existing concepts, after
just one experience, without provided labels. The new knowledge can then be
used to distinguish between similar experiences, to generalise between classes,
and to selectively consolidate to long-term memory. The Hippocampus is known to
be vital to these abilities. AHA is a biologically-plausible computational
model of the Hippocampus. Unlike most machine learning models, AHA is trained
without external labels and uses only local credit assignment. We demonstrate
AHA in a superset of the Omniglot one-shot classification benchmark. The
extended benchmark covers a wider range of known hippocampal functions by
testing pattern separation, completion, and recall of original input. These
functions are all performed within a single configuration of the computational
model. Despite these constraints, image classification results are comparable
to those of conventional deep convolutional ANNs.
The Performance of Associative Memory Models with Biologically Inspired Connectivity
This thesis is concerned with one important question in artificial neural networks, that is, how biologically inspired connectivity of a network affects its associative memory performance.
In recent years, research on the mammalian cerebral cortex, which bears the main
responsibility for the associative memory function in the brain, suggests that
the connectivity of this cortical network is far from fully connected, which is
commonly assumed in traditional associative memory models. It is found to
be a sparse network with interesting connectivity characteristics such as the
“small world network” characteristics, represented by short Mean Path Length,
high Clustering Coefficient, and high Global and Local Efficiency. Most of the networks in this thesis are therefore sparsely connected.
There is, however, no conclusive evidence of how these different connectivity
characteristics affect the associative memory performance of a network. This
thesis addresses this question using networks with different types of
connectivity, inspired by biological evidence.
The findings of this programme are unexpected and important. Results show
that the performance of a non-spiking associative memory model can be
predicted from its linear correlation with the Clustering Coefficient of the
network, regardless of the detailed connectivity pattern. This is particularly important
because the Clustering Coefficient is a static measure of one aspect of
connectivity, whilst the associative memory performance reflects the result of a
complex dynamic process.
On the other hand, this research reveals that improvements in the performance
of a network do not necessarily directly rely on an increase in the network’s
wiring cost. Therefore it is possible to construct networks with high
associative memory performance but relatively low wiring cost. In particular,
Gaussian distributed connectivity is found to achieve the best performance
at the lowest wiring cost of all the connectivity models examined.
Our results from this programme also suggest that a modular network with an
appropriate configuration of Gaussian distributed connectivity, both internal to
each module and across modules, can perform nearly as well as the Gaussian
distributed non-modular network.
Finally, a comparison between non-spiking and spiking associative memory
models suggests that, in terms of associative memory performance, the
implications of connectivity seem to transcend the details of the actual neuron
model, that is, whether the neurons are spiking or non-spiking.
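The static measure cited above as the performance predictor can be made concrete with a small sketch of the mean local Clustering Coefficient of a connectivity matrix; the associative memory simulations themselves are not reproduced, and the example graph is illustrative.

```python
import numpy as np

# Mean local Clustering Coefficient of an undirected connectivity matrix:
# the static measure the thesis finds to predict (via linear correlation)
# the associative memory performance of non-spiking models.

def clustering_coefficient(A):
    """Mean over nodes of (edges among neighbours) / (possible edges)."""
    A = A.astype(bool) & ~np.eye(len(A), dtype=bool)   # ignore self-coupling
    coeffs = []
    for i in range(len(A)):
        nbrs = np.flatnonzero(A[i])
        d = len(nbrs)
        if d < 2:
            continue                                   # coefficient undefined
        links = A[np.ix_(nbrs, nbrs)].sum() / 2        # edges among neighbours
        coeffs.append(links / (d * (d - 1) / 2))
    return float(np.mean(coeffs)) if coeffs else 0.0

# A triangle plus a pendant node: nodes 0, 1, 2 fully connected, node 3
# attached only to node 0.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]])
print(clustering_coefficient(A))   # 7/9 for this graph
```

Because this quantity depends only on the adjacency structure, it can be computed before any dynamics are simulated, which is what makes its correlation with a dynamic recall process notable.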