Birth of a Learning Law
Defense Advanced Research Projects Agency; Office of Naval Research (N00014-95-1-0409, N00014-95-1-0657, N00014-92-J-1309)
A ferrofluid based neural network: design of an analogue associative memory
We analyse an associative memory based on a ferrofluid: a system of magnetic nano-particles suspended in a carrier fluid of variable viscosity, subject to patterns of magnetic fields from an array of input and output magnetic pads. The association relies on forming patterns in the ferrofluid during a training phase, in which the magnetic dipoles are free to move and rotate to minimize the total energy of the system. Once equilibrated in energy for a given input-output magnetic field pattern pair, the particles are fully or partially immobilized by cooling the carrier liquid. The particle distributions thus produced control the memory states, which are read out magnetically using spin-valve sensors incorporated in the output pads. The actual memory consists of spin distributions that are dynamic in nature, realized only in response to the input patterns that the system has been trained for. Two training algorithms for storing multiple patterns are investigated. Using Monte Carlo simulations of the physical system, we demonstrate that the device is capable of storing and recalling two sets of images, each with an accuracy approaching 100%.
Comment: submitted to Neural Networks
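As a rough illustration of the Metropolis-style Monte Carlo training the abstract describes, the sketch below equilibrates dipole orientations against a pad-field pattern and then freezes them. It is a toy reduction: the particle count, temperature, and field values are made up, and dipole-dipole interactions and particle translation, which the actual system includes, are omitted.

    import numpy as np

    rng = np.random.default_rng(0)

    N = 100                                  # number of suspended nano-particles (made up)
    kT = 1.0                                 # thermal energy during the mobile training phase
    field = rng.standard_normal((N, 2))      # stand-in for the pad-generated field pattern
    theta = rng.uniform(0.0, 2 * np.pi, N)   # dipole orientation of each particle

    def zeeman_energy(th, B):
        # E = -m . B per particle, with |m| = 1 for simplicity
        m = np.stack([np.cos(th), np.sin(th)], axis=-1)
        return -(m * B).sum(axis=-1)

    for step in range(20000):
        i = rng.integers(N)
        proposal = theta[i] + rng.normal(scale=0.5)   # trial rotation of one dipole
        dE = zeeman_energy(proposal, field[i]) - zeeman_energy(theta[i], field[i])
        if dE < 0 or rng.random() < np.exp(-dE / kT):  # Metropolis acceptance rule
            theta[i] = proposal

    stored = theta.copy()   # "cooling": immobilize the equilibrated orientations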
Adaptive Resonance Associative Map: A Hierarchical ART System for Fast Stable Associative Learning
This paper introduces a new class of predictive ART architectures, called Adaptive Resonance Associative Map (ARAM), which performs rapid yet stable heteroassociative learning in a real-time environment. ARAM can be visualized as two ART modules sharing a single recognition code layer. The unit for recruiting a recognition code is a pattern pair. Code stabilization is ensured by restricting coding to states where resonances are reached in both modules. Simulation results have shown that ARAM is capable of self-stabilizing association of arbitrary pattern pairs of arbitrary complexity, appearing in an arbitrary sequence, by fast learning in a real-time environment. Owing to the symmetrical network structure, associative recall can be performed in both directions.
Air Force Office of Scientific Research (90-0128)
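A minimal sketch of the two-module idea, assuming binary (e.g. complement-coded) inputs and fast learning; the class name, parameter values, and the exhaustive code search below are illustrative stand-ins for ARAM's actual choice-and-match-tracking dynamics:

    import numpy as np

    class MiniARAM:
        """Toy ARAM-like map: two fuzzy-ART fields sharing recognition codes."""
        def __init__(self, rho_a=0.9, rho_b=0.9, alpha=0.001):
            self.rho_a, self.rho_b, self.alpha = rho_a, rho_b, alpha
            self.wa, self.wb = [], []                 # one weight pair per code

        @staticmethod
        def _match(x, w):
            return np.minimum(x, w).sum() / x.sum()   # fuzzy AND match score

        def train(self, a, b):
            for j in range(len(self.wa)):
                # resonance must be reached in BOTH modules to reuse code j
                if (self._match(a, self.wa[j]) >= self.rho_a and
                        self._match(b, self.wb[j]) >= self.rho_b):
                    self.wa[j] = np.minimum(a, self.wa[j])   # fast learning
                    self.wb[j] = np.minimum(b, self.wb[j])
                    return j
            self.wa.append(a.astype(float)); self.wb.append(b.astype(float))
            return len(self.wa) - 1                   # recruit a new code

        def recall(self, a):
            # choice function T_j = |a ^ w_j| / (alpha + |w_j|)
            T = [np.minimum(a, w).sum() / (self.alpha + w.sum()) for w in self.wa]
            return self.wb[int(np.argmax(T))]

Recall in the opposite direction follows by symmetry, swapping the roles of the two weight sets.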
An associative memory for the on-line recognition and prediction of temporal sequences
This paper presents the design of an associative memory with feedback that is capable of on-line temporal sequence learning. A framework for on-line sequence learning is proposed, and different sequence learning models are analysed within this framework. The network model is an associative memory with a separate store for the sequence context of a symbol. A sparse distributed memory is used to gain scalability. The context store combines the functionality of a neural layer with a shift register. The sensitivity of the machine to the sequence context is controllable, resulting in different characteristic behaviours. The model can store and predict on-line sequences of various types and lengths. Numerical simulations of the model have been carried out to determine its properties.
Comment: Published in IJCNN 2005, Montreal, Canada
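A toy stand-in for the architecture just described, with a Python deque playing the shift-register context store and a dict replacing the sparse distributed memory; the depth k is a crude proxy for the controllable context sensitivity:

    from collections import deque

    class SequenceMemory:
        def __init__(self, k=3):
            self.k = k
            self.context = deque(maxlen=k)   # shift register of recent symbols
            self.assoc = {}                  # context tuple -> next symbol

        def observe(self, symbol):
            key = tuple(self.context)
            if len(key) == self.k:
                self.assoc[key] = symbol     # on-line, one-shot association
            self.context.append(symbol)      # shift the register

        def predict(self):
            return self.assoc.get(tuple(self.context))

    m = SequenceMemory(k=2)
    for s in "abcabc":
        m.observe(s)
    print(m.predict())   # context is now ('b', 'c') -> predicts 'a'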
Sparse neural networks with large learning diversity
Coded recurrent neural networks with three levels of sparsity are introduced. The first level is related to the size of the messages, which is much smaller than the number of available neurons. The second is provided by a particular coding rule, acting as a local constraint on the neural activity. The third is a characteristic of the low final connection density of the network after the learning phase. Although the proposed network is very simple, being based on binary neurons and binary connections, it is able to learn a large number of messages and recall them, even in the presence of strong erasures. The performance of the network is assessed as a classifier and as an associative memory.
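A sketch of the clique-style coding this abstract describes: one active binary neuron per cluster, a message stored by creating binary connections among its neurons, and erased symbols recovered by a winner-take-all count. The cluster sizes and the single-pass decoder are illustrative choices, not taken from the paper (the real network iterates):

    import numpy as np

    C, L = 4, 16                       # C clusters of L neurons (illustrative sizes)
    N = C * L
    W = np.zeros((N, N), dtype=bool)   # binary connections

    def neurons(msg):
        # coding rule: a message activates exactly one neuron per cluster
        return [c * L + s for c, s in enumerate(msg)]

    def store(msg):
        idx = neurons(msg)
        for i in idx:
            for j in idx:
                if i != j:
                    W[i, j] = True     # the message becomes a clique

    def recall(partial):
        # partial: per-cluster symbols, with None marking erasures
        active = [c * L + s for c, s in enumerate(partial) if s is not None]
        out = []
        for c, s in enumerate(partial):
            if s is not None:
                out.append(s)
                continue
            # winner-take-all in the erased cluster: count connections
            # arriving from the known active neurons
            scores = [W[active, c * L + cand].sum() for cand in range(L)]
            out.append(int(np.argmax(scores)))
        return out

    store([3, 7, 1, 12])
    print(recall([3, None, 1, 12]))    # -> [3, 7, 1, 12]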
Deep Complex Networks
At present, the vast majority of building blocks, techniques, and architectures for deep learning are based on real-valued operations and representations. However, recent work on recurrent neural networks and older fundamental theoretical analysis suggest that complex numbers could have a richer representational capacity and could also facilitate noise-robust memory retrieval mechanisms. Despite their attractive properties and potential for opening up entirely new neural architectures, complex-valued deep neural networks have been marginalized due to the absence of the building blocks required to design such models. In this work, we provide the key atomic components for complex-valued deep neural networks and apply them to convolutional feed-forward networks and convolutional LSTMs. More precisely, we rely on complex convolutions and present algorithms for complex batch normalization and complex weight initialization strategies for complex-valued neural nets, and we use them in experiments with end-to-end training schemes. We demonstrate that such complex-valued models are competitive with their real-valued counterparts. We test deep complex models on several computer vision tasks, on music transcription using the MusicNet dataset, and on speech spectrum prediction using the TIMIT dataset. We achieve state-of-the-art performance on these audio-related tasks.
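The complex convolutions referred to above reduce to four real convolutions via the complex product rule; a minimal 1-D numpy sketch of that identity (the paper's layers are 2-D and trainable, which this omits):

    import numpy as np

    def complex_conv1d(x_re, x_im, w_re, w_im):
        # (x_re + i x_im) * (w_re + i w_im)
        #   = (x_re*w_re - x_im*w_im) + i (x_re*w_im + x_im*w_re)
        real = np.convolve(x_re, w_re) - np.convolve(x_im, w_im)
        imag = np.convolve(x_re, w_im) + np.convolve(x_im, w_re)
        return real, imag

    # sanity check against numpy's native complex convolution
    x = np.random.randn(8) + 1j * np.random.randn(8)
    w = np.random.randn(3) + 1j * np.random.randn(3)
    re, im = complex_conv1d(x.real, x.imag, w.real, w.imag)
    assert np.allclose(re + 1j * im, np.convolve(x, w))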
Adiabatic Quantum Optimization for Associative Memory Recall
Hopfield networks are a variant of associative memory that recalls information stored in the couplings of an Ising model. Stored memories are fixed points of the network dynamics that correspond to energetic minima of the spin state. We formulate the recall of memories stored in a Hopfield network as energy minimization by adiabatic quantum optimization (AQO). Numerical simulations of the quantum dynamics allow us to quantify the AQO recall accuracy with respect to the number of stored memories and the noise in the input key. We also investigate AQO performance with respect to how memories are stored in the Ising model using different learning rules. Our results indicate that AQO performance varies strongly with the learning rule, due to the changes in the energy landscape. Consequently, learning rules offer indirect methods for investigating changes to the computational complexity of the recall task and the computational efficiency of AQO.
Comment: 22 pages, 11 figures. Updated for clarity and figures; to appear in Frontiers of Physics
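For reference, a classical sketch of the Hopfield setup whose minima AQO is asked to find: Hebbian couplings (one of several learning rules the paper compares), the Ising energy, and an asynchronous-descent recall used here as a stand-in for the quantum annealer. The network size and the 10% key noise are made-up values:

    import numpy as np

    rng = np.random.default_rng(1)
    N, P = 64, 3
    memories = rng.choice([-1, 1], size=(P, N))   # spin patterns to store

    # Hebbian learning rule: J = (1/N) sum_p xi_p xi_p^T, zero diagonal
    J = (memories.T @ memories) / N
    np.fill_diagonal(J, 0.0)

    def energy(s):
        return -0.5 * s @ J @ s                   # Ising energy of spin state s

    def recall(key, sweeps=10):
        s = key.copy()
        for _ in range(sweeps):                   # asynchronous energy descent
            for i in rng.permutation(N):
                s[i] = 1 if J[i] @ s >= 0 else -1
        return s

    # noisy input key: flip 10% of the spins of memory 0
    key = memories[0].copy()
    flip = rng.choice(N, size=N // 10, replace=False)
    key[flip] *= -1
    print(np.array_equal(recall(key), memories[0]))   # usually True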