Neural Distributed Autoassociative Memories: A Survey
Introduction. Neural network models of autoassociative, distributed memory
allow storage and retrieval of many items (vectors) where the number of stored
items can exceed the vector dimension (the number of neurons in the network).
This opens the possibility of a sublinear time search (in the number of stored
items) for approximate nearest neighbors among vectors of high dimension. The
purpose of this paper is to review models of autoassociative, distributed
memory that can be naturally implemented by neural networks (mainly with local
learning rules and iterative dynamics based on information locally available to
neurons). Scope. The survey focuses mainly on Hopfield, Willshaw, and Potts
networks, which have connections between pairs of neurons and operate
on sparse binary vectors. We discuss not only autoassociative memory, but also
the generalization properties of these networks. We also consider neural
networks with higher-order connections and networks with a bipartite graph
structure for non-binary data with linear constraints. Conclusions. In
conclusion we discuss the relations to similarity search, advantages and
drawbacks of these techniques, and topics for further research. An interesting
and still not completely resolved question is whether neural autoassociative
memories can search for approximate nearest neighbors faster than other index
structures for similarity search, in particular for the case of very high
dimensional vectors.
Comment: 31 pages
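As a concrete illustration of the class of models surveyed, here is a minimal Hopfield-style autoassociative memory with a local Hebbian rule and iterative retrieval dynamics; the network size, pattern count, and noise level are illustrative choices, not figures from the survey.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 5                                # neurons, stored patterns
patterns = rng.choice([-1, 1], size=(p, n))  # items to store

# Local Hebbian learning rule: W = (1/n) sum_mu xi_mu xi_mu^T, no self-coupling
W = patterns.T @ patterns / n
np.fill_diagonal(W, 0)

def retrieve(x, steps=10):
    """Iterative dynamics using only the locally available input W @ x."""
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1, -1)
    return x

# Corrupt a stored pattern by flipping 10% of its bits, then relax
probe = patterns[0].copy()
probe[rng.choice(n, size=n // 10, replace=False)] *= -1
recovered = retrieve(probe)
print(int(recovered @ patterns[0]))          # overlap with the stored item (max n)
```

With p well below the classical capacity (roughly 0.14 n for such networks), the corrupted probe relaxes back to the stored pattern.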
Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems
Neuromorphic chips embody, in microelectronic devices, computational
principles that operate in the nervous system. In this domain it is important to
identify computational primitives that theory and experiments suggest as
generic and reusable cognitive elements. One such element is provided by
attractor dynamics in recurrent networks. Point attractors are equilibrium
states of the dynamics (up to fluctuations), determined by the synaptic
structure of the network; a `basin' of attraction comprises all initial states
leading to a given attractor upon relaxation, hence making attractor dynamics
suitable to implement robust associative memory. The initial network state is
dictated by the stimulus, and relaxation to the attractor state implements the
retrieval of the corresponding memorized prototypical pattern. In a previous
work we demonstrated that a neuromorphic recurrent network of spiking neurons
and suitably chosen, fixed synapses supports attractor dynamics. Here we focus
on learning: activating on-chip synaptic plasticity and using a theory-driven
strategy for choosing network parameters, we show that autonomous learning,
following repeated presentation of simple visual stimuli, shapes a synaptic
connectivity supporting stimulus-selective attractors. Associative memory
develops on chip as the result of the coupled stimulus-driven neural activity
and ensuing synaptic dynamics, with no artificial separation between learning
and retrieval phases.
Comment: submitted to Scientific Reports
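The learning loop described above can be caricatured in software. The sketch below uses a rate-based network with a clipped (Willshaw-style) local Hebbian rule, where repeated presentations of sparse binary "stimuli" shape a connectivity whose relaxation dynamics completes each stimulus. This is a deliberate idealization, assuming rate units and a simple clipped rule; the actual system uses spiking neurons and on-chip plasticity.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
stimuli = (rng.random((3, n)) < 0.2).astype(int)  # sparse binary stimuli

# Repeated presentations with a local, bounded (clipped) Hebbian update
W = np.zeros((n, n))
for _ in range(20):
    for s in stimuli:
        W = np.clip(W + np.outer(s, s), 0, 1)

def relax(x, steps=5):
    """A unit fires iff it is connected to every currently active unit."""
    for _ in range(steps):
        x = (W @ x >= x.sum()).astype(int)
    return x

# Delete two active units from stimulus 0; relaxation restores them
probe = stimuli[0].copy()
probe[np.flatnonzero(probe)[:2]] = 0
restored = relax(probe)
print(np.array_equal(restored, stimuli[0]))
```

The learned connectivity, not any separate retrieval machinery, is what makes each stimulus an attractor of the relaxation dynamics.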
Analog hardware for learning neural networks
This is a recurrent or feedforward analog neural network processor with a multi-level neuron array and a synaptic matrix that stores weighted analog values of synaptic connection strengths. The processor is characterized by temporarily changing one connection strength at a time to determine its effect on the system output relative to the desired target; that connection strength is then adjusted based on the effect. The processor is thereby taught the correct response to training examples, connection by connection.
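The connection-by-connection scheme described here amounts to finite-difference (weight-perturbation) learning. A minimal software sketch, with an assumed linear network and a synthetic regression task standing in for the analog hardware:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(20, 3))                 # training examples
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                               # desired targets

w = np.zeros(3)                              # synaptic connection strengths

def loss(w):
    return float(np.mean((X @ w - y) ** 2))

eps, lr = 1e-3, 0.1
for _ in range(200):
    for i in range(len(w)):
        base = loss(w)
        w[i] += eps                          # temporarily perturb one connection
        grad_i = (loss(w) - base) / eps      # its effect on output vs. target
        w[i] -= eps                          # restore it
        w[i] -= lr * grad_i                  # then adjust based on the effect
print(loss(w))
```

Because only the scalar output error is probed, this rule needs no backpropagation circuitry, which is what makes it attractive for analog hardware.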
Statistical physics of neural systems with non-additive dendritic coupling
How neurons process their inputs crucially determines the dynamics of
biological and artificial neural networks. In such neural and neural-like
systems, synaptic input is typically considered to be merely transmitted
linearly or sublinearly by the dendritic compartments. Yet, single-neuron
experiments report pronounced supralinear dendritic summation of sufficiently
synchronous and spatially close-by inputs. Here, we provide a statistical
physics approach to study the impact of such non-additive dendritic processing
on single neuron responses and the performance of associative memory tasks in
artificial neural networks. First, we compute the effect of random input to a
neuron incorporating nonlinear dendrites. This approach is independent of the
details of the neuronal dynamics. Second, we use those results to study the
impact of dendritic nonlinearities on the network dynamics in a paradigmatic
model for associative memory, both numerically and analytically. We find that
dendritic nonlinearities maintain network convergence and increase the
robustness of memory performance against noise. Interestingly, an intermediate
number of dendritic branches is optimal for memory functionality.
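To make the supralinear summation of synchronous, spatially close-by inputs concrete, here is a toy rate-based neuron whose dendritic branches apply a threshold nonlinearity to their local input sums before the somatic sum. The branch count, threshold, and dendritic-spike amplitude are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def dendritic_drive(x, branches, theta=2.0, spike=4.0):
    """Split inputs across branches; a branch whose local sum exceeds theta
    fires a fixed-size dendritic spike (supralinear), otherwise it
    transmits its sum linearly."""
    total = 0.0
    for branch in np.array_split(x, branches):
        s = float(branch.sum())
        total += spike if s > theta else s
    return total

x_clustered = np.zeros(12)
x_clustered[:3] = 1.0            # 3 coincident inputs on the same branch
x_scattered = np.zeros(12)
x_scattered[[0, 4, 8]] = 1.0     # same 3 inputs spread over branches

clustered = dendritic_drive(x_clustered, branches=4)
scattered = dendritic_drive(x_scattered, branches=4)
print(clustered, scattered)      # clustered input yields the larger drive
```

The same total synaptic input drives the soma more strongly when it is clustered on one branch, which is the non-additive effect the paper analyzes statistically.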
Dynamical model of sequential spatial memory: winnerless competition of patterns
We introduce a new biologically motivated model of sequential spatial memory
which is based on the principle of winnerless competition (WLC). We implement
this mechanism in a two-layer neural network structure and present the learning
dynamics which leads to the formation of a WLC network. After learning, the
system is capable of associative retrieval of pre-recorded sequences of spatial
patterns.
Comment: 4 pages, submitted to PR
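Winnerless competition is commonly idealized with generalized Lotka-Volterra dynamics and asymmetric inhibition, which produce a heteroclinic sequence of transiently winning units. The three-unit system below is such an idealization; the connectivity and rates are illustrative assumptions, not the two-layer model of the paper.

```python
import numpy as np

# rho[i, j]: inhibition of unit i by unit j; the asymmetric ring makes
# every "winner" unstable to the next unit (May-Leonard-type system)
rho = np.array([[1.0, 0.5, 2.0],
                [2.0, 1.0, 0.5],
                [0.5, 2.0, 1.0]])

def simulate(a0, dt=0.01, steps=60000):
    a = np.array(a0, dtype=float)
    winners = []
    for _ in range(steps):
        a += dt * a * (1.0 - rho @ a)   # generalized Lotka-Volterra step
        a = np.maximum(a, 1e-9)         # small floor keeps the switching going
        winners.append(int(np.argmax(a)))
    return winners

winners = simulate([0.6, 0.3, 0.1])
# Collapse runs to read off the sequence of transient winners
seq = [w for k, w in enumerate(winners) if k == 0 or w != winners[k - 1]]
print(seq[:6])
```

No unit wins permanently: activity visits each unit in a fixed order, which is how WLC encodes a sequence rather than a single attractor state.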