Analysis of Bidirectional Associative Memory using SCSNA and Statistical Neurodynamics
Bidirectional associative memory (BAM) is a kind of artificial neural
network used to memorize and retrieve heterogeneous pattern pairs. Many efforts
have been made to improve BAM from the viewpoint of computer applications, but
few theoretical studies have been done. We investigated the theoretical
characteristics of BAM using a framework of statistical-mechanical analysis. To
investigate the equilibrium state of BAM, we applied self-consistent
signal-to-noise analysis (SCSNA) and obtained macroscopic parameter equations
and the relative capacity. Moreover, to investigate not only the equilibrium
state but also the retrieval process by which it is reached, we applied
statistical neurodynamics to the update rule of BAM and obtained evolution
equations for the macroscopic parameters. These evolution equations are
consistent with the results of SCSNA in the equilibrium state.
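As a concrete point of reference for the dynamics being analyzed, the following is a minimal NumPy sketch of the standard bipolar BAM with Hebbian storage and alternating-layer recall; the layer sizes, load, and noise level are illustrative assumptions, not values from the paper. The overlaps printed at the end correspond to the macroscopic order parameters tracked by SCSNA and statistical neurodynamics.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, P = 200, 150, 10          # layer sizes and number of stored pairs (illustrative)

# Random bipolar pattern pairs (x^mu, y^mu) stored with a Hebbian rule
X = rng.choice([-1, 1], size=(P, N))
Y = rng.choice([-1, 1], size=(P, M))
W = Y.T @ X / N                 # M x N Hebbian weight matrix

def sgn(h):
    return np.where(h >= 0, 1, -1)

def recall(x, steps=10):
    """Alternate bidirectional updates toward an equilibrium state."""
    y = sgn(W @ x)
    for _ in range(steps):
        x = sgn(W.T @ y)        # y-layer -> x-layer update
        y = sgn(W @ x)          # x-layer -> y-layer update
    return x, y

# Retrieve pair 0 from a cue with 10% of bits flipped
cue = X[0] * np.where(rng.random(N) < 0.1, -1, 1)
x, y = recall(cue)
print("overlap m_x =", x @ X[0] / N, " m_y =", y @ Y[0] / M)
```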
A study of pattern recovery in recurrent correlation associative memories
In this paper, we analyze the recurrent correlation associative memory (RCAM) model of Chiueh and Goodman. This is an associative memory in which stored binary memory patterns are recalled via an iterative update rule. The update of the individual pattern bits is controlled by an excitation function, which takes as its argument the inner product between the stored memory patterns and the input pattern. Our aim is to analyze the dynamics of pattern recall when the input patterns are corrupted by noise of a relatively unrestricted class. We make three contributions. First, we show how to identify the excitation function that maximizes the separation (the Fisher discriminant) between the uncorrupted realization of the noisy input pattern and the remaining patterns residing in the memory. Moreover, we show that the excitation function giving maximum separation is exponential when the input bit errors follow a binomial distribution. Our second contribution is to develop an expression for the expected bit-error probability on the input pattern after one iteration. We show how to identify the excitation function that minimizes this bit-error probability. However, there is no closed-form solution, and the excitation function must be recovered numerically. The relationship between the excitation functions resulting from the two approaches is examined for a binomial distribution of bit errors. The final contribution is a semiempirical approach to modeling the dynamics of the RCAM. This provides a numerical means of predicting the recall error rate of the memory and allows us to develop an expression for the storage capacity at a given recall error rate.
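For concreteness, a minimal sketch of one synchronous RCAM recall step with the exponential excitation function that the Fisher-discriminant argument singles out is given below; the base a, pattern sizes, and error rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 128, 30
XI = rng.choice([-1, 1], size=(P, N))    # stored bipolar memories

def rcam_step(x, a=1.05):
    """One synchronous update: each memory is weighted by an excitation
    function f(h) = a**h of its inner product with the current state."""
    h = XI @ x                           # inner products <xi^mu, x>
    return np.where((a ** h) @ XI >= 0, 1, -1)

x = XI[0] * np.where(rng.random(N) < 0.2, -1, 1)   # 20% input bit errors
for _ in range(5):
    x = rcam_step(x)
print("recovered:", np.array_equal(x, XI[0]))
```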
Experience-driven formation of parts-based representations in a model of layered visual memory
Growing neuropsychological and neurophysiological evidence suggests that the
visual cortex uses parts-based representations to encode, store and retrieve
relevant objects. In such a scheme, objects are represented as a set of
spatially distributed local features, or parts, arranged in stereotypical
fashion. To encode the local appearance and to represent the relations between
the constituent parts, there has to be an appropriate memory structure formed
by previous experience with visual objects. Here, we propose a model of how a
hierarchical memory structure supporting efficient storage and rapid recall of
parts-based representations can be established by an experience-driven process
of self-organization. The process is based on the collaboration of slow
bidirectional synaptic plasticity and homeostatic unit activity regulation,
both running on top of fast activity dynamics with winner-take-all
character modulated by an oscillatory rhythm. These neural mechanisms lay down
the basis for cooperation and competition between the distributed units and
their synaptic connections. Choosing human face recognition as a test task, we
show that, under the condition of open-ended, unsupervised incremental
learning, the system is able to form memory traces for individual faces in a
parts-based fashion. On a lower memory layer the synaptic structure is
developed to represent local facial features and their interrelations, while
the identities of different persons are captured explicitly on a higher layer.
An additional property of the resulting representations is the sparseness of
both the activity during the recall and the synaptic patterns comprising the
memory traces.
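The interacting mechanisms can be caricatured in a few lines of code. The sketch below implements only generic winner-take-all competition with slow Hebbian-like plasticity and homeostatic regulation of unit usage; it is not the paper's layered face-recognition model, and all sizes and rates are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_units = 64, 16
W = rng.random((n_units, n_in))
W /= W.sum(axis=1, keepdims=True)           # normalized synaptic vectors
theta = np.zeros(n_units)                   # homeostatic excitability offsets
eta, rho = 0.05, 0.01                       # slow plasticity / homeostasis rates

def present(x):
    """Fast winner-take-all competition, then slow synaptic and
    homeostatic updates (cooperation and competition between units)."""
    winner = int(np.argmax(W @ x + theta))
    W[winner] += eta * (x - W[winner])      # move winner toward the input
    W[winner] /= W[winner].sum()
    active = np.zeros(n_units)
    active[winner] = 1.0
    theta += rho * (1.0 / n_units - active) # equalize long-run unit usage
    return winner

for _ in range(500):                        # open-ended, unsupervised stream
    present(rng.random(n_in))
```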
Quantum Associative Memory
This paper combines quantum computation with classical neural network theory
to produce a quantum computational learning algorithm. Quantum computation uses
microscopic quantum level effects to perform computational tasks and has
produced results that in some cases are exponentially faster than their
classical counterparts. The unique characteristics of quantum theory may also
be used to create a quantum associative memory with a capacity exponential in
the number of neurons. This paper combines two quantum computational algorithms
to produce such a quantum associative memory. The result is an exponential
increase in the capacity of the memory when compared to traditional associative
memories such as the Hopfield network. The paper covers necessary high-level
quantum mechanical and quantum computational ideas and introduces a quantum
associative memory. Theoretical analysis proves the utility of the memory, and
it is noted that a small version should be physically realizable in the near
future.
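The retrieval side of such a memory builds on Grover-style amplitude amplification. As background for readers unfamiliar with the primitive, here is a classical NumPy simulation of a plain Grover search over 2^n basis states; it illustrates the quadratic amplification mechanism only, not the paper's modified completion algorithm.

```python
import numpy as np

n = 6                                    # qubits, so N = 2**n basis states
N = 2 ** n
target = 0b101101                        # basis state playing the 'stored' pattern

psi = np.full(N, 1 / np.sqrt(N))         # uniform superposition

# Grover iteration: oracle phase flip, then inversion about the mean,
# repeated ~ (pi/4) * sqrt(N) times
for _ in range(int(np.pi / 4 * np.sqrt(N))):
    psi[target] *= -1                    # oracle marks the target state
    psi = 2 * psi.mean() - psi           # diffusion operator

print("P(target) =", psi[target] ** 2)   # close to 1
```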
Non-Convex Multi-species Hopfield models
In this work we introduce a multi-species generalization of the Hopfield
model for associative memory, where neurons are divided into groups and both
inter-group and intra-group pairwise interactions are considered, with
different intensities. Thus, this system contains two of the main ingredients
of modern deep neural network architectures: Hebbian interactions to store
patterns of information and multiple layers coding different levels of
correlations. The model is completely solvable in the low-load regime with a
suitable generalization of the Hamilton-Jacobi technique, even though the
Hamiltonian can be a non-definite quadratic form of the magnetizations. The
family of multi-species Hopfield models includes, as special cases, the
3-layer Restricted Boltzmann Machine (RBM) with Gaussian hidden layer and the
Bidirectional Associative Memory (BAM) model.
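As one concrete reading of such a Hamiltonian, the sketch below evaluates a two-species Hebbian energy with distinct intra- and inter-group intensities, expressed through the pattern overlaps (magnetizations); the size normalization and coupling values are our assumptions, chosen only to keep the overlaps order-one. Setting the intra-group couplings to zero leaves a BAM-like purely bipartite interaction.

```python
import numpy as np

rng = np.random.default_rng(3)
N_a, N_b, P = 100, 80, 3                 # two neuron groups, low load
XI_a = rng.choice([-1, 1], size=(P, N_a))
XI_b = rng.choice([-1, 1], size=(P, N_b))

def energy(sa, sb, J_aa=1.0, J_bb=1.0, J_ab=0.5):
    """Hebbian energy as a (possibly non-definite) quadratic form of
    the magnetizations m_a, m_b."""
    m_a = XI_a @ sa / N_a                # overlaps with group-a patterns
    m_b = XI_b @ sb / N_b
    return -0.5 * (J_aa * N_a * m_a @ m_a
                   + J_bb * N_b * m_b @ m_b
                   + 2 * J_ab * np.sqrt(N_a * N_b) * m_a @ m_b)

print(energy(XI_a[0], XI_b[0]))          # a pure retrieval state has low energy
```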
Sparse neural networks with large learning diversity
Coded recurrent neural networks with three levels of sparsity are introduced.
The first level is related to the size of messages, which is much smaller than
the number of available neurons. The second is provided by a particular coding
rule acting as a local constraint on the neural activity. The third is a
characteristic of the low final connection density of the network after the
learning phase. Though the proposed network is very simple, being based on
binary neurons and binary connections, it is able to learn a large number of
messages and recall them even in the presence of strong erasures. The
performance of the network is assessed both as a classifier and as an
associative memory.
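A toy sketch in the spirit of such coded networks is given below: each message activates one neuron per cluster, learning adds binary connections forming a clique, and erased symbols are recovered by a per-cluster winner-take-all over connection counts. The sizes and the single-pass decoder are simplifying assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
c, l = 8, 16                           # clusters x neurons per cluster
A = np.zeros((c * l, c * l), dtype=bool)   # binary connections

def idx(msg):
    """Map a message (one symbol per cluster) to its active neurons."""
    return np.array([k * l + s for k, s in enumerate(msg)])

def store(msg):
    i = idx(msg)
    A[np.ix_(i, i)] = True             # connect the clique
    np.fill_diagonal(A, False)

def recall(partial):
    """Recover erased symbols (None) by per-cluster winner-take-all on
    the number of connections received from the known neurons."""
    known = [k * l + s for k, s in enumerate(partial) if s is not None]
    score = A[:, known].sum(axis=1)
    return [s if s is not None else int(np.argmax(score[k * l:(k + 1) * l]))
            for k, s in enumerate(partial)]

msgs = [list(rng.integers(0, l, size=c)) for _ in range(50)]
for m in msgs:
    store(m)
probe = list(msgs[0]); probe[2] = probe[5] = None    # erase two symbols
print(recall(probe) == msgs[0])
```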
Optimal learning rules for discrete synapses
There is evidence that biological synapses have a limited number of discrete weight states. Memory storage with such synapses behaves quite differently from storage with unbounded, continuous-weight synapses, since old memories are automatically overwritten by new ones. Consequently, there has been substantial discussion about how this affects learning and storage capacity. In this paper, we calculate the storage capacity of discrete, bounded synapses in terms of Shannon information. We use this to optimize the learning rules and investigate how the maximum information capacity depends on the number of synapses, the number of synaptic states, and the coding sparseness. Below a certain critical number of synapses per neuron (comparable to numbers found in biology), we find that storage is similar to that with unbounded, continuous synapses. Hence, discrete synapses do not necessarily have lower storage capacity.
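As a toy illustration of counting storage in Shannon terms, the sketch below computes the mutual information between a stored bit and a binary synapse whose fidelity decays geometrically with the number t of subsequent overwrites; the decay model is our assumption, standing in for the paper's actual learning-rule analysis.

```python
import numpy as np

def mutual_info(p_joint):
    """Shannon mutual information (bits) of a 2x2 joint distribution
    over (desired bit, stored synaptic state)."""
    px = p_joint.sum(axis=1, keepdims=True)
    py = p_joint.sum(axis=0, keepdims=True)
    nz = p_joint > 0
    return float((p_joint[nz] * np.log2(p_joint[nz] / (px @ py)[nz])).sum())

def info_after(t, q=0.9):
    """Each later memory randomizes the synapse with probability 1 - q,
    so P(state still matches the bit) = 0.5 + 0.5 * q**t."""
    p = 0.5 + 0.5 * q ** t
    joint = np.array([[p, 1 - p], [1 - p, p]]) / 2
    return mutual_info(joint)

print([round(info_after(t), 4) for t in (0, 1, 5, 20)])  # 1 bit at t = 0, then decays
```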