Analysis of Bidirectional Associative Memory using SCSNA and Statistical Neurodynamics
Bidirectional associative memory (BAM) is a kind of artificial neural network used to memorize and retrieve heterogeneous pattern pairs. Many efforts have been made to improve BAM from the viewpoint of computer applications, but few theoretical studies have been done. We investigated the theoretical characteristics of BAM within a framework of statistical-mechanical analysis. To investigate the equilibrium state of BAM, we applied self-consistent signal-to-noise analysis (SCSNA) and obtained macroscopic parameter equations and the relative capacity. Moreover, to investigate not only the equilibrium state but also the retrieval process of reaching it, we applied statistical neurodynamics to the update rule of BAM and obtained evolution equations for the macroscopic parameters. These evolution equations are consistent with the results of SCSNA in the equilibrium state.
Comment: 13 pages, 4 figures
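As a hedged illustration of the encoding and update rule analyzed in this abstract (a minimal sketch under standard assumptions, not the authors' code; all sizes and names here are invented), a BAM with Hebbian-encoded bipolar pattern pairs and alternating layer updates might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, P = 64, 48, 3                      # layer sizes and number of pattern pairs

# Random bipolar pattern pairs (x^mu, y^mu)
X = rng.choice([-1, 1], size=(P, N))
Y = rng.choice([-1, 1], size=(P, M))

# Hebbian (correlation) encoding: W = sum_mu y^mu (x^mu)^T
W = Y.T @ X

def recall(x, steps=10):
    """Alternate bidirectional updates y <- sgn(Wx), x <- sgn(W^T y)."""
    for _ in range(steps):
        y = np.sign(W @ x)
        x = np.sign(W.T @ y)
    return x, y

x, y = recall(X[0].copy())
print(np.array_equal(y, Y[0]))   # at low load the stored pair is a fixed point
```

At this low load (3 pairs for a 64x48 network) the crosstalk noise is far below the signal, so the stored pair is retrieved exactly; the statistical analyses above characterize what happens as the load grows.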
An analog feedback associative memory
A method for the storage of analog vectors, i.e., vectors whose components are real-valued, is developed for the Hopfield continuous-time network. An important requirement is that each memory vector has to be an asymptotically stable (i.e., attractive) equilibrium of the network. Some of the limitations imposed by the continuous Hopfield model on the set of vectors that can be stored are pointed out. These limitations can be relieved by choosing a network containing visible as well as hidden units. An architecture consisting of several hidden layers and a visible layer, connected in a circular fashion, is considered. It is proved that the two-layer case is guaranteed to store any number of given analog vectors, provided their number does not exceed one plus the number of neurons in the hidden layer. A learning algorithm that correctly adjusts the locations of the equilibria and guarantees their asymptotic stability is developed. Simulation results confirm the effectiveness of the approach.
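As a hedged sketch of the model class discussed above (the parameters here are illustrative; the paper's circular multi-layer architecture and learning rule are not reproduced), the continuous-time Hopfield dynamics with a saturating activation can be simulated directly. With weak symmetric gains the state settles on an asymptotically stable equilibrium:

```python
import numpy as np

def hopfield_flow(W, b, u0, dt=0.01, steps=20000):
    """Euler integration of generic continuous-time Hopfield dynamics
    du/dt = -u + W @ tanh(u) + b."""
    u = u0.copy()
    for _ in range(steps):
        u = u + dt * (-u + W @ np.tanh(u) + b)
    return u

rng = np.random.default_rng(3)
n = 5
A = 0.2 * rng.standard_normal((n, n))
W = (A + A.T) / 2                        # symmetric, weak gains -> contraction
b = 0.1 * rng.standard_normal(n)
u_star = hopfield_flow(W, b, rng.standard_normal(n))
residual = np.linalg.norm(-u_star + W @ np.tanh(u_star) + b)
print(residual < 1e-8)                   # trajectory has settled at an equilibrium
```

The paper's contribution is the converse problem: placing *prescribed* analog vectors at such equilibria, which is what requires the hidden layers.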
Associative memory of phase-coded spatiotemporal patterns in leaky Integrate and Fire networks
We study the collective dynamics of a leaky integrate-and-fire network in which precise relative phase relationships of spikes among neurons are stored, as attractors of the dynamics, and selectively replayed at different time scales. Using an STDP-based learning process, we store several phase-coded spike patterns in the connectivity, and we find that, depending on the excitability of the network, different working regimes are possible, with transient or persistent replay activity induced by a brief signal. We introduce an order parameter to evaluate the similarity between stored and recalled phase-coded patterns, and measure the storage capacity. Modulating the spiking thresholds during replay changes the frequency of the collective oscillation or the number of spikes per cycle while preserving the phase relationships. This allows a coding scheme in which phase, rate, and frequency are dissociable. Robustness with respect to noise and heterogeneity of neuron parameters is studied, showing that, since the dynamics is a retrieval process, neurons preserve stable, precise phase relationships among units, keeping a unique frequency of oscillation, even in noisy conditions and with heterogeneity of the internal parameters of the units.
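The abstract does not spell out its order parameter; a common choice in this literature (an assumption on our part, not necessarily the authors' exact definition) is the modulus of the circular mean of the phase differences between stored and recalled patterns:

```python
import numpy as np

def phase_overlap(stored, recalled):
    """Magnitude of the mean phase-difference vector: 1 for a perfectly
    phase-locked replay (up to a global shift), ~1/sqrt(N) for unrelated
    phases of N units."""
    diff = np.asarray(recalled) - np.asarray(stored)
    return float(np.abs(np.mean(np.exp(1j * diff))))

stored = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
print(phase_overlap(stored, stored + 0.3) > 0.999)   # global shift: overlap ~ 1
rng = np.random.default_rng(1)
print(phase_overlap(stored, rng.uniform(0.0, 2.0 * np.pi, 100)) < 0.5)
```

A measure of this form is insensitive to a uniform phase shift and to changes in oscillation frequency, which matches the abstract's claim that phase, rate, and frequency are dissociable.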
Associative neural networks: properties, learning, and applications.
by Chi-sing Leung. Thesis (Ph.D.)--Chinese University of Hong Kong, 1994. Includes bibliographical references (leaves 236-244).
Chapter 1 --- Introduction --- p.1
  1.1 --- Background of Associative Neural Networks --- p.1
  1.2 --- A Distributed Encoding Model: Bidirectional Associative Memory --- p.3
  1.3 --- A Direct Encoding Model: Kohonen Map --- p.6
  1.4 --- Scope and Organization --- p.9
  1.5 --- Summary of Publications --- p.13
Part I --- Bidirectional Associative Memory: Statistical Properties and Learning --- p.17
Chapter 2 --- Introduction to Bidirectional Associative Memory --- p.18
  2.1 --- Bidirectional Associative Memory and its Encoding Method --- p.18
  2.2 --- Recall Process of BAM --- p.20
  2.3 --- Stability of BAM --- p.22
  2.4 --- Memory Capacity of BAM --- p.24
  2.5 --- Error Correction Capability of BAM --- p.28
  2.6 --- Chapter Summary --- p.29
Chapter 3 --- Memory Capacity and Statistical Dynamics of First Order BAM --- p.31
  3.1 --- Introduction --- p.31
  3.2 --- Existence of Energy Barrier --- p.34
  3.3 --- Memory Capacity from Energy Barrier --- p.44
  3.4 --- Confidence Dynamics --- p.49
  3.5 --- Numerical Results from the Dynamics --- p.63
  3.6 --- Chapter Summary --- p.68
Chapter 4 --- Stability and Statistical Dynamics of Second Order BAM --- p.70
  4.1 --- Introduction --- p.70
  4.2 --- Second order BAM and its Stability --- p.71
  4.3 --- Confidence Dynamics of Second Order BAM --- p.75
  4.4 --- Numerical Results --- p.82
  4.5 --- Extension to higher order BAM --- p.90
  4.6 --- Verification of the conditions of Newman's Lemma --- p.94
  4.7 --- Chapter Summary --- p.95
Chapter 5 --- Enhancement of BAM --- p.97
  5.1 --- Background --- p.97
  5.2 --- Review on Modifications of BAM --- p.101
    5.2.1 --- Change of the encoding method --- p.101
    5.2.2 --- Change of the topology --- p.105
  5.3 --- Householder Encoding Algorithm --- p.107
    5.3.1 --- Construction from Householder Transforms --- p.107
    5.3.2 --- Construction from iterative method --- p.109
    5.3.3 --- Remarks on HCA --- p.111
  5.4 --- Enhanced Householder Encoding Algorithm --- p.112
    5.4.1 --- Construction of EHCA --- p.112
    5.4.2 --- Remarks on EHCA --- p.114
  5.5 --- Bidirectional Learning --- p.115
    5.5.1 --- Construction of BL --- p.115
    5.5.2 --- The Convergence of BL and the memory capacity of BL --- p.116
    5.5.3 --- Remarks on BL --- p.120
  5.6 --- Adaptive Ho-Kashyap Bidirectional Learning --- p.121
    5.6.1 --- Construction of AHKBL --- p.121
    5.6.2 --- Convergent Conditions for AHKBL --- p.124
    5.6.3 --- Remarks on AHKBL --- p.125
  5.7 --- Computer Simulations --- p.126
    5.7.1 --- Memory Capacity --- p.126
    5.7.2 --- Error Correction Capability --- p.130
    5.7.3 --- Learning Speed --- p.157
  5.8 --- Chapter Summary --- p.158
Chapter 6 --- BAM under Forgetting Learning --- p.160
  6.1 --- Introduction --- p.160
  6.2 --- Properties of Forgetting Learning --- p.162
  6.3 --- Computer Simulations --- p.168
  6.4 --- Chapter Summary --- p.168
Part II --- Kohonen Map: Applications in Data Compression and Communications --- p.170
Chapter 7 --- Introduction to Vector Quantization and Kohonen Map --- p.171
  7.1 --- Background on Vector Quantization --- p.171
  7.2 --- Introduction to LBG Algorithm --- p.173
  7.3 --- Introduction to Kohonen Map --- p.174
  7.4 --- Chapter Summary --- p.179
Chapter 8 --- Applications of Kohonen Map in Data Compression and Communications --- p.181
  8.1 --- Use Kohonen Map to Design Trellis Coded Vector Quantizer --- p.182
    8.1.1 --- Trellis Coded Vector Quantizer --- p.182
    8.1.2 --- Trellis Coded Kohonen Map --- p.188
    8.1.3 --- Computer Simulations --- p.191
  8.2 --- Kohonen Map: Combined Vector Quantization and Modulation --- p.195
    8.2.1 --- Impulsive Noise in the Received Data --- p.195
    8.2.2 --- Combined Kohonen Map and Modulation --- p.198
    8.2.3 --- Computer Simulations --- p.200
  8.3 --- Error Control Scheme for the Transmission of Vector Quantized Data --- p.213
    8.3.1 --- Motivation and Background --- p.214
    8.3.2 --- Trellis Coded Modulation --- p.216
    8.3.3 --- Combined Vector Quantization, Error Control, and Modulation --- p.220
    8.3.4 --- Computer Simulations --- p.223
  8.4 --- Chapter Summary --- p.226
Chapter 9 --- Conclusion --- p.232
Bibliography --- p.23
New Learning and Control Algorithms for Neural Networks.
Neural networks offer distributed processing power, error-correcting capability, and structural simplicity of the basic computing element. They have been found attractive for applications such as associative memory, robotics, image processing, speech understanding, and optimization. Neural networks are self-adaptive systems that try to configure themselves to store new information. This dissertation investigates two approaches to improving performance: better learning and supervisory control. A new learning algorithm, called the Correlation Continuous Unlearning (CCU) algorithm, is presented. It is based on the idea of removing undesirable information encountered during the learning period. The control methods proposed in the dissertation improve convergence by affecting the order of updates using a controller. Most previous studies have focused on monolithic structures, but it is known that the human brain has a bicameral nature at the gross level and also has several specialized structures. In this dissertation, we investigate the computing characteristics of non-monolithic neural networks enhanced by a controller that can run algorithms exploiting the known global characteristics of the stored information. Such networks have been called bicameral neural networks. Stinson and Kak considered elementary bicameral models that used asynchronous control. Two new control methods, the method of iteration and the bicameral classifier, are now proposed. The method of iteration uses the Hamming distance between the probe and the answer to control convergence to a correct answer, whereas the bicameral classifier takes advantage of global characteristics using a clustering algorithm. The bicameral classifier is applied to two different models of equiprobable patterns, as well as to the more realistic situation where patterns can have different probabilities.
The CCU algorithm has also been applied to a bidirectional associative memory, with greatly improved performance. For multilayered networks, indexing of patterns to enhance system performance has been studied.
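A Hamming-distance-controlled iteration of the kind described above can be sketched as follows (a hypothetical illustration of the idea only; the function names and the toy Hopfield-style memory are our own, not the dissertation's):

```python
import numpy as np

def hamming(a, b):
    """Number of positions at which two bipolar vectors differ."""
    return int(np.sum(a != b))

def iterate_until_stable(W, probe, max_iters=50):
    """Controller loop: keep updating until the Hamming distance between
    successive states reaches zero, i.e. a fixed point is found."""
    state = probe.copy()
    for _ in range(max_iters):
        nxt = np.where(W @ state >= 0, 1, -1)
        if hamming(state, nxt) == 0:
            return nxt
        state = nxt
    return state

# Tiny demo: one stored pattern, a probe with one flipped bit
p = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = np.outer(p, p) - np.eye(8)           # Hebbian weights, zero diagonal
probe = p.copy(); probe[0] *= -1
print(hamming(iterate_until_stable(W, probe), p))   # prints 0
```

Here the Hamming distance serves only as the stopping criterion; the dissertation's method of iteration uses it more actively, to steer convergence toward a correct answer.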
Theoretical study of information capacity of Hopfield neural network and its application to expert database system
Conventional computer systems can solve complex mathematical problems very fast, yet they cannot efficiently process high-level intelligent functions of the human brain such as pattern recognition, categorization, and associative memory. A neural network is proposed as a computational structure for modeling such high-level intelligent functions. Recently, neural networks have attracted considerable attention as a novel computational system because of the following expected benefits, which are often considered generic characteristics of the human brain: (1) massive parallelism, (2) learning as a means of efficient knowledge acquisition, and (3) robustness arising from distributed information processing. Neural networks are studied from different points of view in many disciplines, such as psychology, mathematics, statistics, physics, engineering, computer science, neuroscience, biology, and linguistics. Depending on the discipline, neural networks go by diverse names: artificial neural networks, connectionism, PDPs, adaptive systems, adaptive networks, and neurocomputers. We study neural networks from the computer scientist's point of view. The objectives of this research are: (1) providing a global picture of the current state of the art by surveying a score of neural networks chronologically and functionally, (2) providing a theoretical justification for well-known empirical results about the information capacity of the Hopfield neural network, and (3) providing an experimental logical database system using a Hopfield neural network as an inference engine.
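The well-known empirical result referred to above is that a Hopfield network of N neurons reliably retrieves on the order of 0.138N random patterns. A small experiment (our own sketch, not the dissertation's code; the load values are illustrative) makes the capacity transition visible by checking how many stored bits survive one synchronous update:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200  # number of neurons

def fraction_stable(P):
    """Store P random bipolar patterns with the Hebb rule and report the
    fraction of pattern bits left unchanged by one synchronous update."""
    xi = rng.choice([-1, 1], size=(P, N))
    W = (xi.T @ xi).astype(float) / N
    np.fill_diagonal(W, 0.0)
    return float((np.sign(xi @ W) == xi).mean())

# The classic empirical capacity of the Hopfield network is about 0.138*N.
print(fraction_stable(10) > 0.99)   # load 0.05N: essentially every bit stable
print(fraction_stable(60) < 0.99)   # load 0.30N: retrieval errors proliferate
```

Below the capacity the crosstalk noise rarely overturns a bit's signal term; well above it, errors accumulate and retrieval breaks down, which is the regime the theoretical justification in the dissertation addresses.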
Non-Convex Multi-species Hopfield models
In this work we introduce a multi-species generalization of the Hopfield model for associative memory, where neurons are divided into groups and both inter-group and intra-group pairwise interactions are considered, with different intensities. Thus, this system contains two of the main ingredients of modern deep neural network architectures: Hebbian interactions to store patterns of information and multiple layers coding different levels of correlations. The model is completely solvable in the low-load regime with a suitable generalization of the Hamilton-Jacobi technique, even though the Hamiltonian can be a non-definite quadratic form of the magnetizations. The family of multi-species Hopfield models includes, as special cases, the three-layer Restricted Boltzmann Machine (RBM) with a Gaussian hidden layer and the Bidirectional Associative Memory (BAM) model.
Comment: This is a pre-print of an article published in J. Stat. Phys.
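The abstract does not display the Hamiltonian; in a generic notation of our own (the symbols $J_{ab}$, $\xi^{\mu}_{a,i}$, $\sigma_{a,i}$ are assumptions, not the paper's), a multi-species Hebbian Hamiltonian with species-dependent couplings can be sketched as

\[
H(\sigma) \;=\; -\frac{1}{2N}\sum_{a,b} J_{ab} \sum_{\mu=1}^{P}
\Big(\sum_{i} \xi^{\mu}_{a,i}\,\sigma_{a,i}\Big)\Big(\sum_{j} \xi^{\mu}_{b,j}\,\sigma_{b,j}\Big),
\]

so that $H$ is a quadratic form in the Mattis magnetizations $m^{\mu}_{a}$. When the coupling matrix $J$ is not positive semi-definite, the form is non-definite, and taking two species with zero intra-group couplings ($J_{11}=J_{22}=0$) recovers a BAM-type interaction of the kind mentioned above.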
Multi-almost periodicity and invariant basins of general neural networks under almost periodic stimuli
In this paper, we investigate the convergence dynamics of almost periodic encoded patterns of general neural networks (GNNs) subjected to external almost periodic stimuli, including almost periodic delays. Invariant regions are established for the existence of almost periodic encoded patterns under two classes of activation functions. By employing the property of the -cone and an inequality technique, attracting basins are estimated and some criteria are derived for the networks to converge exponentially toward almost periodic encoded patterns. The obtained results are new; they extend and generalize the corresponding results in the previous literature.
Comment: 28 pages, 4 figures
Stability and Hopf-Bifurcation Analysis of Delayed BAM Neural Network under Dynamic Thresholds
In this paper, the dynamics of a three-neuron model with self-connection and distributed delay under a dynamic threshold is investigated. With the help of topological degree theory and the homotopy invariance principle, the existence and uniqueness of the equilibrium point are established. The conditions under which a Hopf bifurcation occurs at the equilibrium are obtained for the weak kernel of the distributed delay. The direction and stability of the bifurcating periodic solutions are determined by normal form theory and the center manifold theorem. Lastly, the global bifurcation aspect of such periodic solutions is studied. Some numerical simulations justifying the theoretical analysis are also presented.
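For context, the "weak kernel" of a distributed delay referred to above is, in the standard convention (our notation; the paper may scale it differently), the exponential density

\[
k(s) \;=\; \alpha\, e^{-\alpha s}, \qquad \alpha > 0, \qquad \int_{0}^{\infty} k(s)\,ds = 1,
\]

whose mean delay is $1/\alpha$. With this kernel, the linear chain trick turns the distributed-delay system into an equivalent system of ordinary differential equations, which is what makes the Hopf-bifurcation analysis tractable.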
A Mismatch-Based Model for Memory Reconsolidation and Extinction in Attractor Networks
The processes of memory reconsolidation and extinction have received increasing attention in recent experimental research, as their potential clinical applications begin to be uncovered. A number of studies suggest that amnestic drugs injected after reexposure to a learning context can disrupt either of the two processes, depending on the behavioral protocol employed. Hypothesizing that reconsolidation represents updating of a memory trace in the hippocampus, while extinction represents formation of a new trace, we have built a neural network model in which simple retrieval, reconsolidation, or extinction of a stored attractor can occur upon contextual reexposure, depending on the similarity between the representations of the original learning and reexposure sessions. This is achieved by assuming that independent mechanisms mediate Hebbian-like synaptic strengthening and mismatch-driven labilization of synaptic changes, with protein synthesis inhibition preferentially affecting the former. Our framework provides a unified mechanistic explanation for experimental data showing (a) the effect of reexposure duration on the occurrence of reconsolidation or extinction and (b) the requirement of memory updating during reexposure to drive reconsolidation.