271 research outputs found

    An iterative incremental learning algorithm for complex-valued hopfield associative memory

    Get PDF
    This paper discusses a complex-valued Hopfield associative memory with an iterative incremental learning algorithm. Mathematical proofs show that the resulting weight matrix approximates the weight matrix produced by the complex-valued pseudo-inverse algorithm. Furthermore, the minimum number of iterations of the learning sequence needed to maintain network stability is derived. Simulation experiments on memory capacity and noise tolerance show that the proposed model outperforms a model trained with the complex-valued pseudo-inverse learning algorithm.
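
    For context, the pseudo-inverse (projection) rule used as the baseline above can be summarised in a few lines. The following is a minimal real-valued sketch for illustration only; the paper's iterative, complex-valued algorithm is not reproduced here, and the function names are invented for this example.

```python
import numpy as np

def pseudo_inverse_weights(patterns):
    """patterns: (P, N) array of bipolar (+1/-1) patterns to store."""
    X = np.asarray(patterns, dtype=float).T   # columns are stored patterns, shape (N, P)
    W = X @ np.linalg.pinv(X)                 # projection onto the span of the patterns
    np.fill_diagonal(W, 0.0)                  # drop self-couplings (a common convention)
    return W

def recall(W, probe, steps=20):
    """Synchronous recall from a (possibly noisy) bipolar probe."""
    s = np.sign(np.asarray(probe, dtype=float))
    for _ in range(steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1.0               # break ties towards +1
        if np.array_equal(s_new, s):
            break                             # reached a fixed point
        s = s_new
    return s
```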

    Adiabatic Quantum Optimization for Associative Memory Recall

    Get PDF
    Hopfield networks are a variant of associative memory that recall information stored in the couplings of an Ising model. Stored memories are fixed points of the network dynamics that correspond to energetic minima of the spin state. We formulate the recall of memories stored in a Hopfield network as energy minimization by adiabatic quantum optimization (AQO). Numerical simulations of the quantum dynamics allow us to quantify the AQO recall accuracy with respect to the number of stored memories and the noise in the input key. We also investigate AQO performance with respect to how memories are stored in the Ising model using different learning rules. Our results indicate that AQO performance varies strongly with learning rule due to the changes in the energy landscape. Consequently, learning rules offer indirect methods for investigating changes to the computational complexity of the recall task and the computational efficiency of AQO. Comment: 22 pages, 11 figures. Updated for clarity and figures; to appear in Frontiers of Physics.
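
    For readers unfamiliar with the classical setting, the sketch below stores memories in Ising couplings with a Hebbian rule and recalls them by descending the Ising energy. It is illustrative only: the paper performs the minimization with adiabatic quantum optimization, whereas here plain single-spin greedy descent stands in for it.

```python
import numpy as np

def hebbian_couplings(memories):
    """memories: (P, N) bipolar patterns; returns the Ising coupling matrix J."""
    X = np.asarray(memories, dtype=float)
    J = X.T @ X / X.shape[1]
    np.fill_diagonal(J, 0.0)                  # no self-coupling
    return J

def ising_energy(J, s):
    """Classical Ising energy E(s) = -1/2 s^T J s."""
    return -0.5 * s @ J @ s

def greedy_recall(J, key, sweeps=10):
    """Descend the energy from a noisy input key by single-spin flips."""
    s = np.array(key, dtype=float)
    for _ in range(sweeps):
        changed = False
        for i in range(len(s)):
            if s[i] * (J[i] @ s) < 0:         # flipping spin i lowers the energy
                s[i] = -s[i]
                changed = True
        if not changed:
            break                             # local minimum reached
    return s
```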

    Oscillatory neural network learning for pattern recognition:an on-chip learning perspective and implementation

    Get PDF
    In the human brain, learning is continuous, whereas current AI models are pre-trained, making them predetermined and unable to evolve. However, even for AI models, the environment and input data change over time, so there is a need to study continual learning algorithms, and in particular to investigate how to implement them on-chip. In this work, we focus on Oscillatory Neural Networks (ONNs), a neuromorphic computing paradigm that performs auto-associative memory tasks, like Hopfield Neural Networks (HNNs). We study the adaptability of HNN unsupervised learning rules to on-chip learning with ONNs. In addition, we propose a first solution for implementing unsupervised on-chip learning using a digital ONN design. We show that the architecture enables efficient ONN on-chip learning with the Hebbian and Storkey learning rules in hundreds of microseconds for networks with up to 35 fully connected digital oscillators.
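
    A rough software sketch of the two unsupervised rules named above (Hebbian and Storkey), for bipolar patterns, is shown below. The digital on-chip ONN implementation itself is not reproduced, and the exact conventions (normalisation, diagonal handling) are assumptions.

```python
import numpy as np

def hebbian_update(W, xi):
    """Incrementally store one bipolar pattern xi with the Hebbian rule."""
    n = len(xi)
    dW = np.outer(xi, xi) / n
    np.fill_diagonal(dW, 0.0)                  # no self-coupling
    return W + dW

def storkey_update(W, xi):
    """Incrementally store one bipolar pattern xi with the Storkey rule."""
    n = len(xi)
    Wxi = W @ xi                               # Wxi[i] = sum_k W[i, k] * xi[k]
    # Local field h[i, j] = sum_{k != i, j} W[i, k] * xi[k]
    h = Wxi[:, None] - (np.diag(W) * xi)[:, None] - W * xi
    dW = (np.outer(xi, xi) - xi[:, None] * h.T - h * xi) / n
    np.fill_diagonal(dW, 0.0)                  # diagonal conventions vary; zeroed here
    return W + dW

# Example: store 5 random patterns in a 35-neuron network (matching the paper's scale).
rng = np.random.default_rng(0)
W = np.zeros((35, 35))
for _ in range(5):
    W = storkey_update(W, np.where(rng.random(35) < 0.5, -1.0, 1.0))
```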

    Learning Schemes for Recurrent Neural Networks

    Get PDF
    Graduate School, University of Hyogo, 202

    Learning as a Nonlinear Line of Attraction for Pattern Association, Classification and Recognition

    Get PDF
    Development of a mathematical model for learning a nonlinear line of attraction is presented in this dissertation, in contrast to the conventional recurrent neural network model in which memory is stored at attractive fixed points at discrete locations in state space. A nonlinear line of attraction encapsulates attractive fixed points scattered in state space as an attractive nonlinear line, describing patterns with similar characteristics as a family of patterns. It is usually imperative to guarantee the convergence of the dynamics of the recurrent network for associative learning and recall. We propose to alter this picture: if the brain remembers by converging to the state representing familiar patterns, it should also diverge from such states when presented with an unknown encoded representation of a visual image. The conception of the dynamics of the nonlinear line attractor network as operating between stable and unstable states is the second contribution of this dissertation research. These criteria can be used to circumvent the plasticity-stability dilemma by using the unstable state as an indicator to create a new line for an unfamiliar pattern. This novel learning strategy utilizes the stability (convergence) and instability (divergence) criteria of the designed dynamics to induce self-organizing behavior. The self-organizing behavior of the nonlinear line attractor model can manifest complex dynamics in an unsupervised manner. The third contribution of this dissertation is the introduction of the concept of a manifold of color perception. The fourth contribution is the development of a nonlinear dimensionality reduction technique that embeds a set of related observations into a low-dimensional space using the learned memory matrices of the nonlinear line attractor network. Development of a system for computing affective states is also presented in this dissertation. This system is capable of extracting the user's mental state in real time using a low-cost computer, and it has been successfully interfaced with an advanced learning environment for human-computer interaction.

    Analysing and enhancing the performance of associative memory architectures

    Get PDF
    This thesis investigates the way in which information about the structure of a set of training data with 'natural' characteristics may be used to positively influence the design of associative memory neural network models of the Hopfield type. This is done with a view to reducing the level of connectivity in models of this type. There are three strands to this work. Firstly, an empirical evaluation of the implementation of existing theory is given. Secondly, a number of existing theories are combined to produce novel network models and training regimes. Thirdly, new strategies for constructing and training associative memories based on knowledge of the structure of the training data are proposed. The first conclusion of this work is that, under certain circumstances, performance benefits may be gained by establishing the connectivity in a non-random fashion, guided by knowledge of the structure of the training data. These performance improvements are relative to networks in which sparse connectivity is established in a purely random manner; in both cases the dilution occurs prior to the training of the network. Secondly, it is verified that, as predicted by existing theory, targeted post-training dilution of network connectivity provides greater performance than networks in which connections are removed at random. Finally, an existing tool for the analysis of the attractor performance of neural networks of this type has been modified and improved, and a novel, comprehensive performance analysis tool is proposed.

    Investigations into the capabilities of the SDM and combining CMAC with PURR-PUSS.

    Get PDF
    This thesis consists of two sections analysing aspects of associative memories. The first section compares the usefulness, limitations, and similarities of the sparse distributed memory (SDM), the cerebellar model articulation controller (CMAC) and the Hopfield network. This analysis leads, in the second section, to a proposal for combining CMAC with a form of robot learning through exploration, the PURR-PUSS system. It is then demonstrated that the combination of the PURR-PUSS and CMAC systems produces a system capable of robot control. There are a number of critical factors in the performance of a neural network as a memory, including its capacity and the efficiency of its training. Of the three networks considered, the Hopfield network is by far the most common in the literature. In spite of this, this thesis shows that the SDM and CMAC are almost identical and, in fact, have significant advantages over the Hopfield network in terms of capacity. This is particularly evident in the storage of sequences, where the SDM shows a significant improvement over the Hopfield network. The major contribution of this thesis is the analysis and development of the full potential of the SDM for data storage. The first contribution is a correction of an error in the existing analysis of the capacity of the SDM; the corrected figure is verified both theoretically and experimentally. The second contribution is an improvement in capacity resulting from an alternative method of generating the outputs. Finally, the capacity is further improved by using an iterative approach to information storage previously employed on the Hopfield network. The latter approach gives the SDM a significant advantage in capacity. Another contribution of this thesis is the combination of associative memory with a means of learning through experimentation. The PURR-PUSS system was originally developed as a means of enabling a robot to learn by interacting with its environment. It is shown that its strengths and weaknesses complement those of the CMAC and SDM systems. PURR-PUSS and CMAC are combined, and the result is a system capable of better control than either system by itself. This is demonstrated through an example in which the combined system learns to control a ball rolling in a tilting maze of unknown dynamics. The system begins by learning through random exploration controlled by the PURR-PUSS system. As knowledge of the environment increases, the PURR-PUSS system is able to achieve goals successfully, although the quality of the control is poor. However, the addition of CMAC, which in turn learns from PURR-PUSS's movements, produces an improvement in the quality of the control.
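
    To make the SDM comparison concrete, here is a bare-bones sketch of a Kanerva-style write/read cycle with binary addresses, integer counters, and a fixed Hamming activation radius. The parameter values are illustrative and are not taken from the thesis, and none of the capacity improvements described above are implemented.

```python
import numpy as np

rng = np.random.default_rng(0)

class SDM:
    def __init__(self, n_locations=1000, address_bits=256, data_bits=256, radius=111):
        # Fixed random hard addresses and one counter vector per location.
        self.hard_addresses = rng.integers(0, 2, size=(n_locations, address_bits))
        self.counters = np.zeros((n_locations, data_bits), dtype=int)
        self.radius = radius

    def _active(self, address):
        """Locations whose hard address lies within the Hamming radius."""
        dist = np.count_nonzero(self.hard_addresses != address, axis=1)
        return dist <= self.radius

    def write(self, address, data):
        # Add +1/-1 increments to the counters of every activated location.
        self.counters[self._active(address)] += np.where(data == 1, 1, -1)

    def read(self, address):
        # Sum counters over activated locations and threshold at zero.
        total = self.counters[self._active(address)].sum(axis=0)
        return (total > 0).astype(int)

# Usage: store a pattern at a random address and read it back.
mem = SDM()
addr = rng.integers(0, 2, size=256)
data = rng.integers(0, 2, size=256)
mem.write(addr, data)
recalled = mem.read(addr)   # with enough activated locations this reproduces `data`
```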

    The stability and attractivity of neural associative memories.

    Get PDF
    Han-bing Ji. Thesis (Ph.D.)--Chinese University of Hong Kong, 1996. Includes bibliographical references (p. 160-163). Microfiche. Ann Arbor, Mich.: UMI, 1998. 2 microfiches; 11 x 15 cm.

    Reinforcing connectionism: learning the statistical way

    Get PDF
    Connectionism's main contribution to cognitive science will prove to be the renewed impetus it has imparted to learning. Learning can be integrated into the existing theoretical foundations of the subject, and the combination, statistical computational theories, provides a framework within which many connectionist mathematical mechanisms naturally fit. Examples from supervised and reinforcement learning demonstrate this. Statistical computational theories already exist for certain associative matrix memories. This work is extended, allowing real-valued synapses and arbitrarily biased inputs. It shows that a covariance learning rule optimises the signal/noise ratio, a measure of the potential quality of the memory, and quantifies the performance penalty incurred by other rules. In particular, two rules that have been suggested as occurring naturally are shown to be asymptotically optimal in the limit of sparse coding. The mathematical model is justified in comparison with other treatments whose results differ. Reinforcement comparison is a way of hastening the learning of reinforcement learning systems in statistical environments. Previous theoretical analysis has not distinguished between different comparison terms, even though, empirically, a covariance rule has been shown to be better than just a constant one. The workings of reinforcement comparison are investigated through a second-order analysis of the expected statistical performance of learning, and an alternative rule is proposed and empirically justified. The existing proof that temporal difference prediction learning converges in the mean is extended from a special case involving adjacent time steps to the general case involving arbitrary ones. The interaction between the statistical mechanism of temporal difference and the linear representation is particularly stark. The performance of the method given a linearly dependent representation is also analysed.
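
    As a concrete reading of the covariance-rule result above, the sketch below builds a linear associative matrix memory whose weights accumulate mean-subtracted (covariance) outer products. The coding, thresholding, and names are assumptions for illustration, not the thesis's exact formulation.

```python
import numpy as np

def covariance_weights(inputs, outputs):
    """inputs: (P, N) input patterns; outputs: (P, M) associated output patterns."""
    X = np.asarray(inputs, dtype=float)
    Y = np.asarray(outputs, dtype=float)
    dX = X - X.mean(axis=0)          # subtract mean activity: the 'covariance' part
    dY = Y - Y.mean(axis=0)
    return dX.T @ dY                 # W[i, j] accumulates cov(x_i, y_j) over stored pairs

def recall(W, probe, threshold=0.0):
    """Linear read-out of the matrix memory followed by a simple threshold."""
    return (np.asarray(probe, dtype=float) @ W > threshold).astype(int)
```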