
    Coupled-Oscillator Associative Memory Array Operation for Pattern Recognition

    Operation of the array of coupled oscillators underlying the associative memory function is demonstrated for various interconnection schemes (cross-connect, star phase keying and star frequency keying) and various physical implementations of oscillators (van der Pol, phase-locked loop, spin torque). The speed of synchronization of the oscillators and the evolution of the degree of matching are studied as functions of device parameters. The dependence of association errors on the number of memorized patterns and on the distance between the test pattern and the memorized patterns is determined for the Palm, Furber and Hopfield association algorithms.
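
    A minimal numerical sketch of the idea above, assuming Kuramoto-style phase oscillators with Hebbian (Hopfield-type) couplings rather than any of the specific implementations studied in the paper; the pattern set, coupling strength K, step size and duration are illustrative choices, and the degree of matching is read out as the overlap between the relaxed phase pattern and each stored pattern.

        # Minimal sketch only: Kuramoto-style phase oscillators with Hopfield-type
        # (Hebbian) couplings standing in for the coupled-oscillator memory array.
        # Pattern set, coupling strength K, step size and duration are illustrative.
        import numpy as np

        rng = np.random.default_rng(0)

        def hopfield_couplings(patterns):
            # Hebbian outer-product rule over +/-1 patterns, zero self-coupling.
            p = np.array(patterns, dtype=float)
            J = p.T @ p / p.shape[1]
            np.fill_diagonal(J, 0.0)
            return J

        def run_oscillators(J, probe, K=1.0, dt=0.05, steps=400):
            # Encode the probe as initial phases (0 or pi) plus a little noise and
            # let the phase dynamics relax; read out the sign pattern of the phases.
            theta = np.pi * (1 - np.asarray(probe, dtype=float)) / 2
            theta = theta + 0.1 * rng.normal(size=theta.size)
            for _ in range(steps):
                theta = theta + dt * K * np.sum(J * np.sin(theta[None, :] - theta[:, None]), axis=1)
            return np.sign(np.cos(theta))

        patterns = [[1, -1, 1, -1, 1, 1, -1, -1],
                    [1, 1, -1, -1, 1, -1, 1, -1]]
        J = hopfield_couplings(patterns)
        recalled = run_oscillators(J, [1, -1, 1, -1, 1, 1, -1, 1])   # pattern 0 with one bit flipped
        # Degree of matching: overlap of the relaxed state with each stored pattern
        # (sign ambiguity, i.e. recall of -pattern, is possible in principle).
        print([int(recalled @ np.asarray(p, dtype=float)) for p in patterns])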

    Investigating the storage capacity of a network with cell assemblies

    Cell assemblies are co-operating groups of neurons believed to exist in the brain. Their existence was proposed by the neuropsychologist D.O. Hebb, who also formulated a mechanism by which they could form, now known as Hebbian learning. Evidence for the existence of Hebbian learning and cell assemblies in the brain is accumulating as investigation tools improve. Researchers have also simulated cell assemblies as neural networks in computers. This thesis describes simulations of networks of cell assemblies. The feasibility of simulated cell assemblies that possess all the predicted properties of biological cell assemblies is established. Cell assemblies can be coupled together with weighted connections to form hierarchies in which a group of basic assemblies, termed primitives, are connected in such a way that they form a compound cell assembly. The component assemblies of these hierarchies can be ignited independently, i.e. they are activated due to signals being passed entirely within the network, but if a sufficient number of them are activated, they co-operate to ignite the remaining primitives in the compound assembly. Various experiments are described in which networks of simulated cell assemblies are subject to external activation, involving cells in those assemblies being stimulated artificially to a high level. These cells then fire, i.e. produce a spike of activity analogous to the spiking of biological neurons, and in this way pass their activity to other cells. Connections are established, by learning in some experiments and set artificially in others, between cells within primitives and in different ones, and these connections allow activity to pass from one primitive to another. In this way, activating one or more primitives may cause others to ignite. Experiments are described in which spontaneous activation of cells aids recruitment of uncommitted cells to a neighbouring assembly. The strong relationship between cell assemblies and Hopfield nets is described. A network of simulated cells can support different numbers of assemblies depending on the complexity of those assemblies. Assemblies are classified in terms of how many primitives are present in each compound assembly and the minimum number needed to complete it. A 2-3 assembly contains 3 primitives, any 2 of which will complete it. A network of N cells can hold on the order of N 2-3 assemblies, and an architecture is proposed that contains O(N²) 3-4 assemblies. Experiments are described that show the number of connections emanating from each cell must be scaled up linearly as the number of primitives in any network increases in order to maintain the same mean number of connections between each primitive. Restricting each cell to a maximum number of connections leads to severe loss of performance as the size of the network increases. It is shown that the architecture can be duplicated with Hopfield nets, but that there are severe restrictions on the carrying capacity of either a hierarchy of cell assemblies or a Hopfield net storing 3-4 patterns, and that the promise of N² patterns is largely illusory. When the number of connections from each cell is fixed as the number of primitives is increased, only O(N) cell assemblies can be stored.
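
    As context for the Hopfield-net comparison drawn above, here is a minimal Hebbian Hopfield sketch (not the thesis's cell-assembly simulator): N cells store P random patterns via the outer-product rule and complete a corrupted cue; the sizes and the 20% corruption level are illustrative.

        # Minimal Hebbian Hopfield sketch (not the thesis's cell-assembly simulator):
        # N cells store P random patterns and complete a corrupted cue.
        import numpy as np

        rng = np.random.default_rng(0)
        N, P = 200, 10                                   # cells, stored patterns (illustrative)
        patterns = rng.choice([-1, 1], size=(P, N)).astype(float)

        W = patterns.T @ patterns / N                    # Hebbian outer-product weights
        np.fill_diagonal(W, 0.0)

        def recall(cue, steps=20):
            s = cue.copy()
            for _ in range(steps):                       # synchronous +/-1 updates for brevity
                s = np.where(W @ s >= 0, 1.0, -1.0)
            return s

        cue = patterns[0].copy()
        flip = rng.choice(N, size=N // 5, replace=False)
        cue[flip] *= -1                                  # corrupt 20% of the cue
        print((recall(cue) == patterns[0]).mean())       # fraction of cells recovered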

    Neural networks for handwritten digit recognition

    In the presented work, a Hopfield neural network was constructed for recognizing the handwritten digit patterns contained in the MNIST database. Ten Hopfield neural networks were built, one for each digit. The centers of clusters built using a Kohonen neural network were taken as the objects to be "memorized". Two methods were proposed which serve as a supporting step in the Hopfield neural network, and an analysis of these methods was carried out. The error was also calculated for each method, and the pros and cons of their use were identified. Clustering of the handwritten digits from the training sample of the MNIST database is performed using a Kohonen neural network. The optimal number of clusters (not exceeding 50) is selected for each digit. The Euclidean norm is used as the metric for the Kohonen network. The network is trained by a serial algorithm on the CPU and by a parallel algorithm on the GPU using CUDA technology. Graphs of the time spent on training the neural network for each digit are given, and a comparison of the time spent on serial and parallel training is presented. It is found that the average speed-up of neural network training with CUDA technology is almost 17-fold. The digits from the test sample of the MNIST database are used to evaluate the accuracy of the constructed clusters. It is found that the percentage of vectors from the test sample assigned to the correct cluster is more than 90% for each digit. The F-measure is calculated for each digit. The best F-measure values are obtained for the digits 0 and 1 (F-measure 0.974), whereas the worst value is obtained for the digit 9 (F-measure 0.903). The introduction briefly describes the content of the work, the research currently available, and the relevance of this work. This is followed by a statement of the problem, as well as the technologies used in the work. The first chapter describes the theoretical aspects and how each stage of the work is solved. The second chapter contains a description of the implementation and the results obtained; it also discusses parallelization of the learning algorithm of the Kohonen neural network. In the third chapter, the software is tested. The results are the recognition response of each neural network, i.e. the stored image most similar to the image submitted as input, as well as the overall recognition rate of each neural network.
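
    A compact sketch of the pipeline described above, under simplifying assumptions: a 1-D Kohonen layer clusters toy binary vectors (standing in for one digit class of MNIST), the binarised cluster centers are "memorized" by a Hopfield network, and recognition returns the stored center closest to the relaxed state. The data generator, layer sizes and learning schedule are illustrative, and no CUDA parallelism is shown.

        # Illustrative sketch of the Kohonen-clustering + Hopfield-recognition pipeline;
        # sizes, learning schedule and the toy data generator are assumptions, not the
        # thesis's implementation, and training runs on the CPU only.
        import numpy as np

        rng = np.random.default_rng(1)
        D, K = 64, 6                                          # input dimension, cluster units

        # Toy stand-in for one digit class: noisy copies of K binary prototypes.
        protos = np.where(rng.random((K, D)) > 0.5, 1.0, -1.0)
        labels = rng.integers(0, K, size=500)
        flips = rng.random((500, D)) < 0.1                    # flip ~10% of the bits
        data = protos[labels] * np.where(flips, -1.0, 1.0)

        # Kohonen (1-D SOM) training: the Euclidean-nearest unit and its neighbours
        # move toward each presented sample; the learning rate decays to zero.
        centers = rng.normal(scale=0.1, size=(K, D))
        for t, x in enumerate(data):
            lr = 0.5 * (1.0 - t / len(data))
            win = np.argmin(((centers - x) ** 2).sum(axis=1))
            h = np.exp(-((np.arange(K) - win) ** 2) / 2.0)    # neighbourhood weights
            centers += lr * h[:, None] * (x - centers)

        # Hopfield "memorization" of the binarised cluster centers.
        mem = np.where(centers >= 0, 1.0, -1.0)
        W = mem.T @ mem / D
        np.fill_diagonal(W, 0.0)

        def recognize(x, steps=10):
            s = np.where(x >= 0, 1.0, -1.0)
            for _ in range(steps):                            # relax to a stored pattern
                s = np.where(W @ s >= 0, 1.0, -1.0)
            return int(np.argmax(mem @ s))                    # index of the closest stored center

        print(recognize(data[0]), recognize(protos[labels[0]]))   # noisy sample and its clean prototype should agree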

    ROC Curves within the Framework of Neural Network Assembly Memory Model: Some Analytic Results

    On the basis of the convolutional (Hamming) version of the recent Neural Network Assembly Memory Model (NNAMM), optimal receiver operating characteristics (ROCs) have been derived analytically for an intact two-layer autoassociative Hopfield network. A method of explicitly taking into account a priori probabilities of alternative hypotheses on the structure of the information initiating memory trace retrieval is introduced, together with modified ROCs (mROCs, a posteriori probabilities of correct recall versus false-alarm probability). The comparison of empirical and calculated ROCs (or mROCs) demonstrates that they coincide quantitatively, and in this way the intensities of cues used in the corresponding experiments may be estimated. It has been found that basic ROC properties, which are among the experimental findings underpinning dual-process models of recognition memory, can be explained within our one-factor NNAMM. Comment: Proceedings of the KDS-2003 Conference held in Varna, Bulgaria, on June 16-26, 2003, pages 138-146; 5 figures, 18 references.
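
    A hedged numerical illustration of the ROC/mROC distinction used above, assuming a simple signal-detection stand-in (Gaussian retrieval scores for trace-present and trace-absent trials and an assumed prior) rather than the NNAMM's analytic derivation: the ROC sweeps the hit rate against the false-alarm rate over a decision threshold, and Bayes' rule converts it into an a posteriori probability of correct recall.

        # Hedged illustration only: ROC vs. mROC under a Gaussian signal-detection
        # stand-in with an assumed prior; this is not the NNAMM analytic derivation.
        import numpy as np

        rng = np.random.default_rng(2)
        n = 100_000
        target = rng.normal(1.0, 1.0, n)      # retrieval score when the memory trace is present
        lure = rng.normal(0.0, 1.0, n)        # retrieval score when it is absent
        prior = 0.3                           # assumed a priori P(trace present)

        thresholds = np.linspace(-4.0, 5.0, 200)
        hit = np.array([(target > c).mean() for c in thresholds])   # P(recall | present)
        fa = np.array([(lure > c).mean() for c in thresholds])      # false-alarm probability

        # mROC: a posteriori probability of correct recall at each false-alarm level.
        posterior = prior * hit / (prior * hit + (1.0 - prior) * fa + 1e-12)

        for i in (50, 100, 150):
            print(f"FA={fa[i]:.3f}  hit={hit[i]:.3f}  P(correct recall)={posterior[i]:.3f}")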

    Financial distress prediction using the hybrid associative memory with translation

    This paper presents an alternative technique for financial distress prediction systems. The method is based on a type of neural network called the hybrid associative memory with translation. While many different neural network architectures have successfully been used to predict credit risk and corporate failure, the power of associative memories for financial decision-making has not yet been explored in any depth. The performance of the hybrid associative memory with translation is compared to that of four traditional neural networks, a support vector machine and a logistic regression model in terms of their prediction capabilities. The experimental results over nine real-life data sets show that the associative memory proposed here constitutes an appropriate solution for bankruptcy and credit risk prediction, performing significantly better than the rest of the models under class imbalance and data overlapping conditions in terms of the true positive rate and the geometric mean of true positive and true negative rates. This work has been partially supported by the Mexican CONACYT through the Postdoctoral Fellowship Program [232167], the Spanish Ministry of Economy [TIN2013-46522-P], the Generalitat Valenciana [PROMETEOII/2014/062] and the Mexican PRODEP [DSA/103.5/15/7004]. We would like to thank the reviewers for their valuable comments and suggestions, which have helped to improve the quality of this paper substantially.
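
    A short sketch of the evaluation criteria named above, the true positive rate and the geometric mean of true positive and true negative rates, which remain informative under class imbalance; the confusion-matrix counts below are invented for illustration.

        # The evaluation criteria used above for imbalanced data: true positive rate
        # and the geometric mean of TPR and TNR. The counts below are invented.
        import math

        def tpr_tnr_gmean(tp, fn, tn, fp):
            tpr = tp / (tp + fn)              # sensitivity: distressed firms caught
            tnr = tn / (tn + fp)              # specificity: healthy firms cleared
            return tpr, tnr, math.sqrt(tpr * tnr)

        tpr, tnr, g = tpr_tnr_gmean(tp=42, fn=8, tn=900, fp=50)
        print(f"TPR={tpr:.3f}  TNR={tnr:.3f}  G-mean={g:.3f}")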

    Techniques of replica symmetry breaking and the storage problem of the McCulloch-Pitts neuron

    In this article the framework for Parisi's spontaneous replica symmetry breaking is reviewed, and subsequently applied to the example of the statistical mechanical description of the storage properties of a McCulloch-Pitts neuron. The technical details are reviewed extensively, with regard to the wide range of systems where the method may be applied. Parisi's partial differential equation and related differential equations are discussed, and a Green function technique is introduced for the calculation of replica averages, the key to determining the averages of physical quantities. The ensuing graph rules involve only tree graphs, as appropriate for a mean-field-like model. The lowest-order Ward-Takahashi identity is recovered analytically and is shown to lead to the Goldstone modes in continuous replica symmetry breaking phases. The need for a replica symmetry breaking theory in the storage problem of the neuron has arisen due to the thermodynamic instability of previously given solutions. Variational forms for the neuron's free energy are derived in terms of the order parameter function x(q), for different prior distributions of synapses. Analytically in the high-temperature limit and numerically in generic cases, various phases are identified, among them one similar to the Parisi phase in the Sherrington-Kirkpatrick model. Extensive quantities like the error per pattern change slightly with respect to the known unstable solutions, but there is a significant difference in the distribution of non-extensive quantities like the synaptic overlaps and the pattern storage stability parameter. A simulation result is also reviewed and compared to the prediction of the theory. Comment: 103 LaTeX pages (with REVTeX 3.0), including 15 figures (ps, epsi, eepic); accepted for Physics Reports.
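
    As background for the storage problem discussed above (quoted context, not a result of this review), the replica-symmetric Gardner calculation gives the critical storage ratio of a spherical McCulloch-Pitts neuron at stability threshold κ; beyond this loading the replica-symmetric solution becomes unstable, which is where the replica symmetry breaking machinery reviewed in the article is needed.

        % Background only: the replica-symmetric (Gardner) critical capacity of a
        % spherical McCulloch-Pitts neuron at stability threshold \kappa; above this
        % loading the replica-symmetric solution fails and Parisi's symmetry breaking
        % is required.
        \[
          \frac{1}{\alpha_c(\kappa)} \;=\; \int_{-\kappa}^{\infty} \frac{dt}{\sqrt{2\pi}}\, e^{-t^{2}/2}\,(t+\kappa)^{2},
          \qquad \alpha_c(0) = 2 .
        \]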
