
    Associative neural networks: properties, learning, and applications.

    by Chi-sing Leung. Thesis (Ph.D.)--Chinese University of Hong Kong, 1994. Includes bibliographical references (leaves 236-244).
    Chapter 1 --- Introduction --- p.1
    Chapter 1.1 --- Background of Associative Neural Networks --- p.1
    Chapter 1.2 --- A Distributed Encoding Model: Bidirectional Associative Memory --- p.3
    Chapter 1.3 --- A Direct Encoding Model: Kohonen Map --- p.6
    Chapter 1.4 --- Scope and Organization --- p.9
    Chapter 1.5 --- Summary of Publications --- p.13
    Chapter I --- Bidirectional Associative Memory: Statistical Properties and Learning --- p.17
    Chapter 2 --- Introduction to Bidirectional Associative Memory --- p.18
    Chapter 2.1 --- Bidirectional Associative Memory and its Encoding Method --- p.18
    Chapter 2.2 --- Recall Process of BAM --- p.20
    Chapter 2.3 --- Stability of BAM --- p.22
    Chapter 2.4 --- Memory Capacity of BAM --- p.24
    Chapter 2.5 --- Error Correction Capability of BAM --- p.28
    Chapter 2.6 --- Chapter Summary --- p.29
    Chapter 3 --- Memory Capacity and Statistical Dynamics of First Order BAM --- p.31
    Chapter 3.1 --- Introduction --- p.31
    Chapter 3.2 --- Existence of Energy Barrier --- p.34
    Chapter 3.3 --- Memory Capacity from Energy Barrier --- p.44
    Chapter 3.4 --- Confidence Dynamics --- p.49
    Chapter 3.5 --- Numerical Results from the Dynamics --- p.63
    Chapter 3.6 --- Chapter Summary --- p.68
    Chapter 4 --- Stability and Statistical Dynamics of Second Order BAM --- p.70
    Chapter 4.1 --- Introduction --- p.70
    Chapter 4.2 --- Second Order BAM and its Stability --- p.71
    Chapter 4.3 --- Confidence Dynamics of Second Order BAM --- p.75
    Chapter 4.4 --- Numerical Results --- p.82
    Chapter 4.5 --- Extension to Higher Order BAM --- p.90
    Chapter 4.6 --- Verification of the Conditions of Newman's Lemma --- p.94
    Chapter 4.7 --- Chapter Summary --- p.95
    Chapter 5 --- Enhancement of BAM --- p.97
    Chapter 5.1 --- Background --- p.97
    Chapter 5.2 --- Review on Modifications of BAM --- p.101
    Chapter 5.2.1 --- Change of the encoding method --- p.101
    Chapter 5.2.2 --- Change of the topology --- p.105
    Chapter 5.3 --- Householder Encoding Algorithm --- p.107
    Chapter 5.3.1 --- Construction from Householder Transforms --- p.107
    Chapter 5.3.2 --- Construction from iterative method --- p.109
    Chapter 5.3.3 --- Remarks on HCA --- p.111
    Chapter 5.4 --- Enhanced Householder Encoding Algorithm --- p.112
    Chapter 5.4.1 --- Construction of EHCA --- p.112
    Chapter 5.4.2 --- Remarks on EHCA --- p.114
    Chapter 5.5 --- Bidirectional Learning --- p.115
    Chapter 5.5.1 --- Construction of BL --- p.115
    Chapter 5.5.2 --- The Convergence of BL and the memory capacity of BL --- p.116
    Chapter 5.5.3 --- Remarks on BL --- p.120
    Chapter 5.6 --- Adaptive Ho-Kashyap Bidirectional Learning --- p.121
    Chapter 5.6.1 --- Construction of AHKBL --- p.121
    Chapter 5.6.2 --- Convergent Conditions for AHKBL --- p.124
    Chapter 5.6.3 --- Remarks on AHKBL --- p.125
    Chapter 5.7 --- Computer Simulations --- p.126
    Chapter 5.7.1 --- Memory Capacity --- p.126
    Chapter 5.7.2 --- Error Correction Capability --- p.130
    Chapter 5.7.3 --- Learning Speed --- p.157
    Chapter 5.8 --- Chapter Summary --- p.158
    Chapter 6 --- BAM under Forgetting Learning --- p.160
    Chapter 6.1 --- Introduction --- p.160
    Chapter 6.2 --- Properties of Forgetting Learning --- p.162
    Chapter 6.3 --- Computer Simulations --- p.168
    Chapter 6.4 --- Chapter Summary --- p.168
    Chapter II --- Kohonen Map: Applications in Data Compression and Communications --- p.170
    Chapter 7 --- Introduction to Vector Quantization and Kohonen Map --- p.171
    Chapter 7.1 --- Background on Vector Quantization --- p.171
    Chapter 7.2 --- Introduction to LBG algorithm --- p.173
    Chapter 7.3 --- Introduction to Kohonen Map --- p.174
    Chapter 7.4 --- Chapter Summary --- p.179
    Chapter 8 --- Applications of Kohonen Map in Data Compression and Communications --- p.181
    Chapter 8.1 --- Use Kohonen Map to design Trellis Coded Vector Quantizer --- p.182
    Chapter 8.1.1 --- Trellis Coded Vector Quantizer --- p.182
    Chapter 8.1.2 --- Trellis Coded Kohonen Map --- p.188
    Chapter 8.1.3 --- Computer Simulations --- p.191
    Chapter 8.2 --- Kohonen Map: Combined Vector Quantization and Modulation --- p.195
    Chapter 8.2.1 --- Impulsive Noise in the received data --- p.195
    Chapter 8.2.2 --- Combined Kohonen Map and Modulation --- p.198
    Chapter 8.2.3 --- Computer Simulations --- p.200
    Chapter 8.3 --- Error Control Scheme for the Transmission of Vector Quantized Data --- p.213
    Chapter 8.3.1 --- Motivation and Background --- p.214
    Chapter 8.3.2 --- Trellis Coded Modulation --- p.216
    Chapter 8.3.3 --- Combined Vector Quantization, Error Control, and Modulation --- p.220
    Chapter 8.3.4 --- Computer Simulations --- p.223
    Chapter 8.4 --- Chapter Summary --- p.226
    Chapter 9 --- Conclusion --- p.232
    Bibliography --- p.23
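    The thesis centers on the bidirectional associative memory (BAM), built by correlation (outer-product) encoding of bipolar pattern pairs and recalled by iterating between the two layers (Chapters 2-3). The sketch below illustrates that standard scheme; the pattern sizes and toy patterns are illustrative assumptions, not data from the thesis.

```python
import numpy as np

def encode_bam(X, Y):
    """Correlation (outer-product) encoding: W = sum_k y_k x_k^T for bipolar pairs."""
    return sum(np.outer(y, x) for x, y in zip(X, Y))

def recall_bam(W, x, max_iters=50):
    """Bidirectional recall: iterate x -> y -> x with sign thresholding until stable."""
    sign = lambda v: np.where(v >= 0, 1, -1)   # ties broken toward +1 for simplicity
    for _ in range(max_iters):
        y = sign(W @ x)
        x_new = sign(W.T @ y)
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x, y

# Toy bipolar pattern pairs (illustrative only).
rng = np.random.default_rng(0)
X = [rng.choice([-1, 1], size=16) for _ in range(3)]
Y = [rng.choice([-1, 1], size=12) for _ in range(3)]
W = encode_bam(X, Y)

# Recall from a noisy version of the first X pattern (two flipped bits).
probe = X[0].copy()
probe[:2] *= -1
x_rec, y_rec = recall_bam(W, probe)
print("recovered X[0]:", np.array_equal(x_rec, X[0]))
print("recovered Y[0]:", np.array_equal(y_rec, Y[0]))
```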

    Multi-level Architecture of Experience-based Neural Representations


    The stability and attractivity of neural associative memories.

    Han-bing Ji. Thesis (Ph.D.)--Chinese University of Hong Kong, 1996. Includes bibliographical references (p. 160-163). Microfiche. Ann Arbor, Mich.: UMI, 1998. 2 microfiches; 11 x 15 cm.

    Interactive and life-long learning for identification and categorization tasks

    This thesis focuses on life-long and interactive learning for recognition tasks. To achieve these goals, a separation into a short-term memory (STM) and a long-term memory (LTM) is proposed. For the incremental build-up of the STM, a similarity-based one-shot learning method was developed. Furthermore, two consolidation algorithms were proposed that enable the incremental learning of LTM representations. Based on the Learning Vector Quantization (LVQ) network architecture, an error-based node insertion rule and a node-dependent learning rate are proposed to enable life-long learning. For learning of categories, a forward feature-selection method was additionally introduced to separate co-occurring categories. Experiments demonstrate the performance of these learning methods on difficult visual recognition problems.
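    The LVQ-based scheme sketched in this abstract (error-based node insertion plus a node-dependent learning rate) can be roughly illustrated as follows. The insertion criterion, the decay of the learning rate with a node's win count, and the toy data are illustrative assumptions, not the thesis's exact algorithms.

```python
import numpy as np

class IncrementalLVQ:
    """LVQ-like learner with error-driven node insertion and per-node learning rates.

    Illustrative sketch only: a node is inserted when the winning prototype has
    the wrong label, and each prototype's learning rate decays with its win count.
    """

    def __init__(self, eta0=0.3):
        self.protos, self.labels, self.wins = [], [], []
        self.eta0 = eta0

    def _winner(self, x):
        return int(np.argmin([np.linalg.norm(x - p) for p in self.protos]))

    def partial_fit(self, x, label):
        x = np.asarray(x, dtype=float)
        if not self.protos:
            self._insert(x, label)
            return
        w = self._winner(x)
        if self.labels[w] != label:
            # Error-based insertion: represent the misclassified sample directly.
            self._insert(x, label)
        else:
            # Node-dependent learning rate: frequently winning nodes move less.
            eta = self.eta0 / (1.0 + self.wins[w])
            self.protos[w] += eta * (x - self.protos[w])
            self.wins[w] += 1

    def _insert(self, x, label):
        self.protos.append(x.copy())
        self.labels.append(label)
        self.wins.append(0)

    def predict(self, x):
        return self.labels[self._winner(np.asarray(x, dtype=float))]

# Tiny usage example with two toy Gaussian clusters.
rng = np.random.default_rng(1)
lvq = IncrementalLVQ()
for _ in range(200):
    c = rng.integers(2)
    lvq.partial_fit(rng.normal(loc=3 * c, scale=0.5, size=2), label=c)
print(lvq.predict([0.1, -0.2]), lvq.predict([3.1, 2.9]))  # expected: 0 1 (with high probability)
```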

    The Role of Synaptic Tagging and Capture for Memory Dynamics in Spiking Neural Networks

    Memory serves to process and store information about experiences such that this information can be used in future situations. The transfer from transient storage into long-term memory, which retains information for hours, days, and even years, is called consolidation. In brains, information is primarily stored via the alteration of synapses, so-called synaptic plasticity. While these changes are at first in a transient early phase, they can be transferred to a late phase, meaning that they become stabilized over the course of several hours. This stabilization has been explained by so-called synaptic tagging and capture (STC) mechanisms. To store and recall memory representations, emergent dynamics arise from the synaptic structure of recurrent networks of neurons. This happens through so-called cell assemblies, which feature particularly strong synapses. It has been proposed that the stabilization of such cell assemblies by STC corresponds to so-called synaptic consolidation, which is observed in humans and other animals in the first hours after acquiring a new memory. The exact connection between the physiological mechanisms of STC and memory consolidation remains, however, unclear. It is equally unknown what influence STC mechanisms exert on further cognitive functions that guide behavior. On timescales of minutes to hours (that is, the timescales of STC), such functions include memory improvement, modification of memories, interference and enhancement of similar memories, and transient priming of certain memories. Thus, diverse memory dynamics may be linked to STC, which can be investigated by employing theoretical methods based on experimental data from the neuronal and the behavioral level. In this thesis, we present a theoretical model of STC-based memory consolidation in recurrent networks of spiking neurons, which are particularly suited to reproduce biologically realistic dynamics. Furthermore, we combine the STC mechanisms with calcium dynamics, which have been found to guide the major processes of early-phase synaptic plasticity in vivo. In three included research articles as well as additional sections, we develop this model and investigate how it can account for a variety of behavioral effects. We find that the model enables the robust implementation of the cognitive memory functions mentioned above. The main steps to this are: 1. demonstrating the formation, consolidation, and improvement of memories represented by cell assemblies; 2. showing that neuromodulator-dependent STC can retroactively control whether information is stored in a temporal or rate-based neural code; and 3. examining the interaction of multiple cell assemblies with transient and attractor dynamics in different organizational paradigms. In summary, we demonstrate several ways by which STC controls the late-phase synaptic structure of cell assemblies. Linking these structures to functional dynamics, we show that our STC-based model implements functionality that can be related to long-term memory. Thereby, we provide a basis for the mechanistic explanation of various neuropsychological effects.
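    The core tagging-and-capture logic (an early-phase synaptic change decays unless a tag coincides with available plasticity-related proteins, which converts it into a persistent late-phase change) can be caricatured for a single synapse as follows. The thresholds, time constants, and variable names below are illustrative assumptions and do not reproduce the spiking-network model developed in the thesis.

```python
import numpy as np

def simulate_stc(T=8 * 3600.0, dt=1.0, induce_at=600.0, strong=True):
    """Toy single-synapse synaptic tagging-and-capture dynamics (illustrative only).

    h: early-phase weight change, decays back to 0
    p: availability of plasticity-related proteins, decays back to 0
    z: late-phase weight change, grows only while the synapse is tagged and p > 0
    """
    tau_h, tau_p, tau_z = 3600.0, 3600.0, 3600.0   # time constants (s), assumed
    theta_tag, theta_pro = 0.5, 0.7                # tagging / protein thresholds, assumed
    h, p, z = 0.0, 0.0, 0.0
    for step in range(int(T / dt)):
        t = step * dt
        if abs(t - induce_at) < dt / 2:            # plasticity-inducing event
            h = 1.0 if strong else 0.6
        tagged = abs(h) > theta_tag
        if abs(h) > theta_pro:                     # strong induction triggers protein synthesis
            p = 1.0
        if tagged and p > 0.1:                     # capture: consolidate into the late phase
            z += dt / tau_z * (h - z)
        h -= dt / tau_h * h                        # early-phase change decays
        p -= dt / tau_p * p                        # protein availability decays
    return z

print("late-phase change after strong induction:", round(simulate_stc(strong=True), 2))
print("late-phase change after weak induction:  ", round(simulate_stc(strong=False), 2))
```

    In this caricature, weak induction sets a tag but never triggers protein synthesis, so the early-phase change simply decays and no late-phase change remains, whereas strong induction is captured into a persistent late-phase weight.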

    Reaching Performance in Healthy Individuals and Stroke Survivors Improves after Practice with Vibrotactile State Feedback

    Stroke causes deficits of cognition, motor, and/or somatosensory functions. These deficits degrade the capability to perform activities of daily living (ADLs). Many research investigations have focused on mitigating the motor deficits of stroke through motor rehabilitation. However, somatosensory deficits are common and may contribute importantly to impairments in the control of functional arm movement. This dissertation advances the goal of promoting functional motor recovery after stroke by investigating the use of a vibrotactile feedback (VTF) body-machine interface (BMI). The VTF BMI is intended to improve control of the contralesional arm of stroke survivors by delivering supplemental limb-state feedback to the ipsilesional arm, where somatosensory feedback remains intact. To develop and utilize a VTF BMI, we first investigated how vibrotactile stimuli delivered on the arm are perceived and discriminated. We determined that stimuli delivered sequentially are better perceived than stimuli delivered simultaneously. Such stimuli can propagate up to 8 cm from the delivery site, so future applications should consider adequate spacing between stimulation sites. We applied these findings to create a multi-channel VTF interface to guide the arm in the absence of vision. In healthy people, we found that short-term practice (less than 2.5 hrs) allows for small improvements in the accuracy of horizontal planar reaching. Long-term practice (about 10 hrs) engages motor learning such that the accuracy and efficiency of reaching improve and the cognitive loading of VTF-guided reaching is reduced. During practice, participants adopted a movement strategy whereby BMI feedback changed in just one channel at a time. From this observation, we sought to develop a practice paradigm that might improve stroke survivors' learning of VTF-guided reaching without vision. We investigated the effects of practice methods (whole practice vs. part practice) on stroke survivors' capability to make VTF-guided arm movements. Stroke survivors were able to improve the accuracy of VTF-guided reaching with practice; however, there were no inherent differences between practice methods. In conclusion, VTF-guided 2D reaching can be practiced and learned by both healthy people and stroke survivors. Future studies should investigate long-term practice in stroke survivors and their capability to use VTF BMIs to improve performance of unconstrained actions, including ADLs.
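    As a rough illustration of the kind of mapping a multi-channel vibrotactile interface can use, the sketch below converts a 2D hand-position error into four tactor intensities (one per direction of needed movement). The channel layout, gain, and saturation are illustrative assumptions, not the interface described in the dissertation.

```python
import numpy as np

def vtf_intensities(hand_xy, target_xy, gain=2.0, max_err=0.10):
    """Map a 2D reach error onto four tactor intensities in [0, 1].

    Channels: +x, -x, +y, -y. Intensity grows with the error component in that
    direction and saturates at max_err metres. Illustrative mapping only.
    """
    err = np.asarray(target_xy, float) - np.asarray(hand_xy, float)
    components = np.array([err[0], -err[0], err[1], -err[1]])
    intensities = np.clip(gain * components / max_err, 0.0, 1.0)
    return dict(zip(["+x", "-x", "+y", "-y"], intensities))

# Hand 5 cm left of and 2 cm below the target: the +x and +y tactors vibrate.
print(vtf_intensities(hand_xy=(0.00, 0.00), target_xy=(0.05, 0.02)))
```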

    Implementation of neural networks as CMOS integrated circuits


    Memory and Cortical Connectivity (Mémoire et connectivité corticale)

    The central nervous system is able to memorize percepts on long time scales (long-term memory), as well as to actively maintain these percepts in memory for a few seconds in order to perform behavioral tasks (working memory). These two phenomena can be studied together in the framework of attractor neural network theory. In this framework, a percept, represented by a pattern of neural activity, is stored as a long-term memory and can be loaded into working memory if the network is able to maintain this pattern of activity in a stable and autonomous manner. Such dynamics are made possible by the specific form of the connectivity of the network. Here we examine models of cortical connectivity at different scales, in order to study which cortical circuits can efficiently sustain attractor neural network dynamics. This is done by showing how the performance of theoretical models, quantified by the network's storage capacity (the number of percepts it is possible to store), depends on the characteristics of the connectivity. In the first part we study fully connected networks, where each neuron can potentially connect to all the other neurons in the network. This situation models cortical columns whose radius is of the order of a few hundred microns. We first compute the storage capacity of networks whose synapses are described by binary variables that are modified in a stochastic manner when patterns of activity are imposed on the network. We generalize this study to the case in which synapses can be in K discrete states, which, for instance, allows one to model the fact that two neighboring pyramidal cells in cortex touch each other at multiple contact points. In the second part, we study modular networks where each module is a fully connected network and connections between modules are diluted. We show how the storage capacity depends on the connectivity between modules and on the organization of the patterns of activity to be stored. The comparison with experimental measurements of large-scale connectivity suggests that these connections can implement an attractor neural network at the scale of multiple cortical areas. Finally, we study a network in which units are connected by weights whose amplitude has a cost that depends on the distance between the units. We use a Gardner approach to compute the distribution of weights that optimizes storage in this network. We interpret each unit of this network as a cortical area and compare the resulting theoretical weight distribution with measures of connectivity between cortical areas.
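    The fully connected, binary stochastic-synapse setting described in the first part can be illustrated with a minimal Willshaw-style toy model in which synapses switch between 0 and 1 with fixed probabilities when a pattern is imposed. The network size, coding level, and switching probabilities below are illustrative assumptions and do not reproduce the capacity calculations of the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

def store_binary_stochastic(patterns, q_pot=0.5, q_dep=0.05):
    """Store sparse binary patterns in a matrix of binary (0/1) synapses.

    When a pattern is imposed, a synapse between two co-active units is
    potentiated to 1 with probability q_pot, and a synapse between an active
    and an inactive unit is depressed to 0 with probability q_dep.
    The probabilities are illustrative assumptions.
    """
    N = patterns.shape[1]
    W = np.zeros((N, N), dtype=np.int8)
    for xi in patterns:
        co = np.outer(xi, xi).astype(bool)
        anti = np.outer(xi, 1 - xi).astype(bool) | np.outer(1 - xi, xi).astype(bool)
        W[co & (rng.random((N, N)) < q_pot)] = 1
        W[anti & (rng.random((N, N)) < q_dep)] = 0
    np.fill_diagonal(W, 0)
    return W

def recall(W, cue, coding_level, steps=10):
    """Recurrent retrieval that keeps the N*coding_level most strongly driven units active."""
    N = W.shape[0]
    k = int(round(coding_level * N))
    x = cue.copy()
    for _ in range(steps):
        drive = W @ x
        x = np.zeros(N, dtype=np.int8)
        x[np.argsort(drive)[-k:]] = 1
    return x

N, f, P = 400, 0.05, 10                      # units, coding level, stored patterns (assumed)
patterns = (rng.random((P, N)) < f).astype(np.int8)
W = store_binary_stochastic(patterns)

# Cue with half of the first pattern's active units deleted, then recall.
cue = patterns[0].copy()
active = np.flatnonzero(cue)
cue[active[: len(active) // 2]] = 0
retrieved = recall(W, cue, f)
overlap = retrieved @ patterns[0] / max(patterns[0].sum(), 1)
print(f"overlap with the stored pattern after recall: {overlap:.2f}")
```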