
    Improving Associative Memory in a Network of Spiking Neurons

    In this thesis we use computational neural network models to examine the dynamics and functionality of the CA3 region of the mammalian hippocampus. The emphasis of the project is to investigate how the dynamic control structures provided by inhibitory circuitry and cellular modification may affect the CA3 region during the recall of previously stored information. The CA3 region is commonly thought to work as a recurrent auto-associative neural network because of its neurophysiological characteristics, such as recurrent collaterals, strong and sparse synapses from external inputs, and plasticity between coactive cells. Associative memory models have been developed using various configurations of mathematical artificial neural networks, first developed over 40 years ago. Within these models, information is stored via changes in the strength of connections between simplified two-state model neurons, and these memories can be recalled when a noisy or partial cue is presented to the network. The type of information such models can store is quite limited by the simplicity of the hard-limiting nodes, which are commonly associated with a binary activation threshold. We build a much more biologically plausible model with complex spiking cell models and realistic synaptic properties between cells, based on many of the details now known about the neuronal circuitry of the CA3 region. We implemented the model in NEURON and MATLAB and tested it by running simulations of storage and recall in the network. By building this model we gain new insights into how different types of neurons, and the complex circuits they form, actually work. The mammalian brain consists of complex resistive-capacitive electrical circuitry formed by the interconnection of large numbers of neurons. A principal cell type of the cortex is the pyramidal cell, the main information processor in these networks. Pyramidal cells are surrounded by diverse populations of interneurons, proportionally far fewer in number, which form connections with pyramidal cells and with other inhibitory cells. By building detailed computational models of recurrent neural circuitry we explore how these microcircuits of interneurons control the flow of information through pyramidal cells and regulate the efficacy of the network. We also explore the effect of cellular modification due to neuronal activity, and of incorporating spatially dependent connectivity, on the network during recall of previously stored information. In particular, we implement a spiking neural network proposed by Sommer and Wennekers (2001), and we consider methods for improving associative memory recall inspired by the work of Graham and Willshaw (1995), who applied mathematical transforms to an artificial neural network to improve its recall quality. The networks tested contain either 100 or 1000 pyramidal cells with 10% connectivity, a partial recall cue, and a global pseudo-inhibition. We investigate three methods. First, we apply localised disynaptic inhibition, which scales the excitatory postsynaptic potentials and provides a fast-acting reversal potential; this should reduce the variability in signal propagation between cells and provide further inhibition to help synchronise network activity. Second, we add a persistent sodium channel to the cell body, which makes the activation threshold non-linear: beyond a given membrane potential the amplitude of the excitatory postsynaptic potential (EPSP) is boosted, pushing cells that receive slightly more excitation (most likely the high units) over the firing threshold. Finally, we incorporate spatial characteristics of the dendritic tree, which increases the probability that a modified synapse exists after 10% random connectivity has been applied throughout the network; we do this by scaling the conductance weights of excitatory synapses to simulate the loss of potential at synapses in the outer dendritic regions due to increased resistance. To further increase the biological plausibility of the network we remove the pseudo-inhibition and apply realistic basket cell models in different configurations of a global inhibitory circuit: a single basket cell providing feedback inhibition; 10% basket cells providing feedback inhibition, with 10 pyramidal cells connecting to each basket cell; and 100% basket cells providing feedback inhibition. These networks are compared and contrasted for recall quality and for their effect on network behaviour. We have found promising results from applying biologically plausible recall strategies and network configurations, which suggests that inhibition and cellular dynamics play a pivotal role in learning and memory.
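    To make the storage-and-recall scheme sketched above concrete, the following is a minimal, non-spiking illustration (not the thesis's NEURON/MATLAB model): a Willshaw-style binary auto-associative network with two-state units, clipped Hebbian storage, and one-step recall from a partial cue. All sizes and thresholds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1000        # number of two-state (binary) units
M = 50          # number of stored patterns
K = 100         # active units per pattern (sparse coding)

# Generate sparse binary patterns.
patterns = np.zeros((M, N), dtype=int)
for p in patterns:
    p[rng.choice(N, K, replace=False)] = 1

# Clipped Hebbian (Willshaw) storage: a synapse is present if the two units
# were ever co-active in any stored pattern.
W = np.zeros((N, N), dtype=int)
for p in patterns:
    W |= np.outer(p, p)
np.fill_diagonal(W, 0)

def recall(cue, threshold):
    """One-step recall: each unit sums its input from active cue units and
    fires if the dendritic sum reaches the threshold."""
    dendritic_sum = W @ cue
    return (dendritic_sum >= threshold).astype(int)

# Partial cue: keep only half of the active units of pattern 0.
target = patterns[0]
cue = target.copy()
active = np.flatnonzero(cue)
cue[rng.choice(active, K // 2, replace=False)] = 0

out = recall(cue, threshold=cue.sum())   # threshold = number of active cue units
precision = (out @ target) / max(out.sum(), 1)
print(f"units recalled: {out.sum()}, fraction belonging to the stored pattern: {precision:.2f}")
```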

    Brain-Inspired Spatio-Temporal Associative Memories for Neuroimaging Data Classification: EEG and fMRI

    Humans learn from many information sources to make decisions. Once this information is learned in the brain, spatio-temporal associations are made, connecting all these sources (variables) in space and time, represented as brain connectivity. In reality, to make a decision we usually have only part of the information, either as a limited number of variables, limited time to make the decision, or both. The brain functions as a spatio-temporal associative memory. Inspired by this ability of the human brain, a brain-inspired spatio-temporal associative memory (STAM) was proposed earlier that utilized the NeuCube brain-inspired spiking neural network framework. Here we applied the STAM framework to neuroimaging data, in the cases of EEG and fMRI, resulting in STAM-EEG and STAM-fMRI. This paper showed that once a NeuCube STAM classification model was trained on complete spatio-temporal EEG or fMRI data, it could be recalled using only part of the time series and/or only part of the variables used. We evaluated both temporal and spatial association and generalization accuracy accordingly. This was a pilot study that opens the field for the development of classification systems on other neuroimaging data, such as longitudinal MRI data, trained on complete data but recalled on partial data. Future research includes STAMs that work on data collected across different settings, labs, and clinics, which may vary in terms of the variables, the time of data collection, and other parameters. The proposed STAM will be further investigated for early diagnosis and prognosis of brain conditions and for diagnostic/prognostic marker discovery.
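    As an illustration of the recall-from-partial-data protocol described above (not the NeuCube/STAM implementation itself), the sketch below trains a toy nearest-centroid classifier on complete synthetic multichannel time series and then recalls it using only a prefix of each time series. The data, classifier, and any accuracy figures are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "EEG-like" data: trials x channels x time, two classes that differ
# in the mean level of the first few channels.
n_trials, n_channels, n_time = 200, 14, 128
X = rng.normal(size=(n_trials, n_channels, n_time))
y = rng.integers(0, 2, n_trials)
X[y == 1, :4, :] += 0.4

def features(trial, t_frac=1.0):
    """Per-channel mean over the first t_frac of the time series.
    (A subset of channels could be selected here in the same way.)"""
    t = max(1, int(trial.shape[1] * t_frac))
    return trial[:, :t].mean(axis=1)

# "Train" a nearest-centroid model on the complete time series.
train, test = np.arange(150), np.arange(150, n_trials)
centroids = {c: np.mean([features(X[i]) for i in train if y[i] == c], axis=0)
             for c in (0, 1)}

def predict(trial, t_frac=1.0):
    f = features(trial, t_frac)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

# Recall the trained model with progressively shorter prefixes of the data.
for frac in (1.0, 0.5, 0.25):
    acc = np.mean([predict(X[i], frac) == y[i] for i in test])
    print(f"fraction of time series used: {frac:.2f}   accuracy: {acc:.2f}")
```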

    Benchmarking Hebbian learning rules for associative memory

    Associative memory, or content-addressable memory, is an important component function in computer science and information processing and is a key concept in cognitive and computational brain science. Many different neural network architectures and learning rules have been proposed to model the associative memory of the brain while investigating key functions like pattern completion and rivalry, noise reduction, and storage capacity. A less investigated but important function is prototype extraction, where the training set comprises pattern instances generated by distorting prototype patterns and the task of the trained network is to recall the correct prototype pattern given a new instance. In this paper we characterize these different aspects of associative memory performance and benchmark six different learning rules on storage capacity and prototype extraction. We consider only models with Hebbian plasticity that operate on sparse distributed representations with unit activities in the interval [0,1]. We evaluate both non-modular and modular network architectures and compare performance when trained and tested on different kinds of sparse random binary pattern sets, including correlated ones. We show that covariance learning has a robust but low storage capacity under these conditions, and that the Bayesian Confidence Propagation learning rule (BCPNN) is superior by a good margin in all cases except one, reaching a composite score three times higher than the second-best learning rule tested.
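    For readers unfamiliar with the two rules named above, the following is a minimal sketch (not the paper's benchmark code) of how covariance weights and BCPNN-style log-odds weights can be computed from sparse binary patterns and used for one-step recall. The pattern sizes, the regularization constant eps, and the top-k recall step are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

N, M, K = 200, 20, 20                      # units, patterns, active units per pattern
patterns = np.zeros((M, N))
for p in patterns:
    p[rng.choice(N, K, replace=False)] = 1.0

a = patterns.mean()                        # mean activity across all units and patterns
eps = 1.0 / M                              # regularizer for empty probability estimates

# Covariance (Hopfield-style) weights.
W_cov = (patterns - a).T @ (patterns - a)
np.fill_diagonal(W_cov, 0.0)

# BCPNN-style weights from unit and pairwise activation probabilities.
p_i = patterns.mean(axis=0) + eps
p_ij = (patterns.T @ patterns) / M + eps**2
W_bcpnn = np.log(p_ij / np.outer(p_i, p_i))
np.fill_diagonal(W_bcpnn, 0.0)
bias = np.log(p_i)

def recall(W, cue, k=K, bias=None):
    """One-step recall: keep the k units with the largest support."""
    s = W @ cue + (bias if bias is not None else 0.0)
    out = np.zeros_like(cue)
    out[np.argsort(s)[-k:]] = 1.0
    return out

target = patterns[0]
cue = target * (rng.random(N) > 0.5)       # drop roughly half of the active units
for name, W, b in [("covariance", W_cov, None), ("BCPNN", W_bcpnn, bias)]:
    out = recall(W, cue, bias=b)
    print(name, "overlap with stored pattern:", int(out @ target), "/", K)
```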

    Characteristic Brain Mechanisms of the Hippocampus in Successful Memory (์„ฑ๊ณต๊ธฐ์–ต์—์„œ์˜ ํ•ด๋งˆ์˜ ํŠน์ง•์  ๋‡Œ ๊ธฐ์ „)

    Ph.D. dissertation -- Seoul National University Graduate School: College of Natural Sciences, Department of Brain and Cognitive Sciences, 2020. 8. ์ •์ฒœ๊ธฐ.
    One of the most intriguing of the human brain's complex functions is the ability to store information provided by experience and to retrieve much of it at will. This capability of memory processing is critical to human survival: humans guide their actions based on a given stimulus (e.g., an item) in the environment, and can do so even when the stimulus is no longer present owing to their memory of it. A fundamental question of memory is why some experiences are remembered whereas others are forgotten. Since Scoville and Milner's characterization of patient H.M., who demonstrated severe recognition memory deficits following damage to the medial temporal lobe (MTL), the hippocampus has been extensively studied as one of the key neural substrates for memory. In line with this, the roles of the hippocampus have been explored in several ways. One approach is to test the causal role of the hippocampus in the memory process using direct electrical stimulation of the hippocampal region. Another is to investigate the neural correlates of the hippocampus using intracranial electroencephalography (iEEG) field potentials and single-neuron action potentials (spikes) recorded directly from the hippocampus. The present thesis focuses on providing direct electrophysiological evidence for the role of the human hippocampus in episodic memory, which may help fill a gap that has remained in the field for several years. Here, I show how direct hippocampal stimulation affects human behavior and present characteristic neural correlates of successful memory in the hippocampus. In the first study, building on previous findings on the hippocampus, I sought to address whether the hippocampus would show functional causality in memory tasks and elicit different neural characteristics depending on the memory task applied. I found that hippocampal stimulation modulated memory performance in a task-dependent manner, improving associative memory performance while impairing item memory performance. These results of task-specific memory modulation fit with the observation that the associative task elicited stronger theta oscillations than the single-item task. In the second study, I tested whether successful memory formation relies on hippocampal neuronal activity engaged before an event. I found that hippocampal pre-stimulus spiking activity (elicited by a cue presented just before a word) predicted subsequent memory. Activity during encoding (during-stimulus) also showed a trend towards predicting subsequent memory, but was simply a continuation of the pre-stimulus activity. These findings indicate that successful memory formation in humans is predicted by pre-stimulus activity and suggest that the preparatory mobilization of neural processes before encoding benefits episodic memory performance. Taken together, the current findings suggest that intervals of poor memory encoding can be identified even before a stimulus is presented and may be rescued with targeted stimulation of the hippocampus before stimulus onset. (A minimal sketch of the kind of subsequent-memory theta-power contrast analysed here follows the outline below.)
    One of the most intriguing of the human brain's complex functions is the ability to store information based on experience and to recall the stored information at will. Humans decide their actions based on a given stimulus and, even when the stimulus is absent, decide how to act based on their memory of it, so the capacity for memory is critical for survival; the most fundamental question about memory concerns its storage mechanism, that is, why some memories are stored while others are forgotten. The amnesic patient H.M., first reported by Scoville and Milner, showed severe impairment of recognition memory after damage to the temporal region, and the hippocampus of the human brain has since been widely studied as one of the key regions responsible for memory. The influence and role of the hippocampus in memory have been examined in a variety of ways. One approach is to apply direct electrical stimulation to the brain to confirm the role of the hippocampus during memory processing, which has become possible as the human brain became accessible through work with epilepsy patients. A second approach is electrophysiological, characterizing neuronal activity during successful memory through extracellularly recorded action potentials (spikes). This thesis focuses on presenting the electrophysiological characteristics of the role and mechanisms of the hippocampus in successful memory, a topic long debated and under-explored in the field, by applying stimulation and measuring the signals of neurons. In this thesis I report brain stimulation and single-unit activity findings on successful memory formation and retrieval in humans. Based on previous experimental and behavioral findings on the causal relationship between the hippocampus and memory and on hippocampal mechanisms during memory processing, I (1) applied direct electrical stimulation to the hippocampus to reveal differences in memory performance and the hippocampal neural mechanisms associated with different memory tasks, and (2) examined the characteristics of neuronal firing patterns that appear while successful memories are formed. Through this work, I expect that it may become possible to induce stimuli that would otherwise lead to memory failure to be stored as successful memories by targeting the hippocampus with electrical stimulation, not only during the interval in which a stimulus is presented but also before the stimulus is given.
    Contents:
    SECTION 1. INTRODUCTION
        CHAPTER 1: Human Memory System
            1.1. The hippocampus and memory
            1.2. The structure of the hippocampus
        CHAPTER 2: Human Memory Research: how to see a memory
            2.1. Clinical rationale for invasive recordings with intracranial electrodes
            2.2. Human intracranial EEG
            2.3. Single unit activity recording and spike sorting in human
            2.4. Direct brain stimulation study
        CHAPTER 3: Human Memory Research: hippocampal activity for understanding successful memory formation
            3.1. Functional role of human intracranial oscillatory activity in successful memory mechanism
                3.1.1. Theta oscillations
                3.1.2. Gamma oscillations
            3.2. Brain stimulation for memory enhancement
            3.3. Single unit activity study in memory
        CHAPTER 4: Purpose of the Present Study
    SECTION 2. EXPERIMENTAL STUDY
        CHAPTER 5: The importance of the hippocampal oscillatory activity for successful memory: direct brain stimulation study
            5.1. Abstract
            5.2. Introduction
            5.3. Materials and Methods
                5.3.1. Patients
                5.3.2. Electrode localization
                5.3.3. Memory task
                5.3.4. Brain stimulation
                5.3.5. Neuropsychological memory test
                5.3.6. Analysis of memory performance and electrophysiological data
            5.4. Results
                5.4.1. Hippocampal stimulation improves associative memory but impairs item memory
                5.4.2. Stimulation-induced memory enhancement is reflected in increased theta power during retrieval
                5.4.3. Associative memory elicits higher theta power than item memory during encoding
                5.4.4. Successful memory encoding elicits higher theta power in both memory tasks
                5.4.5. Stimulation-mediated memory effect is greater in subjects with poorer baseline cognitive function
            5.5. Discussion
                5.5.1. Summary
                5.5.2. Task-dependent effects of hippocampal stimulation on memory
                5.5.3. Theta activity as a neural signature for memory enhancement
                5.5.4. Clinical implications
                5.5.5. Limitations
                5.5.6. Conclusion
        CHAPTER 6: Hippocampal pre-stimulus activity predicts later memory success
            6.1. Abstract
            6.2. Introduction
            6.3. Materials and Methods
                6.3.1. Patients
                6.3.2. Electrodes
                6.3.3. Task and Stimuli
                6.3.4. Electrophysiological recordings and Spike sorting
                6.3.5. Analysis of iEEG field potentials
            6.4. Results
                6.4.1. Behavioral results
                6.4.2. Spiking properties of hippocampal neurons
                6.4.3. Hippocampal pre-stimulus activity correlates with successful memory
                6.4.4. Hippocampal pre-stimulus spiking activity correlates with high gamma field potentials
            6.5. Discussion
                6.5.1. Summary
                6.5.2. Comparison with previous findings
                6.5.3. Possible mechanism underlying pre-stimulus activity
                6.5.4. Conclusion
    SECTION 3. GENERAL CONCLUSION
        CHAPTER 7: General Conclusion and Perspective
    Bibliography
    Abstract in Korean (๊ตญ๋ฌธ์ดˆ๋ก)
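    The sketch below illustrates, on synthetic data only, the kind of subsequent-memory theta-power contrast referred to above: band-pass filtering an encoding epoch in the 4-8 Hz range and comparing mean theta power for remembered versus forgotten trials. It is not the thesis's analysis pipeline; the sampling rate, epoch length, and simulated effect size are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(3)
fs = 1000                                   # sampling rate (Hz)
t = np.arange(0, 1.5, 1 / fs)               # 1.5 s encoding epoch

def make_trial(theta_amp):
    """Synthetic iEEG trace: slow random-walk drift plus a 6 Hz theta component."""
    drift = np.cumsum(rng.normal(size=t.size)) * 0.02
    return drift + theta_amp * np.sin(2 * np.pi * 6 * t)

# Simulated subsequent-memory conditions: remembered trials carry more theta.
remembered = [make_trial(1.0) for _ in range(30)]
forgotten = [make_trial(0.5) for _ in range(30)]

b, a = butter(4, [4, 8], btype="bandpass", fs=fs)

def theta_power(x):
    """Mean theta-band power via band-pass filtering and the Hilbert envelope."""
    env = np.abs(hilbert(filtfilt(b, a, x)))
    return np.mean(env ** 2)

p_rem = np.mean([theta_power(x) for x in remembered])
p_for = np.mean([theta_power(x) for x in forgotten])
print(f"mean theta power  remembered: {p_rem:.3f}  forgotten: {p_for:.3f}")
```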

    Distinct Effects of Perceptual Quality on Auditory Word Recognition, Memory Formation and Recall in a Neural Model of Sequential Memory

    Adults with sensory impairment, such as reduced hearing acuity, have an impaired ability to recall identifiable words, even when their memory is otherwise normal. We hypothesize that poorer stimulus quality causes weaker activity in neurons responsive to the stimulus and allows more time to elapse between stimulus onset and identification. The weaker activity and increased delay to stimulus identification reduce the necessary strengthening of connections between neurons active before stimulus presentation and neurons active at the time of stimulus identification. We test our hypothesis with a biologically motivated computational model, which performs item recognition, memory formation and memory retrieval. In our simulations, spiking neurons are distributed into pools representing either items or context, in two separate but connected winner-takes-all (WTA) networks. We include associative Hebbian learning by comparing multiple forms of spike-timing-dependent plasticity (STDP), which strengthen synapses between coactive neurons during stimulus identification. Synaptic strengthening by STDP can be sufficient to reactivate neurons during recall if their activity during a prior stimulus rose strongly and rapidly. We find that a single poor-quality stimulus impairs recall of neighboring stimuli as well as of the weak stimulus itself. We demonstrate that within the WTA paradigm of word recognition, reactivation of separate, connected sets of non-word context cells permits reverse recall. Also, only with such coactive context cells does slowing the rate of stimulus presentation increase recall probability. We conclude that significant temporal overlap of neural activity patterns, absent from individual WTA networks, is necessary to match behavioral data for word recall.
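    A minimal sketch of the mechanism hypothesized above (not the published model): under pair-based STDP, a context population paired with a strongly and rapidly responding item population accrues a larger net weight change than one paired with a weak, delayed response. The STDP constants, firing rates, and presentation window are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Pair-based additive STDP: pre-before-post potentiates, post-before-pre depresses.
A_plus, A_minus, tau = 0.01, 0.012, 20.0     # ms

def stdp_dw(pre_spikes, post_spikes):
    """Total weight change summed over all pre/post spike pairs."""
    dw = 0.0
    for tpre in pre_spikes:
        for tpost in post_spikes:
            dt = tpost - tpre
            dw += A_plus * np.exp(-dt / tau) if dt > 0 else -A_minus * np.exp(dt / tau)
    return dw

def poisson_spikes(rate_hz, t_start, t_stop):
    """Homogeneous Poisson spike times (ms) between t_start and t_stop."""
    n = rng.poisson(rate_hz * (t_stop - t_start) / 1000.0)
    return np.sort(rng.uniform(t_start, t_stop, n))

# Context cells fire throughout the 300 ms presentation window.
context = poisson_spikes(40, 0, 300)

# A clearly perceived item is identified early and drives strong firing;
# a degraded item is identified late and fires weakly.
item_clear = poisson_spikes(60, 50, 300)
item_degraded = poisson_spikes(25, 180, 300)

print("context -> item weight change, clear stimulus:    %.3f" % stdp_dw(context, item_clear))
print("context -> item weight change, degraded stimulus: %.3f" % stdp_dw(context, item_degraded))
```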

    Structural Plasticity and Associative Memory in Balanced Neural Networks With Spike-Time Dependent Inhibitory Plasticity

    Several homeostatic mechanisms enable the brain to maintain desired levels of neuronal activity. One of these, homeostatic structural plasticity, has been reported to restore activity in networks disrupted by peripheral lesions by altering their neuronal connectivity. While multiple lesion experiments have studied the changes in neurite morphology that underlie modifications of synapses in these networks, the underlying mechanisms that drive these changes and the effects of the altered connectivity on network function are yet to be explained. Experimental evidence suggests that neuronal activity modulates neurite morphology and may stimulate neurites to selectively sprout or retract to restore network activity levels. In this study, a new spiking network model was developed to investigate these activity-dependent growth regimes of neurites. Simulations of the model accurately reproduce network rewiring after peripheral lesions as reported in experiments. To ensure that these simulations closely resembled the behaviour of networks in the brain, a biologically realistic network model exhibiting the low-frequency asynchronous irregular (AI) activity observed in cerebral cortex was deafferented. Furthermore, to study the functional effects of peripheral lesioning and of subsequent network repair by homeostatic structural plasticity, associative memories were stored in the network and their recall performance was compared before deafferentation and afterwards, during the repair process. The simulation results indicate that the re-establishment of activity in neurons both within and outside the deprived region, the Lesion Projection Zone (LPZ), requires opposite activity-dependent growth rules for excitatory and inhibitory post-synaptic elements. Analysis of these growth regimes indicates that they also contribute to the maintenance of activity levels in individual neurons. In this model, the directional formation of synapses that is observed in experiments requires that pre-synaptic excitatory and inhibitory elements also follow opposite growth rules. Furthermore, it was observed that the proposed model of homeostatic structural plasticity and the inhibitory synaptic plasticity mechanism that balances the AI network are both necessary for successful rewiring. Next, even though average activity was restored to deprived neurons, these neurons did not retain their AI firing characteristics after repair. Finally, the recall performance of associative memories, which deteriorated after deafferentation, was not restored after network reorganisation.
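    The following is a highly simplified sketch of the opposite activity-dependent growth rules described above (in the spirit of homeostatic structural plasticity models, not the study's actual simulation): neurons whose activity falls below a set point after a simulated lesion grow free excitatory post-synaptic elements and retract inhibitory ones, and vice versa. All constants, the activity model, and the LPZ size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

n, steps, dt = 100, 500, 0.1
target = 1.0                    # homeostatic set point for the activity trace
nu = 0.05                       # growth-rate constant for synaptic elements

activity = np.full(n, target)   # slow "calcium-like" activity trace per neuron
exc_elems = np.zeros(n)         # free excitatory post-synaptic elements
inh_elems = np.zeros(n)         # free inhibitory post-synaptic elements

# Simulate a peripheral lesion: a subset of neurons (the LPZ) loses input drive.
lpz = np.arange(20)
drive = np.full(n, target)
drive[lpz] = 0.2

for _ in range(steps):
    # Activity relaxes toward the external drive (a stand-in for network dynamics).
    activity += dt * (drive - activity) + 0.01 * rng.normal(size=n)

    deviation = target - activity            # > 0 when a neuron is too quiet
    # Opposite growth rules: a quiet neuron grows excitatory post-synaptic
    # elements (to gain excitation) and retracts inhibitory ones, and vice versa.
    exc_elems += dt * nu * deviation
    inh_elems -= dt * nu * deviation
    np.clip(exc_elems, 0, None, out=exc_elems)
    np.clip(inh_elems, 0, None, out=inh_elems)

print("mean free excitatory elements inside LPZ :", round(float(exc_elems[lpz].mean()), 2))
print("mean free excitatory elements outside LPZ:", round(float(exc_elems[20:].mean()), 2))
```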

    Neuromorphic Computing Applications in Robotics

    Deep learning achieves remarkable success through training on massive labeled datasets. However, these heavy data demands impede the feasibility of deep learning in edge-computing scenarios, which suffer from data scarcity. Rather than relying on labeled data, animals learn by interacting with their surroundings and memorizing the relationships between events and objects, a learning paradigm referred to as associative learning. Successful implementation of associative learning imitates self-learning schemes analogous to those of animals and thereby addresses these challenges of deep learning. Current state-of-the-art implementations of associative memory are limited to simulations with small-scale and offline paradigms. This work therefore implements associative memory with an Unmanned Ground Vehicle (UGV) and neuromorphic hardware, specifically Intel's Loihi, in an online learning scenario. The system emulates classic associative learning in rats, with the UGV taking the place of the rat; specifically, it successfully reproduces fear conditioning with no pretraining procedure or labeled datasets. The UGV becomes capable of autonomously learning the cause-and-effect relationship between the light stimulus and the vibration stimulus and of exhibiting a movement response to demonstrate the memorization. Hebbian learning dynamics are used to update the synaptic weights during the associative learning process. The Intel Loihi chip is integrated with this online learning system to process visual signals with a specialized neural assembly. While processing, Loihi's average power usage is 30 mW for computing logic and 29 mW for memory.
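    A minimal rate-based sketch of the Hebbian association underlying the fear-conditioning setup described above (not the Loihi implementation and not Intel's API): pairing a light-like conditioned stimulus with a vibration-like unconditioned stimulus strengthens the CS-to-response weight until the CS alone drives the response. The learning rate, threshold, and trial count are illustrative assumptions.

```python
import numpy as np

eta = 0.2                 # Hebbian learning rate
w_cs = 0.0                # light (CS) -> response neuron weight, initially untrained
w_us = 1.0                # vibration (US) -> response neuron weight (innate)
threshold = 0.5           # firing threshold of the response neuron

def response(cs, us):
    """Rate-coded response neuron: weighted sum of the two stimuli."""
    return w_cs * cs + w_us * us

# Conditioning phase: light and vibration are presented together.
for trial in range(10):
    cs, us = 1.0, 1.0
    post = response(cs, us)
    # Hebbian update, gated by whether the response neuron actually fired.
    w_cs += eta * cs * post * (post > threshold)

# Test phase: the light alone now drives the movement response.
print("response to light alone after conditioning:", round(response(1.0, 0.0), 2))
print("learned CS weight:", round(w_cs, 2))
```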

    The Performance of Associative Memory Models with Biologically Inspired Connectivity

    This thesis is concerned with one important question in artificial neural networks: how the biologically inspired connectivity of a network affects its associative memory performance. In recent years, research on the mammalian cerebral cortex, which bears the main responsibility for associative memory function in the brain, suggests that the connectivity of this cortical network is far from fully connected, as is commonly assumed in traditional associative memory models. It is found to be a sparse network with interesting connectivity characteristics such as "small world network" properties, represented by a short Mean Path Length, a high Clustering Coefficient, and high Global and Local Efficiency. Most of the networks in this thesis are therefore sparsely connected. There is, however, no conclusive evidence of how these different connectivity characteristics affect the associative memory performance of a network. This thesis addresses the question using networks with different types of connectivity inspired by biological evidence. The findings of this programme are unexpected and important. Results show that the performance of a non-spiking associative memory model is predicted by its linear correlation with the Clustering Coefficient of the network, regardless of the detailed connectivity patterns. This is particularly important because the Clustering Coefficient is a static measure of one aspect of connectivity, whilst associative memory performance reflects the result of a complex dynamic process. On the other hand, this research reveals that improvements in the performance of a network do not necessarily rely directly on an increase in the network's wiring cost; it is therefore possible to construct networks with high associative memory performance but relatively low wiring cost. In particular, Gaussian-distributed connectivity is found to achieve the best performance with the lowest wiring cost of all the connectivity models examined. Our results from this programme also suggest that a modular network with an appropriate configuration of Gaussian-distributed connectivity, both internal to each module and across modules, can perform nearly as well as the Gaussian-distributed non-modular network. Finally, a comparison between non-spiking and spiking associative memory models suggests that, in terms of associative memory performance, the implications of connectivity seem to transcend the details of the actual neural model, that is, whether the neurons are spiking or non-spiking.
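    To make the connectivity measures discussed above concrete, here is a small sketch (not the thesis's models) that wires units on a ring with a Gaussian distance-dependent connection probability at roughly 10% connectivity, then reports mean degree, Clustering Coefficient, and a simple wiring-cost proxy. The network size, the Gaussian width sigma, and the ring-distance cost measure are assumptions.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(6)

N = 200                    # units arranged on a ring (distance = ring distance)
k_mean = 20                # desired mean degree (~10% connectivity)
sigma = 15.0               # width of the Gaussian connection profile

def ring_distance(i, j, n=N):
    d = abs(i - j)
    return min(d, n - d)

# Scale the Gaussian profile so that the expected degree is roughly k_mean.
dists = np.array([ring_distance(0, j) for j in range(1, N)])
gauss = np.exp(-dists**2 / (2 * sigma**2))
scale = k_mean / gauss.sum()

G = nx.Graph()
G.add_nodes_from(range(N))
for i in range(N):
    for j in range(i + 1, N):
        p = scale * np.exp(-ring_distance(i, j)**2 / (2 * sigma**2))
        if rng.random() < min(p, 1.0):
            G.add_edge(i, j)

wiring_cost = np.mean([ring_distance(u, v) for u, v in G.edges])
print("mean degree           :", round(2 * G.number_of_edges() / N, 1))
print("clustering coefficient:", round(nx.average_clustering(G), 3))
print("mean wiring cost      :", round(wiring_cost, 2))
```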

    Multi-modal association learning using spike-timing dependent plasticity (STDP)

    We propose an associative learning model that can integrate facial images with speech signals to target a subject in a reinforcement learning (RL) paradigm. In this approach, the rules of learning involve associating paired stimuli (stimulus–stimulus, i.e., face–speech), also known as predictor–choice pairs. Prior to a learning simulation, we extract the features of the biometrics used in the study. For facial features, we experiment with two approaches: principal component analysis (PCA)-based Eigenfaces and singular value decomposition (SVD). For speech features, we use wavelet packet decomposition (WPD). The experiments show that the PCA-based Eigenfaces feature extraction approach produces better results than SVD. We implement the proposed learning model using the spike-timing-dependent plasticity (STDP) algorithm, which depends on the timing and rate of pre- and post-synaptic spikes. The key contribution of our study is the implementation of learning rules via STDP and firing rate in spatiotemporal neural networks based on the Izhikevich spiking model. We implement learning of response-group associations following reward-modulated STDP in the RL setting, wherein the firing rate of the response groups determines the reward that is given. We perform a number of experiments using existing face samples from the Olivetti Research Laboratory (ORL) dataset and speech samples from TIDigits. After several experiments and simulations to recognize a subject, the results show that the proposed learning model can associate the predictor (face) with the choice (speech) at optimum performance rates of 77.26% and 82.66% for training and testing, respectively. We also perform learning using real data: an experiment is conducted on a sample of face–speech data collected in a manner similar to the initial data. The performance results are 79.11% and 77.33% for training and testing, respectively. Based on these results, the proposed learning model can produce high learning performance when combining heterogeneous data (face–speech). This finding opens possibilities for expanding RL into the field of biometric authentication.
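    The following is a minimal sketch of reward-modulated STDP of the kind described above (not the paper's Izhikevich-network implementation): STDP pairings between predictor and response spikes are accumulated into a decaying eligibility trace, and the weight change is gated by a reward delivered at the end of the trial. The spike trains, time constants, and reward values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

A_plus, A_minus, tau_stdp = 0.01, 0.012, 20.0   # ms, pair-based STDP constants
tau_elig = 200.0                                 # ms, eligibility-trace decay
eta = 0.5                                        # learning rate applied to the reward signal

def stdp_kernel(dt):
    """Pair-based STDP: pre-before-post (dt > 0) potentiates, otherwise depresses."""
    return A_plus * np.exp(-dt / tau_stdp) if dt > 0 else -A_minus * np.exp(dt / tau_stdp)

def reward_modulated_update(w, pre_spikes, post_spikes, reward_time, reward):
    """Accumulate STDP pairings into a decaying eligibility trace, then apply
    the reward-gated weight change at reward_time."""
    eligibility = 0.0
    for tpre in pre_spikes:
        for tpost in post_spikes:
            # Each pairing leaves a trace that decays until the reward arrives.
            t_pair = max(tpre, tpost)
            eligibility += stdp_kernel(tpost - tpre) * np.exp(-(reward_time - t_pair) / tau_elig)
    return w + eta * reward * eligibility

# Predictor (face) neurons fire, then the two candidate response (speech) groups fire.
pre = np.sort(rng.uniform(0, 200, 30))            # face-feature spikes (ms)
post_correct = np.sort(rng.uniform(20, 220, 25))  # correct response group
post_wrong = np.sort(rng.uniform(20, 220, 25))    # competing response group

w_correct = reward_modulated_update(0.5, pre, post_correct, reward_time=250, reward=+1.0)
w_wrong = reward_modulated_update(0.5, pre, post_wrong, reward_time=250, reward=-1.0)
print("weight to rewarded response group:", round(w_correct, 3))
print("weight to punished response group:", round(w_wrong, 3))
```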