
    Benchmarking Hebbian learning rules for associative memory

    Associative memory, or content-addressable memory, is an important component function in computer science and information processing and is a key concept in cognitive and computational brain science. Many different neural network architectures and learning rules have been proposed to model the associative memory of the brain while investigating key functions like pattern completion and rivalry, noise reduction, and storage capacity. A less investigated but important function is prototype extraction, where the training set comprises pattern instances generated by distorting prototype patterns and the task of the trained network is to recall the correct prototype pattern given a new instance. In this paper we characterize these different aspects of associative memory performance and benchmark six different learning rules on storage capacity and prototype extraction. We consider only models with Hebbian plasticity that operate on sparse distributed representations with unit activities in the interval [0, 1]. We evaluate both non-modular and modular network architectures and compare performance when trained and tested on different kinds of sparse random binary pattern sets, including correlated ones. We show that covariance learning has a robust but low storage capacity under these conditions and that the Bayesian Confidence Propagation learning rule (BCPNN) is superior by a good margin in all cases except one, reaching a composite score three times higher than that of the second-best learning rule tested. Comment: 24 pages, 9 figures
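    As a rough illustration of the setting this abstract describes, here is a minimal Python (NumPy) sketch of one of the benchmarked families: a covariance-style Hebbian rule that stores sparse binary patterns, with recall by iterated thresholding. The update dynamics and threshold are illustrative assumptions, not the paper's benchmarked implementations.

```python
import numpy as np

def covariance_weights(patterns):
    """Covariance Hebbian rule: W accumulates (x_i - p)(x_j - p) over patterns,
    where p is the mean activity of the sparse binary training set."""
    X = np.asarray(patterns, dtype=float)   # shape: (num_patterns, num_units)
    p = X.mean()                            # global mean activity
    W = (X - p).T @ (X - p)                 # Hebbian outer-product accumulation
    np.fill_diagonal(W, 0.0)                # no self-connections
    return W

def recall(W, cue, theta=0.0, steps=10):
    """Pattern completion: iterate a binary threshold update from a noisy cue."""
    x = np.asarray(cue, dtype=float).copy()
    for _ in range(steps):
        x = (W @ x > theta).astype(float)   # units with positive net input switch on
    return x
```

    Storage capacity can then be probed by counting how many stored patterns survive as fixed points under increasing load, and prototype extraction by cueing the trained network with fresh distorted instances of each prototype.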

    Type 3 adenylyl cyclase, neuronal primary cilia, and hippocampus-dependent memory formation

    Primary cilia are microtubule-based cellular antennae present in most vertebrate cells, including neurons. Neuronal primary cilia abundantly express G-protein coupled receptors (GPCRs) and downstream cAMP signaling components such as type 3 adenylyl cyclase (AC3). Defects of neuronal cilia are associated with many memory-related disorders, such as intellectual disability. Thus far, little is known about how neuronal primary cilia regulate neuronal activity and affect hippocampal memory formation. Episodic memory is thought to be encoded by sparsely distributed memory-eligible neurons in the hippocampus and neocortex. However, it is not clear how memory-eligible neurons interact with one another to form and retrieve a memory. The objectives of my dissertation are to determine the roles of AC3 in regulating cortical protein phosphorylation, to examine the cellular mechanism of episodic memory formation, and to examine how neuronal primary cilia regulate trace fear memory formation.

    Project 1: Compare protein phosphorylation levels in the prefrontal cortex between AC3 knockout (KO) and wildtype (WT) mice. AC3 is a key enzyme mediating ciliary cAMP signaling in neurons and is genetically associated with major depressive disorder (MDD) and autism spectrum disorders (ASD). The major downstream effector protein of cAMP in cells is protein kinase A (PKA), whose activation leads to the phosphorylation of numerous proteins to propagate the signaling downstream. In my mass spectrometry-based phosphoproteomic study using conditional AC3 KO mice, I identified thousands of peptides from prefrontal cortical tissues, some of which are differentially phosphorylated between AC3 WT and KO samples. In addition, this effort led to the identification of over two hundred proteins whose phosphorylation was sex-biased. Surprisingly, a high percentage of these targets (31%) are autism-associated proteins/genes. Hence, this study provides the first phosphoproteomic evidence suggesting that sex-biased protein phosphorylation may contribute to the sexual dimorphism of autism.

    Project 2: Investigate how hippocampal neurons are recruited to interact with each other to encode a trace fear memory. Using in vivo calcium imaging in freely behaving mice, I found that a small portion of highly active hippocampal neurons (termed primed neurons) are actively engaged in memory formation and retrieval. I found that the induction of activity synchronization among primed neurons out of random dynamics is critical for trace memory formation and retrieval. My work provides direct in vivo evidence challenging the long-held paradigm that activation and re-activation of memory cells encode and retrieve memory, respectively. These findings support a new mechanistic model for associative memory formation, in which primed neurons connect with each other to forge a new circuit, bridging a conditional stimulus with an unconditional stimulus.

    Project 3: Develop an analytical method to identify primed neurons and determine the roles of neuronal primary cilia in hippocampal neuronal priming and trace memory formation. Neuronal primary cilia are “cellular antennae” that sense and transduce extracellular signals into the neuronal soma. However, to date little is known about how neuronal primary cilia influence neuronal functions and hippocampal memory. I utilized conditional Ift88 knockout mice (to ablate cilia) as loss-of-function models. I found that inducible conditional Ift88 KOs display more severe learning deficits than their littermate controls. Cilia-ablated mice showed reduced overall neuronal activity, a decreased number of primed neurons, and failed to form burst synchronization. These data support the conclusion that alteration of neuronal primary cilia impairs trace fear memory by decreasing hippocampal neuronal priming and the formation of burst synchronization. This study also provides evidence supporting the importance of burst synchronization among primed neurons for memory formation and retrieval.
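    The abstract does not specify the actual priming criterion, so the sketch below is purely hypothetical: it flags the most active cells in a calcium-imaging activity matrix as "primed" and scores their pairwise synchrony with a simple correlation proxy. The function names and the top-fraction cutoff are illustrative assumptions, not the dissertation's method.

```python
import numpy as np

def find_primed_neurons(activity, top_fraction=0.1):
    """Flag the most active cells as 'primed'.
    activity: (num_neurons, num_frames) array of calcium event rates.
    The top_fraction cutoff is an illustrative choice."""
    rates = activity.mean(axis=1)                    # mean event rate per neuron
    cutoff = np.quantile(rates, 1.0 - top_fraction)  # activity threshold
    return np.flatnonzero(rates >= cutoff)

def mean_pairwise_synchrony(activity, idx):
    """Average pairwise correlation among the selected neurons: a crude proxy
    for the burst synchronization discussed above."""
    corr = np.corrcoef(activity[idx])                # (n, n) correlation matrix
    upper = corr[np.triu_indices_from(corr, k=1)]    # unique neuron pairs only
    return upper.mean()
```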

    Hardware-Amenable Structural Learning for Spike-based Pattern Classification using a Simple Model of Active Dendrites

    This paper presents a spike-based model which employs neurons with functionally distinct dendritic compartments for classifying high-dimensional binary patterns. The synaptic inputs arriving on each dendritic subunit are nonlinearly processed before being linearly integrated at the soma, giving the neuron the capacity to perform a large number of input-output mappings. The model utilizes sparse synaptic connectivity, where each synapse takes a binary value. The optimal connection pattern of a neuron is learned by using a simple, hardware-friendly, margin-enhancing learning algorithm inspired by the mechanism of structural plasticity in biological neurons. The learning algorithm groups correlated synaptic inputs on the same dendritic branch. Since the learning results in modified connection patterns, it can be incorporated into current event-based neuromorphic systems with little overhead. This work also presents a branch-specific spike-based version of this structural plasticity rule. The proposed model is evaluated on benchmark binary classification problems and its performance is compared against that achieved using Support Vector Machine (SVM) and Extreme Learning Machine (ELM) techniques. Our proposed method attains comparable performance while utilizing 10% to 50% fewer computational resources than the other reported techniques. Comment: Accepted for publication in Neural Computation
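    A minimal sketch of the two-stage dendritic architecture described above, assuming a squaring subunit nonlinearity and a binary {0,1} connection matrix per branch; the paper's exact nonlinearity, spiking dynamics, and structural learning rule are not reproduced here.

```python
import numpy as np

def neuron_output(x, branch_conn, g=np.square):
    """Two-stage dendritic neuron: each branch linearly sums the inputs selected
    by its binary synapses, a nonlinearity g models the dendritic subunit, and
    the soma linearly integrates the branch outputs."""
    # x: (num_inputs,) binary input pattern
    # branch_conn: (num_branches, num_inputs) binary {0,1} connection matrix
    branch_sums = branch_conn @ x   # linear synaptic integration per branch
    return g(branch_sums).sum()     # nonlinear subunits, linear somatic sum

def classify(x, conn_plus, conn_minus):
    """A common two-neuron arrangement for binary classification: the class
    whose neuron responds more strongly wins."""
    return int(neuron_output(x, conn_plus) >= neuron_output(x, conn_minus))
```

    Note that learning here operates not on weight values but on which entries of branch_conn are 1: the structural plasticity rule repeatedly rewires low-utility synapses so that correlated inputs end up grouped on the same branch, where the subunit nonlinearity can exploit them.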

    Hardware Architectures and Implementations for Associative Memories: the Building Blocks of Hierarchically Distributed Memories

    During the past several decades, the semiconductor industry has grown into a global industry with revenues around $300 billion. Intel no longer relies only on transistor scaling for higher CPU performance but instead focuses on multiple cores on a single die. It has been projected that in 2016 most CMOS circuits will be manufactured with a 22 nm process. These CMOS circuits will have a large number of defects; especially as transistors shrink into the deep sub-micron regime, formerly deterministic circuits begin to exhibit probabilistic characteristics. Hence, it would be challenging to map traditional computational models onto probabilistic circuits, suggesting a need for fault-tolerant computational algorithms. Biologically inspired algorithms such as associative memories (AMs), the building blocks of the cortical hierarchically distributed memories (HDMs) discussed in this dissertation, are a remarkable match for nano-scale electronics, besides having great fault tolerance. Research on the potential mapping of the HDM onto CMOL (hybrid CMOS/nanoelectronic circuits) nanogrids provides useful insight into the development of non-von Neumann neuromorphic architectures and the semiconductor industry. In this dissertation, we investigated the implementations of AMs on different hardware platforms, including a microprocessor-based personal computer (PC), a PC cluster, field-programmable gate arrays (FPGAs), CMOS, and CMOL nanogrids. We studied two types of neural associative memory models, with and without temporal information. In this research, we first decomposed the computational models into basic and common operations, such as the matrix-vector inner product and k-winners-take-all (k-WTA), as sketched below. We then analyzed the baseline performance/price ratio of implementing the AMs on a PC. We continued with a similar performance/price analysis of the implementations on more parallel hardware platforms, such as the PC cluster and FPGAs. However, the majority of the research emphasized implementations with all-digital and mixed-signal full-custom CMOS and CMOL nanogrids. In this dissertation, we draw the conclusion that the mixed-signal CMOL nanogrids exhibit the best performance/price ratio of the hardware platforms studied. We also highlighted some of the trade-offs between dedicated and virtualized hardware circuits for the HDM models. A simple time-multiplexing scheme for the digital CMOS implementations can achieve throughput comparable to the mixed-signal CMOL nanogrids.
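    The decomposition mentioned above reduces each associative recall step to two primitives. The Python sketch below shows that composition in software form; the data types and thresholding details are illustrative, not the dissertation's actual hardware mapping.

```python
import numpy as np

def inner_product(W, x):
    """Matrix-vector inner product: the dominant compute kernel, which maps onto
    parallel multiply-accumulate units in FPGA/CMOS or analog summation on
    CMOL nanogrids."""
    return W @ x

def k_wta(h, k):
    """k-winners-take-all: keep the k largest activations on, zero the rest.
    In hardware this becomes a compare/select tree or an analog competition."""
    out = np.zeros_like(h, dtype=float)
    out[np.argpartition(h, -k)[-k:]] = 1.0   # indices of the k largest entries
    return out

# One associative-memory recall step is the composition of the two primitives:
# x_next = k_wta(inner_product(W, x), k)
```

    Framing the models this way is what makes the cross-platform performance/price comparison possible: each platform is characterized by how cheaply and how fast it executes these two primitives.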