    How much of the Hippocampus can be Explained by Functional Constraints?

    In the spirit of Marr, we discuss an information-theoretic approach that derives, from the role of the hippocampus in memory, constraints on its anatomical and physiological structure. The observed structure is consistent with such constraints, and, further, we relate the quantitative arguments developed in earlier analytical studies to experimental measures extracted from neuronal recordings in the behaving rat.
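    The abstract does not reproduce the derivation, but the flavor of such quantitative arguments can be illustrated with a standard capacity estimate for an autoassociative network. The formula and the CA3-like numbers below are assumptions chosen for illustration (a Treves-Rolls-style estimate), not figures taken from the paper.

```python
# Back-of-the-envelope, Treves-Rolls-style capacity estimate for an
# autoassociative network: p_max ~ k * C / (a * ln(1/a)), where C is the
# number of recurrent connections per neuron, a is the sparseness of the
# stored firing patterns, and k is a constant of order 0.1-0.2.
import math

def capacity_estimate(C: float, a: float, k: float = 0.2) -> float:
    """Approximate number of patterns storable in an autoassociative net."""
    return k * C / (a * math.log(1.0 / a))

# Rough CA3-like numbers, assumed here for illustration only:
C = 12_000   # recurrent collateral synapses per pyramidal cell
a = 0.02     # fraction of neurons active in a stored pattern
print(f"~{capacity_estimate(C, a):,.0f} storable patterns")  # on the order of 10^4
```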

    Course 13: Of the evolution of the brain

    Hardware Architectures and Implementations for Associative Memories: The Building Blocks of Hierarchically Distributed Memories

    During the past several decades, the semiconductor industry has grown into a global industry with revenues around $300 billion. Intel no longer relies only on transistor scaling for higher CPU performance, but instead focuses more on placing multiple cores on a single die. It has been projected that in 2016 most CMOS circuits will be manufactured with a 22 nm process. These CMOS circuits will have a large number of defects; especially as transistors shrink below the sub-micron scale, formerly deterministic circuits will start to exhibit probabilistic characteristics. Hence, it would be challenging to map traditional computational models onto probabilistic circuits, suggesting a need for fault-tolerant computational algorithms. Biologically inspired algorithms, such as the associative memories (AMs) that serve as the building blocks of the cortical hierarchically distributed memories (HDMs) discussed in this dissertation, exhibit a remarkable match to nano-scale electronics, besides having great fault tolerance. Research on the potential mapping of the HDM onto CMOL (hybrid CMOS/nanoelectronic circuits) nanogrids provides useful insight into the development of non-von Neumann neuromorphic architectures and the semiconductor industry. In this dissertation, we investigated the implementation of AMs on different hardware platforms, including a microprocessor-based personal computer (PC), a PC cluster, field programmable gate arrays (FPGAs), CMOS, and CMOL nanogrids. We studied two types of neural associative memory models, with and without temporal information. In this research, we first decomposed the computational models into basic and common operations, such as the matrix-vector inner product and k-winners-take-all (k-WTA). We then analyzed the baseline performance/price ratio of implementing the AMs on a PC, and continued with a similar performance/price analysis of implementations on more parallel hardware platforms, such as a PC cluster and an FPGA. However, the majority of the research emphasized implementations with all-digital and mixed-signal full-custom CMOS and CMOL nanogrids. We conclude that the mixed-signal CMOL nanogrids exhibit the best performance/price ratio among these hardware platforms. We also highlight some of the trade-offs between dedicated and virtualized hardware circuits for the HDM models. A simple time-multiplexing scheme for the digital CMOS implementations can achieve throughput comparable to that of the mixed-signal CMOL nanogrids.
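    As a concrete illustration of the two primitive operations named above, here is a minimal NumPy sketch of one recall step of a binary associative memory: a matrix-vector inner product followed by k-winners-take-all. The Willshaw-style clipped Hebbian storage rule and all sizes are illustrative assumptions, not the dissertation's exact models.

```python
import numpy as np

def k_wta(x: np.ndarray, k: int) -> np.ndarray:
    """k-winners-take-all: 1 at the k largest entries of x, 0 elsewhere."""
    out = np.zeros(x.shape[0], dtype=np.uint8)
    out[np.argsort(x)[-k:]] = 1
    return out

rng = np.random.default_rng(0)
n, k = 256, 16
patterns = np.array([k_wta(rng.random(n), k) for _ in range(20)])
W = ((patterns.T @ patterns) > 0).astype(np.uint8)   # clipped Hebbian storage
np.fill_diagonal(W, 0)

cue = patterns[0].copy()
cue[rng.choice(np.flatnonzero(cue), size=4, replace=False)] = 0  # degrade the cue
recalled = k_wta(W @ cue, k)       # matrix-vector inner product, then k-WTA
print("bits shared with the stored pattern:", int(recalled @ patterns[0]))
```

    Decomposing the models this way is what makes the hardware comparison possible: each platform is then characterized by how cheaply it realizes the inner product and the k-WTA selection.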

    Attractors, memory and perception

    In this thesis, the first three introductory chapters review the literature on contextual perception, its neural basis, and network modeling of memory. In chapter 4, the first two sections define our model; the next two sections, 4.3 and 4.4, report my original work on the retrieval properties of different network structures and on the network dynamics underlying the response to ambiguous patterns, respectively. The work reported in chapter 5 was done in collaboration with Prof. Bharathi Jagadeesh at the University of Washington, and has been published in the journal "Cerebral Cortex". In this collaboration, Yan Liu, from the group in Seattle, carried out the recording experiments, and I did the data analysis and network simulations. Chapter 6, which presents a network model of "priming" and the "adaptation aftereffect", is my own work. The work reported in sections 4.3 and 4.5 and the whole of chapter 6 is in preparation for publication.
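    To make the setting concrete, the following is a minimal Hopfield-style sketch of attractor retrieval from an ambiguous cue (a morph between two stored patterns). It is meant only to illustrate the kind of dynamics studied in chapter 4, under assumed random patterns and synchronous updates; the thesis's actual networks are more structured.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
p1, p2 = rng.choice([-1, 1], size=(2, n))
W = (np.outer(p1, p1) + np.outer(p2, p2)) / n   # Hebbian weights for two memories
np.fill_diagonal(W, 0)

def retrieve(cue: np.ndarray, steps: int = 20) -> np.ndarray:
    s = cue.astype(float)
    for _ in range(steps):            # synchronous sign updates, for brevity
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

mix = np.where(rng.random(n) < 0.6, p1, p2)   # ambiguous 60/40 morph of p1 and p2
s = retrieve(mix)
print(f"overlap with p1: {s @ p1 / n:+.2f}, with p2: {s @ p2 / n:+.2f}")
```

    Starting from the ambiguous morph, the state falls into the attractor of the dominant stored pattern, which is the basic mechanism behind network responses to ambiguous stimuli.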

    The Role of Synaptic Tagging and Capture for Memory Dynamics in Spiking Neural Networks

    Memory serves to process and store information about experiences such that this information can be used in future situations. The transfer from transient storage into long-term memory, which retains information for hours, days, and even years, is called consolidation. In brains, information is primarily stored via the alteration of synapses, so-called synaptic plasticity. While these changes are at first in a transient early phase, they can be transferred to a late phase, meaning that they become stabilized over the course of several hours. This stabilization has been explained by so-called synaptic tagging and capture (STC) mechanisms. To store and recall memory representations, emergent dynamics arise from the synaptic structure of recurrent networks of neurons. This happens through so-called cell assemblies, which feature particularly strong synapses. It has been proposed that the stabilization of such cell assemblies by STC corresponds to the synaptic consolidation observed in humans and other animals in the first hours after acquiring a new memory. The exact connection between the physiological mechanisms of STC and memory consolidation remains, however, unclear. It is equally unknown what influence STC mechanisms exert on further cognitive functions that guide behavior. On timescales of minutes to hours (that is, the timescales of STC), such functions include memory improvement, modification of memories, interference and enhancement of similar memories, and transient priming of certain memories. Thus, diverse memory dynamics may be linked to STC, which can be investigated by employing theoretical methods based on experimental data from the neuronal and behavioral levels. In this thesis, we present a theoretical model of STC-based memory consolidation in recurrent networks of spiking neurons, which are particularly suited to reproducing biologically realistic dynamics. Furthermore, we combine the STC mechanisms with calcium dynamics, which have been found to guide the major processes of early-phase synaptic plasticity in vivo. In three included research articles as well as additional sections, we develop this model and investigate how it can account for a variety of behavioral effects. We find that the model enables the robust implementation of the cognitive memory functions mentioned above. The main steps to this are: 1. demonstrating the formation, consolidation, and improvement of memories represented by cell assemblies; 2. showing that neuromodulator-dependent STC can retroactively control whether information is stored in a temporal or rate-based neural code; and 3. examining the interaction of multiple cell assemblies with transient and attractor dynamics in different organizational paradigms. In summary, we demonstrate several ways by which STC controls the late-phase synaptic structure of cell assemblies. Linking these structures to functional dynamics, we show that our STC-based model implements functionality that can be related to long-term memory. Thereby, we provide a basis for the mechanistic explanation of various neuropsychological effects.
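    The core idea of tagging and capture can be caricatured in a few lines: an early-phase weight change that decays over hours, a tag set by a sufficiently large early-phase change, and a protein-gated transfer into a stable late-phase weight. The thresholds, time constants, and update rules below are assumptions for illustration only; the thesis's model is a full spiking network with calcium-based plasticity.

```python
# Toy Euler integration of STC at a single synapse, with assumed parameters:
dt = 1.0                          # time step [s]
h0, tau_h = 1.0, 3600.0           # early-phase baseline and decay time [s]
theta_tag, theta_pro = 0.2, 0.4   # tag-setting and protein-synthesis thresholds
tau_p, tau_z = 3600.0, 3600.0     # protein and late-phase time constants [s]

h, p, z = h0 + 0.5, 0.0, 0.0      # state right after a strong LTP induction
for step in range(int(8 * 3600 / dt)):        # simulate 8 hours
    tagged = abs(h - h0) > theta_tag          # tag marks a strongly changed synapse
    synthesis = 1.0 if abs(h - h0) > theta_pro else 0.0
    h += dt * (-(h - h0) / tau_h)             # early-phase change decays away...
    p += dt * (synthesis - p) / tau_p         # ...while proteins are synthesized
    if tagged:
        z += dt * p * (1.0 - z) / tau_z       # ...and captured into the late phase

print(f"late-phase weight after 8 h: {z:.2f}")
```

    Even in this caricature, the early-phase change has largely decayed after eight hours while the late-phase weight retains part of it, which is the signature of consolidation by tagging and capture.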

    AI of Brain and Cognitive Sciences: From the Perspective of First Principles

    In recent years, we have witnessed the great success of AI in various applications, including image classification, game playing, protein structure analysis, language translation, and content generation. Despite these powerful applications, there are still many tasks in our daily life that are rather simple for humans but pose great challenges to AI. These include image and language understanding, few-shot learning, abstract concepts, and low-energy-cost computing. Thus, learning from the brain is still a promising way to shed light on the development of next-generation AI. The brain is arguably the only known intelligent machine in the universe, the product of evolution for animals surviving in the natural environment. At the behavioral level, psychology and the cognitive sciences have demonstrated that human and animal brains can execute very intelligent high-level cognitive functions. At the structural level, cognitive and computational neuroscience have unveiled that the brain has extremely complicated but elegant network forms to support its functions. Over the years, researchers have been gathering knowledge about the structure and functions of the brain, and this process has accelerated recently with the initiation of giant brain projects worldwide. Here, we argue that the general principles of brain function are the most valuable things to inspire the development of AI. These general principles are the standard rules by which the brain extracts, represents, manipulates, and retrieves information, and here we call them the first principles of the brain. This paper collects six such first principles: attractor networks, criticality, random networks, sparse coding, relational memory, and perceptual learning. On each topic, we review its biological background, fundamental properties, potential applications to AI, and future development.
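    Of the six principles listed, sparse coding is perhaps the easiest to show in a few lines: infer a sparse code a for an input x under a linear generative model x ≈ Da. The sketch below uses a few iterations of ISTA (iterative soft-thresholding); the dictionary, sizes, and penalty lam are assumptions for illustration and do not come from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_code = 64, 256
D = rng.standard_normal((n_in, n_code))
D /= np.linalg.norm(D, axis=0)                 # unit-norm dictionary atoms

def sparse_code(x: np.ndarray, lam: float = 0.1, steps: int = 100) -> np.ndarray:
    a = np.zeros(n_code)
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    for _ in range(steps):
        a -= (D.T @ (D @ a - x)) / L           # gradient step on the reconstruction
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)   # soft threshold
    return a

truth = rng.standard_normal(n_code) * (rng.random(n_code) < 0.05)  # sparse source
a = sparse_code(D @ truth)
print("active code units:", int(np.sum(np.abs(a) > 1e-3)), "of", n_code)
```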

    Comprehensive review: Computational modelling of schizophrenia

    Computational modelling has been used to address: (1) the variety of symptoms observed in schizophrenia, using abstract models of behavior (e.g., Bayesian models: top-down, descriptive models of psychopathology); and (2) the causes of these symptoms, using biologically realistic models involving abnormal neuromodulation and/or receptor imbalance (e.g., connectionist and neural networks: bottom-up, realistic models of neural processes). These different levels of analysis have been used to answer different questions (i.e., understanding behavioral vs. neurobiological anomalies) about the nature of the disorder. As such, these computational studies have mostly supported diverging hypotheses of schizophrenia's pathophysiology, resulting in a literature that is not always expanding coherently. Some of these hypotheses are, however, ripe for revision using novel empirical evidence. Here we present a review that first synthesizes the literature on computational modelling of schizophrenia and psychotic symptoms into categories supporting the dopamine, glutamate, GABA, dysconnection, and Bayesian inference hypotheses, respectively. Secondly, we compare model predictions against the accumulated empirical evidence, and finally we identify specific hypotheses that have been left relatively under-investigated.

    Modeling multiple object scenarios for feature recognition and classification using cellular neural networks

    Cellular neural networks (CNNs) have been adopted in the spatio-temporal processing research field as a paradigm of complexity. This is due to the ease with which these networks can be designed for complex spatio-temporal tasks, which has led to an increase in the adoption of CNNs for on-chip VLSI implementations. This dissertation proposes the use of a cellular neural network to model, detect, and classify objects appearing in multiple-object scenes. The proposed algorithm is based on image scene enhancement through anisotropic diffusion; object detection and extraction through binary edge detection and boundary tracing; and object classification through genetically optimized associative networks and texture histograms. The first classification method is based on optimizing the space-invariant feedback template of the zero-input network through genetic operators, while the second method is based on computing diffusion-filtered and modified histograms for object classes to generate decision boundaries that can be used to classify the objects. The primary goal is to design analogic algorithms that can be used to perform these tasks. While the use of genetically optimized associative networks for object learning yields an efficiency of over 95%, the use of texture histograms has been found to be very accurate, though there is a need to develop a better technique for histogram comparisons. The results found using these analogic algorithms affirm that CNNs are well suited for image processing tasks.
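    For readers unfamiliar with the model class, the following is a minimal sketch of the standard Chua-Yang CNN cell dynamics underlying such designs: each cell's state is driven by a leak term, a feedback template A applied to neighbor outputs, a control template B applied to the input, and a bias z. The edge-detection-style template values are a common textbook choice assumed for illustration; they are not the dissertation's genetically optimized templates.

```python
import numpy as np
from scipy.signal import convolve2d

def cnn_run(u, A, B, z, dt=0.05, steps=400):
    """Integrate dx/dt = -x + A*y + B*u + z with piecewise-linear output y."""
    x = np.zeros_like(u)
    Bu = convolve2d(u, B, mode="same")               # input term is constant
    for _ in range(steps):
        y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))    # standard CNN output function
        x = x + dt * (-x + convolve2d(y, A, mode="same") + Bu + z)
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

# An edge-detection-style template pair (a common textbook choice):
A = np.array([[0, 0, 0], [0, 2, 0], [0, 0, 0]], dtype=float)
B = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], dtype=float)

img = -np.ones((32, 32)); img[8:24, 8:24] = 1.0   # black square on white, in [-1, 1]
y = cnn_run(img, A, B, z=-0.5)
print("pixels marked as edges:", int((y > 0.5).sum()))
```

    The space-invariant templates A and B are exactly the objects the dissertation's genetic operators optimize, which is why the zero-input (feedback-only) case can be tuned for associative object learning.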

    Dopaminergic Modulation Shapes Sensorimotor Processing in the Drosophila Mushroom Body

    To survive in a complex and dynamic environment, animals must adapt their behavior based on their current needs and prior experiences. This flexibility is often mediated by neuromodulation within neural circuits that link sensory representations to alternative behavioral responses depending on contextual cues and learned associations. In Drosophila, the mushroom body is a prominent neural structure essential for olfactory learning. Dopaminergic neurons convey salient information about reward and punishment to the mushroom body in order to adjust synaptic connectivity between Kenyon cells, the neurons representing olfactory stimuli, and the mushroom body output neurons that ultimately influence behavior. However, we still lack a mechanistic understanding of how the dopaminergic neurons represent the moment-to-moment experience of a fly and drive changes in this sensory-to-motor transformation. Furthermore, very little is known about how the output neuron pathways lead to the execution of appropriate odor-related behaviors. We took advantage of the mushroom body's modular circuit organization to investigate how the dopaminergic neuron population encodes different contextual cues. In vivo functional imaging of the dopaminergic neurons reveals that they represent both external reinforcement stimuli, like sugar rewards or punitive electric shock, and the fly's motor state, through coordinated and partially antagonistic activity patterns across the population. This multiplexing of motor and reward signals by the dopaminergic neurons parallels the dual roles of dopaminergic inputs to the vertebrate basal ganglia, demonstrating a conserved link between these distantly related neural circuits. We proceed to demonstrate that this dopaminergic signal in the mushroom body modifies neurotransmission with synaptic specificity and temporal precision to coordinately regulate the propagation of sensory signals through the output neurons. To explore how these output pathways ultimately influence olfactory navigation, we have developed a closed-loop olfactory paradigm in which we can monitor and manipulate the mushroom body output neurons as a fly navigates in a virtual olfactory environment. We have begun to probe the mushroom body circuitry in the context of olfactory navigation. These preliminary investigations have led to the identification of putative pathways linking mushroom body output with the circuits that implement odor-tracking behavior, and to the characterization of the complex sensorimotor representations in the dopaminergic network. Our work reveals that the Drosophila dopaminergic system modulates mushroom body output at both acute and enduring timescales to guide immediate behaviors and learned responses.