2,546 research outputs found

    How connectivity rules and synaptic properties shape the efficacy of pattern separation in the entorhinal cortex–dentate gyrus–CA3 network

    Pattern separation is a fundamental brain computation that converts small differences in input patterns into large differences in output patterns. Several synaptic mechanisms of pattern separation have been proposed, including code expansion, inhibition and plasticity; however, which of these mechanisms play a role in the entorhinal cortex (EC)–dentate gyrus (DG)–CA3 circuit, a classical pattern separation circuit, remains unclear. Here we show that a biologically realistic, full-scale EC–DG–CA3 circuit model, including granule cells (GCs) and parvalbumin-positive inhibitory interneurons (PV+-INs) in the DG, is an efficient pattern separator. Both external gamma-modulated inhibition and internal lateral inhibition mediated by PV+-INs contributed substantially to pattern separation. Both local connectivity and fast signaling at GC–PV+-IN synapses were important for maximum effectiveness. Similarly, mossy fiber synapses with conditional detonator properties contributed to pattern separation. By contrast, perforant path synapses with Hebbian synaptic plasticity and direct EC–CA3 connections shifted the network towards pattern completion. Our results demonstrate that the specific properties of cells and synapses optimize higher-order computations in biological networks and might be useful for improving the deep learning capabilities of technical networks.
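
    The expansion and inhibition mechanisms named above lend themselves to a compact illustration: projecting a dense input onto a much larger population (as EC inputs diverge onto GCs) and keeping only the most strongly driven units (a stand-in for PV+-IN lateral inhibition) reduces the overlap between similar patterns. The following sketch demonstrates that principle only, not the full-scale circuit model of the paper; the expansion ratio, sparsity level, and random projection are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def k_winners(x, k):
    """Lateral-inhibition stand-in: only the k most excited units fire."""
    out = np.zeros_like(x)
    out[np.argsort(x)[-k:]] = 1.0
    return out

def separate(pattern, W, k):
    """Project a dense input onto a larger population, then sparsify."""
    return k_winners(W @ pattern, k)

n_in, n_out, k = 100, 1000, 20            # 10x expansion, 2% active (assumed)
W = rng.random((n_out, n_in))             # random EC->GC-like projection

# Two highly overlapping input patterns (90% shared active units).
a = np.zeros(n_in); a[:50] = 1.0
b = a.copy(); b[45:50] = 0.0; b[50:55] = 1.0

corr_in = np.corrcoef(a, b)[0, 1]
corr_out = np.corrcoef(separate(a, W, k), separate(b, W, k))[0, 1]
print(f"input correlation  {corr_in:.2f}")   # high overlap going in
print(f"output correlation {corr_out:.2f}")  # typically well below the input correlation
```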

    Improving Associative Memory in a Network of Spiking Neurons

    In this thesis we use computational neural network models to examine the dynamics and functionality of the CA3 region of the mammalian hippocampus. The emphasis of the project is to investigate how the dynamic control structures provided by inhibitory circuitry and cellular modification may affect the CA3 region during the recall of previously stored information. The CA3 region is commonly thought to work as a recurrent auto-associative neural network because of its neurophysiological characteristics, such as recurrent collaterals, strong and sparse synapses from external inputs, and plasticity between coactive cells.

    Associative memory models have been built from various configurations of mathematical artificial neural networks since they were first developed over 40 years ago. Within these models, information is stored via changes in the strength of connections between simplified two-state model neurons, and memories can be recalled when a noisy or partial cue is presented to the network. The type of information such models can store is quite limited, owing to the simplicity of the hard-limiting nodes with their binary activation threshold. We build a much more biologically plausible model with complex spiking cell models and realistic synaptic properties between cells, based upon many of the details we now know of the neuronal circuitry of the CA3 region. We implemented the model in Neuron and Matlab and tested it by running simulations of storage and recall in the network. By building this model we gain new insights into how different types of neurons, and the complex circuits they form, actually work.

    The mammalian brain consists of complex resistive-capacitive electrical circuitry formed by the interconnection of large numbers of neurons. A principal cell type is the cortical pyramidal cell, the main information processor in our neural networks. Pyramidal cells are surrounded by diverse populations of interneurons, proportionally fewer in number, which form connections with pyramidal cells and with other inhibitory cells. By building detailed computational models of recurrent neural circuitry we explore how these microcircuits of interneurons control the flow of information through pyramidal cells and regulate the efficacy of the network. We also explore the effects of activity-dependent cellular modification and of spatially dependent connectivity on the network during recall of previously stored information.

    In particular we implement a spiking neural network proposed by Sommer and Wennekers (2001), and we consider methods for improving associative memory recall inspired by the work of Graham and Willshaw (1995), who applied mathematical transforms to an artificial neural network to improve its recall quality. The networks tested contain either 100 or 1000 pyramidal cells with 10% connectivity, a partial cue instantiated, and a global pseudo-inhibition. We investigate three methods. Firstly, applying localised disynaptic inhibition, which will proportionalise the excitatory postsynaptic potentials and provide a fast-acting reversal potential; this should reduce the variability in signal propagation between cells and provide further inhibition to help synchronise the network activity. Secondly, implementing a persistent sodium channel in the cell body, which non-linearises the activation threshold: beyond a given membrane potential, the amplitude of the excitatory postsynaptic potential (EPSP) is boosted, pushing cells that receive slightly more excitation (most likely high units) over the firing threshold. Finally, implementing spatial characteristics of the dendritic tree, which allows a greater probability that a modified synapse exists after 10% random connectivity has been applied throughout the network. We apply spatial characteristics by scaling the conductance weights of excitatory synapses, simulating the loss of potential at synapses in the outer dendritic regions due to increased resistance.

    To further increase the biological plausibility of the network, we remove the pseudo-inhibition and apply realistic basket cell models in differing configurations of a global inhibitory circuit: a single basket cell providing feedback inhibition; 10% basket cells providing feedback inhibition, with 10 pyramidal cells connecting to each basket cell; and 100% basket cells providing feedback inhibition. These networks are compared and contrasted for their effect on recall quality and network behaviour. We have found promising results from applying biologically plausible recall strategies and network configurations, which suggests that the roles of inhibition and cellular dynamics are pivotal in learning and memory.
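
    The storage scheme underlying these spiking models is easiest to see in its original binary form. The sketch below implements a Willshaw-style auto-associative net: clipped Hebbian storage, 10% random connectivity, a partial cue, and winners-take-all thresholding in the spirit of the transforms of Graham and Willshaw (1995). The network size, pattern count, and sparsity are illustrative assumptions, not the thesis parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

N, M, active = 1000, 50, 30   # units, stored patterns, active units per pattern (assumed)

# Store M sparse binary patterns with clipped Hebbian learning (Willshaw rule):
# a synapse is switched on if pre and post units are coactive in any pattern.
patterns = np.zeros((M, N), dtype=int)
for p in patterns:
    p[rng.choice(N, active, replace=False)] = 1

W = np.zeros((N, N), dtype=int)
for p in patterns:
    W |= np.outer(p, p)
np.fill_diagonal(W, 0)                      # no self-connections

# Sparse connectivity as in the thesis: keep only ~10% of synapses.
W = W * (rng.random((N, N)) < 0.10)

def recall(cue, k):
    """One-step recall: dendritic sums, then k-winners-take-all thresholding."""
    sums = W @ cue
    out = np.zeros(N, dtype=int)
    out[np.argsort(sums)[-k:]] = 1
    return out

target = patterns[0]
cue = target.copy()
cue[np.flatnonzero(cue)[active // 2:]] = 0  # partial cue: half the stored pattern
quality = (recall(cue, active) & target).sum() / active
print(f"recall overlap with stored pattern: {quality:.2f}")
```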

    Biophysical modeling to reverse engineer two mammalian neural circuits: lower urinary tract and hippocampus

    Computational neuroscience provides tools to abstract and generalize principles of neuronal function using mathematics and computers. This dissertation reports biophysical modeling approaches to facilitate reverse engineering of two mammalian neural circuits: the lower urinary tract, for the development of stimulation techniques, and the rodent hippocampus, to understand mechanisms involved in theta rhythms. The lower urinary tract (LUT) in mammals consists of the urinary bladder, the external urethral sphincter (EUS) and the urethra. Control of the LUT is achieved via a neural circuit that integrates distinct components. Dysfunctions of the LUT are caused by a variety of factors, including spinal cord injury and diabetes. Our model builds on previous models by using biologically realistic spiking neurons to reproduce neural control of the LUT in both normal function and dysfunction. The hippocampus has long been implicated in memory storage and retrieval, and hippocampal theta oscillations (4-12 Hz) are consistently recorded during memory tasks and spatial navigation. A previous model revealed five distinct theta generators. The present study extends that work by probing deeper into the intrinsic theta mechanisms, characterizing each mechanism as resonant (inherently producing theta), synchronizing (promoting coordinated activity), or possibly both. The role of the neuromodulatory state is also investigated.
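
    The notion of a resonant theta mechanism can be made concrete with a worked example. A neuron with a slow negative-feedback current, linearized around rest as C dV/dt = −g_L·V − w + I and τ_w dw/dt = a·V − w, has input impedance Z(ω) = 1/(g_L + iωC + a/(1 + iωτ_w)), which peaks at a preferred frequency. The sketch below evaluates this impedance numerically; the parameter values are assumptions chosen so the peak lands in the theta band, not values taken from the dissertation.

```python
import numpy as np

# Subthreshold parameters (illustrative, not fitted to data)
C     = 0.2e-9    # membrane capacitance (F)
g_L   = 10e-9     # leak conductance (S)
a     = 40e-9     # coupling strength of the slow resonant current (S)
tau_w = 0.1       # time constant of the slow current (s)

f = np.linspace(0.5, 30, 1000)            # probe frequencies (Hz)
w = 2 * np.pi * f
Z = 1.0 / (g_L + 1j * w * C + a / (1 + 1j * w * tau_w))

f_res = f[np.argmax(np.abs(Z))]
print(f"impedance peaks at {f_res:.1f} Hz")  # lands in the 4-12 Hz theta band
```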

    Modeling the hippocampus: finely controlled memory storage using spiking neurons

    The hippocampus, an area in the temporal lobe of the mammalian brain, participates in the storage of personal memories and life events. As such, traumatic memories and the consequent symptoms of post-traumatic stress are thought to be stored, or at least processed, in the hippocampus. While a fundamental understanding of traumatic memory is still elusive, studying the physiology and functional properties of the hippocampus is an essential first step. Towards that goal, I developed a detailed computational model of the hippocampus. The model included the important effects of the neuromodulator acetylcholine, which switches the hippocampal network between the memory encoding state and the memory retrieval state. In the first study, I examined the mechanisms for controlling runaway excitation in the model. The results indicated different mechanisms for controlling runaway excitation in the memory encoding state as opposed to the memory retrieval state of the circuit. These findings produced the first functionally based categorization of seizures in animals and humans, and may inspire specific treatments for these types of seizures. The second study examined the underpinnings of the rhythmic activity of the hippocampus. These oscillations in the theta range (4-12 Hz) are theorized to play a major role in memory functions and in processing sequences of events and actions in both place and time. We found the generation of theta rhythmic activity to be best described as a product of multiple interacting generators. Importantly, we found differences in theta generation depending on the functional state of the hippocampus. Finally, the third study detailed the rules of the complex interactions between these multiple theta generators in the circuit. Our results shed more light on the role of specific components of the hippocampal circuit in maintaining its function in both health and disease states.
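
    A minimal sketch of the acetylcholine switch described above, assuming Hasselmo-style modulation in which high ACh suppresses recurrent transmission (favoring encoding of afferent input) while low ACh restores recurrent drive (enabling retrieval). The rate-based network, gain values, and update rule are illustrative stand-ins, not the dissertation's detailed spiking model.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200

stored = (rng.random(N) < 0.1).astype(float)   # a previously stored pattern
W_rec = np.outer(stored, stored)               # Hebbian recurrent weights
np.fill_diagonal(W_rec, 0.0)

cue = stored.copy()
cue[np.flatnonzero(cue)[::2]] = 0.0            # degraded afferent cue (half the units)

def run(ach, steps=30):
    """High ACh suppresses recurrent drive (encoding mode);
    low ACh lets recurrent synapses complete the pattern (retrieval mode)."""
    g_rec = 1.0 - 0.9 * ach                    # ACh-dependent recurrent gain
    r = cue.copy()
    for _ in range(steps):
        r = np.clip(0.2 * g_rec * (W_rec @ r) + cue, 0.0, 1.0)
    return r

for ach in (1.0, 0.0):
    overlap = (run(ach) * stored).sum() / stored.sum()
    print(f"ACh = {ach:.0f}: overlap with stored pattern = {overlap:.2f}")
```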

    The Role of Synaptic Tagging and Capture for Memory Dynamics in Spiking Neural Networks

    Memory serves to process and store information about experiences such that this information can be used in future situations. The transfer from transient storage into long-term memory, which retains information for hours, days, and even years, is called consolidation. In brains, information is primarily stored via alteration of synapses, so-called synaptic plasticity. While these changes are at first in a transient early phase, they can be transferred to a late phase, meaning that they become stabilized over the course of several hours. This stabilization has been explained by so-called synaptic tagging and capture (STC) mechanisms. To store and recall memory representations, emergent dynamics arise from the synaptic structure of recurrent networks of neurons. This happens through so-called cell assemblies, which feature particularly strong synapses. It has been proposed that the stabilization of such cell assemblies by STC corresponds to synaptic consolidation, which is observed in humans and other animals in the first hours after acquiring a new memory. The exact connection between the physiological mechanisms of STC and memory consolidation remains, however, unclear. It is equally unknown what influence STC mechanisms exert on further cognitive functions that guide behavior. On timescales of minutes to hours (that is, the timescales of STC), such functions include memory improvement, modification of memories, interference and enhancement of similar memories, and transient priming of certain memories. Thus, diverse memory dynamics may be linked to STC, which can be investigated by employing theoretical methods based on experimental data from the neuronal and behavioral levels. In this thesis, we present a theoretical model of STC-based memory consolidation in recurrent networks of spiking neurons, which are particularly suited to reproduce biologically realistic dynamics. Furthermore, we combine the STC mechanisms with calcium dynamics, which have been found to guide the major processes of early-phase synaptic plasticity in vivo. In three included research articles as well as additional sections, we develop this model and investigate how it can account for a variety of behavioral effects. We find that the model enables the robust implementation of the cognitive memory functions mentioned above. The main steps to this are: 1. demonstrating the formation, consolidation, and improvement of memories represented by cell assemblies; 2. showing that neuromodulator-dependent STC can retroactively control whether information is stored in a temporal or rate-based neural code; and 3. examining the interaction of multiple cell assemblies with transient and attractor dynamics in different organizational paradigms. In summary, we demonstrate several ways by which STC controls the late-phase synaptic structure of cell assemblies. Linking these structures to functional dynamics, we show that our STC-based model implements functionality that can be related to long-term memory. Thereby, we provide a basis for the mechanistic explanation of various neuropsychological effects.
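
    The interplay of early-phase change, tag, protein synthesis, and late-phase consolidation can be summarized in a few lines of code. The sketch below is a generic single-synapse caricature of STC: time constants, thresholds, and the induction protocol are illustrative assumptions, not the thesis model.

```python
# Minimal synaptic tagging-and-capture (STC) sketch for one synapse.
# All parameters and thresholds are illustrative assumptions, not fitted values.
dt, T = 1.0, 8 * 3600.0                  # Euler step and duration (s)
tau_h, tau_tag, tau_p, tau_z = 600.0, 3600.0, 3600.0, 900.0
theta_tag, theta_pro = 0.2, 0.5          # tagging / protein-synthesis thresholds

h = tag = p = z = 0.0                    # early weight, tag, proteins, late weight
for step in range(int(T / dt)):
    s = 1.0 if step * dt < 60.0 else 0.0          # 1 min of strong induction
    h   += dt * (-h / tau_h + 0.02 * s)           # early phase: set fast, decays fast
    tag  = 1.0 if abs(h) > theta_tag else tag - dt * tag / tau_tag
    p   += dt * (-p + (1.0 if abs(h) > theta_pro else 0.0)) / tau_p
    z   += dt * tag * p * (1.0 - z) / tau_z       # capture requires tag AND proteins

print(f"after 8 h: early phase h = {h:.3f}, late phase z = {z:.2f}")
# The early-phase change has decayed away, but the portion captured into the
# late phase persists: a transient change has been consolidated.
```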

    Computational Models of the Amygdala in Acquisition and Extinction of Conditioned Fear

    The amygdala plays a central role in both acquisition and expression of conditioned fear associations, and dysregulation of the amygdala leads to fear and anxiety disorders such as posttraumatic stress disorder (PTSD). Computational modeling has served as an important tool to understand the cellular and circuit mechanisms of fear acquisition and extinction. This review provides a critical appraisal of existing computational modeling studies of the amygdala and extended circuitry in acquisition and extinction of learned fear associations. It gives a broad overview of the computational techniques applied to amygdala modeling, with an emphasis on how computational models could shed light on the neural mechanisms of fear learning, inform experimental design, and lead to specific, experimentally testable hypotheses. It covers different types of published models, including rule-based models, connectionist-type models, phenomenological spiking neuronal models, and detailed biophysical conductance-based models. Specific attention is given to the evolution of amygdala models from simple rule-based and connectionist-type models to more sophisticated and biologically realistic models. Future directions in computational modeling of the amygdala and associated networks in emotional learning are also discussed.
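
    The simplest model class named above, the rule-based model, can be illustrated with the classic Rescorla-Wagner learning rule, ΔV = αβ(λ − V), where V is the associative strength of the conditioned stimulus and λ the outcome. The sketch below runs an acquisition phase followed by extinction; the learning rate and trial schedule are illustrative choices.

```python
# Rescorla-Wagner rule applied to fear acquisition and extinction:
# the associative strength V of a tone (CS) is updated toward the
# outcome lambda (1 = shock present, 0 = absent) on every trial.
alpha_beta = 0.3   # combined salience/learning rate (assumed)

V = 0.0
history = []
for trial in range(40):
    lam = 1.0 if trial < 20 else 0.0   # 20 acquisition trials, then 20 extinction
    V += alpha_beta * (lam - V)
    history.append(V)

print(f"fear after acquisition: {history[19]:.2f}")  # approaches 1
print(f"fear after extinction:  {history[39]:.2f}")  # decays back toward 0
```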

    A Review of Findings from Neuroscience and Cognitive Psychology as Possible Inspiration for the Path to Artificial General Intelligence

    This review aims to contribute to the quest for artificial general intelligence by examining neuroscience and cognitive psychology methods for potential inspiration. Despite the impressive advancements achieved by deep learning models in various domains, they still have shortcomings in abstract reasoning and causal understanding. Such capabilities should ultimately be integrated into artificial intelligence systems in order to surpass data-driven limitations and support decision making in a way more similar to human intelligence. This work is a vertical review that attempts a wide-ranging exploration of brain function, spanning from lower-level biological neurons, spiking neural networks, and neuronal ensembles to higher-level concepts such as brain anatomy, vector symbolic architectures, cognitive and categorization models, and cognitive architectures. The hope is that these concepts may offer insights for solutions in artificial general intelligence.
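
    Of the higher-level concepts listed, vector symbolic architectures are perhaps the most compact to demonstrate: symbols are random high-dimensional vectors, binding is circular convolution, and unbinding is convolution with an approximate inverse (holographic reduced representations in the style of Plate). In the sketch below, the dimensionality and the role/filler names are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
D = 4096                                   # dimensionality (assumed)

def vec():
    return rng.normal(0.0, 1.0 / np.sqrt(D), D)   # random symbol vector

def bind(a, b):                            # binding: circular convolution
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, a):                          # approximate inverse: involution of a
    a_inv = np.concatenate(([a[0]], a[:0:-1]))
    return bind(c, a_inv)

color, shape, red, square = vec(), vec(), vec(), vec()
scene = bind(color, red) + bind(shape, square)    # superposed role-filler pairs

noisy = unbind(scene, color)               # query: what fills the 'color' role?
sims = {name: float(noisy @ v) for name, v in
        [("red", red), ("square", square), ("shape", shape)]}
print(sims)   # 'red' should show the clearly highest similarity
```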