
    A Heteroassociative Learning Model Robust to Interference

    Best Paper Award. Neuronal models of associative memory are recurrent networks that can quickly learn patterns as stable states of the network. Their main acknowledged weakness is catastrophic interference, which arises when too many, or too similar, examples are stored. Based on biological data, we recently proposed a model resistant to certain kinds of interference related to heteroassociative learning. In this paper we report numerical experiments that highlight this robustness and demonstrate very good memorization performance. We also discuss the converging interest in such an adaptive mechanism for biological modeling and for information processing in machine learning.
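
    As a point of reference for the kind of model discussed above, the sketch below is a minimal heteroassociative memory with one-shot Hebbian (outer-product) learning; it is an illustrative toy under our own assumptions, not the authors' model. Crosstalk between stored pairs is exactly the interference the paper addresses: recall accuracy degrades as more pairs are stored or as cues become more similar.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_cue, n_out, n_pairs = 100, 50, 10

    # Random bipolar cue/target pairs to store (hypothetical data).
    cues = rng.choice([-1.0, 1.0], size=(n_pairs, n_cue))
    targets = rng.choice([-1.0, 1.0], size=(n_pairs, n_out))

    # One-shot Hebbian learning: W accumulates the outer product of
    # each target with its cue.
    W = targets.T @ cues / n_cue

    # Recall from a corrupted cue (about 25% of bits flipped).
    noisy = cues[0] * rng.choice([1.0, 1.0, 1.0, -1.0], size=n_cue)
    recalled = np.sign(W @ noisy)

    # Crosstalk from the other stored pairs is the interference term;
    # accuracy drops as n_pairs grows or as cues become more similar.
    print("recall accuracy:", float(np.mean(recalled == targets[0])))
    ```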

    Latching dynamics as a basis for short-term recall

    We discuss simple models for the transient storage in short-term memory of cortical patterns of activity, all based on the notion that their recall exploits the natural tendency of the cortex to hop from state to state, known as latching dynamics. We show that in one such model, as in simple spatial memory tasks we have given to human subjects, short-term memory can be limited to a similarly low capacity by interference effects when tasks are terminated by errors, and can exhibit similar sublinear scaling when errors are overlooked. The same mechanism can drive serial recall if combined with weak order-encoding plasticity. Finally, even when storing randomly correlated patterns of activity, the network demonstrates correlation-driven latching waves, which are reflected at the outer extremes of pattern space.
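
    The latching idea lends itself to a compact illustration. The following toy (our own construction, not the paper's network) stores correlated patterns in a Hopfield-style network and adds a slow adaptation (fatigue) variable that destabilizes whichever attractor the network currently occupies, so the overlap trace keeps moving rather than settling; which states the trajectory visits depends on the correlation structure of the stored patterns.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, n_patterns = 200, 5

    # Correlated patterns: each is a noisy copy of a shared template.
    template = rng.choice([-1.0, 1.0], size=n)
    flips = rng.random((n_patterns, n)) < 0.3
    patterns = np.where(flips, -template, template)

    # Hebbian weights over all stored patterns.
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)

    s = patterns[0].copy()   # start inside the first attractor
    a = np.zeros(n)          # slow adaptation (fatigue) variable
    tau_a, g_a = 20.0, 1.3   # adaptation time constant and strength

    for t in range(201):
        a += (s - a) / tau_a                  # fatigue builds in active units
        s = np.sign(W @ s - g_a * a + 1e-9)   # recurrent field minus fatigue
        if t % 40 == 0:
            # Overlap of the current state with each stored pattern.
            print(t, np.round(patterns @ s / n, 2))
    ```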

    A Modular Network Architecture Resolving Memory Interference through Inhibition

    In real learning paradigms such as Pavlovian conditioning, several modes of learning are combined, including generalization from cues and the integration of specific cases in context. Associative memories have been shown to be interesting neuronal models for quickly learning specific cases, but they are rarely used in realistic applications because their limited storage capacity leads to interference when too many examples are considered. Inspired by biological considerations, we propose a modular model of associative memory that includes mechanisms to properly handle multimodal inputs and to detect and manage interference. This paper reports experiments demonstrating the good behavior of the model across a wide range of simulations and discusses its impact both in machine learning and in biological modeling.
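
    One way to make the interference-detection idea concrete is a routing rule over modules: a new pattern is accepted by a module only if it is sufficiently dissimilar from everything that module already stores. The sketch below is a deliberately simplified toy under that assumption, not the architecture proposed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, max_overlap = 64, 0.3

    modules = []  # each module holds the patterns assigned to it

    def store(pattern):
        """Route a pattern to the first module where it causes no interference."""
        for module in modules:
            # Interference test: too similar to a pattern already stored here?
            if all(abs(pattern @ p) / n < max_overlap for p in module):
                module.append(pattern)
                return
        modules.append([pattern])  # otherwise, open a fresh module

    for _ in range(20):
        store(rng.choice([-1.0, 1.0], size=n))

    print("patterns per module:", [len(m) for m in modules])
    ```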

    Memory processes in medial temporal lobe: experimental, theoretical and computational approaches

    The medial temporal lobe (MTL) includes the hippocampus, amygdala and parahippocampal regions, and is crucial for episodic and spatial memory. MTL memory function consists of distinct processes such as encoding, consolidation and retrieval. Encoding is the process by which perceived information is transformed into a memory trace. After encoding, memory traces are stabilized by consolidation. Memory retrieval (recall) refers to the process by which memory traces are reactivated to access information previously encoded and stored in the brain. Although the underlying neural mechanisms supporting these distinct functional stages remain largely unknown, recent studies have indicated that distinct oscillatory dynamics, specific neuron types, synaptic plasticity and neuromodulation play a central role. The theta rhythm is believed to be crucial in the encoding and retrieval of memories. Experimental and computational studies indicate that precise timing of principal cell firing in the hippocampus, relative to the theta rhythm, underlies encoding and retrieval processes. Sharp-wave ripples, on the other hand, have been implicated in consolidation through the “replay” of memories on compressed time scales. The neural circuits and cell types supporting memory processes in MTL areas have only recently been delineated using experimental approaches such as optogenetics, juxtacellular recordings, and optical imaging. Principal (excitatory) cells are crucial for encoding, storing and retrieving memories at the cellular level. Inhibitory interneurons, by contrast, provide the temporal structure for orchestrating the activity of principal cell populations: they regulate synaptic integration and the timing of action potential generation in principal cells, as well as the generation and maintenance of network oscillations (rhythms). In addition, neuromodulators such as acetylcholine alter the dynamical properties of neurons and synapses and modulate the oscillatory state and the rules of synaptic plasticity; their levels may tune the MTL to specific memory processes. This research topic offers a snapshot of the current state of the art on how memories are encoded, consolidated, stored and retrieved in MTL structures. Papers accepted to the research topic include experimental and computational studies focusing on the structure and function of neural circuits; their cellular components (principal cells and inhibitory interneurons) and their properties; the synaptic plasticity rules involved in these memory processes; network oscillations such as theta, gamma and sharp-wave ripples; and the role of neuromodulators in health and in disease (Alzheimer's disease and schizophrenia).
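
    A common modeling idealization of the theta-phase story above is to gate plasticity and readout to opposite halves of the theta cycle: Hebbian writing is enabled near one phase, and cued recall is read out near the opposite phase. The toy below illustrates only that gating idea; it is not drawn from any specific paper in the collection.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n, theta_hz, dt = 50, 8.0, 0.001

    pattern = rng.choice([-1.0, 1.0], size=n)
    W = np.zeros((n, n))

    for step in range(1000):                   # simulate 1 s of theta
        phase = 2 * np.pi * theta_hz * step * dt
        if np.cos(phase) > 0:                  # encoding half-cycle: plastic
            W += 1e-3 * np.outer(pattern, pattern)
        else:                                  # retrieval half-cycle: read-only
            cue = np.where(rng.random(n) < 0.5, pattern, 0.0)  # partial cue
            recall = np.sign(W @ cue + 1e-9)

    print("recall matches pattern:", np.array_equal(recall, pattern))
    ```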

    Improving Associative Memory in a Network of Spiking Neurons

    In this thesis we use computational neural network models to examine the dynamics and functionality of the CA3 region of the mammalian hippocampus. The emphasis of the project is to investigate how the dynamic control structures provided by inhibitory circuitry and cellular modification may affect the CA3 region during the recall of previously stored information. The CA3 region is commonly thought to operate as a recurrent auto-associative neural network because of its neurophysiological characteristics, such as recurrent collaterals, strong and sparse synapses from external inputs, and plasticity between coactive cells. Associative memory models have been developed using various configurations of mathematical artificial neural networks, first developed over 40 years ago. Within these models we can store information via changes in the strength of connections between simplified two-state model neurons. These memories can be recalled when a cue (noisy or partial) is instantiated on the net. The type of information they can store is quite limited owing to the simplicity of the hard-limiting nodes, which are commonly associated with a binary activation threshold. We build a much more biologically plausible model with complex spiking cell models and realistic synaptic properties between cells, based on many of the details we now know of the neuronal circuitry of the CA3 region. We implemented the model in software using Neuron and Matlab and tested it by running simulations of storage and recall in the network. By building this model we gain new insights into how different types of neurons, and the complex circuits they form, actually work.

    The mammalian brain consists of complex resistive-capacitive electrical circuitry formed by the interconnection of large numbers of neurons. A principal cell type is the pyramidal cell of the cortex, the main information processor in our neural networks. Pyramidal cells are surrounded by diverse populations of interneurons, proportionally fewer in number, which form connections with pyramidal cells and with other inhibitory cells. By building detailed computational models of recurrent neural circuitry we explore how these microcircuits of interneurons control the flow of information through pyramidal cells and regulate the efficacy of the network. We also explore the effects of cellular modification due to neuronal activity, and of incorporating spatially dependent connectivity, on the network during recall of previously stored information. In particular we implement a spiking neural network proposed by Sommer and Wennekers (2001), and we consider methods for improving associative memory recall inspired by the work of Graham and Willshaw (1995), who applied mathematical transforms to an artificial neural network to improve recall quality. The networks tested contain either 100 or 1000 pyramidal cells with 10% connectivity, a partial cue instantiated, and global pseudo-inhibition.

    We investigate three methods. First, applying localised disynaptic inhibition, which scales the excitatory postsynaptic potentials proportionally and provides a fast-acting reversal potential; this should reduce the variability in signal propagation between cells and provide further inhibition to help synchronise network activity. Second, adding a persistent sodium channel to the cell body, which non-linearises the activation threshold: beyond a given membrane potential the amplitude of the excitatory postsynaptic potential (EPSP) is boosted, pushing cells that receive slightly more excitation (most likely high units) over the firing threshold. Third, implementing spatial characteristics of the dendritic tree, which allows a greater probability that a modified synapse exists after 10% random connectivity has been applied throughout the network; we apply these spatial characteristics by scaling the conductance weights of excitatory synapses to simulate the loss of potential at synapses in the outer dendritic regions due to increased resistance.

    To further increase the biological plausibility of the network, we remove the pseudo-inhibition and apply realistic basket cell models in differing configurations of a global inhibitory circuit: a single basket cell providing feedback inhibition; 10% basket cells providing feedback inhibition, with 10 pyramidal cells connecting to each basket cell; and 100% basket cells providing feedback inhibition. These networks are compared and contrasted in terms of recall quality and network behaviour. We have found promising results from applying biologically plausible recall strategies and network configurations, suggesting that inhibition and cellular dynamics play a pivotal role in learning and memory.
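
    For readers unfamiliar with the artificial-network baseline this thesis builds on, the sketch below shows recall in a Willshaw-style binary associative net with clipped Hebbian weights, a partial cue, and a winners-take-all threshold. Parameter values are illustrative; this is the abstract precursor of the spiking model, not the spiking model itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n, active, n_patterns = 100, 10, 15

    # Sparse binary patterns with a fixed number of active units.
    patterns = np.zeros((n_patterns, n))
    for p in patterns:
        p[rng.choice(n, size=active, replace=False)] = 1.0

    # Clipped (0/1) Hebbian weight matrix over all stored patterns.
    W = np.clip(patterns.T @ patterns, 0, 1)
    np.fill_diagonal(W, 0)

    # Recall pattern 0 from half of its active units.
    cue = patterns[0].copy()
    cue[np.flatnonzero(cue)[: active // 2]] = 0.0

    sums = W @ cue                          # dendritic sums per unit
    winners = np.argsort(sums)[-active:]    # winners-take-all: top-k units
    recalled = np.zeros(n)
    recalled[winners] = 1.0

    overlap = recalled @ patterns[0] / active
    print(f"recall overlap with stored pattern: {overlap:.2f}")
    ```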

    Development of a Novel Technique for Predicting Tumor Response in Adaptive Radiation Therapy

    This dissertation introduces Predictive Adaptive Radiation Therapy (PART) as a potential method to improve cancer treatment. PART is a novel technique that uses volumetric image-guided radiation therapy (IGRT) data to actively predict tumor response to therapy and estimate clinical outcomes during the course of treatment. To implement PART, a patient database was constructed containing IGRT image data for 40 lesions from patients imaged and treated with helical tomotherapy. The data were then modeled using locally weighted regression. This model predicts future tumor volumes and masses, with associated confidence intervals, from limited observations made during the first two weeks of treatment. All predictions used only eight days' worth of observations from early in the treatment and were bound by a 95% confidence interval. Since the predictions were accurate with quantified uncertainty, they could eventually be used to optimize and adapt treatment accordingly, hence the term PART. A challenge in implementing PART in a clinical setting is the increased quality assurance it will demand. To help ease this burden, a technique was developed to automatically evaluate helical tomotherapy treatments during delivery using exit detector data. This technique uses an auto-associative kernel regression (AAKR) model to detect errors in tomotherapy delivery, a modeling scheme especially suited to monitoring the fluence values in the exit detector data because it can learn the complex relationships among detector channels. Several AAKR models were tested using tomotherapy detector data from deliveries with intentionally inserted errors and with attenuations differing from the sinograms used to develop the model. The model proved robust: it predicted the correct “error-free” values for a projection in which the opening time of a single MLC leaf had been decreased by 10%, and it was also able to detect machine output errors. Automating this technique should significantly ease the QA burden that accompanies adaptive therapy and help make the implementation of PART more feasible.
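
    AAKR itself is simple to state: a query vector is reconstructed as a kernel-weighted average of stored fault-free exemplars, and errors appear as residuals between the query and its reconstruction. The sketch below uses a Gaussian kernel and synthetic data; the kernel choice, bandwidth, and data are our illustrative assumptions, not the dissertation's settings.

    ```python
    import numpy as np

    def aakr_predict(memory, query, bandwidth=0.5):
        """memory: (m, d) fault-free exemplars; query: (d,) observation."""
        d2 = np.sum((memory - query) ** 2, axis=1)   # squared distances
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))     # Gaussian kernel weights
        w /= w.sum()
        return w @ memory                            # weighted reconstruction

    rng = np.random.default_rng(5)
    memory = rng.normal(size=(200, 8))   # synthetic "error-free" exemplars
    clean = memory[0]
    faulty = clean.copy()
    faulty[3] += 0.5                     # inject an error on one channel

    estimate = aakr_predict(memory, faulty)
    residual = faulty - estimate         # large residual flags the fault
    print("largest residual channel:", int(np.argmax(np.abs(residual))))
    ```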

    Storage, recall, and novelty detection of sequences by the hippocampus: Elaborating on the SOCRATIC model to account for normal and aberrant effects of dopamine

    In order to understand how the molecular or cellular defects that underlie a disease of the nervous system lead to the observable symptoms, it is necessary to develop a large-scale neural model. Such a model must specify how specific molecular processes contribute to neuronal function, how neurons contribute to network function, and how networks interact to produce behavior. This is a challenging undertaking, but some limited progress has been made in understanding the memory functions of the hippocampus with this degree of detail. There is increasing evidence that the hippocampus has a special role in the learning of sequences and the linkage of specific memories to context. In the first part of this paper, we review a model (the SOCRATIC model) that describes how the dentate and CA3 hippocampal regions could store and recall memory sequences in context. A major line of evidence for sequence recall is the “phase precession” of hippocampal place cells. In the second part of the paper, we review the evidence for theta-gamma phase coding.
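
    The generic mechanism behind sequence storage and recall in models of this family is heteroassociation with asymmetric Hebbian weights: each pattern writes onto its successor, so cueing the first item replays the chain. The sketch below illustrates that generic mechanism only; it is not the SOCRATIC model itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n, length = 120, 6
    seq = rng.choice([-1.0, 1.0], size=(length, n))  # a stored sequence

    # Asymmetric Hebbian weights: pattern t writes onto pattern t + 1.
    W = sum(np.outer(seq[t + 1], seq[t]) for t in range(length - 1)) / n

    s = seq[0].copy()   # cue the first item
    for t in range(length - 1):
        s = np.sign(W @ s + 1e-9)   # one step replays the next item
        overlap = float(s @ seq[t + 1]) / n
        print(f"step {t + 1}: overlap with item {t + 1} = {overlap:.2f}")
    ```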

    Exploiting semantic information in a spiking neural SLAM system

    To navigate in new environments, an animal must be able to keep track of its position while simultaneously creating and updating an internal map of features in the environment, a problem formulated as simultaneous localization and mapping (SLAM) in the field of robotics. This requires integrating information from different domains, including self-motion cues and sensory and semantic information. Several specialized neuron classes have been identified in the mammalian brain as being involved in solving SLAM. While biology has inspired a whole class of SLAM algorithms, the use of semantic information has not been explored in such work. We present a novel, biologically plausible SLAM model called SSP-SLAM, a spiking neural network designed using tools for large-scale cognitive modeling. Our model uses a vector representation of continuous spatial maps, which can be encoded via spiking neural activity and bound with other features (continuous and discrete) to create compressed structures containing semantic information from multiple domains (e.g., spatial, temporal, visual, conceptual). We demonstrate that the dynamics of these representations can be implemented with a hybrid oscillatory-interference and continuous attractor network of head direction cells. The estimated self-position from this network is used to learn an associative memory between semantically encoded landmarks and their positions, i.e., an environment map, which is used for loop closure. Our experiments demonstrate that environment maps can be learned accurately, and that their use greatly improves self-position estimation. Furthermore, grid cells, place cells, and object vector cells emerge in this model. We also run our path integrator network on the NengoLoihi neuromorphic emulator to demonstrate the feasibility of a fully neuromorphic implementation for energy-efficient SLAM.
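
    The vector representation referred to here is, as typically constructed in work on spatial semantic pointers, built from fractional binding: unitary base vectors are raised to real-valued powers in the Fourier domain and combined by circular convolution. The sketch below illustrates that encoding; the dimensionality and the similarity test are our illustrative choices, not the paper's configuration.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    d = 512  # SSP dimensionality (illustrative)

    def unitary_vector(rng, d):
        # Unit-modulus Fourier coefficients, arranged conjugate-symmetric
        # so the inverse FFT is real; the result is unitary under
        # circular convolution.
        phases = rng.uniform(-np.pi, np.pi, size=d // 2 - 1)
        fc = np.concatenate(([1.0], np.exp(1j * phases), [1.0],
                             np.exp(-1j * phases[::-1])))
        return np.fft.ifft(fc).real

    def power(v, exponent):
        # Fractional binding: raise Fourier coefficients to a real power.
        return np.fft.ifft(np.fft.fft(v) ** exponent).real

    def bind(a, b):
        # Circular convolution via the FFT.
        return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

    X, Y = unitary_vector(rng, d), unitary_vector(rng, d)

    def encode(x, y):
        return bind(power(X, x), power(Y, y))

    # Nearby positions yield similar vectors; distant ones are near-orthogonal.
    a = encode(1.0, 2.0)
    print("sim to (1.1, 2.0):", round(float(a @ encode(1.1, 2.0)), 2))
    print("sim to (4.0, -3.0):", round(float(a @ encode(4.0, -3.0)), 2))
    ```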