59 research outputs found

    Improving Associative Memory in a Network of Spiking Neurons

    In this thesis we use computational neural network models to examine the dynamics and functionality of the CA3 region of the mammalian hippocampus. The emphasis of the project is to investigate how the dynamic control structures provided by inhibitory circuitry and cellular modification may affect the CA3 region during the recall of previously stored information. The CA3 region is commonly thought to operate as a recurrent auto-associative neural network because of its neurophysiological characteristics, such as recurrent collaterals, strong and sparse synapses from external inputs, and plasticity between coactive cells. Associative memory models built from various configurations of mathematical artificial neural networks were first developed over 40 years ago. Within these models, information is stored via changes in the strength of connections between simplified two-state model neurons, and memories are recalled when a noisy or partial cue is presented to the network. The type of information such models can store is quite limited, owing to the simplicity of the hard-limiting nodes, which are commonly associated with a binary activation threshold. We build a much more biologically plausible model with complex spiking cell models and realistic synaptic properties between cells, based on many of the details now known about the neuronal circuitry of the CA3 region. We implemented the model in software using Neuron and Matlab and tested it by running simulations of storage and recall in the network. By building this model we gain new insights into how different types of neurons, and the complex circuits they form, actually work.
    The mammalian brain consists of complex resistive-capacitive electrical circuitry formed by the interconnection of large numbers of neurons. A principal cell type is the cortical pyramidal cell, the main information processor in our neural networks. Pyramidal cells are surrounded by diverse populations of interneurons, proportionally far fewer in number, which form connections with pyramidal cells and with other inhibitory cells. By building detailed computational models of recurrent neural circuitry we explore how these microcircuits of interneurons control the flow of information through pyramidal cells and regulate the efficacy of the network. We also explore the effects of cellular modification due to neuronal activity, and of incorporating spatially dependent connectivity, on the network during recall of previously stored information. In particular we implement the spiking neural network proposed by Sommer and Wennekers (2001), and we consider methods for improving associative memory recall inspired by the work of Graham and Willshaw (1995), who applied mathematical transforms to an artificial neural network to improve recall quality. The networks tested contain either 100 or 1000 pyramidal cells with 10% connectivity, a partial cue, and global pseudo-inhibition.
    We investigate three methods. First, applying localised disynaptic inhibition, which scales the excitatory postsynaptic potentials and provides a fast-acting reversal potential; this should reduce the variability in signal propagation between cells and provide further inhibition to help synchronise network activity. Second, adding a persistent sodium channel to the cell body, which makes the activation threshold non-linear: beyond a given membrane potential, the amplitude of the excitatory postsynaptic potential (EPSP) is boosted, pushing cells that receive slightly more excitation (most likely high units) over the firing threshold. Finally, implementing spatial characteristics of the dendritic tree, which allows a greater probability of a modified synapse existing after 10% random connectivity has been applied throughout the network. We apply spatial characteristics by scaling the conductance weights of excitatory synapses, simulating the loss in potential at synapses in the outer dendritic regions due to increased resistance. To further increase the biological plausibility of the network we remove the pseudo-inhibition and apply realistic basket cell models in differing configurations of a global inhibitory circuit: a single basket cell providing feedback inhibition; 10% basket cells providing feedback inhibition, with 10 pyramidal cells connecting to each basket cell; and 100% basket cells providing feedback inhibition. These networks are compared and contrasted for recall quality and their effect on network behaviour. We have found promising results from applying biologically plausible recall strategies and network configurations, suggesting that inhibition and cellular dynamics play a pivotal role in learning and memory.
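
    As a point of reference for the storage and recall scheme described above, here is a minimal sketch (not the thesis's spiking model) of a Willshaw-style binary auto-associative network in the spirit of Graham and Willshaw (1995): sparse patterns, clipped Hebbian learning, a partial cue, and a global winner-take-all threshold standing in for inhibition. All sizes and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1000        # units ("pyramidal cells")
M = 50          # stored patterns
K = 100         # active units per pattern (sparse coding)

# Sparse binary patterns: K active units out of N.
patterns = np.zeros((M, N), dtype=int)
for mu in range(M):
    patterns[mu, rng.choice(N, K, replace=False)] = 1

# Willshaw/clipped-Hebbian learning: a synapse exists if the
# pre- and post-synaptic units were ever coactive.
W = (patterns.T @ patterns > 0).astype(int)
np.fill_diagonal(W, 0)

# Partial cue: half of one stored pattern's active units.
target = patterns[0]
cue = np.zeros(N, dtype=int)
active = np.flatnonzero(target)
cue[active[: K // 2]] = 1

# One-step recall: take the K units with the highest dendritic sums
# (a stand-in for global inhibition setting the firing threshold).
sums = W @ cue
recalled = np.zeros(N, dtype=int)
recalled[np.argsort(sums)[-K:]] = 1

overlap = (recalled & target).sum() / K
print(f"recall overlap with stored pattern: {overlap:.2f}")
```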

    Storage, recall, and novelty detection of sequences by the hippocampus: Elaborating on the SOCRATIC model to account for normal and aberrant effects of dopamine

    In order to understand how the molecular or cellular defects that underlie a disease of the nervous system lead to the observable symptoms, it is necessary to develop a large-scale neural model. Such a model must specify how specific molecular processes contribute to neuronal function, how neurons contribute to network function, and how networks interact to produce behavior. This is a challenging undertaking, but some limited progress has been made in understanding the memory functions of the hippocampus with this degree of detail. There is increasing evidence that the hippocampus has a special role in the learning of sequences and the linkage of specific memories to context. In the first part of this paper, we review a model (the SOCRATIC model) that describes how the dentate and CA3 hippocampal regions could store and recall memory sequences in context. A major line of evidence for sequence recall is the “phase precession” of hippocampal place cells. In the second part of the paper, we review the evidence for theta-gamma phase coding.
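
    The sequence storage and recall that the SOCRATIC model elaborates can be illustrated, in highly reduced form, by heteroassociative chaining: each pattern is linked to its successor, so iterating the network steps through the stored sequence. The sketch below assumes random ±1 patterns and illustrates only this generic principle, not the SOCRATIC model itself.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 200   # units
L = 8     # sequence length

# A sequence of random +/-1 patterns.
seq = rng.choice([-1, 1], size=(L, N))

# Heteroassociative weights: each pattern is linked to its successor,
# so presenting pattern mu drives the network toward pattern mu+1.
W = sum(np.outer(seq[mu + 1], seq[mu]) for mu in range(L - 1)) / N

# Recall: start from a noisy version of the first pattern and
# iterate; each step should retrieve the next item in the sequence.
state = seq[0] * np.where(rng.random(N) < 0.1, -1, 1)  # 10% flipped bits
for mu in range(1, L):
    state = np.sign(W @ state)
    print(f"step {mu}: overlap with item {mu} = {state @ seq[mu] / N:+.2f}")
```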

    Latching dynamics as a basis for short-term recall

    We discuss simple models for the transient storage of cortical activity patterns in short-term memory, all based on the notion that their recall exploits the natural tendency of the cortex to hop from state to state: latching dynamics. We show that in one such model, and in simple spatial memory tasks we have given to human subjects, short-term memory can be limited to a similarly low capacity by interference effects in tasks terminated by errors, and can exhibit similar sublinear scaling when errors are overlooked. The same mechanism can drive serial recall if combined with weak order-encoding plasticity. Finally, even when storing randomly correlated patterns of activity, the network demonstrates correlation-driven latching waves, which are reflected at the outer extremes of pattern space.
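
    A reduced sketch of latching dynamics: a sparse associative network in which activity-dependent fatigue erodes the current attractor, so the state hops onward to a correlated neighbour. Consecutive patterns here share half their active units to give the latching a preferred direction; all parameters are illustrative and generic rather than taken from the paper, and the exact transition times depend on the fatigue constants.

```python
import numpy as np

N, P, K = 400, 8, 40               # units, stored patterns, active per pattern

# Correlated sparse patterns: consecutive patterns share half their
# active units, which gives the latching sequence a preferred direction.
idx = np.random.default_rng(2).permutation(N)
patterns = np.zeros((P, N))
for mu in range(P):
    patterns[mu, idx[mu * K // 2 : mu * K // 2 + K]] = 1

W = patterns.T @ patterns          # Hebbian weights
np.fill_diagonal(W, 0)

state = patterns[0].copy()         # start in the first memory
fatigue = np.zeros(N)

for t in range(80):
    fatigue += 0.2 * (state - 0.5 * fatigue)   # slow activity-dependent fatigue
    h = W @ state - 25.0 * fatigue             # fatigue erodes the current attractor
    state = np.zeros(N)
    state[np.argsort(h)[-K:]] = 1              # k-winner-take-all (global inhibition)
    if t % 8 == 0:
        ov = patterns @ state / K
        print(f"t={t:2d}: nearest pattern {ov.argmax()} (overlap {ov.max():.2f})")
```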

    Professional or amateur? The phonological output buffer as a working memory operator

    The Phonological Output Buffer (POB) is thought to be the stage in language production where phonemes are held in working memory and assembled into words. The neural implementation of the POB remains unclear despite a wealth of phenomenological data. Individuals with POB impairment make phonological errors when they produce words and non-words, including phoneme omissions, insertions, transpositions, substitutions and perseverations. Errors can apply to different kinds and sizes of units, such as phonemes, number words, morphological affixes, and function words, and evidence from POB impairments suggests that units tend to be substituted with units of the same kind, e.g., numbers with numbers and whole morphological affixes with other affixes. This suggests that different units are processed and stored in the POB at the same stage, but perhaps separately in different mini-stores. Further, similar impairments can affect the buffer used to produce Sign Language, which raises the question of whether it is instantiated in a distinct device with the same design. However, what appear as separate buffers may be distinct regions in the activity space of a single extended POB network, connected to a lexicon network. The self-consistency of this idea can be assessed by studying an autoassociative Potts network, as a model of memory storage distributed over several cortical areas, and testing whether the network can represent both units of words and signs, reflecting the types and patterns of errors made by individuals with POB impairment.
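
    The autoassociative Potts network mentioned above can be sketched generically as follows: each unit takes one of S states, Hebbian couplings are strengthened between co-occurring states, and recall iterates an argmax over local fields. This is a minimal Potts associative memory with illustrative parameters, not the specific POB model studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

N, S, P = 120, 5, 10      # Potts units, states per unit, stored patterns

# Each pattern assigns one of S states to every unit.
patterns = rng.integers(0, S, size=(P, N))

# Hebbian Potts couplings J[i, k, j, l]: strengthened whenever unit i
# is in state k and unit j is in state l within the same stored pattern.
J = np.zeros((N, S, N, S))
for p in patterns:
    onehot = np.eye(S)[p]                 # (N, S)
    J += np.einsum('ik,jl->ikjl', onehot, onehot)
for i in range(N):
    J[i, :, i, :] = 0                     # no self-coupling

# Recall from a corrupted cue: 30% of units set to a random state.
state = patterns[0].copy()
noisy = rng.random(N) < 0.3
state[noisy] = rng.integers(0, S, noisy.sum())

for _ in range(5):                        # synchronous argmax updates
    fields = np.einsum('ikjl,jl->ik', J, np.eye(S)[state])
    state = fields.argmax(axis=1)

print("fraction of units recalled correctly:",
      (state == patterns[0]).mean())
```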

    Hopfield Ensembles in the Lateral Neural Structures of the Cerebral Cortex

    A model of the mechanism of associative memory is proposed as a set of local groups of neurons forming the layers of the cerebral cortex, called Hopfield ensembles. An ensemble contains up to 3,000 cells and can store hundreds of patterns of neural activity. The set of overlapping ensembles corresponds to a sparse Hopfield network model. Recent results of research on this model are reviewed, which allow a number of properties of Hopfield neural ensembles to be interpreted. Experiments are described in which memory is fully restored after part of the network's connections are removed and individual patterns are re-memorized, reproducing the effect of curing amnesia by reminding the patient of forgotten images.
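
    The lesion-and-remind experiment described above can be sketched with a standard Hopfield network: store patterns, delete a random half of the connections, then re-apply Hebbian learning for a few of the patterns on the surviving synapses and re-test recall across all patterns. The degree of recovery in this reduced sketch depends on the load and lesion fraction; all parameters are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

N, P = 500, 20
xi = rng.choice([-1, 1], size=(P, N))
W = (xi.T @ xi).astype(float) / N     # Hebbian autoassociative weights
np.fill_diagonal(W, 0)

def recall_quality(W, pattern, flip=0.15, steps=20):
    """Overlap with the target after iterating from a noisy cue."""
    s = pattern * np.where(rng.random(N) < flip, -1, 1)
    for _ in range(steps):
        s = np.sign(W @ s)
    return s @ pattern / N

print("intact  :", np.mean([recall_quality(W, p) for p in xi]))

# Lesion: delete a random half of all connections (symmetrically).
mask = rng.random((N, N)) < 0.5
mask = np.triu(mask, 1)
mask = mask | mask.T
W_lesioned = np.where(mask, 0.0, W)
print("lesioned:", np.mean([recall_quality(W_lesioned, p) for p in xi]))

# "Reminding": re-apply Hebbian learning for a few stored patterns,
# but only on the surviving connections.
W_relearned = W_lesioned.copy()
for p in xi[:5]:
    W_relearned += np.where(mask, 0.0, np.outer(p, p) / N)
np.fill_diagonal(W_relearned, 0)
print("reminded:", np.mean([recall_quality(W_relearned, p) for p in xi]))
```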

    Memristive Computing

    Memristive computing refers to the utilization of the memristor, the fourth fundamental passive circuit element, in computational tasks. The existence of the memristor was theoretically predicted in 1971 by Leon O. Chua, but experimentally validated only in 2008 by HP Labs. A memristor is essentially a nonvolatile nanoscale programmable resistor (indeed, a memory resistor) whose resistance, or memristance to be precise, is changed by applying a voltage across, or a current through, the device. Memristive computing is a new area of research, and many of its fundamental questions remain open. For example, it is as yet unclear which applications would benefit most from the inherent nonlinear dynamics of memristors. In any case, these dynamics should be exploited to allow memristors to perform computation in a natural way, instead of attempting to emulate existing technologies such as CMOS logic. Examples of such methods of computation presented in this thesis are memristive stateful logic operations, memristive multiplication based on the translinear principle, and the exploitation of nonlinear dynamics to construct chaotic memristive circuits. This thesis considers memristive computing at various levels of abstraction. The first part of the thesis analyses the physical properties and the current-voltage behaviour of a single device. The middle part presents memristor programming methods and describes microcircuits for logic and analog operations. The final chapters discuss memristive computing in large-scale applications. In particular, cellular neural networks and associative memory architectures are proposed as applications that significantly benefit from a memristive implementation. The work presents several new results on memristor modeling and programming, memristive logic, analog arithmetic operations on memristors, and applications of memristors. The main conclusion of this thesis is that memristive computing will be advantageous in large-scale, highly parallel mixed-mode processing architectures. This can be justified by the following two arguments. First, since processing can be performed directly within memristive memory architectures, the required circuitry, processing time, and possibly also power consumption can be reduced compared to a conventional CMOS implementation. Second, intrachip communication can be naturally implemented by a memristive crossbar structure.
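
    For concreteness, the behaviour of a single device can be illustrated with the linear-drift model published alongside the 2008 HP Labs result: the memristor is a resistor whose internal state drifts in proportion to the current through it, producing the signature pinched hysteresis loop under a sinusoidal drive. The device constants below are order-of-magnitude illustrations, not measured values from the thesis.

```python
import numpy as np

# Linear-drift ("HP") memristor model: the state w (doped-region
# width, 0..D) moves in proportion to the current through the device.
R_ON, R_OFF = 100.0, 16e3       # fully doped / undoped resistances (ohms)
D = 10e-9                       # device thickness (m)
MU_V = 1e-14                    # dopant mobility (m^2 s^-1 V^-1)

dt = 1e-5
t = np.arange(0, 0.1, dt)
v = np.sin(2 * np.pi * 10 * t)  # 10 Hz sinusoidal drive, 1 V amplitude

w = 0.1 * D                     # initial state
i_hist = []
for vk in v:
    x = w / D
    m = R_ON * x + R_OFF * (1 - x)      # memristance M(w)
    ik = vk / m
    w += MU_V * (R_ON / D) * ik * dt    # linear dopant drift: dw/dt
    w = min(max(w, 0.0), D)             # hard bounds on the state
    i_hist.append(ik)

# The resulting i-v trajectory traces a pinched hysteresis loop:
# current and voltage always cross zero together.
print("peak current (mA):", max(i_hist) * 1e3)
```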

    Geometric Methods for Memory and Learning

    This thesis is devoted to geometric methods in optimization, learning and neural networks. In many problems of supervised and unsupervised learning, pattern recognition, and clustering, there is a need to take into account the internal (intrinsic) structure of the underlying space, which is not necessarily Euclidean. For Riemannian manifolds we construct computational algorithms for the Newton method, conjugate-gradient methods, and some non-smooth optimization methods such as the r-algorithm. For this purpose we develop methods for geodesic calculation in submanifolds based on Hamilton equations and symplectic integration. We then construct a new type of neural associative memory capable of unsupervised learning and clustering, whose learning is based on generalized averaging over Grassmann manifolds. Further extensions of this memory involve implicit space transformations and kernel machines. We also consider geometric algorithms for signal processing and adaptive filtering. The proposed methods are tested on academic examples as well as real-life problems of image recognition and signal processing. The application of the proposed neural networks is demonstrated in a complete real-life project of chemical image recognition (an electronic nose).
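
    The geodesic-by-Hamiltonian-integration idea described above can be illustrated on the Poincaré half-plane rather than the thesis's submanifolds, since its exact geodesics (half-circles centred on the x-axis) provide a built-in check. The kinetic Hamiltonian here is non-separable, so the momentum update of the semi-implicit (symplectic) Euler step is solved by a short fixed-point iteration; step size and iteration count are illustrative.

```python
import numpy as np

# Poincare half-plane: metric ds^2 = (dx^2 + dy^2) / y^2, so the
# geodesic Hamiltonian is H(q, p) = 0.5 * y^2 * (px^2 + py^2).

def step(q, p, h, iters=3):
    """One semi-implicit (symplectic) Euler step; H is non-separable,
    so the momentum update is solved by fixed-point iteration."""
    x, y = q
    px, py = p
    py_new = py
    for _ in range(iters):                    # solve py' = py - h*y*(px^2 + py'^2)
        py_new = py - h * y * (px**2 + py_new**2)
    p_new = np.array([px, py_new])            # dH/dx = 0, so px is conserved
    q_new = q + h * y**2 * p_new              # q' = q + h * dH/dp(q, p_new)
    return q_new, p_new

q = np.array([0.0, 1.0])                      # start at (0, 1)
p = np.array([1.0, 0.0])                      # initial momentum along x
H0 = 0.5 * q[1]**2 * (p @ p)

for _ in range(2000):
    q, p = step(q, p, 1e-3)

# The exact geodesic through (0, 1) with horizontal tangent is the
# unit half-circle, so x^2 + y^2 should stay close to 1.
H = 0.5 * q[1]**2 * (p @ p)
print(f"endpoint {q}, x^2 + y^2 = {q @ q:.4f} (exact: 1.0)")
print(f"relative energy drift: {(H - H0) / H0:.2e}")
```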