
    Effects of homeostatic constraints on associative memory storage and synaptic connectivity of cortical circuits

    The impact of learning and long-term memory storage on synaptic connectivity is not completely understood. In this study, we examine the effects of associative learning on synaptic connectivity in adult cortical circuits by hypothesizing that these circuits function in a steady state, in which the memory capacity of a circuit is maximal and learning must be accompanied by forgetting. Steady-state circuits should be characterized by unique connectivity features. To uncover such features we developed a biologically constrained, exactly solvable model of associative memory storage. The model is applicable to networks of multiple excitatory and inhibitory neuron classes and can account for homeostatic constraints on the number and the overall weight of functional connections received by each neuron. The results show that, in spite of a large number of neuron classes, functional connections between potentially connected cells are realized with less than 50% probability if the presynaptic cell is excitatory, and with a generally much greater probability if it is inhibitory. We also find that constraining the overall weight of presynaptic connections leads to Gaussian connection weight distributions that are truncated at zero. In contrast, constraining the total number of functional presynaptic connections leads to non-Gaussian distributions, in which weak connections are absent. These theoretical predictions are compared with a large dataset of published experimental studies reporting amplitudes of unitary postsynaptic potentials and probabilities of connections between various classes of excitatory and inhibitory neurons in the cerebellum, neocortex, and hippocampus.
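
    The predicted weight distributions lend themselves to a quick numerical illustration. The following is a minimal sketch, not the authors' exactly solvable model: a single perceptron-like neuron stores random associations under a sign constraint (excitatory weights stay non-negative) and a homeostatic cap on total presynaptic weight; all parameter values are arbitrary assumptions.

        # Minimal sketch (not the paper's model): associative storage under a
        # sign constraint and a homeostatic cap on total presynaptic weight.
        import numpy as np

        rng = np.random.default_rng(0)
        N, P = 200, 100                    # presynaptic inputs, stored associations
        X = rng.binomial(1, 0.5, (P, N))   # presynaptic activity patterns
        y = rng.choice([-1, 1], P)         # desired above/below-threshold responses
        w = np.zeros(N)                    # excitatory weights, constrained to w >= 0
        theta = 0.25 * N                   # firing threshold (arbitrary)
        W_TOTAL = 50.0                     # homeostatic cap on summed weight

        for _ in range(20000):             # perceptron-style learning with forgetting
            mu = rng.integers(P)
            if y[mu] * (X[mu] @ w - theta) <= 0:
                w += 0.05 * y[mu] * X[mu]
            w = np.clip(w, 0.0, None)      # excitatory synapses cannot turn negative
            if w.sum() > W_TOTAL:          # enforce the total-weight constraint
                w *= W_TOTAL / w.sum()

        # The fraction of realized (nonzero) connections and the histogram of the
        # nonzero weights can then be compared with the predicted truncated-Gaussian form.
        print("connection probability:", (w > 1e-9).mean())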

    Memory and information processing in neuromorphic systems

    A striking difference between brain-inspired neuromorphic processors and current von Neumann processor architectures is the way in which memory and processing are organized. As Information and Communication Technologies continue to address the need for increased computational power by increasing the number of cores within a digital processor, neuromorphic engineers and scientists can complement this approach by building processor architectures in which memory is distributed with the processing. In this paper we present a survey of brain-inspired processor architectures that support models of cortical networks and deep neural networks. These architectures range from serial clocked implementations of multi-neuron systems to massively parallel asynchronous ones, and from purely digital systems to mixed analog/digital systems that implement more biologically realistic models of neurons and synapses together with a suite of adaptation and learning mechanisms analogous to the ones found in biological nervous systems. We describe the advantages of the different approaches being pursued and present the challenges that need to be addressed for building artificial neural processing systems that can display the richness of behaviors seen in biological systems. (Comment: submitted to Proceedings of the IEEE; a review of recently proposed neuromorphic computing platforms and systems.)
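
    The architectural contrast the survey draws can be caricatured in a few lines. Below is an illustrative sketch only, with all names and parameters invented: a toy "neuron core" whose synaptic weights live next to its state update, so memory and processing are co-located rather than separated across a shared memory bus as in a von Neumann machine.

        # Illustrative sketch: co-located memory and processing in a toy neuron core.
        import numpy as np

        class NeuronCore:
            """Leaky integrate-and-fire unit that owns its own synaptic memory."""
            def __init__(self, n_inputs, tau=20.0, v_th=1.0, seed=0):
                rng = np.random.default_rng(seed)
                self.w = rng.normal(0.1, 0.02, n_inputs)  # local synaptic memory
                self.v = 0.0                              # local membrane state
                self.tau, self.v_th = tau, v_th

            def step(self, spikes, dt=1.0):
                # the update touches only locally stored data
                self.v += dt * (-self.v / self.tau) + self.w @ spikes
                if self.v >= self.v_th:
                    self.v = 0.0
                    return 1
                return 0

        rng = np.random.default_rng(1)
        core = NeuronCore(n_inputs=64)
        n_out = sum(core.step(rng.binomial(1, 0.05, 64)) for _ in range(1000))
        print("output spikes:", n_out)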

    Structural Plasticity and Associative Memory in Balanced Neural Networks With Spike-Time Dependent Inhibitory Plasticity

    Several homeostatic mechanisms enable the brain to maintain desired levels of neuronal activity. One of these, homeostatic structural plasticity, has been reported to restore activity in networks disrupted by peripheral lesions by altering their neuronal connectivity. While multiple lesion experiments have studied the changes in neurite morphology that underlie modifications of synapses in these networks, the underlying mechanisms that drive these changes and the effects of the altered connectivity on network function are yet to be explained. Experimental evidence suggests that neuronal activity modulates neurite morphology and may stimulate neurites to selectively sprout or retract to restore network activity levels. In this study, a new spiking network model was developed to investigate these activity-dependent growth regimes of neurites. Simulations of the model accurately reproduce network rewiring after peripheral lesions as reported in experiments. To ensure that these simulations closely resembled the behaviour of networks in the brain, a biologically realistic network model exhibiting the low-frequency Asynchronous Irregular (AI) activity observed in the cerebral cortex was deafferented. Furthermore, to study the functional effects of peripheral lesioning and subsequent network repair by homeostatic structural plasticity, associative memories were stored in the network, and their recall performance before deafferentation was compared with performance during the repair process. The simulation results indicate that re-establishing activity in neurons both within and outside the deprived region, the Lesion Projection Zone (LPZ), requires opposite activity-dependent growth rules for excitatory and inhibitory post-synaptic elements. Analysis of these growth regimes indicates that they also contribute to the maintenance of activity levels in individual neurons. In this model, the directional formation of synapses observed in experiments requires that pre-synaptic excitatory and inhibitory elements also follow opposite growth rules. Furthermore, both the proposed model of homeostatic structural plasticity and the inhibitory synaptic plasticity mechanism that balances the AI network were found to be necessary for successful rewiring. Next, even though average activity was restored to deprived neurons, these neurons did not retain their AI firing characteristics after repair. Finally, the recall performance of associative memories, which deteriorated after deafferentation, was not restored after network reorganisation.
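
    The opposite growth rules reported above can be written down compactly. The sketch below is a minimal caricature in the spirit of activity-dependent structural-plasticity models (e.g., Butz and van Ooyen), not the study's exact equations; all numbers are assumptions.

        # Minimal sketch of opposite activity-dependent growth rules for
        # synaptic elements (not the study's exact model).
        def grow(n_elements, activity, set_point=1.0, nu=0.1, sign=+1):
            """Update a neuron's count of free synaptic elements.

            sign=+1: elements sprout when activity is BELOW the set point
                     (used here for excitatory post-synaptic elements);
            sign=-1: the opposite rule (here for inhibitory elements), as the
                     abstract reports opposite rules are required for repair.
            """
            dz = nu * sign * (set_point - activity)
            return max(0.0, n_elements + dz)   # element counts stay non-negative

        # After deafferentation, neurons in the LPZ have low activity:
        print(grow(5.0, activity=0.2, sign=+1))   # 5.08 -> excitatory elements sprout
        print(grow(5.0, activity=0.2, sign=-1))   # 4.92 -> inhibitory elements retract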

    Experience-driven formation of parts-based representations in a model of layered visual memory

    Growing neuropsychological and neurophysiological evidence suggests that the visual cortex uses parts-based representations to encode, store and retrieve relevant objects. In such a scheme, objects are represented as a set of spatially distributed local features, or parts, arranged in stereotypical fashion. To encode the local appearance and to represent the relations between the constituent parts, there has to be an appropriate memory structure formed by previous experience with visual objects. Here, we propose a model of how a hierarchical memory structure supporting efficient storage and rapid recall of parts-based representations can be established by an experience-driven process of self-organization. The process is based on the collaboration of slow bidirectional synaptic plasticity and homeostatic unit activity regulation, both running on top of fast activity dynamics with winner-take-all character modulated by an oscillatory rhythm. These neural mechanisms lay the basis for cooperation and competition between the distributed units and their synaptic connections. Choosing human face recognition as a test task, we show that, under the condition of open-ended, unsupervised incremental learning, the system is able to form memory traces for individual faces in a parts-based fashion. On a lower memory layer the synaptic structure is developed to represent local facial features and their interrelations, while the identities of different persons are captured explicitly on a higher layer. An additional property of the resulting representations is the sparseness of both the activity during recall and the synaptic patterns comprising the memory traces. (Comment: 34 pages, 12 figures, 1 table; published in Frontiers in Computational Neuroscience, Special Issue on Complex Systems Science and Brain Dynamics, http://www.frontiersin.org/neuroscience/computationalneuroscience/paper/10.3389/neuro.10/015.2009)
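
    The interplay of fast winner-take-all dynamics, slow plasticity, and homeostatic unit regulation that the model relies on can be illustrated with a toy competitive layer. This is a minimal sketch under invented parameters, not the paper's two-layer architecture.

        # Toy competitive layer: fast WTA decision, slow Hebbian-like learning,
        # homeostatic regulation of per-unit excitability. Illustrative only.
        import numpy as np

        rng = np.random.default_rng(1)
        n_in, n_units = 50, 10
        W = rng.random((n_units, n_in))
        W /= W.sum(axis=1, keepdims=True)     # normalized afferent weights
        bias = np.zeros(n_units)              # homeostatic excitability term
        target = 1.0 / n_units                # desired long-run win rate

        for _ in range(5000):
            x = rng.random(n_in)
            k = np.argmax(W @ x + bias)       # fast winner-take-all decision
            W[k] += 0.01 * (x - W[k])         # slow Hebbian-like weight update
            wins = np.zeros(n_units)
            wins[k] = 1.0
            bias += 0.02 * (target - wins)    # homeostasis: frequent winners are
                                              # handicapped, rare winners helped

        print("row sums after learning:", W.sum(axis=1).round(2))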

    A History of Spike-Timing-Dependent Plasticity

    How learning and memory are achieved in the brain is a central question in neuroscience. Key to today's research into information storage in the brain is the concept of synaptic plasticity, a notion that has been heavily influenced by Hebb's (1949) postulate. Hebb conjectured that repeatedly and persistently co-active cells should increase connective strength among populations of interconnected neurons as a means of storing a memory trace, also known as an engram. Hebb certainly was not the first to make such a conjecture, as we show in this history. Nevertheless, literally thousands of studies into the classical frequency-dependent paradigm of cellular learning rules were directly inspired by the Hebbian postulate. But in more recent years, a novel concept in cellular learning has emerged in which temporal order instead of frequency is emphasized. This new learning paradigm, known as spike-timing-dependent plasticity (STDP), has rapidly gained tremendous interest, perhaps because of its combination of elegant simplicity, biological plausibility, and computational power. But what are the roots of today's STDP concept? Here, we discuss several centuries of diverse thinking, beginning with philosophers such as Aristotle, Locke, and Ribot, traversing, e.g., Lugaro's plasticitĂ  and Rosenblatt's perceptron, and culminating with the discovery of STDP. We highlight interactions between theoretical and experimental fields, showing how discoveries sometimes occurred in parallel, seemingly without much knowledge of the other field, and sometimes via concrete back-and-forth communication. We point out where future directions may lie, including interneuron STDP, the functional impact of STDP, its mechanisms and neuromodulatory regulation, and the linking of STDP to the developmental formation and continuous plasticity of neuronal networks.
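
    For reference, the canonical pair-based STDP window that the modern paradigm centers on: the sign of the weight change follows the temporal order of pre- and postsynaptic spikes, and its magnitude decays exponentially with their separation. The parameter values below are illustrative, in the spirit of standard formulations (e.g., Song, Miller and Abbott, 2000), not drawn from this paper.

        # Canonical pair-based STDP window (illustrative parameters).
        import numpy as np

        def stdp(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
            """Weight change for spike-time difference dt = t_post - t_pre (ms)."""
            if dt > 0:                                 # pre before post: LTP
                return a_plus * np.exp(-dt / tau_plus)
            return -a_minus * np.exp(dt / tau_minus)   # post before pre: LTD

        print(stdp(10.0), stdp(-10.0))   # potentiation vs. depression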

    Regulation of circuit organization and function through inhibitory synaptic plasticity

    Diverse inhibitory neurons in the mammalian brain shape circuit connectivity and dynamics through mechanisms of synaptic plasticity. Inhibitory plasticity can establish excitation/inhibition (E/I) balance, control neuronal firing, and affect local calcium concentration, hence regulating neuronal activity at the network, single-neuron, and dendritic levels. Computational models can synthesize multiple experimental results and provide insight into how inhibitory plasticity controls circuit dynamics and sculpts connectivity by identifying phenomenological learning rules amenable to mathematical analysis. We highlight recent studies on the role of inhibitory plasticity in modulating excitatory plasticity, forming structured networks underlying memory formation and recall, and implementing adaptive phenomena and novelty detection. We conclude with experimental and modeling progress on the role of interneuron-specific plasticity in circuit computation and context-dependent learning.
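
    One widely analyzed rule of this kind is the inhibitory plasticity rule of Vogels et al. (2011), in which an inhibitory weight grows with pre/post coincidence and decays with presynaptic activity alone, driving the postsynaptic rate toward a target. Below is a minimal rate-based sketch with an invented toy neuron; it is not tied to any specific study in this review.

        # Rate-based sketch of the Vogels et al. (2011) inhibitory rule:
        # dw = eta * pre * (post - rho0); fixed point at post = rho0.
        def inhibitory_update(w, pre_rate, post_rate, rho0=5.0, eta=1e-3):
            return w + eta * pre_rate * (post_rate - rho0)

        w = 1.0
        for _ in range(200):
            post = 20.0 - 0.5 * w * 10.0       # toy neuron: inhibition cuts its rate
            w = inhibitory_update(w, 10.0, post)
        print(w, 20.0 - 0.5 * w * 10.0)        # rate settles near rho0 = 5.0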

    Two computational neural models : rodent perirhinal cortex and crab cardiac ganglion

    Neural engineering research has grown rapidly in prominence over the past two decades, with 'reverse engineer the brain' listed as one of the 14 grand challenges outlined by the National Academy of Engineering. The computational aspect of reverse engineering includes a study of how both single neurons and networks of neurons integrate diverse signals from the environment and from within the animal and make complex decisions. Since there are many limitations on the experiments that can be performed in live or isolated biological systems, there is a need for standalone computational models that can support 'in silico' experiments. This dissertation focuses on such 'in silico' neuronal models to predict the underlying mechanisms governing interactions and robustness. The first model investigated is of rodent perirhinal cortex area 36 (PRC) and its role in associative memory formation. A large-scale, 520-cell biophysical model of the PRC was developed using biological data from the literature. We then used the model to shed light on the mechanisms that support associative memory in the perirhinal network. These analyses revealed that perirhinal associative plasticity is critically dependent on a specific subset of neurons, termed conjunctive cells. When the model network was trained with spatially distributed but coincident neocortical inputs, these conjunctive cells acquired excitatory responses to the paired neocortical inputs and conveyed them to widely distributed perirhinal sites via longitudinal projections. Ablation of conjunctive cells during recall abolished expression of the associative memory. The second model is of the crab cardiac system, consisting of five Large Cells (LCs), developed using firsthand biological data. The model is used to study features of the membrane potential oscillation underlying a rhythm and to reverse engineer an experimentally discovered phenomenon related to network synchrony. The model predicted multiple mechanisms of compensation that restore network synchrony based on compensatory intrinsic conductances. Finally, a third model, related to the second, is an improved three-compartment biophysical model of an LC that is morphologically realistic and includes provision for inputs from the small cells (SCs). To determine viable LC models, maximal conductances in the three compartments of an LC are determined by random sampling from a biologically characterized 9D parameter space, followed by a three-stage rejection protocol that checks for conformity with features in experimental single-cell traces. Random LC models that pass the single-cell rejection protocol are then incorporated into a network model, followed by a final rejection-protocol stage. Using disparate experimental data, the study provides hitherto unknown structure-function insights into the crustacean cardiac ganglion large cell, including the differential roles of active conductances in the three compartments. The novel morphological architecture for the large cell was validated using biological data and used to make predictions about function. One testable prediction is that an active conductance, specifically the persistent sodium current, is required in the neurite to transmit spike waveforms from the spike initiation zone to the soma. Another pertains to the co-variation of the maximal conductance of the persistent sodium current with that of the leak current.
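
    The sampling-plus-rejection workflow for the LC models is straightforward to outline. The sketch below is a stand-in with hypothetical bounds, feature names, and a stubbed simulate() function; the actual protocol has three single-cell stages plus a network stage, collapsed here to two illustrative checks.

        # Sketch of the random-sampling + rejection workflow (placeholders only).
        import numpy as np

        rng = np.random.default_rng(42)
        N_SAMPLES, N_PARAMS = 10000, 9    # draws from the 9D conductance space
        lo = np.zeros(N_PARAMS)           # biologically characterized lower bounds
        hi = np.ones(N_PARAMS)            # ...and upper bounds (placeholders here)

        def simulate(g):
            """Stand-in for the single-cell simulation; returns trace features."""
            return {"spike_amp": 40.0 * g[0], "burst_dur": 300.0 * g[1]}

        def passes(feats):
            # acceptance windows against experimental features (invented values)
            return (10.0 <= feats["spike_amp"] <= 35.0 and
                    50.0 <= feats["burst_dur"] <= 250.0)

        viable = [g for g in rng.uniform(lo, hi, (N_SAMPLES, N_PARAMS))
                  if passes(simulate(g))]  # survivors of the single-cell protocol
                                           # would next enter the network-stage check
        print(f"{len(viable)} viable of {N_SAMPLES} sampled models")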

    What is memory? The present state of the engram

    The mechanism of memory remains one of the great unsolved problems of biology. Grappling with the question more than a hundred years ago, the German zoologist Richard Semon formulated the concept of the engram: lasting connections in the brain that result from simultaneous "excitations", whose precise physical nature and consequences were out of reach of the biology of his day. Neuroscientists now have the knowledge and tools to tackle this question, however, and this Forum brings together leading contemporary views on the mechanisms of memory and what the engram means today.

    On the Self-Organization of a Hierarchical Memory Structure for Compositional Object Representation in the Visual Cortex

    At present, there is a huge lag between artificial and biological information processing systems in terms of their capability to learn. This lag could certainly be reduced by gaining more insight into the higher functions of the brain like learning and memory. For instance, the primate visual cortex is thought to provide the long-term memory for visual objects acquired by experience. The visual cortex effortlessly handles arbitrarily complex objects by rapidly decomposing them into constituent components of much lower complexity along hierarchically organized visual pathways. How this processing architecture self-organizes into a memory domain that employs such compositional object representation by learning from experience remains largely a riddle. The study presented here approaches this question by proposing a functional model of a self-organizing hierarchical memory network. The model is based on hypothetical neuronal mechanisms involved in cortical processing and adaptation. The network architecture comprises two consecutive layers of distributed, recurrently interconnected modules. Each module is identified with a localized cortical cluster of fine-scale excitatory subnetworks. A single module performs competitive unsupervised learning on the incoming afferent signals to form a suitable representation of the locally accessible input space. The network employs an operating scheme in which ongoing processing is divided into discrete successive fragments, termed decision cycles, presumably identifiable with the fast gamma rhythms observed in the cortex. The cycles are synchronized across the distributed modules, which produce highly sparse activity within each cycle by instantiating a local winner-take-all-like operation whose competition strength grows over the course of the cycle. Equipped with adaptive mechanisms of bidirectional synaptic plasticity and homeostatic activity regulation, the network is exposed to natural face images of different persons. The images are presented incrementally, one per cycle, to the lower network layer as a set of Gabor filter responses extracted from local facial landmarks, without any person identity labels. In the course of unsupervised learning, the network simultaneously creates vocabularies of reusable local face appearance elements, captures relations between the elements by associatively linking those parts that encode the same face identity, develops higher-order identity symbols for the memorized compositions, and projects this information back onto the vocabularies in a generative manner. This learning corresponds to the simultaneous formation of bottom-up, lateral and top-down synaptic connectivity within and between the network layers. In the mature connectivity state, the network thus holds a full compositional description of the experienced faces in the form of sparse memory traces residing in the feed-forward and recurrent connectivity. Due to the generative nature of the established representation, the network is able to recreate the full compositional description of a memorized face in terms of all its constituent parts given only its higher-order identity symbol or a subset of its parts. In the test phase, the network successfully proves its ability to recognize the identity and gender of persons from alternative face views not shown before. An intriguing feature of the emerging memory network is its ability to self-generate activity spontaneously in the absence of external stimuli.
    In this sleep-like off-line mode, the network shows a self-sustaining replay of the memory content formed during the previous learning. Remarkably, recognition performance is tremendously boosted after this off-line memory reprocessing. The boost is most pronounced for those face views that deviate strongly from the original view shown during learning, indicating that the off-line memory reprocessing during the sleep-like state specifically improves the generalization capability of the memory network. The positive effect turns out to be surprisingly independent of synapse-specific plasticity, relying completely on the synapse-unspecific, homeostatic regulation of activity across the memory network. The developed network thus demonstrates functionality not shown by any previous neuronal modeling approach. It forms and maintains a memory domain for compositional, generative object representation in an unsupervised manner through experience with natural visual images, using both on-line ("wake") and off-line ("sleep") learning regimes. This functionality offers a promising departure point for further studies aiming for deeper insight into the learning mechanisms employed by the brain and their consequent implementation in artificial adaptive systems for solving complex tasks not tractable so far.
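
    The decision-cycle mechanism, competition that sharpens over the course of each cycle until activity is sparse, can be illustrated in isolation. This is a minimal sketch with invented parameters, not the thesis' two-layer network.

        # One "decision cycle": competition strength grows within the cycle,
        # turning a graded response into a sparse, near-winner-take-all code.
        import numpy as np

        def decision_cycle(h, steps=10, beta=0.5, growth=1.6):
            a = None
            for _ in range(steps):                 # within-cycle dynamics
                e = np.exp(beta * (h - h.max()))
                a = e / e.sum()                    # softmax-style competition
                beta *= growth                     # competition sharpens over time
            return a

        rng = np.random.default_rng(7)
        a = decision_cycle(rng.random(20))         # afferent drive to 20 units
        print("units carrying >1% of activity:", int((a > 0.01).sum()))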
    • 

    corecore