
    Working memory dynamics and spontaneous activity in a flip-flop oscillations network model with a Milnor attractor

    Many cognitive tasks require the ability to maintain and manipulate several chunks of information simultaneously. Numerous neurobiological observations have linked this ability, known as working memory, to both a slow oscillation (producing the up and down states) and the presence of the theta rhythm. Furthermore, during the resting state, the spontaneous activity of the cortex exhibits exquisite spatiotemporal patterns that share features with those observed during specific memory tasks. Here, to illuminate the neural implications of working memory under these complex dynamics, we propose a phenomenological network model with biologically plausible neural dynamics and recurrent connections. Each unit embeds an internal oscillation at the theta rhythm, which can be triggered during the up state of the membrane potential. As a result, the resting state of a single unit is no longer a classical fixed-point attractor but a Milnor attractor, and multiple oscillations appear in the dynamics of the coupled system. In conclusion, the interplay between the up and down states and the theta rhythm offers high potential for working memory operation associated with complexity in spontaneous activity.
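    As a rough illustration of the unit dynamics the abstract describes, the following sketch gates a theta-band oscillation by the up state of a slow bistable variable. All parameters and the noise-driven double-well dynamics are illustrative assumptions, not the authors' equations.

```python
import numpy as np

# Illustrative assumption: a double-well "slow" variable stands in for the
# up/down membrane dynamics; theta is only expressed while the unit is up.
def simulate_unit(T=2000, dt=1.0, theta_freq=6.0, seed=0):
    """Return (slow up/down variable, observed potential) over T steps (ms)."""
    rng = np.random.default_rng(seed)
    slow = np.zeros(T)
    v = np.zeros(T)
    state = 0.0
    for t in range(1, T):
        # Noisy double-well dynamics: minima near 0 (down) and 1 (up).
        drift = 4.0 * state * (1.0 - state) * (state - 0.5)
        state += dt * drift + 0.05 * np.sqrt(dt) * rng.standard_normal()
        state = float(np.clip(state, 0.0, 1.0))
        slow[t] = state
        # Theta-band oscillation, expressed only during the up state.
        theta = np.sin(2 * np.pi * theta_freq * t * dt / 1000.0)
        v[t] = state + 0.3 * theta * (state > 0.5)
    return slow, v

slow, v = simulate_unit()
```

    Below the up-state threshold the trace is theta-free; coupling many such units is what would produce the multiple oscillations mentioned above.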

    Variable binding by synaptic strength change

    Variable binding is a difficult problem for neural networks. Two new mechanisms for binding by synaptic change are presented; in both, bindings can be erased and reused. The first is based on the commonly used learning mechanism of permanent change of synaptic weight, and the second on synaptic change that decays. Both are biologically motivated models. Simulations of binding on a paired-association task are shown, with the first mechanism succeeding with a 97.5% F-score and the second performing perfectly. Further simulations show that binding by decaying synaptic change copes with cross-talk and can be used for compositional semantics. It can be inferred that binding by permanent change also accounts for these, but it faces the stability-plasticity dilemma. Two other existing binding mechanisms, synchrony and active links, are compatible with these new mechanisms. All four mechanisms are compared and integrated in a Cell Assembly theory.
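    A minimal sketch of the second mechanism, binding by decaying synaptic change. The class, parameters, and threshold are hypothetical illustrations, not the paper's implementation.

```python
# Hypothetical sketch: a binding is a transient increase of a fast synaptic
# weight between two units; without refresh it decays, erasing the binding
# so the synapse can be reused for a new one.
class DecayingBinder:
    def __init__(self, n, decay=0.9, bind_strength=1.0, threshold=0.5):
        self.w = [[0.0] * n for _ in range(n)]  # fast-weight matrix
        self.decay = decay
        self.bind_strength = bind_strength
        self.threshold = threshold

    def bind(self, i, j):
        """Transiently strengthen the synapse between units i and j."""
        self.w[i][j] = self.w[j][i] = self.bind_strength

    def step(self):
        """One time step: every fast weight decays toward zero."""
        for row in self.w:
            for k in range(len(row)):
                row[k] *= self.decay

    def bound(self, i, j):
        return self.w[i][j] > self.threshold

binder = DecayingBinder(4)
binder.bind(0, 2)            # bind variable 0 to filler 2
for _ in range(10):
    binder.step()            # 0.9**10 = 0.35 < threshold: binding has faded
```

    Permanent-weight binding would correspond to `decay = 1.0`, which is exactly where the stability-plasticity dilemma noted in the abstract arises.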

    A neurobiologically constrained cortex model of semantic grounding with spiking neurons and brain-like connectivity

    One of the most controversial debates in cognitive neuroscience concerns the cortical locus of semantic knowledge and processing in the human brain. Experimental data have revealed the existence of various cortical regions that become differentially active during meaning processing, ranging from semantic hubs (which bind different types of meaning together) to modality-specific sensorimotor areas involved in specific conceptual categories. Why and how the brain uses such a complex organization for conceptualization can be investigated using biologically constrained neurocomputational models. Here, we apply a spiking neuron model mimicking the structure and connectivity of frontal, temporal and occipital areas to simulate semantic learning and symbol grounding in action and perception. As a result of Hebbian learning of the correlation structure of symbol, perception and action information, distributed cell assembly circuits emerged across various cortices of the network. These semantic circuits showed category-specific topographical distributions, reaching into motor and visual areas for action- and visually-related words, respectively. All types of semantic circuits included large numbers of neurons in multimodal connector hub areas, which is explained by the cortical connectivity structure and the resultant convergence of phonological and semantic information on these zones. Importantly, these semantic hub areas exhibited some category specificity, which was less pronounced than that observed in primary and secondary modality-preferential cortices. The present neurocomputational model integrates seemingly divergent experimental results about conceptualization and explains both semantic hubs and category-specific areas as an emergent process causally determined by two major factors: neuroanatomical connectivity structure and correlated neuronal activation during language learning.
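    The core Hebbian mechanism invoked above, correlation learning pulling co-active hub units into a circuit, can be sketched in a few lines. The binary patterns, learning rate, and threshold are illustrative assumptions, not the paper's spiking model.

```python
import numpy as np

# Illustrative binary patterns: 6 active phonological units and 3 hub units
# that are co-active with them during learning episodes.
n_word, n_hub = 20, 10
word = np.zeros(n_word)
word[:6] = 1.0
hub = np.zeros(n_hub)
hub[:3] = 1.0

W = np.zeros((n_hub, n_word))       # word-to-hub synapses
eta = 0.1
for _ in range(50):                 # repeated co-activation episodes
    W += eta * np.outer(hub, word)  # basic Hebbian correlation rule

# After learning, the word pattern alone reactivates exactly the hub units
# that were co-active with it, i.e. they have joined the cell assembly.
recall = (W @ word > 0.5).astype(float)
```

    Running the same rule with additional action or perception patterns would, by the same logic, spread each assembly into the corresponding modality-specific areas.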

    A Neurobiologically Constrained Model

    Understanding the meaning of words and their relationship with the outside world involves higher cognitive processes unique to the human brain. Despite many decades of research on the neural substrates of semantic processing, a consensus about the functions and components of the semantic system has not been reached among cognitive neuroscientists. This issue is mainly shaped by two sets of neurocognitive empirical findings that have shown (i) the existence of several regions acting as ‘semantic hubs’, where the meaning of all types of words is processed, and (ii) the presence of other cortical regions specialised for the processing of specific semantic word categories, such as animals, tools, or actions. Further evidence on semantic meaning processing comes from neuroimaging and transcranial magnetic stimulation studies in visually deprived populations that acquire semantic knowledge through non-visual modalities. These studies have documented massive neural changes in the visual system, which is in turn recruited for linguistic and semantic processing. On this basis, this dissertation investigates the neurobiological mechanism that enables humans to acquire, store and process linguistic meaning by means of a neurobiologically constrained neural network, and offers answers to the following hotly debated questions: Why are both semantic hubs and modality-specific regions involved in semantic meaning processing in the brain? Which biological principles are critical for the emergence of semantics at the microstructural neural level, and how is the semantic system implemented under deprived conditions, in particular in congenitally blind people? First, a neural network model closely replicating the anatomical and physiological features of the human cortex was designed.
At the micro level, the network was composed of 15,000 artificial neurons; at the large-scale level, there were 12 areas representing the frontal, temporal, and occipital lobes relevant for linguistic and semantic processing. The connectivity structure linking the different cortical areas was based purely on neuroanatomical evidence. Two models were used, each simulating the same set of cortical regions but at a different level of detail: one adopted a simple connectivity structure with a mean-field approach (i.e. graded-response neurons), and the other used a fully connected model with adaptation-based spiking cells. Second, the networks were used to simulate the process of learning semantic relationships between word-forms, specific object perceptions, and motor movements of one's own body under deprived and undeprived visual conditions. As a result of Hebbian correlation learning, distributed word-related cell assembly circuits spontaneously emerged across the different cortical semantic areas, exhibiting different topographical distributions. Third, the network was reactivated with the learned auditory patterns (simulating word recognition processes) to investigate the temporal dynamics of cortical semantic activation and compare them with real brain responses. In summary, the findings of the present work demonstrate that meaningful linguistic units are represented in the brain in the form of cell assemblies that are distributed over both semantic hubs and category-specific regions and that emerged spontaneously through the mutual interaction of a single set of biological mechanisms acting within specific neuroanatomical structures. Acting together, these biological principles also offer an explanation of the mechanisms underlying the massive neural changes in the visual cortex for language processing caused by blindness.
The present work is a first step towards better understanding the building blocks of language and semantic processing in sighted and blind populations by translating the biological principles that govern human cognition into precise mathematical neural networks of the human brain.

    Memory capacity in the hippocampus

    Neural assemblies in hippocampus encode positions. During rest, the hippocampus replays sequences of neural activity seen during awake behavior. This replay is linked to memory consolidation and mental exploration of the environment. Recurrent networks can be used to model the replay of sequential activity. Multiple sequences can be stored in the synaptic connections. To achieve a high memory capacity, recurrent networks require a pattern separation mechanism. Such a mechanism is global remapping, observed in place cell populations. A place cell fires at a particular position of an environment and is silent elsewhere. Multiple place cells usually cover an environment with their firing fields. Small changes in the environment or context of a behavioral task can cause global remapping, i.e. profound changes in place cell firing fields. Global remapping causes some cells to cease firing, other silent cells to gain a place field, and other place cells to move their firing field and change their peak firing rate. The effect is strong enough to make global remapping a viable pattern separation mechanism. We model two mechanisms that improve the memory capacity of recurrent networks. The effect of inhibition on replay in a recurrent network is modeled using binary neurons and binary synapses. A mean-field approximation is used to determine the optimal parameters for the inhibitory neuron population. Numerical simulations of the full model were carried out to verify the predictions of the mean-field model. A second model analyzes a hypothesized global remapping mechanism, in which grid cell firing is used as feed-forward input to place cells. Grid cells have multiple firing fields in the same environment, arranged in a hexagonal grid. Grid cells can be used in a model as feed-forward inputs to place cells to produce place fields. In these grid-to-place cell models, shifts in the grid cell firing patterns cause remapping in the place cell population.
We analyze the capacity of such a system to create sets of separated patterns, i.e. how many different spatial codes can be generated. The limiting factor is the set of synapses connecting grid cells to place cells. To assess their capacity, we produce different place codes in place and grid cell populations by shuffling place field positions and shifting the grid fields of grid cells. Then we use Hebbian learning to increase the synaptic weights between grid and place cells for each pair of grid and place codes. The capacity limit is reached when synaptic interference makes it impossible to produce a place code with sufficient spatial acuity from grid cell firing. Additionally, it is desirable to keep the place fields compact, or sparse from a coding standpoint. Of course, as more environments are stored, the sparseness is lost. Interestingly, place cells lose the sparseness of their firing fields much earlier than their spatial acuity. For the sequence replay model we are able to increase capacity in a simulated recurrent network by including an inhibitory population. We show that even in this more complicated case, capacity is improved. We observe oscillations in the average activity of both excitatory and inhibitory neuron populations. The oscillations get stronger at the capacity limit. In addition, at the capacity limit, rather than observing a sudden failure of replay, we find that sequences are replayed transiently for a couple of time steps before failing. Analyzing the remapping model, we find that, as we store more spatial codes in the synapses, first the sparseness of place fields is lost; only later do we observe a decay in the spatial acuity of the code. We found two ways to maintain sparse place fields while achieving a high capacity: inhibition between place cells, and partitioning the place cell population so that learning affects only a small fraction of the cells in each environment.
We present scaling predictions that suggest that hundreds of thousands of spatial codes can be produced by this pattern separation mechanism. The effect of inhibition on the replay model is twofold: capacity is increased, and the graceful transition from full replay to failure allows for higher capacities when short sequences are used. Additional mechanisms not explored in this model could be at work to concatenate these short sequences, or could perform more complex operations on them. The interplay of excitatory and inhibitory populations gives rise to oscillations, which are strongest at the capacity limit. These oscillations illustrate how a memory mechanism can give rise to hippocampal oscillations like those observed in experiments. In the remapping model we showed that the sparseness of place cell firing constrains the capacity of this pattern separation mechanism. Grid codes outperform place codes regarding spatial acuity, as shown in Mathis et al. (2012). Our model shows that the grid-to-place transformation does not harness the full spatial information of the grid code, in order to maintain sparse place fields. This suggests that the two codes are independent, and that communication between the areas might serve mostly for synchronization. High spatial acuity seems to be a specialization of the grid code, while the place code is more suitable for memory tasks. In summary, in a detailed model of hippocampal replay we show that feedback inhibition can increase the number of sequences that can be replayed. The effect of inhibition on capacity is determined using a mean-field model, and the results are verified with numerical simulations of the full network. Transient replay is found at the capacity limit, accompanied by oscillations that resemble sharp wave ripples in the hippocampus. Hippocampal replay of neuronal activity is linked to memory consolidation and mental exploration; furthermore, replay is a potential neural correlate of episodic memory.
To model hippocampal sequence replay, recurrent neural networks are used. The memory capacity of such networks is of great interest for determining their biological feasibility, and any mechanism that improves capacity has explanatory power. We investigate two such mechanisms. The first mechanism to improve capacity is global, unspecific feedback inhibition in the recurrent network. In a simplified mean-field model we show that capacity is indeed improved. The second mechanism that increases memory capacity is pattern separation. In the spatial context of hippocampal place cell firing, global remapping is one way to achieve pattern separation. Changes in the environment or context of a task cause global remapping. During global remapping, place cell firing changes in unpredictable ways: cells shift their place fields or fully cease firing, and formerly silent cells acquire place fields. Global remapping can be triggered by subtle changes in the grid cells that provide feed-forward input to hippocampal place cells. We investigate the capacity of the underlying synaptic connections, defined as the number of different environments that can be represented at a given spatial acuity. We find two essential conditions for achieving a high capacity and sparse place fields: inhibition between place cells, and partitioning the place cell population so that learning affects only a small fraction of the cells in each environment. We also find that the sparsity of place fields, rather than spatial acuity, is the constraining factor of the model. Since the hippocampal place code is sparse, we conclude that the hippocampus does not fully harness the spatial information available in the grid code. The two codes of space might thus serve different purposes.
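    The grid-to-place transformation and its remapping behaviour can be sketched in one dimension. This is a simplified, assumed setup (rectified-cosine grid cells with evenly spaced phases and one-shot Hebbian weights), not the thesis model.

```python
import numpy as np

def grid_rates(x, n_grids=8, period=40.0, shift=0.0):
    """Rectified-cosine firing of n_grids 1-D grid cells with spaced phases."""
    phases = np.arange(n_grids) * period / n_grids
    return np.maximum(0.0, np.cos(2 * np.pi * (x - shift - phases) / period))

# One-shot Hebbian weights: potentiate the grid inputs active at x0 = 10.
x0 = 10.0
w = grid_rates(x0)

xs = np.arange(0.0, 40.0, 1.0)
rates = np.array([w @ grid_rates(x) for x in xs])
peak = xs[np.argmax(rates)]        # the place field sits at x0

# A coherent shift of all grid phases remaps the place field by the same
# amount, without touching the grid-to-place synapses.
shifted = np.array([w @ grid_rates(x, shift=5.0) for x in xs])
new_peak = xs[np.argmax(shifted)]
```

    Interference between many such weight vectors, learned for shuffled place codes, is what eventually limits the capacity analysed above.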

    Learning in clustered spiking networks

    Neurons spike on a millisecond time scale, while behaviour typically spans hundreds of milliseconds to seconds and longer. Neurons have to bridge this time gap when computing and learning behaviours of interest. Recent computational work has shown that neural circuits can bridge this time gap when connected in specific ways. Moreover, the connectivity patterns can develop through plasticity rules typically considered biologically plausible. In this thesis, we focus on one type of connectivity in which excitatory neurons are grouped in clusters. Strong recurrent connectivity within the clusters reverberates the activity and prolongs the time scales in the network. In this way, the clusters of neurons become the basic functional units of the circuit, in line with an increasing number of experimental studies. We study a general architecture in which plastic synapses connect the clustered network to a read-out network. We demonstrate the usefulness of this architecture for two different problems: 1) learning and replaying sequences; 2) learning statistical structure. The time scales in both problems range from hundreds of milliseconds to seconds, and we address the problems through simulation and analysis of spiking networks. We show that the clustered organization circumvents the need for biologically implausible mathematical optimizations and instead allows the use of unsupervised spike-timing-dependent plasticity rules. Additionally, we make qualitative links to experimental findings and predictions for both problems studied. Finally, we speculate about future directions that could extend upon our findings.
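    The central claim, that strong within-cluster recurrence stretches the network's time scale, can be illustrated with a one-unit rate model. The parameters are assumptions for illustration, not fitted to the thesis networks.

```python
# A leaky rate unit with recurrent gain g: dr/dt = (-r + g*r) / tau.
# The effective time constant is tau / (1 - g), so a gain close to 1
# (strong within-cluster connectivity) stretches millisecond membrane
# dynamics toward behaviourally relevant durations.
def decay_trace(g, r0=1.0, tau=10.0, dt=1.0, steps=200):
    r = r0
    trace = []
    for _ in range(steps):
        r += dt * (-r + g * r) / tau
        trace.append(r)
    return trace

weak = decay_trace(g=0.2)    # effective tau = 12.5 ms: activity gone quickly
strong = decay_trace(g=0.9)  # effective tau = 100 ms: activity reverberates
```

    After 200 ms the weakly coupled unit has decayed to essentially zero, while the strongly coupled one still retains over a tenth of its initial activity.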

    From sensorimotor learning to memory cells in prefrontal and temporal association cortex: A neurocomputational study of disembodiment

    Memory cells, the ultimate neurobiological substrates of working memory, remain active for several seconds and are most commonly found in prefrontal cortex and higher multisensory areas. However, if correlated activity in “embodied” sensorimotor systems underlies the formation of memory traces, why should memory cells emerge in areas distant from their antecedent activations in sensorimotor areas, thus leading to “disembodiment” (movement away from sensorimotor systems) of memory mechanisms? We modelled the formation of memory circuits in six-area neurocomputational architectures, implementing motor and sensory primary, secondary and higher association areas in frontotemporal cortices along with known between-area neuroanatomical connections. Sensorimotor learning driven by Hebbian neuroplasticity led to formation of cell assemblies distributed across the different areas of the network. These action-perception circuits (APCs) ignited fully when stimulated, thus providing a neural basis for long-term memory (LTM) of sensorimotor information linked by learning. Subsequent to ignition, activity vanished rapidly from APC neurons in sensorimotor areas but persisted in those in multimodal prefrontal and temporal areas. Such persistent activity provides a mechanism for working memory for actions, perceptions and symbols, including short-term phonological and semantic storage. Cell assembly ignition and “disembodied” working memory retreat of activity to multimodal areas are documented in the neurocomputational models' activity dynamics, at the level of single cells, circuits, and cortical areas. Memory disembodiment is explained neuromechanistically by APC formation and structural neuroanatomical features of the model networks, especially the central role of multimodal prefrontal and temporal cortices in bridging between sensory and motor areas. 
These simulations answer the “where” question of cortical working memory in terms of distributed APCs and their inner structure, which is, in part, determined by neuroanatomical structure. As the neurocomputational model provides a mechanistic explanation of how memory-related “disembodied” neuronal activity emerges in “embodied” APCs, it may be key to solving aspects of the embodiment debate and, eventually, to a better understanding of cognitive brain functions.
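    A toy two-population sketch of the ignition-then-retreat dynamics described above. The coupling, gain, and saturation values are illustrative assumptions, not the six-area model.

```python
# Sensorimotor activity is purely input-driven, while the multimodal hub
# has saturating recurrent self-excitation strong enough to self-sustain:
# after a brief stimulus both populations ignite, then activity retreats
# to the hub alone.
def run(steps=400, dt=1.0, tau=10.0):
    sm, hub, peak_sm = 0.0, 0.0, 0.0
    for t in range(steps):
        stim = 1.0 if t < 30 else 0.0          # brief stimulation phase
        sm += dt * (-sm + stim) / tau          # sensorimotor: no self-drive
        drive = min(1.2 * hub, 1.5) + sm       # hub: saturating recurrence
        hub += dt * (-hub + drive) / tau
        peak_sm = max(peak_sm, sm)
    return peak_sm, sm, hub

peak_sm, sm_final, hub_final = run()
# Sensorimotor activity ignites transiently (high peak_sm, sm_final near 0),
# while hub activity persists, mimicking working-memory cells.
```

    The asymmetry in recurrent support plays the role the abstract assigns to multimodal prefrontal and temporal connectivity.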
