10 research outputs found

    A neurobiologically constrained cortex model of semantic grounding with spiking neurons and brain-like connectivity

    One of the most controversial debates in cognitive neuroscience concerns the cortical locus of semantic knowledge and processing in the human brain. Experimental data have revealed various cortical regions that become differentially active during meaning processing, ranging from semantic hubs (which bind different types of meaning together) to modality-specific sensorimotor areas involved in specific conceptual categories. Why and how the brain uses such a complex organization for conceptualization can be investigated using biologically constrained neurocomputational models. Here, we apply a spiking neuron model mimicking the structure and connectivity of frontal, temporal and occipital areas to simulate semantic learning and symbol grounding in action and perception. As a result of Hebbian learning of the correlation structure of symbol, perception and action information, distributed cell assembly circuits emerged across various cortices of the network. These semantic circuits showed category-specific topographical distributions, reaching into motor and visual areas for action- and visually-related words, respectively. All types of semantic circuits included large numbers of neurons in multimodal connector hub areas, which is explained by cortical connectivity structure and the resultant convergence of phonological and semantic information on these zones. Importantly, these semantic hub areas exhibited some category-specificity, which was less pronounced than that observed in primary and secondary modality-preferential cortices. The present neurocomputational model integrates seemingly divergent experimental results about conceptualization and explains both semantic hubs and category-specific areas as an emergent process causally determined by two major factors: neuroanatomical connectivity structure and correlated neuronal activation during language learning.
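The central mechanism here, Hebbian strengthening of connections between co-active neurons so that correlated word, perception and action inputs bind into one distributed circuit, can be sketched in a deliberately simplified toy. All unit counts, learning rates and index choices below are invented for illustration and are not taken from the paper's spiking model:

```python
import numpy as np

# Toy sketch of Hebbian cell assembly formation (hypothetical parameters).
rng = np.random.default_rng(0)
n = 20
W = np.zeros((n, n))              # synaptic weight matrix
word = np.arange(0, 5)            # units driven by the word form
action = np.arange(5, 10)         # units driven by the grounding action

for _ in range(100):
    x = np.zeros(n)
    x[word] = 1.0                 # word and action patterns co-occur...
    x[action] = 1.0
    x[rng.integers(10, n)] = 1.0  # ...plus one random background unit
    W += 0.01 * np.outer(x, x)    # Hebb rule: dw_ij proportional to x_i * x_j
np.fill_diagonal(W, 0.0)          # no self-connections

# Word-action links grow much stronger than links to background units,
# i.e. the co-activated units have bound into one "cell assembly".
within = W[np.ix_(word, action)].mean()
outside = W[np.ix_(word, np.arange(10, n))].mean()
print(within, outside)
```

Because the word and action patterns always co-occur while background units fire one at a time, every within-assembly link is reinforced on every trial, which is the correlation-driven binding the abstract describes.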

    A Neurobiologically Constrained Model

    Understanding the meaning of words and its relationship with the outside world involves higher cognitive processes unique to the human brain. Despite many decades of research on the neural substrates of semantic processing, cognitive neuroscientists have not reached a consensus about the functions and components of the semantic system. This issue is mainly shaped by two sets of neurocognitive empirical findings, which have shown (i) the existence of several regions acting as ‘semantic hubs’, where the meaning of all types of words is processed, and (ii) the presence of other cortical regions specialised for the processing of specific semantic word categories, such as animals, tools, or actions. Further evidence on semantic meaning processing comes from neuroimaging and transcranial magnetic stimulation studies in visually deprived populations that acquire semantic knowledge through non-visual modalities. These studies have documented massive neural changes in the visual system, which is in turn recruited for linguistic and semantic processing. On this basis, this dissertation investigates the neurobiological mechanisms that enable humans to acquire, store and process linguistic meaning by means of a neurobiologically constrained neural network, and offers answers to the following hotly debated questions: Why are both semantic hubs and modality-specific regions involved in semantic meaning processing in the brain? Which biological principles are critical for the emergence of semantics at the microstructural neural level, and how is the semantic system implemented under deprived conditions, in particular in congenitally blind people? First, a neural network model closely replicating the anatomical and physiological features of the human cortex was designed.
At the micro level, the network was composed of 15,000 artificial neurons; at the large-scale level, there were 12 areas representing the frontal, temporal, and occipital lobes relevant for linguistic and semantic processing. The connectivity structure linking the different cortical areas was based purely on neuroanatomical evidence. Two models were used, each simulating the same set of cortical regions but at different levels of detail: one adopted a simple connectivity structure with a mean-field approach (i.e. graded-response neurons), and the other used a fully connected model with adaptation-based spiking cells. Second, the networks were used to simulate the learning of semantic relationships between word-forms, specific object perceptions, and motor movements of one's own body under deprived and undeprived visual conditions. As a result of Hebbian correlation learning, distributed word-related cell assembly circuits spontaneously emerged across the different cortical semantic areas, exhibiting different topographical distributions. Third, the network was reactivated with the learned auditory patterns (simulating word recognition processes) to investigate the temporal dynamics of cortical semantic activation and compare them with real brain responses. In summary, the findings of the present work demonstrate that meaningful linguistic units are represented in the brain in the form of cell assemblies distributed over both semantic hubs and category-specific regions, which emerged spontaneously through the mutual interaction of a single set of biological mechanisms acting within specific neuroanatomical structures. Acting together, these biological principles also offer an explanation of the mechanisms underlying the massive neural changes in the visual cortex for language processing caused by blindness.
The present work is a first step in better understanding the building blocks of language and semantic processing in sighted and blind populations by translating biological principles that govern human cognition into precise mathematical neural networks of the human brain.
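The two neuron styles contrasted above (graded-response mean-field units versus adaptation-based spiking cells) can be sketched roughly as follows. All update rules and parameter values are illustrative assumptions, not those of the dissertation's networks:

```python
import numpy as np

# Illustrative sketch only (hypothetical parameters, not the thesis models).
def graded_response(I, tau=10.0, steps=50):
    """Mean-field style unit: the membrane state relaxes toward its input,
    and the output is a graded sigmoid of that state."""
    v = 0.0
    for _ in range(steps):
        v += (I - v) / tau
    return 1.0 / (1.0 + np.exp(-v))        # graded output in (0, 1)

def spiking_with_adaptation(I, steps=200, thresh=1.0,
                            tau_v=10.0, tau_a=50.0, b=0.3):
    """Adaptation-based spiking unit: each spike raises an adaptation
    current that opposes the drive, producing spike-frequency adaptation."""
    v, a, spikes = 0.0, 0.0, []
    for t in range(steps):
        v += (I - v - a) / tau_v           # leaky membrane integration
        a += -a / tau_a                    # adaptation current decays slowly
        if v >= thresh:                    # threshold crossing -> spike
            spikes.append(t)
            v = 0.0                        # reset membrane
            a += b                         # increment adaptation
    return spikes

spikes = spiking_with_adaptation(I=2.0)
isis = np.diff(spikes)                     # inter-spike intervals
print(isis)
```

Under constant input the inter-spike intervals lengthen as the adaptation current builds up, while the graded-response unit simply settles to a fixed output level, which is the qualitative difference between the two model variants.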

    Brain correlates of action word memory revealed by fMRI

    Understanding language semantically related to actions activates the motor cortex. This activation is sensitive to semantic information such as the body part used to perform the action (e.g. arm-/leg-related action words). Additionally, motor movements of the hands/feet can have a causal effect on memory maintenance of action words, suggesting that the involvement of motor systems extends to working memory. This study examined brain correlates of verbal memory load for action-related words using event-related fMRI. Seventeen participants saw either four identical or four different words from the same category (arm-/leg-related action words) and then performed a nonmatching-to-sample task. Results show that verbal memory maintenance in the high-load condition produced greater activation in left premotor and supplementary motor cortex, along with posterior-parietal areas, indicating that verbal memory circuits for action-related words include the cortical action system. Somatotopic memory load effects of arm- and leg-related words were observed, but only in cortical regions more anterior than those found in earlier studies employing passive reading tasks. These findings support a neurocomputational model of distributed action-perception circuits (APCs), according to which language understanding is manifest as full ignition of APCs, whereas working memory is realized as reverberant activity receding to multimodal prefrontal and lateral temporal areas.

    Linguistic signs in action: The neuropragmatics of speech acts

    What makes human communication exceptional is the ability to grasp a speaker’s intentions beyond what is said verbally. How the brain processes communicative functions is one of the central concerns of the neurobiology of language and pragmatics. Linguistic-pragmatic theories define these functions as speech acts, and various pragmatic traits characterise them at the levels of propositional content, action sequence structure, related commitments and social aspects. Here I discuss recent neurocognitive studies, which have shown that the use of identical linguistic signs to convey different communicative functions elicits distinct and ultra-rapid neural responses. Interestingly, cortical areas show differential involvement underlying various pragmatic features related to theory-of-mind, emotion and action for specific speech acts expressed with the same utterances. Drawing on a neurocognitive model, I posit that understanding speech acts involves the expectation of typical partner follow-up actions and that this predictive knowledge is immediately reflected in mind and brain.

    Neurophysiological evidence for rapid processing of verbal and gestural information in understanding communicative actions

    During everyday social interaction, gestures are a fundamental part of human communication. The communicative-pragmatic role of hand gestures and their interaction with spoken language has been documented from the earliest stages of language development, in which two types of indexical gestures are most prominent: the pointing gesture for directing attention to objects and the give-me gesture for making requests. Here we study, in adult human participants, the neurophysiological signatures of gestural-linguistic acts communicating the pragmatic intentions of naming and requesting, by simultaneously presenting written words and gestures. Already at ~150 ms, brain responses diverged between naming and request actions expressed by word-gesture combinations, whereas the same gestures presented in isolation elicited their earliest neurophysiological dissociations significantly later (at ~210 ms). There was an early enhancement of request-evoked brain activity compared with naming, which was due to sources in the frontocentral cortex, consistent with access to action knowledge in request understanding. In addition, an enhanced N400-like response indicated late semantic integration of gesture-language interaction. The present study demonstrates that word-gesture combinations used to express communicative-pragmatic intentions speed up the brain correlates of comprehension processes, compared with gesture-only understanding, thereby calling into question current serial linguistic models that place pragmatic function decoding at the end of a language comprehension cascade. Instead, information about the social-interactive role of communicative acts is processed instantaneously.

    Breakdown of category-specific word representations in a brain-constrained neurocomputational model of semantic dementia

    The neurobiological nature of semantic knowledge, i.e., the encoding and storage of conceptual information in the human brain, remains a poorly understood and hotly debated subject. Clinical data on semantic deficits and neuroimaging evidence from healthy individuals have suggested that multiple cortical regions are involved in the processing of meaning. These include semantic hubs (most notably, the anterior temporal lobe, ATL) that take part in semantic processing in general, as well as sensorimotor areas that process specific aspects/categories according to their modality. Biologically inspired neurocomputational models can help elucidate the exact roles of these regions in the functioning of the semantic system and, importantly, in its breakdown in neurological deficits. We used a neuroanatomically constrained computational model of frontotemporal cortices implicated in word acquisition and processing, and adapted it to simulate and explain the effects of semantic dementia (SD) on word processing abilities. SD is a devastating, yet insufficiently understood progressive neurodegenerative disease, characterised by a deterioration of semantic knowledge that is hypothesised to be specifically related to neural damage in the ATL. The behaviour of our brain-based model is in full accordance with clinical data: word comprehension performance decreases as SD lesions in the ATL progress, whereas word repetition abilities remain less affected. Furthermore, our model makes predictions about lesion- and category-specific effects of SD: our simulation results indicate that word processing should be more impaired for object- than for action-related words, and that degradation of white matter should produce more severe consequences than the same proportion of grey matter decay. In sum, the present results provide a neuromechanistic explanatory account of cortical-level language impairments observed during the onset and progression of semantic dementia.
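The two lesion types the model distinguishes (grey matter, i.e. loss of neurons, versus white matter, i.e. loss of connections) can be illustrated on a toy auto-associative memory. This is a hypothetical stand-in for the authors' brain-constrained network, with invented sizes and lesion fractions, and it is not claimed to reproduce their white- versus grey-matter prediction:

```python
import numpy as np

# Toy Hopfield-style memory (hypothetical stand-in, not the SD model).
rng = np.random.default_rng(1)
n, k = 64, 4
patterns = (rng.random((k, n)) < 0.5) * 2.0 - 1.0   # k random +/-1 patterns
W = patterns.T @ patterns / n                        # Hebbian storage
np.fill_diagonal(W, 0.0)

def recall(W, p, noise=8):
    """Flip a few bits of a stored pattern, let the net settle,
    and return the overlap with the original (1.0 = perfect recall)."""
    x = p.copy()
    idx = rng.choice(n, size=noise, replace=False)
    x[idx] *= -1
    for _ in range(10):
        x = np.sign(W @ x + 1e-9)                    # tie-break zeros to +1
    return float(x @ p) / n

def grey_lesion(W, frac):
    """'Grey matter' damage: remove whole units (rows and columns zeroed)."""
    W = W.copy()
    dead = rng.choice(n, size=int(frac * n), replace=False)
    W[dead, :] = 0.0
    W[:, dead] = 0.0
    return W

def white_lesion(W, frac):
    """'White matter' damage: remove individual connections at random."""
    W = W.copy()
    W[rng.random(W.shape) < frac] = 0.0
    return W

intact = np.mean([recall(W, p) for p in patterns])
grey = np.mean([recall(grey_lesion(W, 0.3), p) for p in patterns])
white = np.mean([recall(white_lesion(W, 0.3), p) for p in patterns])
print(intact, grey, white)
```

The point of the sketch is purely mechanistic: the same lesion fraction can be applied either to units or to connections, and recall performance can then be compared across lesion types and severities, which is the kind of in-silico experiment the abstract describes.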

    Visual cortex recruitment during language processing in blind individuals is explained by Hebbian learning

    In blind people, the visual cortex takes on higher cognitive functions, including language. Why this functional organisation mechanistically emerges at the neuronal circuit level is still unclear. Here, we use a biologically constrained network model implementing features of the anatomical structure, neurophysiological function and connectivity of fronto-temporal-occipital areas to simulate word-meaning acquisition in visually deprived and undeprived brains. We observed that, only under visual deprivation, distributed word-related neural circuits ‘grew into’ the deprived visual areas, which therefore adopted a linguistic-semantic role. Three factors are crucial for explaining this deprivation-related growth: changes in the network’s activity balance brought about by the absence of uncorrelated sensory input, the connectivity structure of the network, and Hebbian correlation learning. In addition, the blind model revealed longer-lasting spiking neural activity than the sighted model during word recognition, a neural correlate of enhanced verbal working memory. The present neurocomputational model offers a neurobiological account of the neural changes that follow sensory deprivation, thus closing the gap between cellular-level mechanisms and system-level linguistic and semantic function.

    Biological constraints on neural network models of cognitive function

    Neural network models are potential tools for improving our understanding of complex brain functions. To address this goal, these models need to be neurobiologically realistic. However, although neural networks have advanced dramatically in recent years and even achieve human-like performance on complex perceptual and cognitive tasks, their similarity to aspects of brain anatomy and physiology is imperfect. Here, we discuss different types of neural models, including localist, auto-associative and hetero-associative, deep and whole-brain networks, and identify aspects under which their biological plausibility can be improved. These aspects range from the choice of model neurons and of mechanisms of synaptic plasticity and learning, to implementation of inhibition and control, along with neuroanatomical properties including area structure and local and long-range connectivity. We highlight recent advances in developing biologically grounded cognitive theories and in mechanistically explaining, based on these brain-constrained neural models, hitherto unaddressed issues regarding the nature, localization and ontogenetic and phylogenetic development of higher brain functions. In closing, we point to possible future clinical applications of brain-constrained modelling.

    Neurobiological mechanisms for language, symbols and concepts: Clues from brain-constrained deep neural networks

    Neural networks are successfully used to imitate and model cognitive processes. However, to provide clues about the neurobiological mechanisms enabling human cognition, these models need to mimic the structure and function of real brains. Brain-constrained networks differ from classic neural networks by implementing brain similarities at different scales, ranging from the micro- and mesoscopic levels of neuronal function, local neuronal links and circuit interaction to large-scale anatomical structure and between-area connectivity. This review shows how brain-constrained neural networks can be applied to study in silico the formation of mechanisms for symbol and concept processing and to work towards neurobiological explanations of specifically human cognitive abilities. These include verbal working memory and learning of large vocabularies of symbols, semantic binding carried by specific areas of cortex, attention focusing and modulation driven by symbol type, and the acquisition of concrete and abstract concepts partly influenced by symbols. Neuronal assembly activity in the networks is analyzed to deliver putative mechanistic correlates of higher cognitive processes and to develop candidate explanations founded in established neurobiological principles.

    A Review of Findings from Neuroscience and Cognitive Psychology as Possible Inspiration for the Path to Artificial General Intelligence

    This review aims to contribute to the quest for artificial general intelligence by examining neuroscience and cognitive psychology methods for potential inspiration. Despite the impressive advancements achieved by deep learning models in various domains, they still have shortcomings in abstract reasoning and causal understanding. Such capabilities should ultimately be integrated into artificial intelligence systems in order to surpass data-driven limitations and support decision making in a way more similar to human intelligence. This work is a vertical review that attempts a wide-ranging exploration of brain function, spanning from lower-level biological neurons, spiking neural networks, and neuronal ensembles to higher-level concepts such as brain anatomy, vector symbolic architectures, cognitive and categorization models, and cognitive architectures. The hope is that these concepts may offer insights for solutions in artificial general intelligence.
    Comment: 143 pages, 49 figures, 244 references