
    The simulation of action disorganisation in complex activities of daily living

    Action selection in everyday goal-directed tasks of moderate complexity is known to break down following extensive frontal brain injury. A model of action selection in such tasks is presented and used to explore three hypotheses concerning the origins of action disorganisation: that it is a consequence of reduced top-down excitation within a hierarchical action schema network coupled with increased bottom-up triggering of schemas from environmental sources; that it is a more general disturbance of schema activation, modelled by excessive noise in the schema network; and that it results from a general disturbance of the triggering of schemas by object representations. Results suggest that action disorganisation syndrome is best accounted for by a general disturbance of schema activation, while altering the balance between top-down and bottom-up activation provides an account of a related disorder, utilisation behaviour. It is further suggested that ideational apraxia (which may result from lesions to left temporoparietal areas and which has behavioural consequences similar to those of action disorganisation syndrome on tasks of moderate complexity) is a consequence of a generalised disturbance of the triggering of schemas by object representations. Several predictions that follow from this interpretation, regarding differences between action disorganisation syndrome and ideational apraxia, are detailed.
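As a rough illustration of the manipulations described above, the competing hypotheses can be caricatured with a toy leaky-integrator schema node; the update rule, parameter names, and values below are our own illustrative assumptions, not the paper's actual model.

```python
import random

def schema_activation(top_down, bottom_up, noise_sd, steps=100, seed=1):
    """Leaky-integrator activation of a single action schema node.

    `top_down` is excitation from a parent schema, `bottom_up` is
    triggering from object representations, and `noise_sd` scales a
    Gaussian disturbance of schema activation.
    """
    rng = random.Random(seed)
    activation = 0.0
    for _ in range(steps):
        net_input = top_down + bottom_up + rng.gauss(0.0, noise_sd)
        activation += 0.1 * (net_input - activation)  # leaky integration
        activation = max(0.0, min(1.0, activation))   # clamp to [0, 1]
    return activation
```

In this sketch, lowering `top_down` while raising `bottom_up` mimics environment-driven capture of behaviour (the utilisation-behaviour account), while raising `noise_sd` mimics the general disturbance of schema activation.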

    Order and disorder in everyday action: the roles of contention scheduling and supervisory attention

    This paper describes the contention scheduling/supervisory attentional system approach to action selection and uses this account to structure a survey of current theories of the control of action. The focus is on how such theories account for the types of error produced by some patients with frontal and/or left temporoparietal damage when attempting everyday tasks. Four issues, concerning both the theories and their accounts of everyday action breakdown, emerge: first, whether multiple control systems, each capable of controlling action in different situations, exist; second, whether different forms of damage at the neural level result in conceptually distinct disorders; third, whether semantic/conceptual knowledge of objects and actions can be dissociated from control mechanisms, and if so, what computational principles govern sequential control; and fourth, whether disorders of everyday action should be attributed to a loss of semantic/conceptual knowledge, a malfunction of control, or some combination of the two.

    Modelling a Fractionated System of Deductive Reasoning over Categorical Syllogisms

    The study of deductive reasoning has been a major research paradigm in psychology for decades. Recent additions to this literature have focused heavily on neuropsychological evidence. Such a practice is useful for identifying brain regions associated with particular functions, but fails to clearly define the specific interactions and timescales of these functions. Computational modelling provides a method for creating different cognitive architectures that simulate deductive processes, and ultimately for determining which architectures are capable of modelling human reasoning. This thesis details a computational model for solving categorical syllogisms using a fractionated system of brain regions. Lesions are applied to the formal and heuristic systems to simulate accuracy and reaction-time data for bilateral parietal and frontotemporal patients. The model successfully combines belief bias and other known cognitive biases with a mental-models formal approach to recreate the congruency-by-group effect present in the human data. Implications are drawn for major theories of reasoning.
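A minimal sketch of the dual-route idea behind such a model, assuming a simple weighted-vote decision rule; the weights, values, and function name are hypothetical, not the thesis's actual equations.

```python
def syllogism_response(logically_valid, believable,
                       formal_weight=1.0, heuristic_weight=0.8):
    """Decide whether a syllogism's conclusion is accepted as valid.

    A formal (mental-models) route votes on logical validity and a
    heuristic route votes on conclusion believability; a lesion is
    simulated by scaling the corresponding route's weight toward zero.
    """
    formal_vote = formal_weight * (1.0 if logically_valid else -1.0)
    heuristic_vote = heuristic_weight * (1.0 if believable else -1.0)
    return formal_vote + heuristic_vote > 0.0
```

With both routes intact, the formal route narrowly wins on incongruent items (invalid-but-believable conclusions are rejected); zeroing `formal_weight`, as a caricature of a lesion to the formal system, leaves responses driven by belief bias alone.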

    Validation of the Reading Tendency Index in school-age children: Replication with a bilingual sample

    Defining deficits in reading ability may be accomplished through the analysis of a child’s reading tendencies, representing a possible paradigm shift in the conceptualization and assessment of reading disabilities. Based on this premise, Mohl and colleagues (2018) developed a quantitative paradigm to measure reading tendency in children through performance on two lexical decision tasks (LDTs) that differentially rely on decoding and sightword reading abilities. The Reading Tendency Index (RTI; Mohl et al., 2018) is calculated from the differential between drift rates on the phonologic and orthographic LDTs. Scores closer to zero represent a balanced approach, whereas negative and positive scores suggest a tendency to rely on phonological decoding or sightword reading strategies, respectively. It was suggested that a balanced approach promotes more proficient reading; however, the original study used a small, male-only sample with a significant number of children with an ADHD diagnosis. The present study provided an independent examination of the RTI paradigm, including the two LDT tasks and the original calculations, to validate the tasks as a measure of reading ability in a larger, representative sample of school-aged children. The study had the following goals: 1) to replicate the three-group reading tendency structure based on LDT performance in a larger representative sample of school-aged children; 2) to examine the construct validity of the RTI groupings and LDT tasks as a quantitative measure of reading ability; 3) to determine whether RTI group membership can be predicted from reading and other cognitive skills; and 4) to explore performance differences, if any, in participants enrolled in French Immersion programs. The final sample included 92 participants aged 7 to 14 years (mean age = 9.96 years) recruited from English (n = 49) and French Immersion (n = 43) schools.
Results indicated the following: 1) the three-group RTI structure was replicated in the larger sample of typically developing school-aged children; 2) Sightword Readers had poorer performance on reading fluency, reading comprehension, and spelling than Balanced Readers and Decoders, but the groups did not differ otherwise; 3) only reading comprehension predicted membership in the Sightword group; and 4) French Immersion students demonstrated patterns of performance on the RTI and other cognitive measures similar to those of English-only students. Supplemental post-hoc analyses explored different cut-off scores and methods for determining RTI groups. Implications and limitations of the current findings, as well as considerations for future studies, are discussed.
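The RTI computation described above can be sketched as follows; the subtraction order and the cut-off value are illustrative assumptions, and Mohl et al. (2018) define the actual calculation.

```python
def reading_tendency_index(drift_phonologic, drift_orthographic, cutoff=0.5):
    """Compute an RTI-style score and assign a reading-tendency group.

    The index is the differential between drift rates on the two LDTs:
    scores near zero indicate a balanced approach, negative scores a
    reliance on phonological decoding, and positive scores a reliance
    on sightword reading.
    """
    rti = drift_orthographic - drift_phonologic
    if rti <= -cutoff:
        group = "Decoder"
    elif rti >= cutoff:
        group = "Sightword Reader"
    else:
        group = "Balanced Reader"
    return rti, group
```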

    A Neurobiologically Constrained Model

    Understanding the meaning of words and their relationship to the outside world involves higher cognitive processes unique to the human brain. Despite many decades of research on the neural substrates of semantic processing, cognitive neuroscientists have not reached a consensus about the functions and components of the semantic system. The issue is shaped mainly by two sets of neurocognitive empirical findings, which have shown (i) the existence of several regions acting as ‘semantic hubs’, where the meaning of all types of words is processed, and (ii) the presence of other cortical regions specialised for the processing of specific semantic word categories, such as animals, tools, or actions. Further evidence on semantic processing comes from neuroimaging and transcranial magnetic stimulation studies in visually deprived populations, which acquire semantic knowledge through non-visual modalities. These studies have documented massive neural changes in the visual system, which is in turn recruited for linguistic and semantic processing. On this basis, this dissertation investigates the neurobiological mechanisms that enable humans to acquire, store, and process linguistic meaning by means of a neurobiologically constrained neural network, and offers answers to the following hotly debated questions: Why are both semantic hubs and modality-specific regions involved in semantic processing in the brain? Which biological principles are critical for the emergence of semantics at the microstructural neural level? And how is the semantic system implemented under deprived conditions, in particular in congenitally blind people? First, a neural network model closely replicating the anatomical and physiological features of the human cortex was designed.
At the micro level, the network was composed of 15,000 artificial neurons; at the large-scale level, there were 12 areas representing the frontal, temporal, and occipital lobes relevant for linguistic and semantic processing. The connectivity structure linking the different cortical areas was based purely on neuroanatomical evidence. Two models were used, each simulating the same set of cortical regions but at a different level of detail: one adopted a simple connectivity structure with a mean-field approach (i.e. graded-response neurons), and the other used a fully connected model with adaptation-based spiking cells. Second, the networks were used to simulate the learning of semantic relationships between word-forms, specific object perceptions, and motor movements of one’s own body under deprived and undeprived visual conditions. As a result of Hebbian correlated learning, distributed word-related cell assembly circuits spontaneously emerged across the different cortical semantic areas, exhibiting different topographical distributions. Third, the network was reactivated with the learned auditory patterns (simulating word recognition) to investigate the temporal dynamics of cortical semantic activation and compare them with real brain responses. In summary, the findings of the present work demonstrate that meaningful linguistic units are represented in the brain in the form of cell assemblies distributed over both semantic hubs and category-specific regions, which emerge spontaneously through the mutual interaction of a single set of biological mechanisms acting within specific neuroanatomical structures. Acting together, these biological principles also offer an explanation of the massive neural changes by which the visual cortex comes to support language processing in blindness.
The present work is a first step towards better understanding the building blocks of language and semantic processing in sighted and blind populations by translating the biological principles that govern human cognition into precise mathematical neural networks of the human brain.
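The Hebbian correlated learning at the heart of the model can be illustrated with a minimal sketch, far simpler than the thesis's 12-area spiking network; the patterns, matrix size, and learning rate below are invented for demonstration.

```python
def hebbian_update(weights, pre, post, eta=0.1):
    """One Hebbian learning step: the weight from presynaptic unit i to
    postsynaptic unit j grows in proportion to their joint activity."""
    return [[w + eta * pre[i] * post[j] for j, w in enumerate(row)]
            for i, row in enumerate(weights)]

# Repeatedly pairing a "word-form" pattern with an "object" pattern
# strengthens only the links between co-active units, the minimal
# seed of a distributed cell assembly.
weights = [[0.0, 0.0], [0.0, 0.0]]
word_form, object_pattern = [1, 0], [0, 1]
for _ in range(5):
    weights = hebbian_update(weights, word_form, object_pattern)
```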

    Representing meaning: a feature-based model of object and action words

    The representation of word meaning has received substantial attention in the psycholinguistic literature over the past decades, yet the vast majority of studies have been limited to words referring to concrete objects. The aim of the present work is to provide a theoretically and neurally plausible model of lexical-semantic representations, not only for words referring to concrete objects but also for words referring to actions and events, using a common set of assumptions across domains. To this end, features of meaning are generated by naïve speakers and used as a window into important aspects of representation. A first series of analyses tests how the meanings of different types of words are reflected in features associated with different modalities of sensory-motor experience, and how featural properties may be related to patterns of impairment in language-disordered populations. The features of meaning are then used to generate a model of lexical-semantic similarity in which these different types of words are represented within a single system, under the assumption that lexical-semantic representations provide an interface between conceptual knowledge, derived in part from sensory-motor experience, and other linguistic information such as syntax, phonology and orthography. Predictions generated from this model are tested in a series of behavioural experiments designed to address two main questions: whether similarity measures based on speaker-generated features can predict fine-grained semantic similarity effects, and whether the predictive quality of the model is comparable for words referring to objects and words referring to actions. The results of five behavioural experiments consistently reveal graded semantic effects as predicted by the feature-based model, of similar magnitude for objects and actions.
The model's fine-grained predictive performance is also found to be superior to that of other word-based models of representation (Latent Semantic Analysis and similarity measures derived from WordNet).
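A minimal sketch of the kind of similarity measure such a feature-based model derives, assuming cosine similarity over sparse feature-count vectors; the features and counts below are invented, not the study's actual speaker-generated norms.

```python
import math

def feature_cosine(a, b):
    """Cosine similarity between two sparse feature-count vectors."""
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in set(a) | set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Invented production counts: how many speakers listed each feature.
dog = {"has_legs": 9, "barks": 12, "is_pet": 7}
cat = {"has_legs": 10, "meows": 11, "is_pet": 8}
run = {"uses_legs": 10, "is_fast": 6}
```

On these toy vectors, overlapping features make `dog` more similar to `cat` than to the action word `run`, the graded-similarity pattern the behavioural experiments probe.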

    Grounding semantic cognition using computational modelling and network analysis

    The overarching objective of this thesis is to further the field of grounded semantics through a range of computational and empirical studies. Over the past thirty years, there have been many algorithmic advances in the modelling of semantic cognition. A commonality across these cognitive models is a reliance on hand-engineered “toy models”. Despite incorporating newer techniques (e.g. long short-term memory networks), the model inputs remain unchanged. We argue that the inputs to these traditional semantic models bear little resemblance to real human experience. In this dissertation, we ground our neural network models by training them on real-world visual scenes from naturalistic photographs. Our approach is an alternative to both hand-coded features and embodied raw sensorimotor signals. We conceptually replicate the mutually reinforcing nature of hybrid (feature-based and grounded) representations using silhouettes of concrete concepts as model inputs. We then gradually develop a novel grounded cognitive semantic representation, which we call scene2vec, starting with object co-occurrences and then adding emotions and language-based tags. Limitations of our scene-based representation are identified for more abstract concepts (e.g. freedom). We further present a large-scale human semantics study, which reveals that small-world semantic network topologies are context-dependent and that scenes are the most dominant cognitive dimension. This finding leads us to conclude that there is no meaning without context. Lastly, scene2vec shows promising human-like context-sensitive stereotypes (e.g. gender-role bias), and we explore how such stereotypes are reduced by targeted debiasing. In conclusion, this thesis provides support for a novel computational viewpoint on investigating meaning: scene-based grounded semantics.
Future research scaling scene-based semantic models to human levels through virtual grounding has the potential to unearth new insights into the human mind and, concurrently, to advance artificial general intelligence by enabling robots, embodied or otherwise, to acquire and represent meaning directly from the environment.
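The first stage of a scene-based representation like scene2vec, counting object co-occurrences across scenes, can be sketched as follows; the scenes and objects below are invented examples, and the actual pipeline goes on to add emotions and language-based tags.

```python
from collections import Counter

def cooccurrence_vectors(scenes):
    """Represent each object by counts of the objects it appears with."""
    vectors = {}
    for scene in scenes:
        for obj in scene:
            vec = vectors.setdefault(obj, Counter())
            for other in scene:
                if other != obj:
                    vec[other] += 1
    return vectors

scenes = [["dog", "ball", "grass"],
          ["dog", "leash", "grass"],
          ["laptop", "desk", "mug"]]
vectors = cooccurrence_vectors(scenes)
```

Objects that share scenes ("dog" and "grass") accumulate counts, while objects from disjoint contexts ("dog" and "laptop") stay at zero, so scene context alone already induces a crude semantic space.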