    A Model of the Network Architecture of the Brain that Supports Natural Language Processing

    For centuries, neuroscience has proposed models of the neurobiology of language processing that are static and localised to a few temporal and inferior frontal regions. Although existing models have offered some insight into the processes underlying lower-level language features, they have largely overlooked how language operates in the real world. Here, we investigated the network organisation of the brain and how it supports language processing in a naturalistic setting. We hypothesised that the brain is organised in a multiple core-periphery and dynamically modular architecture, with canonical language regions forming high-connectivity hubs. Moreover, we predicted that language processing would be distributed across much of the rest of the brain, allowing it to perform more complex tasks and to share information with other cognitive domains. To test these hypotheses, we collected the Naturalistic Neuroimaging Database, in which people watched full-length movies during functional magnetic resonance imaging. We applied network algorithms to capture the voxel-wise architecture of the brain in individual participants and inspected variations in activity distribution across different stimuli and across more complex language features. Our results confirmed the hypothesis that the brain is organised in a flexible multiple core-periphery architecture with large dynamic communities. Language processing was distributed across much of the rest of the brain, together forming multiple communities. Canonical language regions constituted hubs, explaining why they appear consistently across other neurobiology of language models. Moreover, language processing was supported by additional regions, such as visual cortex and episodic memory regions, when processing more complex, context-specific language features. Overall, our flexible and distributed model of language comprehension and the brain points to additional brain regions and pathways that could be exploited for novel and more individualised therapies for patients with speech impairments.
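
    The voxel-wise network analysis described above can be illustrated with a minimal sketch: build a graph from thresholded pairwise correlations of time series, detect modular communities, and flag high-degree nodes as candidate hubs. This is not the authors' pipeline; the time series, correlation threshold, and hub cutoff below are illustrative placeholders.

```python
# Minimal sketch of a community-and-hub network analysis on fMRI-like data.
# `timeseries` stands in for real voxel time series; the 0.1 correlation
# threshold and the 90th-percentile hub cutoff are illustrative choices.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
timeseries = rng.standard_normal((200, 500))   # 200 voxels x 500 timepoints

corr = np.corrcoef(timeseries)                 # voxel-by-voxel correlations
np.fill_diagonal(corr, 0.0)
adj = (np.abs(corr) > 0.1).astype(int)         # sparsify into a binary graph

G = nx.from_numpy_array(adj)
communities = nx.algorithms.community.greedy_modularity_communities(G)

degrees = dict(G.degree())
hub_cutoff = np.percentile(list(degrees.values()), 90)
hubs = [n for n, d in degrees.items() if d >= hub_cutoff]
print(f"{len(communities)} communities; {len(hubs)} candidate hub voxels")
```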

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) the results give further weight to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task.
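
    The spoke-shift manipulation is easy to picture in code: each rectangle centre is displaced by ±1 degree of visual angle along the line joining it to central fixation, preserving its polar angle. The sketch below illustrates the geometry only; it is not the authors' stimulus code, and the ring layout is an assumption.

```python
# Illustrative geometry of the spoke-shift manipulation: move each rectangle
# centre radially (along its "spoke" from fixation) by +/- 1 degree.
import numpy as np

def shift_along_spoke(centres_deg, shift_deg):
    """Displace each (x, y) centre radially by shift_deg degrees."""
    centres = np.asarray(centres_deg, dtype=float)
    radii = np.linalg.norm(centres, axis=1, keepdims=True)
    return centres + shift_deg * (centres / radii)

# Eight centres on a ring 4 degrees from fixation (an assumed layout).
angles = np.arange(8) * (2 * np.pi / 8)
ring = 4.0 * np.column_stack([np.cos(angles), np.sin(angles)])
shifted_out = shift_along_spoke(ring, +1.0)
shifted_in = shift_along_spoke(ring, -1.0)
```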

    Decoding Task-Based fMRI Data Using Graph Neural Networks, Considering Individual Differences

    Functional magnetic resonance imaging (fMRI) is a non-invasive technology that provides high spatial resolution in determining the human brain's responses, measuring regional brain activity through the metabolic changes in blood oxygen consumption associated with neural activity. Task fMRI provides an opportunity to analyze the working mechanisms of the human brain during specific task performance. Over the past several years, a variety of computational methods have been proposed to decode task fMRI data and identify the brain regions associated with different task stimulations. Despite the advances made by these methods, several limitations remain in the graph representations and graph embeddings derived from task fMRI signals. In the present study, we proposed an end-to-end graph convolutional network (GCN) that combines a convolutional neural network with graph representation, using three convolutional layers to classify task fMRI data from the Human Connectome Project (302 participants, 22–35 years of age). One goal of this dissertation was to improve classification performance. We applied four of the most widely used node embedding algorithms (NetMF, RandNE, Node2Vec, and Walklets) to automatically extract the structural properties of the nodes in the brain functional graph, and then evaluated the performance of the classification model. The empirical results indicated that the proposed GCN framework accurately identified the brain's state in task fMRI data, achieving comparable macro F1 scores of 0.978 and 0.976 with the NetMF and RandNE embedding methods, respectively. Another goal of the dissertation was to assess the effects of individual differences (i.e., gender and fluid intelligence) on classification performance. We tested the proposed GCN framework on sub-datasets divided according to gender and fluid intelligence. Experimental results indicated significant differences in the classification predictions for gender, but not for high/low fluid intelligence fMRI data. Our experiments yielded promising results and demonstrated the superior ability of our GCN in modeling task fMRI data.
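
    A minimal sketch of the graph-convolution update that underlies GCN classifiers of this kind, written in plain PyTorch: each layer propagates node features through the symmetrically normalised adjacency before a learned linear map. The three layers mirror the abstract, but the graph size, feature dimension, and class count below are assumptions for illustration, not the dissertation's settings.

```python
# Sketch of a three-layer GCN: H' = relu(D^-1/2 (A + I) D^-1/2 H W).
# Sizes (90 nodes, 16 features, 7 classes) are illustrative placeholders.
import torch
import torch.nn as nn

def normalized_adjacency(adj):
    """Symmetrically normalise A + I, as in Kipf & Welling (2017)."""
    a_hat = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)

class GCN(nn.Module):
    def __init__(self, in_dim, hidden, n_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hidden)
        self.w2 = nn.Linear(hidden, hidden)
        self.w3 = nn.Linear(hidden, n_classes)  # three conv layers, as in the text

    def forward(self, x, a_norm):
        x = torch.relu(a_norm @ self.w1(x))
        x = torch.relu(a_norm @ self.w2(x))
        return (a_norm @ self.w3(x)).mean(dim=0)  # graph-level class scores

adj = (torch.rand(90, 90) > 0.8).float()          # placeholder functional graph
adj = ((adj + adj.T) > 0).float()
x = torch.randn(90, 16)                           # placeholder node features
logits = GCN(16, 32, 7)(x, normalized_adjacency(adj))
```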

    Grounding semantic cognition using computational modelling and network analysis

    The overarching objective of this thesis is to further the field of grounded semantics using a range of computational and empirical studies. Over the past thirty years, there have been many algorithmic advances in the modelling of semantic cognition. A commonality across these cognitive models is a reliance on hand-engineered “toy models”. Despite incorporating newer techniques (e.g. long short-term memory), the model inputs remain unchanged. We argue that the inputs to these traditional semantic models bear little resemblance to real human experiences. In this dissertation, we ground our neural network models by training them on real-world visual scenes using naturalistic photographs. Our approach is an alternative to both hand-coded features and embodied raw sensorimotor signals. We conceptually replicate the mutually reinforcing nature of hybrid (feature-based and grounded) representations using silhouettes of concrete concepts as model inputs. We then gradually develop a novel grounded cognitive semantic representation, which we call scene2vec, starting with object co-occurrences and then adding emotions and language-based tags. Limitations of our scene-based representation are identified for more abstract concepts (e.g. freedom). We further present a large-scale human semantics study, which reveals that small-world semantic network topologies are context-dependent and that scenes are the most dominant cognitive dimension. This finding leads us to conclude that there is no meaning without context. Lastly, scene2vec shows promising human-like context-sensitive stereotypes (e.g. gender role bias), and we explore how such stereotypes can be reduced by targeted debiasing. In conclusion, this thesis provides support for a novel computational viewpoint on investigating meaning: scene-based grounded semantics. Future research scaling scene-based semantic models to human levels through virtual grounding has the potential to unearth new insights into the human mind and concurrently to advance artificial general intelligence by enabling robots, embodied or otherwise, to acquire and represent meaning directly from the environment.
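
    As a toy illustration of the first stage of a scene2vec-style representation (object co-occurrences, before emotions and language-based tags are added), the sketch below counts which objects appear together across scenes and factorises the count matrix into low-dimensional vectors. The scenes, vocabulary, and embedding dimension are invented for the example.

```python
# Toy object co-occurrence embedding: count objects that share a scene,
# then reduce the count matrix with an SVD. All data here is made up.
import numpy as np

scenes = [
    {"dog", "ball", "grass"},
    {"dog", "person", "grass"},
    {"car", "person", "road"},
]
vocab = sorted(set().union(*scenes))
index = {obj: i for i, obj in enumerate(vocab)}

counts = np.zeros((len(vocab), len(vocab)))
for scene in scenes:
    for a in scene:
        for b in scene:
            if a != b:
                counts[index[a], index[b]] += 1

u, s, _ = np.linalg.svd(counts)
embeddings = u[:, :2] * s[:2]   # 2-D vectors; dimension chosen arbitrarily
print({obj: np.round(embeddings[i], 2) for obj, i in index.items()})
```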

    Cue combination of colour and luminance in edge detection

    Much is known about visual processing of chromatic and luminance information. However, less is known about how these two signals are combined. This thesis has three aims in investigating how colour and luminance are combined in edge detection: 1) to determine whether presenting colour and luminance information together improves performance in tasks such as edge localisation and blur detection; 2) to investigate how the visual system resolves conflicts between colour and luminance edge information; and 3) to explore whether colour and luminance edge information is always combined in the same way. It is well known that the perception of chromatic blur can be constrained by sharp luminance information in natural scenes. The first set of experiments (Chapter 3) quantifies this effect and demonstrates that it cannot be explained by poorer acuity in processing chromatic information, by higher contrast of luminance information, or by differences in the statistical structure of colour and luminance information in natural scenes. It is therefore proposed that there is a neural mechanism that actively promotes luminance information. Chapter 4 and Experiments 5.1 and 5.3 investigated whether the presence of both chromatic and luminance information improves edge localisation performance. Participant performance in a Vernier acuity (alignment) task was compared to predictions from three models: ‘winner takes all’, unweighted averaging, and maximum likelihood estimation (a form of weighted averaging). Despite several attempts to differentiate the models, we failed to increase the differences in model predictions sufficiently, and it was not possible to determine whether edge localisation was enhanced by the presence of both cues. In Experiment 5.4 we investigated how edges are localised when colour and luminance cues conflict, using the method of adjustment. Maximum likelihood estimation was used to make predictions based on measurements of each cue in isolation. These predictions were then compared to the observed data. It was found that, whilst maximum likelihood estimation captured the pattern of the data, it consistently over-estimated the weight of the luminance component. It is suggested that chromatic information may be weighted more heavily than predicted because it is more useful for detecting object boundaries in natural scenes. In Chapter 6 a novel approach, perturbation discrimination, was used to investigate how the spatial arrangement of chromatic and luminance cues, and the type of chromatic and luminance information, can affect cue combination. Perturbation discrimination requires participants to select the grating stimulus that contains spatial perturbation. If one cue dominated over the other, it was expected that this would be reflected in masking and increased perturbation detection thresholds. We compared perturbation thresholds for chromatic- and luminance-defined line and square-wave gratings in isolation and when presented with a mask of the other channel and the other grating type. For example, the perturbation threshold for a luminance line target alone was compared to the threshold for a luminance line target presented with a chromatic square-wave mask. The introduction of line masks caused masking for both combinations. The introduction of an achromatic square-wave mask had no effect on perturbation thresholds for chromatic line targets. However, the introduction of a chromatic square-wave mask to luminance line targets improved perturbation discrimination performance. This suggests that the perceived location of the chromatic edges is determined by the location of the luminance lines. Finally, in Chapter 7, we investigated whether chromatic blur is constrained by luminance information in bipartite edges. Earlier in the thesis we demonstrated that luminance information constrains chromatic blur in natural scenes, but also that chromatic information has more influence than expected when colour and luminance edges conflict. This difference may be due to differences in the stimuli or to differences in the task. The luminance masking effect found using natural scenes was replicated using bipartite edges; therefore, the finding that luminance constrains chromatic blur is not limited to natural scene stimuli. This suggests that colour and luminance are combined differently for blur discrimination tasks and edge localisation tasks. Overall, we can see that luminance often dominates in edge perception tasks. For blur discrimination this seems to be because the mechanisms differ. For edge localisation it might simply be that luminance cues are often higher contrast and, when this is equated, chromatic cues are actually a good indicator of edge location.
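
    The maximum likelihood estimation model used as a benchmark above has a compact standard form: each cue is weighted by its reliability (inverse variance), so the combined estimate is (w_c x_c + w_l x_l) / (w_c + w_l) with w = 1/sigma^2, and its variance is lower than either cue's alone. A minimal sketch with made-up numbers:

```python
# Standard reliability-weighted (maximum likelihood) cue combination:
# weights are inverse variances, and combining cues reduces variance.
def mle_combine(x_colour, sigma_colour, x_lum, sigma_lum):
    w_colour = 1 / sigma_colour**2
    w_lum = 1 / sigma_lum**2
    estimate = (w_colour * x_colour + w_lum * x_lum) / (w_colour + w_lum)
    sigma_combined = (1 / (w_colour + w_lum)) ** 0.5
    return estimate, sigma_combined

# A noisier chromatic edge estimate is pulled toward the luminance estimate.
print(mle_combine(x_colour=0.4, sigma_colour=2.0, x_lum=0.0, sigma_lum=1.0))
```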

    Brain Computations and Connectivity [2nd edition]

    This is an open access title available under the terms of a CC BY-NC-ND 4.0 International licence. It is free to read on the Oxford Academic platform and offered as a free PDF download from OUP and selected open access locations. Brain Computations and Connectivity is about how the brain works. In order to understand this, it is essential to know what is computed by different brain systems and how the computations are performed. The aim of this book is to elucidate what is computed in different brain systems, and to describe current biologically plausible computational approaches and models of how each of these brain systems computes. Understanding the brain in this way has enormous potential for understanding ourselves better in health and in disease. Potential applications of this understanding are to the treatment of the brain in disease, and to artificial intelligence, which will benefit from knowledge of how the brain performs many of its extraordinarily impressive functions. This book is pioneering in taking this approach to brain function: considering what is computed by many of our brain systems, and how it is computed. It updates, with much new evidence including the connectivity of the human brain, the earlier book Rolls (2021) Brain Computations: What and How, Oxford University Press. Brain Computations and Connectivity will be of interest to all scientists interested in brain function and how the brain works, whether they come from neuroscience, from medical sciences including neurology and psychiatry, from computational science including machine learning and artificial intelligence, or from areas such as theoretical physics.

    Computational explorations of semantic cognition

    Motivated by the widespread use of distributional models of semantics within the cognitive science community, we follow a computational modelling approach in order to better understand and expand the applicability of such models, as well as to test potential ways in which they can be improved and extended. We review evidence in favour of the assumption that distributional models capture important aspects of semantic cognition. We look at the models’ ability to account for behavioural data and fMRI patterns of brain activity, and investigate the structure of model-based semantic networks. We test whether introducing affective information, obtained from a neural network model designed to predict emojis from co-occurring text, can improve the performance of linguistic and linguistic-visual models of semantics in accounting for similarity/relatedness ratings. We find that adding visual and affective representations improves performance, especially for concrete and abstract words respectively. We describe a processing model based on distributional semantics, in which activation spreads throughout a semantic network as dictated by the patterns of semantic similarity between words. We show that the activation profile of the network, measured at various time points, can account for response times and accuracies in lexical and semantic decision tasks, as well as for concreteness/imageability and similarity/relatedness ratings. We evaluate the differences between concrete and abstract words in terms of the structure of the semantic networks derived from distributional models of semantics. We examine how that structure relates to a number of factors that have been argued to differ between concrete and abstract words, namely imageability, age of acquisition, hedonic valence, contextual diversity, and semantic diversity. We use distributional models to explore factors that might be responsible for the poor linguistic performance of children with Developmental Language Disorder. Based on the assumption that certain model parameters can be given a psychological interpretation, we start from “healthy” models and generate “lesioned” models by manipulating the parameters. This allows us to determine the importance of each factor and its effects on learning concrete vs abstract words.
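
    The spreading-activation model described above can be sketched with a toy network: activation injected at a cue word diffuses over similarity-weighted edges, and the network's activation profile is read out at successive time steps. The words, similarity values, and decay constant below are invented for illustration.

```python
# Toy spreading activation over a similarity-weighted semantic network.
import numpy as np

words = ["dog", "cat", "bone", "freedom"]
sim = np.array([                       # invented pairwise similarities
    [0.0, 0.8, 0.6, 0.1],
    [0.8, 0.0, 0.3, 0.1],
    [0.6, 0.3, 0.0, 0.0],
    [0.1, 0.1, 0.0, 0.0],
])
transition = sim / sim.sum(axis=1, keepdims=True)

activation = np.array([1.0, 0.0, 0.0, 0.0])    # inject activation at "dog"
decay = 0.8                                    # assumed per-step decay
for step in range(3):
    activation = decay * (activation @ transition)
    print(step, dict(zip(words, np.round(activation, 3))))
```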

    29th Annual Computational Neuroscience Meeting: CNS*2020

    Meeting abstracts. This publication was funded by OCNS. The Supplement Editors declare that they have no competing interests. Virtual | 18-22 July 2020.