
    Towards a model of the emergence of action space maps in the motor cortex

    Self-organising maps can recreate many of the essential features of the known functional organisation of primary cortical areas in the mammalian brain. According to such models, cortical maps represent the spatio-temporal structure of sensory and/or motor input patterns registered during the early development of an animal, and this structure is determined by interactions between the neural control architecture, the body morphology, and the environmental context in which the animal develops. We present a minimal model of pseudo-physical interactions between an animat body and its environment, which includes each of these elements, and show how cortical map self-organisation is affected by manipulations to each element in turn. We find that maps robustly self-organise to reveal a homuncular organisation, in which nearby body parts tend to be represented by adjacent neurons, but suggest that a homunculus caricature of these maps masks the true organisation: a remapping from sensory coordinates into 'action spaces' for controlling movements of the body to obtain environmental reward. The results motivate a reappraisal of the classic motor cortex homunculus, and demonstrate the utility of an animat modelling approach for investigating the essential constraints that affect cortical map self-organisation.
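
    As a rough illustration of the kind of self-organising map at the heart of such models, the sketch below implements a standard Kohonen learning rule on toy random inputs; the animat body, environment, and reward signal described in the abstract are not reproduced, and all array sizes and learning parameters are arbitrary choices.

```python
# Minimal Kohonen self-organising map on toy inputs: illustrative only,
# not the authors' animat model.
import numpy as np

rng = np.random.default_rng(0)
n_units, n_inputs, input_dim = 100, 5000, 3      # 10x10 map, toy 3-D "sensorimotor" inputs
grid = np.array([(i, j) for i in range(10) for j in range(10)], dtype=float)
weights = rng.random((n_units, input_dim))

sigma0, lr0 = 3.0, 0.5
for t in range(n_inputs):
    x = rng.random(input_dim)                              # stand-in for a sensorimotor input pattern
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # best-matching unit
    sigma = sigma0 * np.exp(-t / n_inputs)                 # shrinking neighbourhood width
    lr = lr0 * np.exp(-t / n_inputs)                       # decaying learning rate
    dist2 = np.sum((grid - grid[bmu]) ** 2, axis=1)
    h = np.exp(-dist2 / (2 * sigma ** 2))                  # neighbourhood function on the map
    weights += lr * h[:, None] * (x - weights)             # move nearby units toward the input
```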

    The cognitive neuroscience of visual working memory

    Visual working memory allows us to temporarily maintain and manipulate visual information in order to solve a task. The study of the brain mechanisms underlying this function began more than half a century ago, with Scoville and Milner’s (1957) seminal discoveries with amnesic patients. This timely collection of papers brings together diverse perspectives on the cognitive neuroscience of visual working memory from multiple fields that have traditionally been fairly disjointed: human neuroimaging, electrophysiological, behavioural and animal lesion studies, investigating both the developing and the adult brain.

    An evaluation of how connectopic mapping reveals visual field maps in V1

    Functional gradients, in which response properties change gradually across the cortical surface, have been proposed as a key organising principle of the brain. However, the presence of these gradients remains undetermined in many brain regions. Resting-state neuroimaging studies have suggested these gradients can be reconstructed from patterns of functional connectivity. Here we investigate the accuracy of these reconstructions and establish whether it is connectivity or the functional properties within a region that determine these "connectopic maps". Different manifold learning techniques were used to recover visual field maps while participants were at rest or engaged in natural viewing. We benchmarked these reconstructions against maps measured by traditional visual field mapping. We report an initial exploratory experiment using a publicly available naturalistic imaging dataset, followed by a preregistered replication using larger resting-state and naturalistic imaging datasets from the Human Connectome Project. Connectopic mapping accurately predicted visual field maps in primary visual cortex, with better predictions for eccentricity than for polar angle maps. Non-linear manifold learning methods outperformed simpler linear embeddings. We also found more accurate predictions during natural viewing compared to resting-state. Varying the source of the connectivity estimates had minimal impact on the connectopic maps, suggesting the key factor is the functional topography within a brain region. The application of these standardised methods for connectopic mapping will allow the discovery of functional gradients across the brain. PROTOCOL REGISTRATION: The stage 1 protocol for this Registered Report was accepted in principle on 19 April 2022. The protocol, as accepted by the journal, can be found at https://doi.org/10.6084/m9.figshare.19771717.
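
    The sketch below illustrates one generic way a connectopic gradient can be recovered: correlate each vertex's time series with the rest of the brain to form connectivity fingerprints, then embed the fingerprint similarity matrix with a non-linear manifold method (here scikit-learn's spectral embedding). The data are synthetic, and the region sizes, preprocessing, and specific manifold techniques used in the study are not reproduced.

```python
# Rough sketch of a connectopic-mapping step: embed within-region connectivity
# fingerprints with a non-linear manifold method. Not the paper's exact pipeline.
import numpy as np
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(1)
ts_v1 = rng.standard_normal((200, 300))      # toy time series: 200 timepoints x 300 V1 vertices
ts_rest = rng.standard_normal((200, 1000))   # toy time series for the rest of the brain

# Connectivity fingerprint of each V1 vertex with voxels outside the region.
fingerprints = np.corrcoef(ts_v1.T, ts_rest.T)[:300, 300:]

# Similarity between fingerprints -> affinity matrix -> low-dimensional gradients.
affinity = np.corrcoef(fingerprints)
affinity = (affinity + 1) / 2                # map correlations into [0, 1] for a valid affinity
gradients = SpectralEmbedding(n_components=2, affinity="precomputed").fit_transform(affinity)

# gradients[:, 0] and gradients[:, 1] would then be compared with measured
# eccentricity and polar-angle maps from conventional retinotopic mapping.
```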

    Properties of Visual Field Maps in Health and Disease

    The visual world that surrounds us is represented in and processed by multiple topographically organised maps in the human brain. The organising principle underlying these retinotopic maps is also apparent across other sensory modalities and appears highly conserved across species. Moreover, the template for these visual maps is laid down during development, without the need for visual experience. This thesis binds and summarises seven publications describing work to characterise the functional properties of visual maps in the human brain. Initially, we describe TMS and fMRI measurements designed to probe the functional specificity of two distinct but spatially adjacent maps, LO-1 and LO-2. Concurrently I developed software (visualisation tools) for precise dissection of these areas and to more broadly facilitate the visualisation of neuroimaging data. Our experiment revealed a double dissociation in the functional specificity of these areas, with preferential processing of orientation and shape information by LO-1 and LO-2, respectively. We then used fMRI to examine the effect of spatial attention on the responses measured from visual field maps. We showed that attention modulated visual responses by both enhancing attended locations and suppressing unattended locations; these effects were evident in the maps of early visual cortex and subcortical structures including the lateral geniculate and pulvinar nuclei. Finally, we examined the properties of visual field maps in patients with retinal lesions. Although maps can be abnormally organised with certain congenital visual deficits, we asked whether normally developed maps were able to reorganise when input to them is lost later in life, specifically due to central retinal lesions. Our measurements showed no evidence of reorganisation in the maps of patients with macular degeneration: the extent of activity measured in these maps was both highly predictable based on individual retinal lesions and could be reliably simulated in normally sighted individuals.

    Artificial ontogenesis: a connectionist model of development

    This thesis suggests that ontogenetic adaptive processes are important for generating intelligent behaviour. It is thus proposed that such processes, as they occur in nature, need to be modelled and that such a model could be used for generating artificial intelligence, and specifically robotic intelligence. Hence, this thesis focuses on how mechanisms of intelligence are specified.

    A major problem in robotics is the need to predefine the behaviour to be followed by the robot. This makes design intractable for all but the simplest tasks and results in controllers that are specific to that particular task and are brittle when faced with unforeseen circumstances. These problems can be resolved by providing the robot with the ability to adapt the rules it follows and to autonomously create new rules for controlling behaviour. This solution thus depends on the predefinition of how rules to control behaviour are to be learnt rather than the predefinition of rules for behaviour themselves.

    Learning new rules for behaviour occurs during the developmental process in biology. Changes in the structure of the cerebral cortex underlie behavioural and cognitive development throughout infancy and beyond. The uniformity of the neocortex suggests that there is significant computational uniformity across the cortex resulting from uniform mechanisms of development, and holds out the possibility of a general model of development. Development is an interactive process between genetic predefinition and environmental influences. This interactive process is constructive: qualitatively new behaviours are learnt by using simple abilities as a basis for learning more complex ones. The progressive increase in competence, provided by development, may be essential to make tractable the process of acquiring higher-level abilities.

    While simple behaviours can be triggered by direct sensory cues, more complex behaviours require the use of more abstract representations. There is thus a need to find representations at the correct level of abstraction appropriate to controlling each ability. In addition, finding the correct level of abstraction makes tractable the task of associating sensory representations with motor actions. Hence, finding appropriate representations is important both for learning behaviours and for controlling behaviours. Representations can be found by recording regularities in the world or by discovering re-occurring patterns through repeated sensory-motor interactions. By recording regularities within the representations thus formed, more abstract representations can be found. Simple, non-abstract, representations thus provide the basis for learning more complex, abstract, representations.

    A modular neural network architecture is presented as a basis for a model of development. The pattern of activity of the neurons in an individual network constitutes a representation of the input to that network. This representation is formed through a novel, unsupervised, learning algorithm which adjusts the synaptic weights to improve the representation of the input data. Representations are formed by neurons learning to respond to correlated sets of inputs. Neurons thus become feature detectors or pattern recognisers. Because the nodes respond to patterns of inputs, they encode more abstract features of the input than are explicitly encoded in the input data itself. In this way simple representations provide the basis for learning more complex representations. The algorithm allows both more abstract representations to be formed by associating correlated, coincident, features together, and invariant representations to be formed by associating correlated, sequential, features together.

    The algorithm robustly learns accurate and stable representations, in a format most appropriate to the structure of the input data received: it can represent both single and multiple input features in both the discrete and continuous domains, using either topologically or non-topologically organised nodes. The output of one neural network is used to provide inputs for other networks. The robustness of the algorithm enables each neural network to be implemented using an identical algorithm. This allows a modular 'assembly' of neural networks to be used for learning more complex abilities: the output activations of a network can be used as the input to other networks which can then find representations of more abstract information within the same input data; and, by defining the output activations of neurons in certain networks to have behavioural consequences, it is possible to learn sensory-motor associations, to enable sensory representations to be used to control behaviour.
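
    The thesis's learning algorithm is not specified in the abstract; as a loose illustration of the general idea that unsupervised neurons can come to respond to correlated sets of inputs, here is a generic competitive Hebbian learner run on toy binary patterns. All sizes and rates are arbitrary, and this is not the thesis's algorithm.

```python
# Generic competitive Hebbian learner: nodes become detectors of correlated
# input sets. Illustrative only; not the thesis's specific algorithm.
import numpy as np

rng = np.random.default_rng(2)
n_in, n_out = 16, 4
W = rng.random((n_out, n_in))
W /= W.sum(axis=1, keepdims=True)                 # keep weight vectors normalised

prototypes = rng.random((n_out, n_in)) > 0.5      # four correlated input "features"
lr = 0.05
for _ in range(2000):
    x = (prototypes[rng.integers(n_out)] + 0.1 * rng.standard_normal(n_in)).clip(0, 1)
    y = W @ x
    winner = np.argmax(y)                         # competition: only the best-matching node learns
    W[winner] += lr * (x - W[winner])             # move its weights toward the input pattern
    W[winner] /= W[winner].sum()                  # renormalise so no node dominates

# Each row of W now approximates one correlated input set, i.e. a feature detector.
```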

    Schema and value: Characterizing the role of the rostral and ventral medial prefrontal cortex in episodic future thinking

    As humans we are not stuck in an everlasting present. Instead, we can project ourselves into both our personal past and future. Remembering the past and simulating the future are strongly interrelated processes. They are both supported by largely the same brain regions including the rostral and ventral medial prefrontal cortex (mPFC) but also the hippocampus, the posterior cingulate cortex (PCC), as well as other regions in the parietal and temporal cortices. Interestingly, this core network for episodic simulation and episodic memory partially overlaps with a brain network for evaluation and value-based decision making. This is particularly the case for the mPFC. This part of the brain has been associated both with a large number of different cognitive functions ranging from the representation of memory schemas and self-referential processing to the representation of value and affect. As a consequence, a unifying account of mPFC functioning has remained elusive. The present thesis investigates the unique contribution of the mPFC to episodic simulation by highlighting its role in the representation of memory schemas and value. In a first functional MRI and pre-registered behavioral replication study, we demonstrate that the mPFC encodes representations of known people as well as of known locations from participants’ everyday life. We demonstrate that merely imagined encounters with liked vs. disliked people at these locations can change our attitude toward the locations. The magnitude of this simulation-induced attitude change was predicted by activation in the mPFC during the simulations. Specifically, locations simulated with liked people exhibited significantly larger increases in liking than those simulated with disliked people. In a second behavioral study, we examined the mechanisms of simulation-based learning more closely. To this end, participants also simulated encounters with neutral people at neutral locations. Using repeated behavioral assessments of participants’ memory representations, we reveal that simulations cause an integration of memory representations for jointly simulated people and locations. Moreover, compared to the neutral baseline condition we demonstrate a transfer of positive valence from liked and of negative valence from disliked people to their paired locations. We also provide evidence that simulations induce an affective experience that aligns with the valence of the person and that this experience can account for the observed attitude change toward the location. In a final fMRI study, we examine the structure of memory representations encoded in the mPFC. Specifically, we provide evidence for the hypothesis that the mPFC encodes schematic representations of our social and physical environment. We demonstrate that representations of individual exemplars of these environments (i.e., individual people and locations) are closely intertwined with a representation of their value. In sum, our findings show that we can learn from imagined experience much as we learn from actual past experience and that the mPFC plays a key role in simulation-based learning. The mPFC encodes information about our environment in value-weighted schematic representations. These representations can account for the overlap of mnemonic and evaluative functions in the mPFC and might play a key role in simulation-based learning. Our results are in line with a view that our memories of the past serve us in ways that are oriented toward the future. 
Our ability to simulate potential scenarios allows us to anticipate the future consequences of our choices and thereby fosters farsighted decision making. Thus, our findings help to better characterize the functional role of the mPFC in episodic future simulation and valuation.
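
    As a generic illustration of how one might test whether item representations are intertwined with a representation of value, the sketch below runs a minimal representational similarity analysis on synthetic data, correlating pairwise pattern dissimilarity with pairwise value dissimilarity; it is not the thesis's actual analysis, and all variable names and sizes are hypothetical.

```python
# Generic representational-similarity sketch: does pattern similarity between
# items track similarity in their value? Synthetic data; not the thesis's analysis.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_items, n_voxels = 20, 150
patterns = rng.standard_normal((n_items, n_voxels))     # toy mPFC multivoxel patterns per item
values = rng.uniform(-1, 1, n_items)                    # toy subjective value ratings

neural_rdm = 1 - np.corrcoef(patterns)                  # pairwise pattern dissimilarity
value_rdm = np.abs(values[:, None] - values[None, :])   # pairwise value dissimilarity

iu = np.triu_indices(n_items, k=1)                      # use each item pair once
rho, p = spearmanr(neural_rdm[iu], value_rdm[iu])
print(f"neural-value RDM correlation: rho={rho:.2f}, p={p:.3f}")
```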

    Bridging generative models and Convolutional Neural Networks for domain-agnostic segmentation of brain MRI

    Segmentation of brain MRI scans is paramount in neuroimaging, as it is a prerequisite for many subsequent analyses. Although manual segmentation is considered the gold standard, it suffers from severe reproducibility issues, and is extremely tedious, which limits its application to large datasets. Therefore, there is a clear need for automated tools that enable fast and accurate segmentation of brain MRI scans. Recent methods rely on convolutional neural networks (CNNs). While CNNs obtain accurate results on their training domain, they are highly sensitive to changes in resolution and MRI contrast. Although data augmentation and domain adaptation techniques can increase the generalisability of CNNs, these methods still need to be retrained for every new domain, which requires costly labelling of images. Here, we present a learning strategy to make CNNs agnostic to MRI contrast, resolution, and numerous artefacts. Specifically, we train a network with synthetic data sampled from a generative model conditioned on segmentations. Crucially, we adopt a domain randomisation approach where all generation parameters are drawn for each example from uniform priors. As a result, the network is forced to learn domain-agnostic features, and can segment real test scans without retraining. The proposed method almost achieves the accuracy of supervised CNNs on their training domain, and substantially outperforms state-of-the-art domain adaptation methods. Finally, based on this learning strategy, we present a segmentation suite for robust analysis of heterogeneous clinical scans. Overall, our approach unlocks the development of morphometry on millions of clinical scans, which ultimately has the potential to improve the diagnosis and characterisation of neurological disorders.
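
    A much simplified sketch of the domain-randomisation idea is given below: starting from a label map, per-label intensities, blurring, and intensity scaling are all drawn from broad random priors, so every synthetic training image has a different contrast and effective resolution. The published generative model includes further components (e.g. spatial deformation and artefact simulation) that are omitted here, and all parameter ranges are illustrative.

```python
# Simplified domain-randomised synthetic image generation from a label map.
# Not the exact published generative model; parameter ranges are arbitrary.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(4)
labels = rng.integers(0, 4, size=(64, 64, 64))               # toy 3-D segmentation with 4 classes

def synth_image(label_map, rng):
    means = rng.uniform(0, 255, size=label_map.max() + 1)    # random contrast for each label
    stds = rng.uniform(1, 25, size=label_map.max() + 1)
    img = rng.normal(means[label_map], stds[label_map])      # GMM conditioned on the labels
    img = gaussian_filter(img, sigma=rng.uniform(0.5, 3.0))  # random effective resolution
    img *= 1 + 0.3 * rng.standard_normal()                   # crude stand-in for intensity scaling
    return (img - img.min()) / (np.ptp(img) + 1e-8)          # rescale to [0, 1]

image, target = synth_image(labels, rng), labels             # one (input, segmentation) training pair
```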

    Functional coupling between CA3 and laterobasal amygdala supports schema dependent memory formation

    The medial temporal lobe drives semantic congruence dependent memory formation. However, the exact roles of hippocampal subfields and surrounding brain regions remain unclear. Here, we used an established paradigm and high-resolution functional magnetic resonance imaging of the medial temporal lobe together with cytoarchitectonic probability estimates in healthy humans. Behaviorally, robust congruence effects emerged in young and older adults, indicating that schema dependent learning is unimpaired during healthy aging. Within the medial temporal lobe, semantic congruence was associated with hemodynamic activity in the subiculum, CA1, CA3 and dentate gyrus, as well as the entorhinal cortex and laterobasal amygdala. Importantly, a subsequent memory analysis showed increased activity for later remembered vs. later forgotten congruent items specifically within CA3, and this subfield showed enhanced functional connectivity to the laterobasal amygdala. As such, our findings extend current models on schema dependent learning by pinpointing the functional properties of subregions within the medial temporal lobe.
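
    As a generic, hypothetical illustration of a subsequent-memory coupling analysis of this kind, the sketch below correlates trial-wise responses of two regions separately for later-remembered and later-forgotten items, using synthetic data; it is not the study's actual connectivity method.

```python
# Generic beta-series-style coupling sketch: correlate trial-wise responses of two ROIs
# separately for later-remembered and later-forgotten items. Synthetic, hypothetical data.
import numpy as np

rng = np.random.default_rng(5)
n_trials = 80
remembered = rng.random(n_trials) > 0.5                  # toy subsequent-memory outcome per trial
ca3 = rng.standard_normal(n_trials)                      # toy trial-wise CA3 response estimates
amygdala = 0.5 * ca3 * remembered + rng.standard_normal(n_trials)   # coupling built in for remembered trials

r_rem = np.corrcoef(ca3[remembered], amygdala[remembered])[0, 1]
r_forg = np.corrcoef(ca3[~remembered], amygdala[~remembered])[0, 1]
print(f"CA3-amygdala coupling: remembered r={r_rem:.2f}, forgotten r={r_forg:.2f}")
```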

    Developing and Validating Open Source Tools for Advanced Neuroimaging Research

    Almost all scientific research relies on software. This is particularly true for research that uses neuroimaging technologies, such as functional magnetic resonance imaging (fMRI). These technologies generate massive amounts of data per participant, which must be processed and analyzed using specialized software. A large portion of these tools are developed by teams of researchers, rather than trained software developers. In this kind of ecosystem, where the majority of software creators are scientists, rather than trained programmers, it becomes more important than ever to rely on community-based development, which may explain why most of this software is open source. It is in the development of this kind of research-oriented, open source software that I have focused much of my graduate training, as is reflected in this dissertation. One software package I have helped to develop and maintain is tedana, a Python library for denoising multi-echo fMRI data. In chapter 2, I describe this library in a short, published software paper. Another library I maintain as the primary developer is NiMARE, a Python library for performing neuroimaging meta-analyses and derivative analyses, such as automated annotation and functional decoding. In chapter 3, I present NiMARE in a hybrid software paper with embedded tutorial code exhibiting the functionality of the library. This paper is currently hosted as a Jupyter book that combines narrative content and code snippets that can be executed online. In addition to research software development, I have focused my graduate work on performing reproducible, open fMRI research. To that end, chapter 4 is a replication and extension of a recent paper on multi-echo fMRI denoising methods (Power et al., 2018a). This replication was organized as a registered report, in which the introduction and methods were submitted for peer review before the analyses were performed. Finally, chapter 5 is a conclusion to the dissertation, in which I reflect on the work I have done and the skills I have developed throughout my training.
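
    A minimal example of calling tedana's multi-echo denoising workflow from Python is sketched below; the file names and echo times are placeholders, and the exact keyword arguments should be checked against the installed version's documentation.

```python
# Minimal multi-echo denoising call via tedana's Python interface.
# File names and echo times below are hypothetical placeholders.
from tedana.workflows import tedana_workflow

echo_files = [
    "sub-01_task-rest_echo-1_bold.nii.gz",   # hypothetical echo-wise BOLD files
    "sub-01_task-rest_echo-2_bold.nii.gz",
    "sub-01_task-rest_echo-3_bold.nii.gz",
]
echo_times = [14.5, 29.7, 45.0]              # placeholder echo times in milliseconds

# Runs the full tedana denoising pipeline and writes outputs to the given directory.
tedana_workflow(data=echo_files, tes=echo_times, out_dir="tedana_outputs")
```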

    Eyetracking metrics reveal impaired spatial anticipation in behavioural variant frontotemporal dementia.

    Eyetracking technology has had limited application in the dementia field to date, with most studies attempting to discriminate syndrome subgroups on the basis of basic oculomotor functions rather than higher-order cognitive abilities. Eyetracking-based tasks may also offer opportunities to reduce or ameliorate problems associated with standard paper-and-pencil cognitive tests, such as the complexity and linguistic demands of verbal test instructions, and the problems of tiredness and attention associated with lengthy tasks that generate few data points at a slow rate. In the present paper we adapted the Brixton spatial anticipation test to a computerized instruction-less version where oculomotor metrics, rather than overt verbal responses, were taken into account as indicators of high level cognitive functions. Twelve bvFTD patients (in whom spatial anticipation deficits were expected), six SD patients (in whom deficits were predicted to be less frequent) and 38 healthy controls were presented with a 10 × 7 matrix of white circles. During each trial (N = 24) a black dot moved across seven positions on the screen, following 12 different patterns. Participants' eye movements were recorded. Frequentist statistical analysis of standard eye movement metrics was complemented by a Bayesian machine learning (ML) approach in which raw eyetracking time series datasets were examined to explore the ability to discriminate diagnostic groups not only on overall performance but also on individual trials. The original pen-and-paper Brixton test identified a spatial anticipation deficit in 7/12 (58%) of bvFTD patients and in 2/6 (33%) of SD patients. The eyetracking frequentist approach reported the deficit in 11/12 (92%) of bvFTD patients and in none (0%) of the SD patients. The machine learning approach had the main advantage of identifying significant differences from controls in 24/24 individual trials for bvFTD patients and in only 12/24 for SD patients. Results indicate that the fine grained rich datasets obtained from eyetracking metrics can inform us about high level cognitive functions in dementia, such as spatial anticipation. The ML approach can help identify conditions where subtle deficits are present and, potentially, contribute to test optimisation and the reduction of testing times. The absence of instructions also favoured a better distinction between different clinical groups of patients and can help provide valuable disease-specific markers.
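
    The sketch below is a simplified stand-in for trial-level classification of eyetracking traces (patients vs. controls), using synthetic gaze time series and an off-the-shelf naive Bayes classifier with cross-validation; the study's actual Bayesian ML pipeline, features, and data are not reproduced.

```python
# Simplified stand-in for trial-level classification of eyetracking time series.
# Synthetic data and a naive Bayes classifier; not the study's actual pipeline.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_per_group, n_samples = 30, 200                        # toy: 200 gaze samples per trial
controls = rng.standard_normal((n_per_group, n_samples))
patients = rng.standard_normal((n_per_group, n_samples)) + 0.4   # built-in group difference

X = np.vstack([controls, patients])                     # each row: one trial's gaze trace
y = np.array([0] * n_per_group + [1] * n_per_group)     # 0 = control, 1 = patient

scores = cross_val_score(GaussianNB(), X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```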