
    The neural coding of properties shared by faces, bodies and objects

    Previous studies have identified relatively separated regions of the brain that respond strongly when participants view images of either faces, bodies or objects. The aim of this thesis was to investigate how and where in the brain shared properties of faces, bodies and objects are processed. We selected three properties that are shared by faces and bodies: shared categories (sex and weight), shared identity, and shared orientation (i.e. facing direction). We also investigated one property shared by faces and objects, the tendency to process a face or object as a whole rather than by its parts, which is known as holistic processing. We hypothesized that these shared properties might be encoded separately for faces, bodies and objects in the previously defined domain-specific regions, or alternatively that they might be encoded in an overlapping or shared code in those or other regions. In all of the studies in this thesis, we used fMRI to record the brain activity of participants viewing images of faces and bodies or objects that differed in the shared properties of interest. We then investigated the neural responses these stimuli elicited in a variety of specifically localized brain regions responsive to faces, bodies or objects, as well as across the whole brain. Our results showed evidence for a mix of overlapping coding, shared coding and domain-specific coding, depending on the particular property and the level of abstraction of its neural coding. We found we could decode face and body categories, identities and orientations from both face- and body-responsive regions, showing that these properties are encoded in overlapping brain regions. We also found that non-domain-specific brain regions are involved in holistic face processing.
We identified shared coding of orientation and weight in the occipital cortex and shared coding of identity in the early visual cortex, right inferior occipital cortex, right parahippocampal cortex and right superior parietal cortex, demonstrating that a variety of brain regions combine face and body information into a common code. In contrast to these findings, we found evidence that high-level visual transformations may be predominantly processed in domain-specific regions, as we could most consistently decode body categories across image size and body identity across viewpoint from body-responsive regions. In conclusion, this thesis furthers our understanding of the neural coding of face, body and object properties and gives new insights into the functional organisation of occipitotemporal cortex.
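The cross-decoding logic behind analyses like decoding identity across viewpoint — train a classifier on patterns from one stimulus condition and test it on another — can be sketched as follows. This is a minimal illustration on simulated voxel patterns, not the thesis's data or pipeline; all sizes and parameters are assumptions:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 100

# Simulated voxel patterns for two identities seen from two viewpoints.
# A viewpoint-invariant identity signal is embedded in every trial.
identity_signal = rng.normal(size=(2, n_voxels))

def make_patterns(viewpoint_shift):
    X, y = [], []
    for identity in (0, 1):
        trials = (identity_signal[identity] + viewpoint_shift
                  + rng.normal(scale=2.0, size=(n_trials, n_voxels)))
        X.append(trials)
        y += [identity] * n_trials
    return np.vstack(X), np.array(y)

X_view1, y_view1 = make_patterns(rng.normal(size=n_voxels))
X_view2, y_view2 = make_patterns(rng.normal(size=n_voxels))

# Train on viewpoint 1, test on viewpoint 2: above-chance accuracy
# indicates a code for identity that generalises across viewpoint.
clf = LinearSVC(dual=False).fit(X_view1, y_view1)
accuracy = clf.score(X_view2, y_view2)
print(f"cross-viewpoint decoding accuracy: {accuracy:.2f}")
```

Because the classifier never sees viewpoint-2 patterns during training, above-chance test accuracy cannot be explained by viewpoint-specific features alone.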

    Decoding the consumer’s brain: Neural representations of consumer experience

    Understanding consumer experience – what consumers think about brands, how they feel about services, whether they like certain products – is crucial to marketing practitioners. ‘Neuromarketing’, as the application of neuroscience in marketing research is called, has generated excitement with the promise of understanding consumers’ minds by probing their brains directly. Recent advances in neuroimaging analysis leverage machine learning and pattern classification techniques to uncover patterns from neuroimaging data that can be associated with thoughts and feelings. In this dissertation, I measure the brain responses of consumers with functional magnetic resonance imaging (fMRI) in order to ‘decode’ their minds. In three different studies, I have demonstrated how different aspects of consumer experience can be studied with fMRI recordings. First, I study how consumers think about brand image by comparing their brain responses during passive viewing of visual templates (photos depicting various social scenarios) to those during active visualizing of a brand’s image. Second, I use brain responses during viewing of affective pictures to decode emotional responses while watching movie trailers. Lastly, I examine whether marketing videos that evoke s…

    Revealing neural representations of movements and skill using multi voxel pattern analysis

    One of the main functions of the human brain is to process information, such that we can interact efficiently with our environment by moving our body. Neuronal representations of information pertaining to movement are fundamental for its control. Using functional magnetic resonance imaging, researchers have studied brain areas that are responsible for motor control based on overall neuronal signal changes. It is assumed that the amount of overall activity indicates how much an area is involved in the control of movements. In this thesis, I start from the approach that the representation of critical variables describing the movements, rather than the overall activation, is the most relevant factor for a region to be important in the control of an action. Representations in three major fields of motor control were studied in this thesis. First, the integration of sensory and motor information was analysed via finger representations in the cerebellum and the neocortex. The findings suggest that sensory and motor representations of fingers overlap spatially in the neocortex but are interdigitated in the cerebellum, suggesting neuronal differences in how information is integrated in the two structures. Then, neuronal reorganisations of representations were studied during motor learning. The results showed that the neural representation of sequences becomes more distinct with training, while the overall activity does not change. Lastly, I studied effector-specific and effector-independent representations of sequential motor behaviours by investigating the similarity of neuronal representations for left and right hand performance. Overall, this thesis demonstrates that the study of neural representations using multivariate methods in fMRI provides a new hypothesis-driven approach to the study of human motor control and learning of movements.

    The Role of Unimodal and Transmodal Cortex in Perceptually-Coupled and Decoupled Semantic Cognition: Evidence from fMRI

    Semantic retrieval extends beyond the here-and-now, to draw on abstract knowledge that has been extracted across multiple experiences; for instance, we can easily bring to mind what a dog looks and sounds like, even when a dog is not present in our environment. However, a clear understanding of the neural substrates that support patterns of semantic retrieval that are not immediately driven by stimuli in the environment is lacking. This thesis sought to investigate the neural basis of semantic retrieval within unimodal and heteromodal networks, whilst manipulating the availability of information in the environment. Much of the empirical work takes inspiration from modern accounts of transmodal regions (Lambon Ralph et al. 2017; Margulies et al. 2016), which suggest the anterior temporal lobe (ATL) and default mode network (DMN) support both abstraction and perceptual decoupling. The first empirical chapter examines whether words and experiences activate common neural substrates in sensory regions and where, within the ATLs, representations are transmodal. The second empirical chapter investigates how perceptually-decoupled forms of semantic retrieval in imagination are represented across unimodal and transmodal regions. The third empirical chapter interrogates whether transmodal regions respond in a similar manner to conceptually-guided and perceptually-decoupled cognition, and whether these two factors interact. The data suggest that the ventrolateral ATL supports both abstract, modality-invariant semantic representations (Chapter 3) and decoupled semantic processing during imagination (Chapter 4). In addition, this thesis found that comparable networks, corresponding to the broader DMN, were recruited for both conceptual processing and perceptually-decoupled retrieval (Chapter 5). Further interrogation of these sites confirmed that the lateral MTG and bilateral angular gyrus were pivotal in combining conceptual retrieval from memory.
Collectively, these data suggest that brain regions situated farthest from sensory input systems, in both functional and connectivity space, are required for the most abstract forms of cognition.

    Detecting cognitive states from the analysis of structural and functional images of the brain: two applications of Multi-Voxel Pattern Analysis on MRI and fMRI data

    In recent years, the efficacy and accuracy of multivariate analysis techniques on neuroimaging data have been tested on different topics. These methods have shown the ability to decode mental states from the analysis of brain scans; for this reason, the approach has been called “brain reading”. The predictions can be applied to general mental states, referring to stable conditions not related to a contingent task (e.g., a neurological diagnosis), or specific mental states, referring to task-related cognitive processes (e.g., the perception of a category of stimuli). According to several neuroscientists, the brain-reading approach can potentially be useful for applications in both clinical and forensic neuroscience in the future. In the present dissertation, two applications of the brain-reading approach are presented on two relevant topics for clinical and forensic neuroscience that have not been extensively investigated with these methods. In Section A, this approach is tested on decoding different levels of Cognitive Reserve from the pattern of grey matter volume, in two MRI studies. In Section B, two fMRI studies investigate the possibility of decoding real autobiographical memories from brain activity. The aim of this thesis is to contribute to the growing body of studies showing the usefulness of multivariate techniques in decoding “mental states” from the analysis of structural and functional brain imaging data, as well as their potential uses in clinical and forensic settings.
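The decoding step common to these studies — predicting a state label from a multi-voxel pattern and estimating accuracy on held-out data — can be sketched with standard scikit-learn tools. The data here are simulated and all parameters are assumptions; this is a sketch of the general method, not the dissertation's actual pipeline:

```python
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(42)
n_per_class, n_voxels = 60, 200

# Simulated patterns (e.g. grey-matter volumes or activity maps) for two
# mental states: each state adds a weak, distributed signal on top of noise.
signal = rng.normal(scale=0.5, size=(2, n_voxels))
X = np.vstack([signal[c] + rng.normal(size=(n_per_class, n_voxels))
               for c in (0, 1)])
y = np.repeat([0, 1], n_per_class)

# Cross-validation estimates how well the pattern generalises to held-out
# scans; chance level for two balanced classes is 0.5.
clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
scores = cross_val_score(clf, X, y,
                         cv=StratifiedKFold(5, shuffle=True, random_state=0))
mean_accuracy = scores.mean()
print(f"mean decoding accuracy: {mean_accuracy:.2f}")
```

Cross-validated accuracy well above chance is the core evidence behind any "brain reading" claim; in real studies the folds are typically defined by scanning runs or participants rather than random shuffles.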

    Neural processes underpinning episodic memory

    Episodic memory is the memory for our personal past experiences. Although numerous functional magnetic resonance imaging (fMRI) studies investigating its neural basis have revealed a consistent and distributed network of associated brain regions, surprisingly little is known about the contributions individual brain areas make to the recollective experience. In this thesis I address this fundamental issue by employing a range of different experimental techniques including neuropsychological testing, virtual reality environments, whole brain and high spatial resolution fMRI, and multivariate pattern analysis. Episodic memory recall is widely agreed to be a reconstructive process, one that is known to be critically reliant on the hippocampus. I therefore hypothesised that the same neural machinery responsible for reconstruction might also support ‘constructive’ cognitive functions such as imagination. To test this proposal, patients with focal bilateral damage to the hippocampus were asked to imagine new experiences and were found to be impaired relative to matched control participants. Moreover, driving this deficit was a lack of spatial coherence in their imagined experiences, pointing to a role for the hippocampus in binding together the disparate elements of a scene. A subsequent fMRI study involving healthy participants compared the recall of real memories with the construction of imaginary memories. This revealed a fronto-temporo-parietal network common to both tasks that included the hippocampus, ventromedial prefrontal, retrosplenial and parietal cortices. Based on these results I advanced the notion that this network might support the process of ‘scene construction’, defined as the generation and maintenance of a complex and coherent spatial context. Furthermore, I argued that this scene construction network might underpin other important cognitive functions besides episodic memory and imagination, such as navigation and thinking about the future.
It has been proposed that spatial context may act as the scaffold around which episodic memories are built. Given that the hippocampus appears to play a critical role in imagination by supporting the creation of a rich coherent spatial scene, I sought to explore the nature of this hippocampal spatial code in a novel way. By combining high spatial resolution fMRI with multivariate pattern analysis techniques it proved possible to accurately determine where a subject was located in a virtual reality environment based solely on the pattern of activity across hippocampal voxels. For this to have been possible, the hippocampal population code must be large and non-uniform. I then extended these techniques to the domain of episodic memory by showing that individual memories could be accurately decoded from the pattern of activity across hippocampal voxels, thus identifying individual memory traces. I consider these findings together with other recent advances in the episodic memory field, and present a new perspective on the role of the hippocampus in episodic recollection. I discuss how this new (and preliminary) framework compares with current prevailing theories of hippocampal function, and suggest how it might account for some previously contradictory data.
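The multi-class place decoding described above — predicting which of several locations a pattern came from — can be illustrated with a correlation-based nearest-mean classifier, a common alternative to SVMs in fMRI decoding. The simulated patterns and all sizes below are assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(3)
n_locations, n_trials, n_voxels = 4, 10, 50

# Each location evokes a characteristic (but noisy) hippocampal pattern.
prototypes = rng.normal(size=(n_locations, n_voxels))
trials = prototypes[:, None, :] + rng.normal(
    scale=1.0, size=(n_locations, n_trials, n_voxels))

# Split trials into train/test halves; classify each test pattern by its
# correlation with the training-half mean pattern of every location.
train_means = trials[:, :5].mean(axis=1)
correct = 0
for loc in range(n_locations):
    for pattern in trials[loc, 5:]:
        r = [np.corrcoef(pattern, m)[0, 1] for m in train_means]
        correct += int(np.argmax(r) == loc)

accuracy = correct / (n_locations * 5)
print(f"location decoding accuracy: {accuracy:.2f} (chance 0.25)")
```

Above-chance accuracy with four alternatives implies that different locations map onto reliably different voxel patterns, which is what licenses the inference about a large, non-uniform population code.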

    Balancing Prediction and Sensory Input in Speech Comprehension: The Spatiotemporal Dynamics of Word Recognition in Context.

    Spoken word recognition in context is remarkably fast and accurate, with recognition times of ∼200 ms, typically well before the end of the word. The neurocomputational mechanisms underlying these contextual effects are still poorly understood. This study combines source-localized electroencephalographic and magnetoencephalographic (EMEG) measures of real-time brain activity with multivariate representational similarity analysis to determine directly the timing and computational content of the processes evoked as spoken words are heard in context, and to evaluate the respective roles of bottom-up and predictive processing mechanisms in the integration of sensory and contextual constraints. Male and female human participants heard simple (modifier-noun) English phrases that varied in the degree of semantic constraint that the modifier (W1) exerted on the noun (W2), as in pairs such as "yellow banana." We used gating tasks to generate estimates of the probabilistic predictions generated by these constraints as well as measures of their interaction with the bottom-up perceptual input for W2. Representational similarity analysis models of these measures were tested against electroencephalographic and magnetoencephalographic brain data across a bilateral fronto-temporo-parietal language network. Consistent with probabilistic predictive processing accounts, we found early activation of semantic constraints in frontal cortex (LBA45) as W1 was heard. The effects of these constraints (at 100 ms after W2 onset in left middle temporal gyrus and at 140 ms in left Heschl's gyrus) were only detectable, however, after the initial phonemes of W2 had been heard.
Within an overall predictive processing framework, bottom-up sensory inputs are still required to achieve early and robust spoken word recognition in context.

SIGNIFICANCE STATEMENT: Human listeners recognize spoken words in natural speech contexts with remarkable speed and accuracy, often identifying a word well before all of it has been heard. In this study, we investigate the brain systems that support this important capacity, using neuroimaging techniques that can track real-time brain activity during speech comprehension. This makes it possible to locate the brain areas that generate predictions about upcoming words and to show how these expectations are integrated with the evidence provided by the speech being heard. We use the timing and localization of these effects to provide the most specific account to date of how the brain achieves an optimal balance between prediction and sensory input in the interpretation of spoken language.
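The representational similarity analysis logic used here — testing whether the pairwise geometry of the brain data matches the geometry a model predicts — can be sketched on simulated data. All dimensions, noise levels and the linear mixing below are assumptions for illustration:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_conditions, n_voxels, n_model_dims = 12, 150, 20

# Hypothetical model feature vectors for each condition (e.g. constraint-
# based predictions for each word), and neural patterns that partly
# reflect them through an unknown linear mapping plus noise.
model_features = rng.normal(size=(n_conditions, n_model_dims))
mixing = rng.normal(size=(n_model_dims, n_voxels))
neural_patterns = (model_features @ mixing
                   + rng.normal(scale=5.0, size=(n_conditions, n_voxels)))

# Representational dissimilarity matrices: pairwise correlation distances
# between conditions in model space and in voxel space (upper triangles).
model_rdm = pdist(model_features, metric="correlation")
neural_rdm = pdist(neural_patterns, metric="correlation")

# Spearman correlation between the two RDMs: a positive value means the
# neural geometry mirrors the geometry the model predicts.
rho, _ = spearmanr(model_rdm, neural_rdm)
print(f"model-brain RDM correlation: {rho:.2f}")
```

Because RSA compares dissimilarity structures rather than raw values, the same model can be tested against any measurement modality (fMRI, EEG, MEG) and, in time-resolved data, at each time point separately.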

    The Neural Representation of Scenes in Visual Cortex

    Recent neuroimaging studies have identified a number of regions in the human brain that respond preferentially to visual scenes. These regions are thought to underpin our ability to perceive and interact with our local visual environment. However, the precise stimulus dimensions underlying the function of scene-selective regions remain controversial. Some accounts have proposed an organisation based on relatively high-level semantic or categorical properties of the stimulus. However, other accounts have suggested that lower-level visual features of the stimulus may offer a more parsimonious explanation. This thesis presents a series of fMRI experiments employing multivariate pattern analyses (MVPA) in order to test the role of low-level visual properties in the function of scene-selective regions. The first empirical chapter presents two experiments showing that patterns of neural response to different scene categories can be predicted by a model of the visual properties of scenes (GIST). The second empirical chapter demonstrates that direct manipulations of the spatial frequency content of the image significantly influence the patterns of response, with effects often being comparable to or greater than those of scene category. The third empirical chapter demonstrates that distinct patterns of response can be found to different scene categories even when images are Fourier phase scrambled such that low-level visual features are preserved, but perception of the categories is impaired. The fourth and final empirical chapter presents an experiment using a data-driven method to select clusters of scenes objectively based on their visual properties. These visually defined clusters did not correspond to typical scene categories, but nevertheless elicited distinct patterns of neural response. Taken together, these results support the importance of low-level visual features in the functional organisation of scene-selective regions. 
Scene-selective responses may arise from the combined sensitivity to multiple visual features that are themselves predictive of scene content.
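The data-driven approach described in the final chapter — grouping images purely by their visual statistics rather than by semantic category — can be illustrated with a toy example. The 1-D signals, frequency bands and parameters below are all assumptions for illustration, not the thesis's actual GIST pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
n_per_group, length = 20, 256

# Simulated 1-D "scenes": one group dominated by low spatial frequencies
# (smooth random walks), the other by high frequencies (white noise).
smooth = np.cumsum(rng.normal(size=(n_per_group, length)), axis=1)
noisy = rng.normal(size=(n_per_group, length))
signals = np.vstack([smooth, noisy])

# GIST-like descriptor: log energy in a low and a high frequency band.
def band_features(sig):
    power = np.abs(np.fft.rfft(sig)) ** 2
    return np.log([power[1:16].mean(), power[64:].mean()])

features = np.array([band_features(s) for s in signals])

# Cluster on the visual features alone; no category labels are used.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)
```

The clusters recovered this way reflect only image statistics, mirroring the chapter's finding that visually defined clusters need not correspond to typical scene categories yet can still elicit distinct response patterns.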