
    The neural coding of properties shared by faces, bodies and objects

    Previous studies have identified relatively separate regions of the brain that respond strongly when participants view images of faces, bodies or objects. The aim of this thesis was to investigate how and where in the brain shared properties of faces, bodies and objects are processed. We selected three properties that are shared by faces and bodies: shared categories (sex and weight), shared identity and shared orientation (i.e. facing direction). We also investigated one property shared by faces and objects: the tendency to process a face or object as a whole rather than by its parts, known as holistic processing. We hypothesized that these shared properties might be encoded separately for faces, bodies and objects in the previously defined domain-specific regions, or alternatively that they might be encoded in an overlapping or shared code in those or other regions. In all of the studies in this thesis, we used fMRI to record the brain activity of participants viewing images of faces and bodies or objects that differed in the shared properties of interest. We then investigated the neural responses these stimuli elicited in a variety of specifically localized brain regions responsive to faces, bodies or objects, as well as across the whole brain. Our results showed evidence for a mix of overlapping coding, shared coding and domain-specific coding, depending on the particular property and the level of abstraction of its neural coding. We found that we could decode face and body categories, identities and orientations from both face- and body-responsive regions, showing that these properties are encoded in overlapping brain regions. We also found that non-domain-specific brain regions are involved in holistic face processing. We identified shared coding of orientation and weight in the occipital cortex, and shared coding of identity in the early visual cortex, right inferior occipital cortex, right parahippocampal cortex and right superior parietal cortex, demonstrating that a variety of brain regions combine face and body information into a common code. In contrast, we found evidence that high-level visual transformations may be predominantly processed in domain-specific regions, as we could most consistently decode body categories across image size and body identity across viewpoint from body-responsive regions. In conclusion, this thesis furthers our understanding of the neural coding of face, body and object properties and gives new insights into the functional organisation of occipitotemporal cortex.

    Fast temporal dynamics and causal relevance of face processing in the human temporal cortex

    We measured the fast temporal dynamics of face processing simultaneously across the human temporal cortex (TC) using intracranial recordings in eight participants. We found sites with selective responses to faces clustered in the ventral TC, which responded increasingly strongly to marine animal, bird, mammal, and human faces. Both face-selective and face-active but non-selective sites showed a posterior-to-anterior gradient in response time and selectivity. A sparse model focusing on information from the human face-selective sites performed as well as, or better than, anatomically distributed models when discriminating face from non-face stimuli. Additionally, we identified the posterior fusiform site (pFUS) as the causally most relevant node for inducing distortion of conscious face processing by direct electrical stimulation. These findings support anatomically discrete but temporally distributed response profiles in the human brain and provide a new common ground for unifying the seemingly contradictory modular and distributed modes of face processing.
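
    The comparison between a sparse model built on face-selective sites and anatomically distributed models can be made concrete with a short, hedged sketch. The snippet below is not the authors' pipeline; it shows one conventional way to fit an L1-penalised (sparse) classifier for face versus non-face discrimination and compare feature subsets, using placeholder arrays (`responses`, `is_face`, `face_selective`) that stand in for per-trial intracranial site responses.

        # Sketch: compare a sparse decoder restricted to face-selective sites with a
        # decoder given all recorded sites, for face vs non-face discrimination.
        # `responses`, `is_face` and `face_selective` are random placeholders standing
        # in for per-trial site responses, trial labels and a site-selectivity mask.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        n_trials, n_sites = 200, 60
        responses = rng.normal(size=(n_trials, n_sites))     # placeholder trial x site data
        is_face = rng.integers(0, 2, size=n_trials)          # placeholder face / non-face labels
        face_selective = np.zeros(n_sites, dtype=bool)
        face_selective[:8] = True                            # pretend 8 sites are face-selective

        def sparse_decoder():
            # The L1 penalty drives many site weights to exactly zero (a sparse model).
            return make_pipeline(StandardScaler(),
                                 LogisticRegression(penalty="l1", solver="liblinear", C=1.0))

        acc_selective = cross_val_score(sparse_decoder(), responses[:, face_selective], is_face, cv=5).mean()
        acc_all_sites = cross_val_score(sparse_decoder(), responses, is_face, cv=5).mean()
        print(f"face-selective sites only: {acc_selective:.2f}; all sites: {acc_all_sites:.2f}")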

    Separated and overlapping neural coding of face and body identity

    Recognising a person's identity often relies on face and body information, and is tolerant to changes in low-level visual input (e.g., viewpoint changes). Previous studies have suggested that face identity is disentangled from low-level visual input in anterior face-responsive regions. It remains unclear which regions disentangle body identity from variations in viewpoint, and whether face and body identity are encoded separately or combined into a coherent person identity representation. We trained participants to recognise three identities, and then recorded their brain activity using fMRI while they viewed face and body images of these three identities from different viewpoints. Participants' task was to respond to either the stimulus identity or viewpoint. We found consistent decoding of body identity across viewpoint in the fusiform body area, right anterior temporal cortex, middle frontal gyrus and right insula. This finding demonstrates a similar function of fusiform and anterior temporal cortex for bodies as has previously been shown for faces, suggesting these regions may play a general role in extracting high-level identity information. Moreover, we could decode identity across fMRI activity evoked by faces and bodies in the early visual cortex, right inferior occipital cortex, right parahippocampal cortex and right superior parietal cortex, revealing a distributed network that encodes person identity abstractly. Lastly, identity decoding was consistently better when participants attended to identity, indicating that attention to identity enhances its neural representation. These results offer new insights into how the brain develops an abstract neural coding of person identity, shared by faces and bodies.
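
    To make the cross-decoding logic concrete, the sketch below shows the standard train-on-one-class, test-on-the-other approach (train an identity classifier on face-evoked ROI patterns, test it on body-evoked patterns, and vice versa) with scikit-learn. It is an illustrative reconstruction, not the study's actual code; the arrays `face_patterns`, `body_patterns` and `identity` are hypothetical placeholders for run-wise response patterns from a region of interest.

        # Sketch of cross-decoding: train an identity classifier on patterns evoked by
        # one stimulus class and test it on patterns evoked by the other, averaging the
        # two train/test directions. All arrays are random placeholders for per-run
        # multivoxel patterns (n_samples x n_voxels) from a single region of interest.
        import numpy as np
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(1)
        n_runs, n_voxels = 12, 300
        identity = np.tile([0, 1, 2], n_runs)                # three trained identities
        face_patterns = rng.normal(size=(identity.size, n_voxels))
        body_patterns = rng.normal(size=(identity.size, n_voxels))

        def cross_decode(train_X, train_y, test_X, test_y):
            # Fit on one stimulus class, evaluate on the other.
            return LinearSVC().fit(train_X, train_y).score(test_X, test_y)

        acc = np.mean([cross_decode(face_patterns, identity, body_patterns, identity),
                       cross_decode(body_patterns, identity, face_patterns, identity)])
        print(f"identity decoding across faces and bodies: {acc:.2f} (chance = 0.33)")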

    The neural coding of face and body orientation in occipitotemporal cortex

    Face and body orientation convey important information for understanding other people's actions, intentions and social interactions. It has been shown that several occipitotemporal areas respond differently to faces or bodies of different orientations. However, whether face and body orientation are processed by partially overlapping or completely separate brain networks remains unclear, as the neural coding of face and body orientation is often investigated separately. Here, we recorded participants’ brain activity using fMRI while they viewed faces and bodies shown from three different orientations, attending to either orientation or identity information. Using multivoxel pattern analysis we investigated which brain regions process face and body orientation, respectively, and which regions encode both face and body orientation in a stimulus-independent manner. We found that patterns of neural responses evoked by different stimulus orientations in the occipital face area, extrastriate body area, lateral occipital complex and right early visual cortex could generalise across faces and bodies, suggesting a stimulus-independent encoding of person orientation in occipitotemporal cortex. This finding was consistent across functionally defined regions of interest and a whole-brain searchlight approach. The fusiform face area responded to face but not body orientation, suggesting that orientation responses in this area are face-specific. Moreover, neural responses to orientation were remarkably consistent regardless of whether participants attended to the orientation of faces and bodies or not. Together, these results demonstrate that face and body orientation are processed in a partially overlapping brain network, with a stimulus-independent neural code for face and body orientation in occipitotemporal cortex.
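
    The whole-brain searchlight analysis mentioned above can be sketched with nilearn. The snippet is a generic, assumption-laden illustration of searchlight decoding of stimulus orientation with leave-one-run-out cross-validation, run here on small random placeholder images rather than real data; parameters such as the 6 mm radius are arbitrary choices, not values reported in the abstract.

        # Sketch of a whole-brain searchlight decoding stimulus orientation with
        # leave-one-run-out cross-validation. The 4D "beta" image, mask, labels and
        # run structure below are tiny random placeholders so the example runs.
        import numpy as np
        import nibabel as nib
        from nilearn.decoding import SearchLight
        from sklearn.model_selection import LeaveOneGroupOut
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(2)
        n_trials = 36
        beta_imgs = nib.Nifti1Image(rng.normal(size=(10, 10, 10, n_trials)).astype("float32"), np.eye(4))
        mask_img = nib.Nifti1Image(np.ones((10, 10, 10), dtype="uint8"), np.eye(4))
        orientation = np.tile([0, 1, 2], n_trials // 3)      # e.g. left / front / right views
        runs = np.repeat(np.arange(6), 6)                    # run label for each trial

        searchlight = SearchLight(mask_img, radius=6.0, estimator=LinearSVC(),
                                  cv=LeaveOneGroupOut(), n_jobs=1)
        searchlight.fit(beta_imgs, orientation, groups=runs)
        # searchlight.scores_ now holds a voxel-wise map of cross-validated accuracy.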

    Towards a model of human body perception

    From just a glimpse of another person, we make inferences about their current states and longstanding traits. These inferences are normally spontaneous and effortless, yet they are crucial in shaping our impressions of and behaviours towards other people. What are the perceptual operations involved in the rapid extraction of socially relevant information? To answer this question, over the last decade the visual and cognitive neuroscience of social stimuli has received new input from emerging social vision approaches. Partly as a result of these contributions, researchers have reached a degree of consensus on a standard model of face perception. This thesis aims to extend social vision approaches to the case of human body perception. In doing so, it establishes the building blocks of a perceptual model of the human body that integrates the extraction of socially relevant information from the appearance of the body. Using visual tasks, the data show that perceptual representations of the human body are sensitive to socially relevant information (e.g. sex, weight, emotional expression). Specifically, in the first empirical chapter I dissect the perceptual representation of body sex: using a visual search paradigm, I demonstrate a differential and asymmetrical representation of sex from human body shape. In the second empirical chapter, using the Garner selective attention task, I show that the dimension of body sex is processed independently of emotional body posture. Finally, in the third empirical chapter, I provide evidence that category-selective visual brain regions, including the body-selective region EBA, are directly involved in forming perceptual expectations about incoming visual stimuli. Socially relevant information about the body might shape visual representations of the body by acting as a set of expectancies available to the observer during perceptual operations. In the general discussion I address how the findings of the empirical chapters inform us about the perceptual encoding of human body shape. Further, I propose how these results provide the initial steps towards a unified social vision model of human body perception. Finally, I advance the hypothesis that rapid social categorisation during perception is explained by mechanisms that generally affect the perceptual analysis of objects under naturalistic conditions (e.g. expectations, expertise) operating within the social domain. Bangor University, 17 February 2020. Promotor: Downing, P.E. Co-promotor: Koldewyn, K. 182 p.

    Neural correlates of hand-tool interaction

    Background: The recent advent of non-invasive functional magnetic resonance imaging (fMRI) has helped us understand how visual information is processed in the visual system, and the functional organising principles of high-order visual areas beyond striate cortex. In particular, evidence has been reported for a constellation of high-order visual areas that are highly specialised for the visual processing of different object domains such as faces, bodies, and tools. A number of accounts of the underlying principle of functional specialisation in high-order visual cortex propose that visual properties and object domain drive the category selectivity of these areas. However, recent evidence has challenged such accounts, showing that non-visual object properties and connectivity constraints between specialised brain networks can, in part, account for the visual system’s functional organisation. Methodology: Here I will use fMRI to examine how areas along the ventral visual stream and dorsal action stream process visually presented hands and tools. These categories are visually dissimilar but share similar functions. Using different statistical analyses, including univariate group and single-subject region-of-interest (ROI) analyses, multivariate multivoxel pattern analyses, and functional connectivity analyses, I will investigate category selectivity and the principles underlying the organisation of high-order visual areas in left occipitotemporal and left parietal cortex. Principal Findings: In the first part of this thesis I report novel evidence that, similar to socially relevant faces and bodies, left occipitotemporal and left parietal cortex house high-order visual areas that are selective for the visual processing of human hands. In the second part of this thesis, I show that the visual representations of hands and tools in these areas show large anatomical overlap and high similarity in their response patterns. As hands and tools differ in visual appearance and object domain yet share action-related properties, the results demonstrate that these category-selective responses in the visual system reflect responses to non-visual, action-related object properties common to hands and tools rather than to purely visual properties or object domain. This proposition is further supported by evidence of selective functional connectivity between hand/tool occipitotemporal and parietal areas. Conclusions/Significance: Overall these results indicate that high-order visual cortex is functionally organised to process both visual properties and non-visual object dimensions (e.g., action-related properties). I propose that this correspondence between hand and tool representations in ventral ‘visual’ and parietal ‘action’ areas is constrained by the necessity to connect visual object information to functionally specific downstream networks (e.g., the frontoparietal action network) to facilitate hand-tool action-related processing.
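
    One simple way to quantify the reported similarity between hand and tool response patterns is a split-half pattern correlation within an ROI, comparing the hand-tool correlation against a control category. The sketch below is only a schematic of that general approach, using random placeholder arrays (`hands`, `tools`, `chairs`) rather than the thesis data, and is not the specific multivariate analysis used in the thesis.

        # Sketch: split-half correlation of condition-mean multivoxel patterns as a
        # simple index of pattern similarity between categories within an ROI.
        # `hands`, `tools` and `chairs` are random placeholders for run-wise betas.
        import numpy as np

        rng = np.random.default_rng(3)
        n_runs, n_voxels = 10, 250
        hands = rng.normal(size=(n_runs, n_voxels))
        tools = rng.normal(size=(n_runs, n_voxels))
        chairs = rng.normal(size=(n_runs, n_voxels))         # control category

        def split_half_corr(a, b):
            # Correlate the mean pattern of `a` in odd runs with the mean pattern of
            # `b` in even runs, and vice versa, then average the two estimates.
            r1 = np.corrcoef(a[0::2].mean(axis=0), b[1::2].mean(axis=0))[0, 1]
            r2 = np.corrcoef(a[1::2].mean(axis=0), b[0::2].mean(axis=0))[0, 1]
            return (r1 + r2) / 2

        print("hand-tool pattern similarity: ", round(split_half_corr(hands, tools), 3))
        print("hand-chair pattern similarity:", round(split_half_corr(hands, chairs), 3))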

    Extracting scene and object information from natural stimuli: the influence of scene structure and eye movements

    When we observe a scene in our daily lives, our brains seemingly effortlessly extract various aspects of that scene. This can be attributed to different aspects of the human visual system, including but not limited to (1) its tuning to natural regularities in scenes and (2) its ability to bring different parts of the visual environment into focus via eye movements. While eye movements are a ubiquitous and natural behavior, they are considered undesirable in many highly controlled visual experiments. Participants are often instructed to fixate but cannot always suppress involuntary eye movements, which can challenge the interpretation of neuroscientific data, in particular for magneto- and electroencephalography (M/EEG). This dissertation addressed how scene structure and involuntary eye movements influence the extraction of scene and object information from natural stimuli. First, we investigated when and where real-world scene structure affects scene-selective cortical responses. Second, we investigated whether spatial structure facilitates the temporal analysis of a scene’s categorical content. Third, we investigated whether the spatial content of a scene aids the extraction of task-relevant object information. Fourth, we explored whether the choice of fixation cross influences eye movements and the classification of natural images from EEG and eye tracking. The first project showed that spatial scene structure impacts scene-selective neural responses in OPA and PPA, revealing genuine sensitivity to spatial scene structure starting from 255 ms, while scene-selective neural responses are less sensitive to categorical scene structure. The second project demonstrated that spatial scene structure facilitates the extraction of the scene’s categorical content within 200 ms of vision. The third project showed that coherent scene structure facilitates the extraction of object information if the object is task-relevant, suggesting a task-based modulation. The fourth project showed that choosing a centrally presented bullseye instead of a standard fixation cross reduces eye movements at the single-image level and subtly removes systematic eye-movement-related activity in M/EEG data. Taken together, the results advance our understanding of (1) the impact of real-world structure on scene perception and the extraction of object information and (2) the influence of eye movements on advanced analysis methods.
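
    The classification of natural images from EEG referred to in the fourth project is typically done with time-resolved decoding, training and testing a linear classifier independently at each time point. The sketch below illustrates that generic approach with MNE-Python and scikit-learn on random placeholder epochs; it is not the dissertation's analysis code, and the array shapes and labels are assumptions.

        # Sketch of time-resolved image classification from EEG: a linear classifier is
        # trained and tested separately at every time point across the epoch. The data
        # and labels are random placeholders for preprocessed epochs and image categories.
        import numpy as np
        from mne.decoding import SlidingEstimator, cross_val_multiscore
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(4)
        n_epochs, n_channels, n_times = 120, 64, 100
        X = rng.normal(size=(n_epochs, n_channels, n_times)) # epochs x channels x time
        y = rng.integers(0, 2, size=n_epochs)                # e.g. two image categories

        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        time_decoder = SlidingEstimator(clf, scoring="accuracy", n_jobs=1)
        scores = cross_val_multiscore(time_decoder, X, y, cv=5)   # shape (n_folds, n_times)
        print("peak decoding accuracy:", scores.mean(axis=0).max().round(2))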

    Physiology and neuroanatomy of emotional reactivity in frontotemporal dementia

    Get PDF
    The frontotemporal dementias (FTD) are a heterogeneous group of neurodegenerative diseases that cause variable profiles of fronto-insulo-temporal network disintegration. Loss of empathy and dysfunctional social interaction are leading features of FTD and major determinants of care burden, but remain poorly understood and difficult to measure with conventional neuropsychological instruments. Building on a large body of work on the healthy brain showing that embodied responses are important components of emotional responses and empathy, I performed a series of experiments to examine the extent to which the induction and decoding of somatic physiological responses to the emotions of others are degraded in FTD, and to define the underlying neuroanatomical changes responsible for these deficits. I systematically studied a range of modalities across the entire syndromic spectrum of FTD, including daily-life emotional sensitivity, the cognitive categorisation of emotions, interoceptive accuracy, automatic facial mimicry, autonomic responses, and structural and functional neuroanatomy, to deconstruct aberrant emotional reactivity in these diseases. My results provide proof of principle for the utility of physiological measures in deconstructing complex socioemotional symptoms and suggest that these warrant further investigation as clinical biomarkers in FTD.
    Chapter 3: Using a heartbeat counting task, I found that interoceptive accuracy is impaired in semantic variant primary progressive aphasia, but correlates with sensitivity to the emotions of others across FTD syndromes. Voxel-based morphometry demonstrated that impaired interoceptive accuracy correlates with grey matter volume in anterior cingulate, insula and amygdala.
    Chapter 4: Using facial electromyography to index automatic imitation, I showed that mimicry of emotional facial expressions is impaired in the behavioural and right temporal variants of FTD. Automatic imitation predicted correct identification of facial emotions in healthy controls and in syndromes focussed on the frontal lobes and insula, but not in syndromes focussed on the temporal lobes, suggesting that automatic imitation aids emotion recognition only when social concepts and semantic stores are intact. Voxel-based morphometry replicated previously identified neuroanatomical correlates of emotion identification ability, while automatic imitation was associated with grey matter volume in a visuomotor network including primary visual and motor cortices, visual motion area (MT/V5) and supplementary motor cortex.
    Chapter 5: By recording heart rate during viewing of facial emotions, I showed that the normal cardiac reactivity to emotion is impaired in FTD syndromes with fronto-insular atrophy (behavioural variant FTD and nonfluent variant primary progressive aphasia) but not in syndromes focussed on the temporal lobes (right temporal variant FTD and semantic variant primary progressive aphasia). Unlike automatic imitation, cardiac reactivity dissociated from emotion identification ability. Voxel-based morphometry revealed grey matter correlates of cardiac reactivity in anterior cingulate, insula and orbitofrontal cortex.
    Chapter 6: Subjects viewed videos of facial emotions during fMRI scanning, with concomitant recording of heart rate and pupil size. I identified syndromic profiles of reduced activity in posterior face-responsive regions, including posterior superior temporal sulcus and fusiform face area. Emotion identification ability was predicted by activity in more anterior areas, including anterior cingulate, insula, inferior frontal gyrus and temporal pole. Autonomic reactivity was related to activity both in components of the central autonomic control network and in regions responsible for processing the sensory properties of the stimuli.
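
    The heartbeat counting task in Chapter 3 is conventionally scored with an accuracy index of the form 1 - |recorded - reported| / recorded, averaged over counting intervals (a Schandry-style score). Whether this exact formula was used here is an assumption; the sketch below simply illustrates that conventional scoring.

        # Minimal sketch of a conventional heartbeat-counting accuracy index
        # (Schandry-style): 1 - |recorded - reported| / recorded, averaged over
        # counting intervals. Treat the exact formula as an assumption here.
        def interoceptive_accuracy(recorded, reported):
            """recorded / reported: actual vs counted heartbeats per interval."""
            scores = [1 - abs(r - c) / r for r, c in zip(recorded, reported)]
            return sum(scores) / len(scores)

        # Example: three counting intervals (e.g. 25 s, 35 s, 45 s).
        print(interoceptive_accuracy(recorded=[28, 40, 52], reported=[25, 33, 50]))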