
    From Head to Toe: Evidence for Selective Brain Activation Reflecting Visual Perception of Whole Individuals

    Our ability to recognize other people’s faces and bodies is crucial for our social interactions. Previous neuroimaging studies have repeatedly demonstrated the existence of brain areas that respond selectively to visually presented faces and bodies. In daily life, however, we see “whole” people, not just isolated faces and bodies, and the question remains how information from these two categories of stimuli is integrated at a neural level. Are faces and bodies merely processed independently, or are there neural populations that actually code for whole individuals? In the current study we addressed this question using a functional magnetic resonance imaging adaptation paradigm involving the sequential presentation of visual stimuli depicting whole individuals. It is known that adaptation effects for a component of a stimulus occur only in neural populations that are sensitive to that particular component. The design of our experiment allowed us to measure adaptation effects occurring when either just the face, just the body, or both the face and the body of an individual were repeated. Crucially, we found novel evidence for the existence of neural populations in fusiform as well as extrastriate regions that showed selective adaptation for whole individuals, which could not be explained merely by the sum of adaptation effects for the face and the body. The functional specificity of these neural populations is likely to support fast and accurate recognition and integration of information conveyed by both faces and bodies. Hence, they can be assumed to play an important role in identity as well as emotion recognition in everyday life.
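The superadditivity logic of this adaptation design can be sketched in a few lines. This is an illustrative reconstruction only, not the study's analysis pipeline: the function name, condition labels, and response values are hypothetical, and a real analysis would operate on voxelwise BOLD estimates with appropriate statistics.

```python
def whole_person_adaptation(novel, face_rep, body_rep, both_rep):
    """Estimate component adaptation effects from mean responses.

    Adaptation is the response reduction relative to the fully novel
    condition. A 'whole-individual' neural population is suggested when
    adaptation for repeating both face and body exceeds the sum of the
    two component effects (superadditivity).
    """
    ad_face = novel - face_rep   # effect of repeating the face only
    ad_body = novel - body_rep   # effect of repeating the body only
    ad_both = novel - both_rep   # effect of repeating the whole person
    return ad_face, ad_body, ad_both, ad_both > ad_face + ad_body
```

For example, mean responses of 1.0 (novel), 0.9 (face repeated), 0.9 (body repeated), and 0.6 (both repeated) would indicate superadditive adaptation, since 0.4 exceeds 0.1 + 0.1.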

    Revisiting the link between body and agency: visual movement congruency enhances intentional binding but is not body-specific.

    Embodiment and agency are key aspects of how we perceive ourselves that have typically been associated with independent mechanisms. Recent work, however, has suggested that these mechanisms are related. The sense of agency arises from recognising a causal influence on the external world. This influence is typically realised through bodily movements, and thus the perception of the bodily self could also be crucial for agency. We investigated whether a key index of agency - intentional binding - was modulated by body-specific information. Participants judged the interval between pressing a button and a subsequent tone. We used virtual reality to manipulate two aspects of movement feedback. First, form: participants viewed a virtual hand or sphere. Second, movement congruency: the viewed object moved congruently or incongruently with the participant's hidden hand. Both factors, form and movement congruency, significantly influenced embodiment. However, only movement congruency influenced intentional binding. Binding was increased for congruent compared to incongruent movement feedback, irrespective of form. This shows that the comparison between viewed and performed movements provides an important cue for agency, whereas body-specific visual form does not. We suggest that embodiment and agency mechanisms both depend on comparisons across sensorimotor signals but that they are influenced by distinct factors.

    Investigating body distortions in Anorexia Nervosa with action-based paradigms


    Preference for orientations commonly viewed for one's own hand in the anterior intraparietal cortex

    Brain regions in the intraparietal and the premotor cortices selectively process visual and multisensory events near the hands (peri-hand space). Visual information from the hand itself modulates this processing, potentially because it is used to estimate the location of one's own body and the surrounding space. In humans, specific occipitotemporal areas process visual information about specific body parts such as hands. Here we used an fMRI block design to investigate whether anterior intraparietal and ventral premotor 'peri-hand areas' exhibit selective responses to viewing images of hands and viewing specific hand orientations. Furthermore, we investigated whether the occipitotemporal 'hand area' is sensitive to viewed hand orientation. Our findings demonstrate increased BOLD responses in the left anterior intraparietal area when participants viewed hands and feet as compared to faces and objects. Anterior intraparietal and also occipitotemporal areas in the left hemisphere exhibited response preferences for viewing right hands in orientations commonly viewed for one's own hand as compared to uncommon own-hand orientations. Our results indicate that both anterior intraparietal and occipitotemporal areas encode visual limb-specific shape and orientation information.

    Representations of visual information with respect to body parts in the human brain : an fMRI study

    Visual information about our own body and other bodies is an important source of information which guides our actions and perception. Using fMRI, we study the location and characteristics of brain areas which process visual body information. The investigation of body- and body-part-selective brain areas has so far focused on areas in the occipitotemporal cortex [such as the extrastriate body area(s) (EBA)]. In this study we specifically focus on the parietal cortex, which is known to be involved in action planning and multisensory perception. In a first experiment, we employed an fMRI block design and presented images of hands, feet, objects and faces. We found that, in addition to the known occipitotemporal areas, parietal areas were also more active when pictures of body parts were presented as compared to objects and faces. In a second experiment, we investigated the coding of body-part orientation in parietal and occipitotemporal body-part-selective areas. The results from both experiments will be discussed.

    Visual body form and orientation cues do not modulate visuo-tactile temporal integration.

    Body ownership relies on spatiotemporal correlations between multisensory signals and visual cues specifying oneself, such as body form and orientation. The mechanism for the integration of bodily signals remains unclear. One influential approach to modelling multisensory integration is Bayesian causal inference. This specifies that the brain integrates spatial and temporal signals coming from different modalities when it infers a common cause for inputs. As an example, the rubber hand illusion shows that visual form and orientation cues can promote the inference of a common cause (one's body), leading to spatial integration shown by a proprioceptive drift of the perceived location of the real hand towards the rubber hand. Recent studies investigating the effect of visual cues on temporal integration, however, have led to conflicting findings. These could be due to task differences, variation in the ecological validity of stimuli, and/or small samples. In this pre-registered study, we investigated the influence of visual information on temporal integration using a visuo-tactile temporal order judgement task with realistic stimuli and a sufficiently large sample determined by Bayesian analysis. Participants viewed videos of a touch being applied to plausible or implausible visual stimuli for one's hand (a hand oriented plausibly, a hand rotated 180 degrees, or a sponge) while also being touched at varying stimulus onset asynchronies. Participants judged which stimulus came first: the viewed or the felt touch. Results show that visual cues do not modulate visuo-tactile temporal order judgements. This is not in line with the idea that bodily signals indicating oneself influence the integration of multisensory signals in the temporal domain. The current study emphasises the importance of rigour in our methodologies and analyses to advance the understanding of how properties of multisensory events affect the encoding of temporal information in the brain.
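As a rough illustration of how temporal order judgement data of this kind can be summarised, the point of subjective simultaneity (PSS) can be estimated by interpolating where the proportion of "viewed touch first" responses crosses 0.5. This sketch is not the study's pre-registered Bayesian analysis; the function name and example proportions are hypothetical.

```python
import numpy as np

def pss_from_toj(soas_ms, p_viewed_first):
    """Estimate the point of subjective simultaneity (PSS) from a
    visuo-tactile temporal order judgement task.

    soas_ms        : stimulus onset asynchronies in ms (ascending;
                     positive = viewed touch leads)
    p_viewed_first : proportion of 'viewed touch first' responses at
                     each SOA (must increase with SOA for np.interp)
    """
    soas = np.asarray(soas_ms, dtype=float)
    p = np.asarray(p_viewed_first, dtype=float)
    # Linear interpolation of the SOA at which responses are at chance.
    return float(np.interp(0.5, p, soas))
```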

    Evaluation of methods for detecting perfusion abnormalities after stroke in dysfunctional brain regions

    Commonly, in lesion-behaviour studies, structural changes in brain matter are depicted and analysed. However, in addition to these structural changes, brain areas might be structurally intact but non-functional due to malperfusion. Such changes may be detected using perfusion-weighted MRI (PWI). The perfusion parameters most commonly used [e.g. time-to-peak (TTP)] are semi-quantitative, and perfusion is evaluated relative to a non-affected reference area. Traditionally, the mean of a larger region in the non-affected hemisphere or the cerebellum has been used ["mean contra-region of interest (ROI) comparison"]. Our results suggest that this method is prone to biases (in particular in periventricular regions) because perfusion differs between different parts of the brain, for example between grey and white matter. We reduced such potential biases with voxelwise inter-hemispheric comparisons: each voxel is compared with its homologous voxel, and thus white matter with white matter and grey matter with grey matter. This automated method corresponds well with results derived from manual delineation of perfusion deficits, with TTP delay maps at a threshold of 3 s being most comparable to manual delineation. Our method avoids the observer-dependent choice of a reference region and involves the spatial normalisation of perfusion maps. It is well suited for whole-brain analysis of abnormal perfusion in neuroscience studies as well as in clinical contexts.
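The voxelwise inter-hemispheric comparison can be sketched as a mirror-and-subtract operation. This is a minimal illustration, assuming a spatially normalised TTP map stored as a NumPy array whose left-right axis is mirror-symmetric about the midline; the function name, axis convention, and default threshold are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ttp_delay_map(ttp, lr_axis=0, threshold_s=3.0):
    """Voxelwise inter-hemispheric TTP comparison.

    Each voxel is compared with its homologous voxel in the opposite
    hemisphere by mirroring the map across the midline, so grey matter
    is compared with grey matter and white matter with white matter.
    Returns the delay map and a boolean mask of abnormally perfused
    voxels (delay exceeding the threshold, here 3 s).
    """
    mirrored = np.flip(ttp, axis=lr_axis)  # homologous voxels
    delay = ttp - mirrored                 # positive = delayed vs. contralateral
    abnormal = delay > threshold_s
    return delay, abnormal
```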

    The Crossmodal congruency task as a means to obtain an objective behavioral measure in the rubber hand illusion paradigm

    The rubber hand illusion (RHI) is a popular experimental paradigm. Participants view touch on an artificial rubber hand while the participants' own hidden hand is touched. If the viewed and felt touches are given at the same time, this is sufficient to induce the compelling experience that the rubber hand is one's own hand. The RHI can be used to investigate exactly how the brain constructs distinct representations of one's own body. Such representations are crucial for successful interactions with the external world. To obtain a subjective measure of the RHI, researchers typically ask participants to rate statements such as "I felt as if the rubber hand were my hand". Here we demonstrate how the crossmodal congruency task can be used to obtain an objective behavioral measure within this paradigm. The variant of the crossmodal congruency task we employ involves the presentation of tactile targets and visual distractors. Targets and distractors are spatially congruent (i.e. same finger) on some trials and incongruent (i.e. different finger) on others. The difference in performance between incongruent and congruent trials - the crossmodal congruency effect (CCE) - indexes multisensory interactions. Importantly, the CCE is modulated both by viewing a hand and by the synchrony of viewed and felt touch, which are both crucial factors for the RHI. The use of the crossmodal congruency task within the RHI paradigm has several advantages. It is a simple behavioral measure which can be repeated many times and which can be obtained during the illusion while participants view the artificial hand. Furthermore, this measure is not susceptible to observer and experimenter biases. The combination of the RHI paradigm with the crossmodal congruency task allows, in particular, for the investigation of multisensory processes which are critical for modulations of body representations, as in the RHI.
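The CCE difference score described above can be sketched as follows. The reaction times and names here are hypothetical; the actual task also records accuracy, which can be analysed the same way.

```python
import numpy as np

def crossmodal_congruency_effect(rts_ms, congruent):
    """CCE = mean RT on incongruent trials minus mean RT on congruent trials.

    rts_ms    : reaction times in ms for correct trials
    congruent : boolean array, True where the visual distractor and the
                tactile target were presented on the same finger
    A larger (more positive) CCE indexes stronger visuo-tactile interaction.
    """
    rts = np.asarray(rts_ms, dtype=float)
    congruent = np.asarray(congruent, dtype=bool)
    return rts[~congruent].mean() - rts[congruent].mean()
```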

    Crossmodal congruency measures of lateral distance effects on the rubber hand illusion

    Body ownership for an artificial hand and the perceived position of one's own hand can be manipulated in the so-called rubber hand illusion. To induce this illusion, typically an artificial hand is placed next to the participant's body and stroked in synchrony with the real hand, which is hidden from view. Our first aim was to test if the crossmodal congruency task could be used to obtain a measure for the strength of body ownership in the rubber hand illusion. In this speeded location discrimination task participants responded to tactile targets presented to their index or middle finger, while trying to ignore irrelevant visual distracters placed on the artificial hand either on the congruent finger or on the incongruent finger. The difference between performance on congruent and incongruent trials (crossmodal congruency effect, CCE) indicates the amount of multisensory interactions between tactile targets and visual distracters. In order to investigate if changes in body ownership influence the CCE, we manipulated ownership for an artificial hand by synchronous and asynchronous stroking before the crossmodal congruency task (blocked design) in Experiment 1 and during the crossmodal congruency task (interleaved trial-by-trial design) in Experiment 2. Modulations of the CCE by ownership for an artificial hand were apparent in the interleaved trial-by-trial design. These findings suggest that the CCE can be used as an objective measure for body ownership. Secondly, we tested the hypothesis that the lateral spatial distance between the real hand and artificial hand limits the rubber hand illusion. We found no lateral spatial limits for the rubber hand illusion created by synchronous stroking within reaching distances. 
In conclusion, the sense of ownership seems to be related to modulations of multisensory interactions, possibly through peripersonal space mechanisms, and these modulations do not appear to be limited by an increase in distance between the artificial hand and the real hand.