    Sharing Space: The Presence of Other Bodies Extends the Space Judged as Near

    Background: As social animals we share space with other people. It is known that the perceived extension of peripersonal space (the reaching space) is affected by the implicit representation of our own and others' action potentialities. Our question is whether the co-presence of a body in the scene influences how we categorize extrapersonal space (beyond reaching distance). Methodology/Principal Findings: We investigated, through 3D virtual scenes of a realistic environment, whether egocentric spatial categorization can be influenced by the presence of another human body (Exp. 1) and whether the effect is due to her action potentialities or simply to her human-like morphology (Exp. 2). Subjects were asked to judge the location ("Near" or "Far") of a target object presented at different distances from their egocentric perspective. In Exp. 1, the judgment was given either in the presence of a virtual avatar (Self-with-Other), a non-corporeal object (Self-with-Object), or nothing (Self). In Exp. 2, the Self condition was replaced by a Self-with-Dummy condition, in which an inanimate body (a wooden dummy) was present. Mean Judgment Transition Thresholds (JTTs) were calculated for each subject in each experimental condition. The Self-with-Other condition induced a significant extension of the space judged as "Near" compared to both the Self-with-Object and the Self conditions. The same extension was observed in Exp. 2 in the Self-with-Dummy condition. Results suggest that the presence of others affects our perception of extrapersonal space. The effect also holds when the other is a human-like wooden dummy, suggesting that a structural and morphological shape resembling a human body is a sufficient condition for it to occur. Conclusions: The observed extension of the portion of space judged as near could represent a wider portion of "accessible" space, and thus an advantage in the struggle to survive in the presence of other, potentially competing individuals.
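
    The abstract does not spell out how the Judgment Transition Thresholds were computed, but a common way to obtain such a threshold is to fit a psychometric function to the Near/Far judgments and take its midpoint. The sketch below illustrates that approach; the logistic form, the distances, and the response proportions are assumptions for illustration, not the study's procedure or data.

        # Minimal sketch: estimate a Judgment Transition Threshold (JTT) by fitting
        # a logistic psychometric function to the proportion of "Far" responses at
        # each target distance and taking its 50% point. All numbers are
        # illustrative placeholders, not data from the study.
        import numpy as np
        from scipy.optimize import curve_fit

        def psychometric(d, threshold, slope):
            # Probability of judging a target at distance d as "Far".
            return 1.0 / (1.0 + np.exp(-slope * (d - threshold)))

        # Hypothetical target distances (metres) and proportions of "Far" judgments.
        distances = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
        p_far = np.array([0.05, 0.10, 0.30, 0.55, 0.80, 0.95, 1.00])

        (threshold, slope), _ = curve_fit(psychometric, distances, p_far, p0=[2.5, 2.0])
        print(f"Estimated JTT (Near/Far boundary): {threshold:.2f} m")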

    Body Context and Posture Affect Mental Imagery of Hands

    Different visual stimuli have been shown to recruit different mental imagery strategies. However, the role of specific visual stimulus properties related to body context and posture in mental imagery is still under debate. Aiming to dissociate the behavioural correlates of mental processing of visual stimuli characterized by different body contexts, in the present study we investigated whether the mental rotation of stimuli showing hands either attached to a body (hands-on-body) or not (hands-only) would be based on different mechanisms. We further examined the effects of postural changes on the mental rotation of both stimuli. Thirty healthy volunteers verbally judged the laterality of rotated hands-only and hands-on-body stimuli presented from the dorsum- or the palm-view, while positioning their hands on their knees (front postural condition) or behind their back (back postural condition). Mental rotation of hands-only, but not of hands-on-body, was modulated by the stimulus view and orientation. Additionally, only the hands-only stimuli were mentally rotated at different speeds according to the postural conditions. This indicates that different stimulus-related mechanisms are recruited in mental rotation when the bodily context in which a particular body part is presented changes. The present data suggest that, relative to hands-only, mental rotation of hands-on-body is less dependent on biomechanical constraints and proprioceptive input. We interpret our results as evidence for the preferential recruitment of visual-based rather than kinesthetic-based mechanisms during mental transformation of hands-on-body and hands-only stimuli, respectively.
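
    The phrase "rotated at different speeds" refers to the standard mental-rotation analysis in which response time grows with the angular disparity of the stimulus, and the slope of that relation indexes rotation speed. The sketch below shows one common way such a slope could be estimated; the angles and response times are hypothetical values, not data from the study.

        # Minimal sketch: regress response time on angular disparity; the slope
        # (ms per degree) is the usual index of mental-rotation speed. Values are
        # hypothetical placeholders.
        import numpy as np

        angles = np.array([0, 45, 90, 135, 180], dtype=float)         # stimulus rotation (deg)
        rt_ms = np.array([820, 900, 1010, 1150, 1290], dtype=float)   # hypothetical mean RTs

        slope, intercept = np.polyfit(angles, rt_ms, 1)
        print(f"Rotation cost: {slope:.2f} ms/deg (intercept {intercept:.0f} ms)")
        print(f"Implied rotation rate: {1000.0 / slope:.0f} deg/s")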

    No advantage for remembering horizontal over vertical spatial locations learned from a single viewpoint

    Previous behavioral and neurophysiological research has shown better memory for horizontal than for vertical locations. In these studies, participants navigated toward these locations. In the present study we investigated whether the orientation of the spatial plane per se was responsible for this difference. We thus had participants learn locations visually from a single perspective and retrieve them from multiple viewpoints. In three experiments, participants studied colored tags on a horizontally or vertically oriented board within a virtual room and recalled these locations with different layout orientations (Exp. 1) or from different room-based perspectives (Exps. 2 and 3). All experiments revealed evidence for equal recall performance in horizontal and vertical memory. In addition, the patterns for recall from different test orientations were rather similar. Consequently, our results suggest that memory is qualitatively similar for both vertical and horizontal two-dimensional locations, given that these locations are learned from a single viewpoint. Thus, prior differences in spatial memory may have originated from the structure of the space or the fact that participants navigated through it. Additionally, the strong performance advantages for perspective shifts (Exps. 2 and 3) relative to layout rotations (Exp. 1) suggest that configurational judgments are not only based on memory of the relations between target objects, but also encompass the relations between target objects and the surrounding room, for example in the form of a memorized view.

    Online prediction of others’ actions: the contribution of the target object, action context and movement kinematics

    Previous research investigated the contributions of target objects, situational context and movement kinematics to action prediction separately. The current study addresses how these three factors combine in the prediction of observed actions. Participants observed an actor whose movements were either constrained by the situational context or not, and either object-directed or not. After several steps, participants had to indicate how the action would continue. Experiment 1 showed that predictions were most accurate when the action was constrained and object-directed. Experiments 2A and 2B investigated whether these predictions relied more on the presence of a target object or on cues in the actor's movement kinematics. The target object was artificially moved to another location or occluded. Results suggest a crucial role for kinematics. In sum, observers predict actions based on target objects and situational constraints, and they exploit subtle movement cues of the observed actor rather than the direct visual information about target objects and context.

    The emergence of semantic categorization in early visual processing: ERP indices of animal vs. artifact recognition

    BACKGROUND: Neuroimaging and neuropsychological literature show functional dissociations in brain activity during processing of stimuli belonging to different semantic categories (e.g., animals, tools, faces, places), but little information is available about the time course of object perceptual categorization. The aim of the study was to provide information about the timing of processing stimuli from different semantic domains, without using verbal or naming paradigms, in order to observe the emergence of non-linguistic conceptual knowledge in the ventral stream visual pathway. Event related potentials (ERPs) were recorded in 18 healthy right-handed individuals as they performed a perceptual categorization task on 672 pairs of images of animals and man-made objects (i.e., artifacts). RESULTS: Behavioral responses to animal stimuli were ~50 ms faster and more accurate than those to artifacts. At early processing stages (120–180 ms) the right occipital-temporal cortex was more activated in response to animals than to artifacts, as indexed by the posterior N1 response, while the frontal/central N1 (130–160 ms) showed the opposite pattern. In the next processing stage (200–260 ms) the response was stronger to artifacts and usable items at anterior temporal sites. The P300 component was smaller, and the central/parietal N400 component was larger, to artifacts than to animals. CONCLUSION: The effect of animal and artifact categorization emerged at ~150 ms over the right occipital-temporal area as a stronger response of the ventral stream to animate, homomorphic entities with faces and legs. The larger frontal/central N1 and the subsequent temporal activation for inanimate objects might reflect the prevalence of a functional rather than perceptual representation of manipulable tools compared to animals. Late ERP effects might reflect semantic integration and cognitive updating processes. Overall, the data are compatible with a modality-specific semantic memory account, in which sensory and action-related semantic features are represented in modality-specific brain areas.
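
    The component effects reported above (e.g., the posterior N1 at 120–180 ms) are typically quantified as mean amplitudes within a time window at selected electrodes, averaged separately per condition. The sketch below illustrates that generic computation; the sampling rate, epoch layout, channel indices, and random data are assumptions for illustration, not the study's recordings.

        # Minimal sketch: mean ERP amplitude in a time window, per condition.
        import numpy as np

        sfreq = 500.0                  # sampling rate (Hz), assumed
        tmin = -0.2                    # epoch start relative to stimulus onset (s)
        n_trials, n_channels, n_samples = 200, 64, 500
        epochs = np.random.randn(n_trials, n_channels, n_samples)  # placeholder data (µV)
        is_animal = np.random.rand(n_trials) < 0.5                  # placeholder condition labels

        def mean_amplitude(data, t_start, t_end):
            # Average over the samples falling in [t_start, t_end] seconds.
            i0 = int((t_start - tmin) * sfreq)
            i1 = int((t_end - tmin) * sfreq)
            return data[..., i0:i1].mean(axis=-1)

        occipitotemporal = [60, 61, 62]  # hypothetical right occipito-temporal channels
        n1 = mean_amplitude(epochs[:, occipitotemporal, :], 0.120, 0.180).mean(axis=1)
        print(f"Posterior N1, animals:   {n1[is_animal].mean():.2f} µV")
        print(f"Posterior N1, artifacts: {n1[~is_animal].mean():.2f} µV")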

    The cognitive neuroscience of prehension: recent developments

    Prehension, the capacity to reach and grasp, is the key behavior that allows humans to change their environment. It continues to serve as a remarkable experimental test case for probing the cognitive architecture of goal-oriented action. This review focuses on recent experimental evidence that enhances or modifies how we might conceptualize the neural substrates of prehension. Emphasis is placed on studies that consider how precision grasps are selected and transformed into motor commands. Then, the mechanisms that extract action-relevant information from vision and touch are considered. These include consideration of how parallel perceptual networks within parietal cortex, along with the ventral stream, are connected and share information to achieve common motor goals. On-line control of grasping action is discussed within a state estimation framework. The review ends with a consideration of how prehension fits within larger action repertoires that solve more complex goals, and the possible cortical architectures needed to organize these actions.

    Getting a grip on sensorimotor effects in lexical-semantic processing

    One of the strategies that researchers have used to investigate the role of sensorimotor information in lexical-semantic processing is to examine effects of words' rated body-object interaction (BOI; the ease with which the human body can interact with a word's referent). Processing tends to be facilitated for words with high BOI compared to words with low BOI, across a wide variety of tasks. Such effects have been referenced in debates over the nature of semantic representations, but their theoretical import has been limited by the fact that BOI is a fairly coarse measure of sensorimotor experience with words' referents. In the present study we collected ratings for 621 words on seven semantic dimensions (graspability, ease of pantomime, number of actions, animacy, size, danger, and usefulness) in order to investigate which attributes are most strongly related to BOI ratings, and to lexical-semantic processing. BOI ratings were obtained from previous norming studies (Bennett, Burnett, Siakaluk, & Pexman, 2011; Tillotson, Siakaluk, & Pexman, 2008) and measures of lexical-semantic processing were obtained from previous behavioural megastudies involving the semantic categorization task (concrete/abstract decision; Pexman, Heard, Lloyd, & Yap, 2017) and the lexical decision task (Balota et al., 2007). Results showed that the motor dimensions of graspability, ease of pantomime, and number of actions were all related to BOI, and that these dimensions together explained more variance in semantic processing than did BOI ratings alone. These ratings will be useful for researchers who wish to study how different kinds of bodily interactions influence lexical-semantic processing and cognition.
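
    The key comparison in this abstract, motor dimensions explaining more variance than BOI alone, amounts to comparing the fit of two regression models on the same behavioural measure. The sketch below illustrates that comparison with random placeholder data; the simulated values stand in for the published norms and megastudy measures and do not reproduce the actual results.

        # Minimal sketch: compare variance in a lexical-semantic measure explained
        # by BOI alone versus by the motor dimensions (graspability, ease of
        # pantomime, number of actions). Data are random placeholders.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        n_words = 621
        boi = rng.normal(size=n_words)
        graspability, pantomime, n_actions = rng.normal(size=(3, n_words))
        rt = -0.3 * graspability - 0.2 * pantomime + rng.normal(scale=0.8, size=n_words)

        X_boi = boi.reshape(-1, 1)
        X_motor = np.column_stack([graspability, pantomime, n_actions])
        r2_boi = LinearRegression().fit(X_boi, rt).score(X_boi, rt)
        r2_motor = LinearRegression().fit(X_motor, rt).score(X_motor, rt)
        print(f"R^2, BOI only:         {r2_boi:.3f}")
        print(f"R^2, motor dimensions: {r2_motor:.3f}")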

    Learning new sensorimotor contingencies: Effects of long-term use of sensory augmentation on the brain and conscious perception

    Theories of embodied cognition propose that perception is shaped by sensory stimuli and by the actions of the organism. Following sensorimotor contingency theory, the mastery of lawful relations between one's own behavior and the resulting changes in sensory signals, called sensorimotor contingencies, is constitutive of conscious perception. Sensorimotor contingency theory predicts that, after training, knowledge relating to new sensorimotor contingencies develops, leading to changes in the activation of sensorimotor systems and concomitant changes in perception. In the present study, we spell out this hypothesis in detail and investigate whether it is possible to learn new sensorimotor contingencies by sensory augmentation. Specifically, we designed an fMRI-compatible sensory augmentation device, the feelSpace belt, which gives orientation information about the direction of magnetic north via vibrotactile stimulation on the waist of participants. In a longitudinal study, participants trained with this belt for seven weeks in their natural environment. Our EEG results indicate that training with the belt leads to changes in sleep architecture early in the training phase, compatible with the consolidation of procedural learning as well as increased sensorimotor processing and motor programming. The fMRI results suggest that training entails activity in sensory as well as higher motor centers and brain areas known to be involved in navigation. These neural changes are accompanied by changes in how space and the belt signal are perceived, as well as by increased trust in navigational ability. Thus, our data on physiological processes and subjective experiences are compatible with the hypothesis that new sensorimotor contingencies can be acquired using sensory augmentation.
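
    At its core, a device like the feelSpace belt has to map the wearer's current compass heading onto the vibrotactile actuator that is momentarily pointing toward magnetic north. The sketch below illustrates that mapping; the number and layout of actuators are assumptions for illustration, not the actual hardware specification.

        # Minimal sketch: choose which belt actuator to vibrate so that the
        # stimulation always comes from the direction of magnetic north.
        N_ACTUATORS = 16  # assumed motor count, evenly spaced clockwise from the front

        def actuator_for_north(heading_deg: float) -> int:
            # heading_deg: wearer's heading in degrees (0 = facing north).
            # North lies at (-heading) degrees clockwise from the wearer's front.
            bearing_to_north = (-heading_deg) % 360.0
            return round(bearing_to_north / (360.0 / N_ACTUATORS)) % N_ACTUATORS

        for heading in (0.0, 90.0, 180.0, 270.0):
            print(f"heading {heading:5.1f} deg -> actuator {actuator_for_north(heading)}")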