No correlation between distorted body representations underlying tactile distance perception and position sense
Both tactile distance perception and position sense are believed to require that immediate afferent signals be referenced to a stored representation of body size and shape (the body model). For both of these abilities, recent studies have reported that the stored body representations involved are highly distorted, at least in the case of the hand, with the hand dorsum represented as wider and squatter than it actually is. Here, we investigated whether individual differences in the magnitude of these distortions are shared between tactile distance perception and position sense, as would be predicted by the hypothesis that a single distorted body model underlies both tasks. We used established tasks to measure distortions of the represented shape of the hand dorsum. Consistent with previous results, in both cases there were clear biases to overestimate distances oriented along the medio-lateral axis of the hand compared to the proximo-distal axis. Moreover, within each task there were clear split-half correlations, demonstrating that both tasks show consistent individual differences. Critically, however, there was no correlation between the magnitudes of distortion in the two tasks. This casts doubt on the proposal that a common body model underlies both tactile distance perception and position sense.
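The logic of this individual-differences analysis can be sketched in a few lines. The sketch below uses simulated per-participant distortion scores (not the study's data) and illustrative parameters: each task's score is split into two trial halves, within-task consistency is estimated with a Spearman-Brown-corrected split-half correlation, and the between-task correlation is computed on the averaged scores.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-participant distortion scores (percent overestimation of
# medio-lateral relative to proximo-distal distances), split into two halves
# of trials per task. Simulated values, illustrative only.
n = 40
tactile_half1 = rng.normal(30, 10, n)
tactile_half2 = tactile_half1 + rng.normal(0, 5, n)    # consistent individuals
position_half1 = rng.normal(60, 15, n)                 # independent of tactile
position_half2 = position_half1 + rng.normal(0, 7, n)

def split_half_reliability(h1, h2):
    """Correlate the two trial halves, then apply the Spearman-Brown
    correction to estimate full-test reliability."""
    r = np.corrcoef(h1, h2)[0, 1]
    return 2 * r / (1 + r)

# Within-task consistency (split-half reliability)
r_tactile = split_half_reliability(tactile_half1, tactile_half2)
r_position = split_half_reliability(position_half1, position_half2)

# Between-task correlation on the full (averaged) scores
tactile = (tactile_half1 + tactile_half2) / 2
position = (position_half1 + position_half2) / 2
r_between = np.corrcoef(tactile, position)[0, 1]

print(f"tactile reliability:  {r_tactile:.2f}")
print(f"position reliability: {r_position:.2f}")
print(f"between-task r:       {r_between:.2f}")
```

Under these simulated assumptions, both tasks show high internal reliability while the between-task correlation hovers near zero, which is the dissociation pattern the abstract reports.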
The effects of instrumental action on perceptual hand maps
Perceiving the external spatial location of body parts using position sense requires that immediate proprioceptive afferent signals be integrated with information about body size and shape. Longo and Haggard (Proc Natl Acad Sci USA 107:11727–11732, 2010) developed a method to measure perceptual hand maps reflecting this metric information about body size and shape. In this paradigm, participants indicate the perceived location of landmarks on their occluded hand by pointing with a long baton held in their other hand. By comparing the relative locations of judgments of different hand landmarks, perceptual hand maps can be constructed and compared to actual hand structure. The maps show large and highly stereotyped distortions. Here, I investigated the potential contribution of biases related to active motor control of the pointing hand to these distortions. Participants localized the fingertip and knuckle of each finger on their occluded left hand either by actively pointing with a baton held in their right hand (pointing condition) or by giving verbal commands to an experimenter on how to move the baton (verbal condition). Similar distortions were clearly apparent in both conditions, suggesting that they are not an artifact of motor control biases related to the pointing hand.
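The core measurement behind such perceptual hand maps is a comparison of judged versus actual inter-landmark distances. A minimal sketch, using made-up landmark coordinates for a single participant (not data from the study), could compute signed percent distortions for hand width and finger length:

```python
import numpy as np

# Hypothetical landmark coordinates (cm): actual and judged positions of the
# index fingertip, index knuckle, and little-finger knuckle. Illustrative only.
actual = {
    "index_tip":      np.array([0.0, 8.0]),
    "index_knuckle":  np.array([0.0, 0.0]),
    "little_knuckle": np.array([6.0, 0.0]),
}
judged = {
    "index_tip":      np.array([0.2, 5.5]),   # finger length underestimated
    "index_knuckle":  np.array([0.0, 0.0]),
    "little_knuckle": np.array([8.5, 0.3]),   # hand width overestimated
}

def percent_distortion(a, b):
    """Signed percent difference between judged and actual distance
    separating landmarks a and b."""
    d_actual = np.linalg.norm(actual[a] - actual[b])
    d_judged = np.linalg.norm(judged[a] - judged[b])
    return 100 * (d_judged - d_actual) / d_actual

width_bias = percent_distortion("index_knuckle", "little_knuckle")
length_bias = percent_distortion("index_tip", "index_knuckle")
print(f"hand width:    {width_bias:+.1f}%")   # positive = overestimation
print(f"finger length: {length_bias:+.1f}%")  # negative = underestimation
```

With these toy numbers the index picks up the stereotyped pattern described above: a positive width bias and a negative finger-length bias.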
Self and body part localization in virtual reality: comparing a headset and a large-screen immersive display
It is currently not fully understood where exactly people locate themselves in their bodies, particularly in virtual reality. To investigate this, we asked participants to point directly at themselves and at several of their body parts with a virtual pointer, in two virtual reality (VR) setups: a VR headset and a large-screen immersive display (LSID). There was a difference in distance error when pointing to body parts depending on VR setup. Participants pointed relatively accurately to many of their body parts (i.e., eyes, nose, chin, shoulders, and waist). However, in both VR setups they pointed too low to the feet and the knees, and too high to the top of the head (to larger extents in the VR headset). Taking these distortions into account, the locations found for pointing to self were considered in terms of perceived bodies, based on where the participants had pointed to their body parts in the two VR setups. Pointing to self in terms of the perceived body was mostly to the face, to the upper face more than the lower, with some pointing to the torso regions. There was no significant overall effect of VR setup on pointing to self in terms of the perceived body (although there was a significant effect when only the physical body, as measured, was considered). In a paper-and-pencil task outside of VR, performed by pointing on a picture of a simple body outline (body template task), participants pointed most often to the upper torso. Possible explanations for the differences between pointing to self in the VR setups and in the body template task are discussed. The main finding of this study is that the VR setup influences where people point to their body parts, but not where they locate themselves, when the perceived rather than the physical body is considered.
Distorted body representations are robust to differences in experimental instructions
Several recent reports have shown that even healthy adults maintain highly distorted representations of the size and shape of their body. These distortions have been shown to be highly consistent across different study designs and dependent measures. However, previous studies have found that visual judgments of size can be modulated by the experimental instructions used, for example, by asking for judgments of the participant's subjective experience of stimulus size (i.e., apparent instructions) versus judgments of actual stimulus properties (i.e., objective instructions). Previous studies investigating internal body representations have relied exclusively on 'apparent' instructions. Here, we investigated whether apparent versus objective instructions modulate findings of distorted body representations underlying position sense (Exp. 1), tactile distance perception (Exp. 2), as well as the conscious body image (Exp. 3). Our results replicate the characteristic distortions previously reported for each of these tasks and further show that these distortions are not affected by instruction type (i.e., apparent vs. objective). These results show that the distortions measured with these paradigms are robust to differences in instructions and do not reflect a dissociation between perception and belief.
Understanding the nature of the body model underlying position sense
Accurate information about body structure and posture is fundamental for effective control of our actions. It is often assumed that healthy adults have accurate representations of their body. Although people's abilities to visually recognize their own body size and shape are relatively good, the implicit spatial representation of their body is extremely distorted when measured in proprioceptive localization tasks. The aim of this thesis is to understand the nature of the spatial distortions of the body model measured in those localization tasks. In particular, we investigate the perceptual-cognitive components contributing to distortions of the implicit representation of the human hand and compare those distortions with those found for objects in similar tasks.
Holistic Versus Analytic Perception of Indoor Spaces: Korean and German Cultural Differences in Comparative Judgments of Room Size
Egocentric biases in comparative volume judgments of rooms
The elongation of a figure or object can induce a perceptual bias in estimates of its area or volume. This bias is notable in Piagetian experiments in which participants tend to judge elongated cylinders to contain more liquid than shorter cylinders of equal volume. We investigated whether similar perceptual biases could be found in volume judgments of surrounding indoor spaces and whether those judgments were viewpoint dependent. Participants compared a variety of computer-generated rectangular rooms with a square room in a psychophysical task. We found that the elongation bias seen for figures and objects was also present in volume comparison judgments of indoor spaces. Further, the direction of the bias (larger or smaller) depended on the observer's viewpoint. Similar results were obtained with a monoscopic computer display (Experiment 1) and a stereoscopic head-mounted display with head tracking (Experiment 2). We used generalized linear mixed-effects models to model participants' volume judgments as a function of room depth and width. A good fit to the data was found when greater weight was applied to depth relative to width, suggesting that participants' judgments were biased by egocentric properties of the space. We discuss how biases in comparative volume judgments of rooms might reflect the use of simplified strategies, such as anchoring on one salient dimension of the space.
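The depth-weighting idea can be illustrated with a toy anchoring model in which judged size weights room depth more heavily than width. The weights and room dimensions below are illustrative assumptions, not the paper's fitted values:

```python
def judged_relative_size(depth, width, square_side, w_depth=0.7, w_width=0.3):
    """Toy anchoring model: judged room size is depth**w_depth * width**w_width,
    with depth weighted more heavily (weights are illustrative).
    Returns > 1 if the rectangular room is judged larger than the square one."""
    judged_rect = depth**w_depth * width**w_width
    judged_square = square_side**w_depth * square_side**w_width
    return judged_rect / judged_square

# An 8 x 2 m room and a 4 x 4 m square room have equal floor area (16 m^2),
# but the depth-weighted model judges the room elongated in depth as larger...
deep_room = judged_relative_size(depth=8.0, width=2.0, square_side=4.0)
# ...and the same room viewed so that its long axis lies in the width as
# smaller, capturing the viewpoint dependence described above.
wide_room = judged_relative_size(depth=2.0, width=8.0, square_side=4.0)
print(f"elongated in depth: {deep_room:.2f}")
print(f"elongated in width: {wide_room:.2f}")
```

Because the exponents sum to one, a square room is judged veridically, while any unequal weighting flips the sign of the bias with viewpoint, mirroring the reported direction-dependence.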
Implicit spatial representation of objects and hand size
Recent studies have investigated the body representation underlying tactile size perception and position sense. These studies have shown distorted hand representations consisting of an overestimation of hand width and an underestimation of finger length [Longo and Haggard, 2010, PNAS, 107(26), 11727–11732]. Here, we are interested in whether the observed distortions are specific to the hand or can also be detected with objects (star, box, rake, circle). Participants judged the location in external space of predefined landmarks on the hand and objects. We compared the actual and estimated horizontal and vertical distances between landmarks. Our results replicate previously reported significant underestimations of finger length (vertical axis). There was no significant overestimation of hand width. In the case of objects, we found a significant underestimation along the vertical axis for all objects (p<0.01), which was smaller than for the hand (p<0.05). There was no significant distortion along the horizontal axis for the star. We observed significant horizontal underestimations for the circle and the box, and a significant overestimation for the rake (p<0.05). In summary, distortions along the vertical axis also occur for objects. However, the size of the vertical distortion was larger for the hand than for the objects.
