
    Imagined Self-Motion Differs from Perceived Self-Motion: Evidence from a Novel Continuous Pointing Method

    Background: The extent to which actual movements and imagined movements maintain a shared internal representation has been a matter of much scientific debate. Of the studies examining such questions, few have directly compared actual full-body movements to imagined movements through space. Here we used a novel continuous pointing method to (a) provide a more detailed characterization of self-motion perception during actual walking and (b) compare the pattern of responding during actual walking to that which occurs during imagined walking.

    Methodology/Principal Findings: The continuous pointing method requires participants to view a target and continuously point towards it as they walk, or imagine walking, past it along a straight, forward trajectory. By measuring changes in the pointing direction of the arm, we were able to determine participants' perceived/imagined location at each moment of the trajectory and, hence, perceived/imagined self-velocity throughout the movement. The pattern of pointing behaviour revealed during sighted walking was also observed during blind walking: a peak in arm azimuth velocity occurred upon target passage, and arm azimuth velocity correlated strongly with pointing elevation. Importantly, this characteristic pattern of pointing was not consistently observed during imagined self-motion.

    Conclusions/Significance: Overall, the spatial updating processes that occur during actual self-motion were not evident during imagined movement. Because continuous pointing affords a rich description of self-motion perception, the method is expected to have significant implications for several research areas, including motor imagery and spatial cognition, and for applied fields in which mental practice techniques are common (e.g. rehabilitation and athletics).
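    The inversion from arm azimuth to perceived position is simple trigonometry. A minimal sketch (illustrative geometry and numbers, not the authors' actual analysis pipeline): if the target sits at a known lateral offset from a straight walking path, the pointing azimuth implies a position along the path, and differentiating that position recovers perceived self-velocity.

```python
import numpy as np

# Assumed layout: the walker moves along the x-axis at constant speed;
# the target sits at (X_TARGET, D), i.e. D metres off the path.
D = 1.5         # lateral offset of the target (m) -- illustrative value
X_TARGET = 4.0  # target position along the path (m) -- illustrative value

def position_from_azimuth(theta):
    """Invert the pointing geometry: azimuth theta (radians, measured
    from the walking direction) -> position along the path (m)."""
    return X_TARGET - D / np.tan(theta)

# Simulate a 1 m/s walk and the ideal pointing azimuth it produces.
t = np.linspace(0.0, 8.0, 801)              # time (s)
x_true = 1.0 * t                            # true position (m)
theta = np.arctan2(D, X_TARGET - x_true)    # ideal arm azimuth (rad)

x_est = position_from_azimuth(theta)        # recovered position (m)
v_est = np.gradient(x_est, t)               # perceived self-velocity (m/s)

# Arm azimuth velocity peaks exactly at target passage -- the signature
# the abstract reports for both sighted and blind walking.
theta_dot = np.gradient(theta, t)
print(round(float(t[np.argmax(theta_dot)]), 2))  # time of peak: 4.0 s
```

    Under this sketch, imagined walking without spatial updating would simply fail to produce the azimuth-velocity peak at passage.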

    Depictive and Metric Body Size Estimation in Anorexia Nervosa and Bulimia Nervosa: A Systematic Review and Meta-Analysis.

    A distorted representation of one's own body is a diagnostic criterion and core psychopathology of both anorexia nervosa (AN) and bulimia nervosa (BN). Despite recent technical advances in research, it is still unknown whether this body image disturbance is characterized by body dissatisfaction and a low ideal weight and/or involves a distorted perception or processing of body size. In this article, we provide an update and meta-analysis of 42 articles summarizing measures and results for body size estimation (BSE) from 926 individuals with AN, 536 individuals with BN and 1920 controls. We replicate findings that individuals with AN and BN overestimate their body size relative to controls (ES = 0.63). Our meta-regression shows that metric methods (BSE by direct or indirect spatial measures) yield larger effect sizes than depictive methods (BSE by evaluating distorted pictures), and that effect sizes are larger for patients with BN than for patients with AN. To interpret these results, we suggest a revised theoretical framework for BSE that accounts for differences between depictive and metric BSE methods in the underlying body representations (conceptual vs. perceptual, implicit vs. explicit). We also discuss clinical implications and argue for the importance of multimethod approaches to investigating body image disturbance.
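    The pooled overestimation effect (ES = 0.63) is a standardized mean difference aggregated across studies by inverse-variance weighting. A minimal illustration with made-up study summaries (not the paper's data, and a simple fixed-effect pool rather than the authors' full random-effects meta-regression):

```python
import math

# Hypothetical per-study summaries: (standardized mean difference g, variance).
studies = [(0.55, 0.04), (0.80, 0.09), (0.48, 0.02), (0.70, 0.06)]

# Inverse-variance weights: precise (low-variance) studies count more.
weights = [1.0 / var for _, var in studies]
pooled = sum(w * g for (g, _), w in zip(studies, weights)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate

print(f"pooled ES = {pooled:.2f}, "
      f"95% CI [{pooled - 1.96 * se:.2f}, {pooled + 1.96 * se:.2f}]")
# -> pooled ES = 0.57, 95% CI [0.37, 0.76]
```

    A meta-regression additionally regresses the per-study effects on moderators (here, metric vs. depictive method and AN vs. BN), using the same weights.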

    Owning an overweight or underweight body: distinguishing the physical, experienced and virtual body

    Our bodies are the most intimately familiar objects in our perceptual environment. Virtual reality provides a unique way to experience having a very different body from our own, and thereby a valuable method for exploring the plasticity of body representation. In this paper, we show that women can experience ownership over a whole virtual body that is considerably smaller or larger than their physical body. To better understand the mechanisms underlying body ownership, we use an embodiment questionnaire and introduce two new behavioral response measures: an affordance estimation task (an indirect measure of body size) and a body size estimation task (a direct measure of body size). Interestingly, after viewing the virtual body from a first-person perspective, both tasks indicate a change in the perceived size of the participant's experienced body, biased towards the size of the virtual body (overweight or underweight). Another novel aspect of our study is that we distinguish between the physical, experienced and virtual bodies by asking participants to provide affordance and body size estimates for each of the three bodies separately. This methodological point is important for virtual reality experiments on body ownership of a virtual body, because it offers a better understanding of which cues (e.g. visual, proprioceptive, memory, or a combination thereof) influence body perception, and whether the impact of these cues varies between setups.

    Talk to the Virtual Hands: Self-Animated Avatars Improve Communication in Head-Mounted Display Virtual Environments

    Background: When we talk to one another face-to-face, body gestures accompany our speech. Motion-tracking technology enables us to include body gestures in avatar-mediated communication by mapping one's movements onto one's own 3D avatar in real time, so that the avatar is self-animated. We conducted two experiments to investigate (a) whether head-mounted display virtual reality is useful for researching the influence of body gestures in communication, and (b) whether body gestures help communicate the meaning of a word. Participants worked in pairs and played a communication game in which one person had to describe the meanings of words to the other.

    Principal Findings: In experiment 1, participants used significantly more hand gestures and successfully described significantly more words when nonverbal communication was available to both participants (i.e. both describing and guessing avatars were self-animated, compared with both avatars held in a static neutral pose). Participants ‘passed’ (gave up describing) significantly more words when talking to a static avatar (no nonverbal feedback available). In experiment 2, participants' performance was significantly worse when talking to an avatar with a prerecorded listening animation than to an avatar animated by their partner's real movements. In both experiments, participants used significantly more hand gestures when they played the game in the real world.

    Conclusions: Taken together, the studies show (a) how virtual reality can be used to systematically study the influence of body gestures; (b) that nonverbal communication must be bidirectional (real nonverbal feedback in addition to nonverbal communication from the describing participant); and (c) that participants gesture different amounts with and without the head-mounted display; we discuss possible explanations for this and ideas for future investigation.

    Introduction to Special Issue SAP 2014


    Virtual arm's reach influences perceived distances but only after experience reaching

    Considerable empirical evidence has shown that the action capabilities of the body influence the perception of sizes and distances. Generally, as one's action capabilities increase, the perception of the relevant distance (over which the action is to be performed) decreases, and vice versa. As a consequence, it has been proposed that the body's action capabilities act as a perceptual ruler used to measure perceived sizes and distances. In this set of studies, we investigated this hypothesis by assessing the influence of arm's reach on the perception of distance. By providing participants with a self-representing avatar seen from a first-person perspective in virtual reality, we were able to introduce novel and completely unfamiliar alterations of the virtual arm's reach and evaluate their impact on perceived distance. Using both action-based and visual matching measures, we found that virtual arm's reach influenced perceived distance in virtual environments. Because participants had no prior experience with the reach alterations, we were also able to assess how much experience with the new arm's reach is required to influence perceived distance. We found that minimal experience reaching with the virtual arm can influence perceived distance; however, some reaching experience is required. Merely having a long or short virtual arm, even one synchronized to one's movements, is not enough to influence distance perception if one has no experience reaching.

    Multisensory contributions to spatial perception

    How do we know where environmental objects are located with respect to our body? How are we able to navigate, manipulate, and interact with the environment? In this chapter, we describe how capturing sensory signals from the environment and performing internal computations achieve such goals. The first step, called early or low-level processing, is based on feature detectors that respond selectively to elementary patterns of stimulation. Separate organs capture sensory signals and then process them separately in what we normally refer to as the senses: smell, taste, touch, audition, and vision. In the first section of this chapter, we present the sense modalities that provide sensory information for the perception of spatial properties such as distance, direction, and extent. Although it is hard to distinguish where early processing ends and high-level perception begins, the rest of the chapter focuses on the intermediate level of processing, which is implicitly assumed to be a key component of several perceptual and computational theories (Gibson, 1979; Marr, 1982) and which, for the visual modality, has been termed mid-level vision (see Nakayama, He, & Shimojo, 1995). In particular, we discuss the ability of the perceptual system to specify the position and orientation of environmental objects relative to other objects and, especially, relative to the observer's body. We present computational theories and relevant scientific results on individual sense modalities and on the integration of sensory information within and across the sensory modalities. Finally, in the last section of the chapter, we describe how the information processing approach has enabled a better understanding of perceptual processes in relation to two specific high-level perceptual functions: self-orientation perception and object recognition.
