
    Perspectival structure and vestibular processing: a commentary on Bigna Lenggenhager & Christophe Lopez

    I begin by contrasting a taxonomic approach to the vestibular system with the structural approach I take in the bulk of this commentary. I provide an analysis of perspectival structure. Employing that analysis and following the structural approach, I propose three lines of empirical investigation to selectively manipulate and measure vestibular processing and perspectival structure. The hope is that this serves to indicate how interdisciplinary research on vestibular processing might advance our understanding of the structural features of conscious experience.

    The concept of a structural affordance

    I provide an analysis of the concept of an “affordance” that enables one to conceive of “structural affordance” as a kind of affordance relation that might hold between an agent and its body. I then review research in the science of humanoid bodily movement to indicate the empirical reality of structural affordance.

    Balancing passive and active learning


    Eric Schwitzgebel: Perplexities of Consciousness


    Pojęcie afordancji strukturalnej [The concept of a structural affordance]

    I present an analysis of the concept of an “affordance” that makes it possible to conceive of “structural affordance” as a kind of affordance relation that might hold between an agent and its body. I then review research on humanoid bodily movement in order to identify the empirical reality of structural affordance.

    Where am I in virtual reality?

    It is currently not well understood whether people experience themselves to be located in one or more specific part(s) of their body. Virtual reality (VR) is increasingly used as a tool to study aspects of bodily perception and self-consciousness, owing to its strong experimental control and the ease of manipulating multi-sensory aspects of bodily experience. To investigate where people self-locate in their body within virtual reality, we asked participants to point directly at themselves with a virtual pointer while wearing a VR headset. In previous work employing a physical pointer, participants mainly located themselves in the upper face and upper torso. In this study, using a VR headset, participants mainly located themselves in the upper face. In an additional body template task, in which participants pointed at themselves on a picture of a simple body outline, they pointed most often to the upper torso, followed by the (upper) face. These results raise the question of whether head-mounted virtual reality might alter where people locate themselves, making them more “head-centred”.

    Self and body part localization in virtual reality: comparing a headset and a large-screen immersive display

    It is currently not fully understood precisely where people locate themselves in their bodies, particularly in virtual reality. To investigate this, we asked participants to point directly at themselves and at several of their body parts with a virtual pointer in two virtual reality (VR) setups: a VR headset and a large-screen immersive display (LSID). Distance error in pointing to body parts differed depending on the VR setup. Participants pointed relatively accurately to many of their body parts (i.e., eyes, nose, chin, shoulders, and waist). However, in both VR setups they pointed too low for the feet and the knees, and too high for the top of the head (to a larger extent in the VR headset). Taking these distortions into account, the locations found for pointing to self were considered in terms of perceived bodies, based on where participants had pointed to their body parts in the two VR setups. Pointing to self in terms of the perceived body was mostly to the face, the upper part followed by the lower, as well as to some torso regions. There was no significant overall effect of VR condition for pointing to self in terms of the perceived body (although there was a significant effect of VR setup when only the physical body, as measured, was considered). In a paper-and-pencil task outside of VR, performed by pointing on a picture of a simple body outline (body template task), participants pointed most often to the upper torso. Possible explanations for the differences between pointing to self in the VR setups and in the body template task are discussed. The main finding of this study is that the VR setup influences where people point to their body parts, but not to themselves, when perceived rather than physical body parts are considered.

    The influence of the viewpoint in a self-avatar on body part and self-localization

    The goal of this study is to determine how a self-avatar in virtual reality, experienced from different viewpoints on the body (at eye- or chest-height), might influence body part localization as well as self-localization within the body. Previous literature shows that people do not locate themselves in only one location, but rather primarily in the face and the upper torso. We therefore aimed to determine whether manipulating the viewpoint to either the height of the eyes or the height of the chest would shift self-location estimates towards these commonly identified locations of self. In a virtual reality (VR) headset, participants were asked to point at several of their body parts (body part localization) as well as "directly at you" (self-localization) with a virtual pointer. Both pointing tasks were performed before and after a self-avatar adaptation phase in which participants explored a co-located, scaled, gender-matched, and animated self-avatar. We hypothesized that experiencing a self-avatar might reduce inaccuracies in body part localization, and that viewpoint would influence pointing responses for both body part and self-localization. Participants overall pointed relatively accurately to some of their body parts (shoulders, chin, and eyes), but very inaccurately to others, with large undershooting for the hips, knees, and feet, and large overshooting for the top of the head. Self-localization was spread across the body (as well as above the head) with the following distribution: the upper face (25%), the upper torso (25%), above the head (15%), and below the torso (12%). We found an influence of viewpoint (eye- vs. chest-height) during the self-avatar adaptation phase only for body part localization, not for self-localization. The overall change in error distance for body part localization with the viewpoint at eye-height was small (M = –2.8 cm), while the overall change for the viewpoint at chest-height was significantly larger and in the upwards direction relative to the body parts (M = 21.1 cm). In a post-questionnaire, there was no significant difference in embodiment scores between the viewpoint conditions. Most interestingly, having a self-avatar did not change the results of the self-localization pointing task, even with a novel viewpoint (chest-height). Possibly, body-based cues or memory ground the self in VR. However, the present results caution against the use of altered viewpoints in applications where a veridical position sense of body parts is required.