    The influence of the viewpoint in a self-avatar on body part and self-localization

    The goal of this study is to determine how a self-avatar in virtual reality, experienced from different viewpoints on the body (at eye- or chest-height), might influence body part localization, as well as self-localization within the body. Previous literature shows that people do not locate themselves in only one location, but rather primarily in the face and the upper torso. Therefore, we aimed to determine whether manipulating the viewpoint to either the height of the eyes or the height of the chest would influence self-location estimates towards these commonly identified locations of self. In a virtual reality (VR) headset, participants were asked to point at several of their body parts (body part localization) as well as "directly at you" (self-localization) with a virtual pointer. Both pointing tasks were performed before and after a self-avatar adaptation phase where participants explored a co-located, scaled, gender-matched, and animated self-avatar. We hypothesized that experiencing a self-avatar might reduce inaccuracies in body part localization, and that viewpoint would influence pointing responses for both body part and self-localization. Participants overall pointed relatively accurately to some of their body parts (shoulders, chin, and eyes), but very inaccurately to others, with large undershooting for the hips, knees, and feet, and large overshooting for the top of the head. Self-localization was spread across the body (as well as above the head) with the following distribution: the upper face (25%), the upper torso (25%), above the head (15%), and below the torso (12%). We only found an influence of viewpoint (eye- vs chest-height) during the self-avatar adaptation phase for body part localization, not for self-localization. The overall change in error distance for body part localization for the viewpoint at eye-height was small (M = –2.8 cm), while the overall change for the viewpoint at chest-height was significantly larger and in the upward direction relative to the body parts (M = 21.1 cm). In a post-questionnaire, there was no significant difference in embodiment scores between the viewpoint conditions. Most interestingly, having a self-avatar did not change the results on the self-localization pointing task, even with a novel viewpoint (chest-height). Possibly, body-based cues or memory ground the self when in VR. However, the present results caution against the use of altered viewpoints in applications where a veridical position sense of body parts is required.
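    A minimal sketch of how the change in error distance reported above could be computed, assuming hypothetical pointing data; the body parts, heights, and sign conventions are illustrative, not the study's actual analysis pipeline:

        # Sketch: change in body-part localization error before vs after the
        # self-avatar adaptation phase (positions along the body axis, in cm).
        actual = {"eyes": 160.0, "chin": 150.0, "hips": 95.0, "feet": 5.0}   # true positions (assumed)
        pre    = {"eyes": 161.0, "chin": 151.5, "hips": 80.0, "feet": 25.0}  # pre-adaptation responses
        post   = {"eyes": 160.5, "chin": 150.5, "hips": 83.0, "feet": 22.0}  # post-adaptation responses

        # Negative change = pointing got closer to the true body part.
        changes = {part: abs(post[part] - actual[part]) - abs(pre[part] - actual[part])
                   for part in actual}
        mean_change = sum(changes.values()) / len(changes)
        print(changes, f"overall change: {mean_change:+.2f} cm")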

    Malleability of the self: electrophysiological correlates of the enfacement illusion

    Self-face representation is fundamentally important for self-identity and self-consciousness. Given its role in preserving identity over time, self-face processing is considered a robust and stable process. Yet, recent studies indicate that simple psychophysical manipulations may change how we process our own face. Specifically, experiencing tactile facial stimulation while seeing similar synchronous stimuli delivered to the face of another individual, seen as in a mirror, induces the 'enfacement' illusion, i.e. the subjective experience of ownership of the other's face and a bias towards attributing facial features of the other person to the self. Here we recorded visual Event-Related Potentials elicited by the presentation of self, other, and morphed faces during a self-other discrimination task performed immediately after participants received synchronous or control asynchronous Interpersonal Multisensory Stimulation (IMS). We found that self-face presentation after synchronous, as compared to asynchronous, stimulation significantly reduced the late positive potential (LPP; 450-750 ms), a reliable electrophysiological marker of self-identification processes. Additionally, enfacement cancelled out the differences in LPP amplitudes produced by self- and other-faces in the control condition. These findings represent the first direct neurophysiological evidence that enfacement may affect self-face processing, and they pave the way to novel paradigms for exploring defective self-representation and self-other interactions.
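    As a rough illustration of the LPP measure described above, the sketch below averages a baseline-corrected ERP epoch within the 450-750 ms window; the sampling rate, epoch length, and variable names are assumptions, not the authors' analysis code:

        import numpy as np

        fs = 500                                  # sampling rate in Hz (assumed)
        epoch = np.random.randn(int(1.2 * fs))    # stand-in for one ERP epoch (-200..1000 ms)
        times = np.arange(epoch.size) / fs - 0.2  # time axis in seconds

        # Mean amplitude in the 450-750 ms LPP window, the marker compared
        # between synchronous and asynchronous IMS conditions.
        lpp = epoch[(times >= 0.450) & (times <= 0.750)].mean()
        print(f"LPP mean amplitude: {lpp:.2f} (arbitrary units)")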

    Learning to Reconstruct People in Clothing from a Single RGB Camera

    We present a learning-based model to infer the personalized 3D shape of people from a few frames (1-8) of a monocular video in which the person is moving, in less than 10 seconds and with a reconstruction accuracy of 5 mm. Our model learns to predict the parameters of a statistical body model and instance displacements that add clothing and hair to the shape. The model achieves fast and accurate predictions based on two key design choices. First, by predicting shape in a canonical T-pose space, the network learns to encode the images of the person into pose-invariant latent codes, where the information is fused. Second, based on the observation that feed-forward predictions are fast but do not always align with the input images, we predict using both bottom-up and top-down streams (one per view), allowing information to flow in both directions. Learning relies only on synthetic 3D data. Once learned, the model can take a variable number of frames as input, and is able to reconstruct shapes even from a single image with an accuracy of 6 mm. Results on 3 different datasets demonstrate the efficacy and accuracy of our approach.
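    The two design choices described above can be caricatured in a few lines; this is a toy sketch under assumed dimensions (512-D image features, 10 shape parameters, 6890 vertices as in SMPL-like body models), not the authors' network:

        import torch
        import torch.nn as nn

        class CanonicalShapeNet(nn.Module):
            """Toy sketch: encode F frames into pose-invariant codes, fuse them,
            and predict body-model shape parameters plus clothing displacements."""
            def __init__(self, feat_dim=256, n_betas=10, n_verts=6890):
                super().__init__()
                self.encoder = nn.Sequential(nn.Linear(512, feat_dim), nn.ReLU())
                self.shape_head = nn.Linear(feat_dim, n_betas)     # statistical body model params
                self.disp_head = nn.Linear(feat_dim, n_verts * 3)  # per-vertex displacements

            def forward(self, frame_features):        # (F, 512) features, F = 1..8 frames
                codes = self.encoder(frame_features)  # pose-invariant latent codes
                fused = codes.mean(dim=0)             # fuse information across frames
                betas = self.shape_head(fused)
                displacements = self.disp_head(fused).view(-1, 3)
                return betas, displacements

        betas, disp = CanonicalShapeNet()(torch.randn(4, 512))  # e.g. 4 input frames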

    Phenomenal regression to the real object in physical and virtual worlds

    In this paper, we investigate a new approach to comparing physical and virtual size and depth percepts that captures the involuntary responses of participants to different stimuli in their field of view, rather than relying on their skill at judging size, reaching, or directed walking. We show, via an effect first observed in the 1930s, that participants asked to equate the perspective projections of disc objects at different distances make a systematic error that is both individual in its extent and comparable between the particular physical and virtual settings we have tested. Prior work has shown that this systematic error is difficult to correct, even when participants know it is likely to occur. In fact, in the real world, the error only reduces as the available cues to depth are artificially reduced. This makes the effect we describe a potentially powerful, intrinsic measure of VE quality that may ultimately contribute to our understanding of VE depth compression phenomena.
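    For context, the geometrically correct ("projective") match and the classic index of this regression effect (the Thouless ratio, from the 1930s work the paper builds on) can be written out; the sizes and distances below are purely illustrative:

        import math

        def projective_match(real_size, d_object, d_reference):
            """Disc size at d_reference whose retinal projection equals that of
            a disc of real_size at d_object (equal size/distance ratios)."""
            return real_size * d_reference / d_object

        def thouless_ratio(S, P, R):
            """0 = perfect projective match, 1 = full regression to the real size."""
            return (math.log(S) - math.log(P)) / (math.log(R) - math.log(P))

        P = projective_match(real_size=10.0, d_object=4.0, d_reference=2.0)  # 5.0 cm
        print(thouless_ratio(S=7.0, P=P, R=10.0))  # hypothetical observer match, ~0.49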

    SelfPose: 3D Egocentric Pose Estimation from a Headset-Mounted Camera

    We present a solution to egocentric 3D body pose estimation from monocular images captured by downward-looking fish-eye cameras installed on the rim of a head-mounted VR device. This unusual viewpoint leads to images with a unique visual appearance, with severe self-occlusions and perspective distortions that result in drastic differences in resolution between the lower and upper body. We propose an encoder-decoder architecture with a novel multi-branch decoder designed to account for the varying uncertainty in the 2D predictions. The quantitative evaluation, on synthetic and real-world datasets, shows that our strategy leads to substantial improvements in accuracy over state-of-the-art egocentric approaches. To tackle the lack of labelled data we also introduce a large photo-realistic synthetic dataset. xR-EgoPose offers high-quality renderings of people with diverse skin tones, body shapes, and clothing, performing a range of actions. Our experiments show that the high variability in our new synthetic training corpus leads to good generalization to real-world footage and to state-of-the-art results on real-world datasets with ground truth. Moreover, an evaluation on the Human3.6M benchmark shows that the performance of our method is on par with top-performing approaches on the more classic problem of 3D human pose from a third-person viewpoint.
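    A toy sketch of the multi-branch decoder idea, under assumed sizes (16 joints, 32x32 heatmaps); the real architecture and its handling of 2D uncertainty are more involved than this illustration:

        import torch
        import torch.nn as nn

        class MultiBranchDecoder(nn.Module):
            """Toy sketch: one branch regresses 3D joints from the latent code,
            while a second branch reconstructs the 2D heatmaps so the code must
            preserve the uncertainty carried by the 2D predictions."""
            def __init__(self, z_dim=128, n_joints=16, hm=32):
                super().__init__()
                self.n_joints, self.hm = n_joints, hm
                self.pose3d = nn.Linear(z_dim, n_joints * 3)
                self.heatmap = nn.Linear(z_dim, n_joints * hm * hm)

            def forward(self, z):  # z: (B, z_dim) latent codes from the encoder
                joints = self.pose3d(z).view(-1, self.n_joints, 3)
                heatmaps = self.heatmap(z).view(-1, self.n_joints, self.hm, self.hm)
                return joints, heatmaps

        joints, heatmaps = MultiBranchDecoder()(torch.randn(2, 128))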

    Gamemunication: prosthetic communication ethnography of game avatars

    This study revisits Hymes' ethnography of communication for game avatars, which function as a communication nexus connecting games and gamers. Hymes formulates his ethnography of communication as SPEAKING (Settings and Scenes, Participants, Ends, Act Sequences, Keys, Instrumentalities, Norms, and Genres), and this formula proves unfit to explain how game avatars communicate. Implementing Klevjer's prosthetic telepresence (2012) to analyze sixty-two game titles, we reveal that SPEAKING requires an extension when applied to game avatars, since the formula is not designed to explain their prosthetic nature. This prosthetic nature produces a specific communication ethnography of avatars, which we dub prosthetic communication ethnography; it refers to the technical elements of gaming that contribute to the ways avatars communicate. Like Hymes' ethnography of communication with SPEAKING, this avatar-based communication ethnography requires its own tool of analysis, which we call GAMING (Gaming systems, Attributes, Mechanics, Indexicalities, Narratives, and Geosocial systems), constructed with indexical storytelling by Fernández-Vara (2011), user interface types of games by Stonehouse (2014), and prosthetic video game theory by Jagodzinski (2019) as the theoretical foundations. GAMING and SPEAKING are integrated by bridging them with Aarseth's ludonarrative dimensions (2012).

    Sonic interactions in virtual environments

    This book tackles the design of 3D spatial interactions from an audio-centered and audio-first perspective, providing the fundamental notions related to the creation and evaluation of immersive sonic experiences. The key elements that enhance the sensation of place in a virtual environment (VE) are:
    - Immersive audio: the computational aspects of the acoustical-space properties of Virtual Reality (VR) technologies
    - Sonic interaction: the human-computer interplay through auditory feedback in VEs
    - VR systems: natural support for multimodal integration, impacting different application domains
    Sonic Interactions in Virtual Environments will feature state-of-the-art research on real-time auralization, sonic interaction design in VR, quality of the experience in multimodal scenarios, and applications. Contributors and editors include interdisciplinary experts from the fields of computer science, engineering, acoustics, psychology, design, humanities, and beyond. Their mission is to shape an emerging new field of study at the intersection of sonic interaction design and immersive media, embracing an archipelago of existing research spread across different audio communities, and to raise awareness among VR communities, researchers, and practitioners of the importance of sonic elements when designing immersive environments.

    Indices for Virtual Service Agent Design: Cross-Cultural Evaluation

    While localization helps to create websites and mobile apps for specific target markets, not as much attention has been devoted to the area of affective virtual service agents. The situation is changing due to advances in affective computing and artificial intelligence. Virtual service agents have the potential to change the way people interact with information technology by shifting the method of control from physical gestures to natural-language conversation. By having human-like characteristics, agents can turn an impersonal service experience into a personal one and make an emotional impression on the user or customer. Such a message can take different forms and interpretations, depending on national culture and other context. Qualitative data from interviews with experts were used to identify differences in how virtual service agents are viewed in Sweden and Japan. A survey was then used to quantify the differences using a sample of participants who were asked to rate the likability and trustworthiness of agents of varying ethnicity, gender, and age. The impact of visible attributes on trustworthiness and likability is analysed using a familiar example: virtual service agents at an airport. It was found that each group favours its familiar communication style, and recommendations on virtual service agent localization are given.