13,022 research outputs found

    From presence to consciousness through virtual reality

    Immersive virtual environments can break the deep, everyday connection between where our senses tell us we are and where we are actually located and whom we are with. The concept of 'presence' refers to the phenomenon of behaving and feeling as if we are in the virtual world created by computer displays. In this article, we argue that presence is worthy of study by neuroscientists, and that it might aid the study of perception and consciousness.

    I know it is not real (and that matters): Media awareness vs. presence shape the VR experience

    Inspired by the widely recognized idea that in VR/XR, not only presence but also encountered plausibility is relevant (Slater, 2009), we propose a general psychological parallel processing account to explain users' VR and XR experience. The model adopts a broad psychological view by building on interdisciplinary literature on the dualistic nature of perceiving and experiencing (mediated) representations. It proposes that perceptual sensations like presence are paralleled by users' belief that "this is not really happening", which we refer to as media awareness. We review the developmental underpinnings of basic media awareness, and argue that it is triggered in users' conscious exposure to VR/XR. During exposure the salience of media awareness can vary dynamically due to factors like encountered sensory and semantic (in)consistencies. Our account sketches media awareness and presence as two parallel processes that together define a situation as a media exposure situation. We also review potential joint effects on subsequent psychological and behavioral responses that characterize the user experience in VR/XR. We conclude the article with a programmatic outlook on testable assumptions and open questions for future research.

    From extinction learning to anxiety treatment: mind the gap

    Laboratory models of extinction learning in animals and humans have the potential to illuminate methods for improving clinical treatment of fear-based clinical disorders. However, such translational research often neglects important differences between threat responses in animals and fear learning in humans, particularly as it relates to the treatment of clinical disorders. Specifically, the conscious experience of fear and anxiety, along with the capacity to deliberately engage top-down cognitive processes to modulate that experience, involves distinct brain circuitry and is measured and manipulated using different methods than typically used in laboratory research. This paper will identify how translational research that investigates methods of enhancing extinction learning can more effectively model such elements of human fear learning, and how doing so will enhance the relevance of this research to the treatment of fear-based psychological disorders.

    Multimodality in VR: A Survey

    Virtual reality has the potential to change the way we create and consume content in our everyday life. Entertainment, training, design and manufacturing, communication, or advertising are all applications that already benefit from this new medium reaching consumer level. VR is inherently different from traditional media: it offers a more immersive experience, and has the ability to elicit a sense of presence through the place and plausibility illusions. It also gives the user unprecedented capabilities to explore their environment, in contrast with traditional media. In VR, like in the real world, users integrate the multimodal sensory information they receive to create a unified perception of the virtual world. Therefore, the sensory cues that are available in a virtual environment can be leveraged to enhance the final experience. This may include increasing realism, or the sense of presence; predicting or guiding the attention of the user through the experience; or increasing their performance if the experience involves the completion of certain tasks. In this state-of-the-art report, we survey the body of work addressing multimodality in virtual reality, and its role and benefits in the final user experience. The works reviewed here thus encompass several fields of research, including computer graphics, human-computer interaction, and psychology and perception. Additionally, we give an overview of different applications that leverage multimodal input in areas such as medicine, training and education, or entertainment; we include works in which the integration of multiple sensory information yields significant improvements, demonstrating how multimodality can play a fundamental role in the way VR systems are designed, and VR experiences created and consumed.

    Multimodality in VR: A survey

    Virtual reality (VR) is rapidly growing, with the potential to change the way we create and consume content. In VR, users integrate the multimodal sensory information they receive to create a unified perception of the virtual world. In this survey, we review the body of work addressing multimodality in VR, and its role and benefits in user experience, together with different applications that leverage multimodality in many disciplines. These works thus encompass several fields of research, and demonstrate that multimodality plays a fundamental role in VR: enhancing the experience, improving overall performance, and yielding unprecedented abilities in skill and knowledge transfer.

    Expectations and Beliefs in Immersive Virtual Reality Environments: Managing of Body Perception

    Real and Perceived Feet Orientation Under Fatiguing and Non-Fatiguing Conditions in an Immersive Virtual Reality Environment. ABSTRACT: Lower-limb position sense is a complex yet poorly understood mechanism, influenced by many factors. Hence, we investigated the position sense of the lower limbs through feet orientation with the use of immersive virtual reality (IVR). Participants had to indicate how they perceived the real orientation of their feet by orienting a virtual representation of the feet shown in an IVR scenario. We calculated the angle between the two virtual feet (α-VR) after a high-knee step-in-place task, while simultaneously recording the real angle between the two feet (α-R) (T1). We thus assessed whether acute fatigue impacted position sense: the same procedure was repeated after inducing muscle fatigue (T2) and 10 minutes after T2 (T3). Finally, we also recorded the time needed to confirm the perceived position before and after the acute fatigue protocol. Thirty healthy adults (27.5 ± 3.8 years; 57% female, 43% male) were immersed in an IVR scenario with a representation of two feet. We found a mean difference between α-VR and α-R of 20.89° [95% CI: 14.67°, 27.10°] at T1, 16.76° [9.57°, 23.94°] at T2, and 16.34° [10.00°, 22.68°] at T3. Participants spent 12.59, 17.50, and 17.95 seconds confirming the perceived position of their feet at T1, T2, and T3, respectively. Participants indicated their feet as pointing forward and parallel even though they were divergent, showing a mismatch in the perceived position of the feet. Fatigue did not appear to affect position sense, but it delayed the time needed to accomplish the task.

    The Effect of Context on Eye-Height Estimation in Immersive Virtual Reality: a Cross-Sectional Study. ABSTRACT: Eye-height spatial perception provides a reference for scaling the surrounding environment; it results from the integration of visual and postural information. When these stimuli are discordant, the perceived spatial parameters are distorted. Previous studies in immersive virtual reality (IVR) showed that spatial perception is influenced by the visual context of the environment. Hence, this study explored how manipulating the context in IVR affects individuals' eye-height estimation. Two groups of twenty participants each were immersed in two different IVR environments: a closed room (Wall, W) and an open field (No Wall, NW). Under these two conditions, participants had to adjust their virtual perspective to estimate their eye height. We calculated the perceived visual offset as the difference between virtual and real eye height, to assess whether the scenarios and the presence of virtual shoes (Feet, No Feet) influenced participants' estimates at three initial offsets (+100 cm, 0 cm, -100 cm). We found a mean difference between the visual offsets registered in trials that started at the +100 cm and 0 cm offsets (17.24 cm [8.78, 25.69]) and between the +100 cm and -100 cm offsets (22.35 cm [15.65, 29.05]). Furthermore, a noticeable mean difference was found between the visual offsets recorded in group W depending on the presence or absence of the virtual shoes (Feet vs. No Feet: -6.12 cm [-10.29, -1.95]). These findings show that different contexts influenced eye-height perception.

    Positive Expectations led to Motor Improvement: an Immersive Virtual Reality Pilot Study. ABSTRACT: This pilot study tested the feasibility of an experimental protocol that evaluated the effect of different positive expectations (verbal and visual-haptic) on anterior trunk flexion. Thirty-six participants were assigned to three groups (G0, G+, and G++) that received a sham manoeuvre while immersed in immersive virtual reality (IVR). In G0, the manoeuvre was paired with a neutral verbal statement. In G+ and G++, the manoeuvre was paired with a positive verbal statement, but only G++ received a visual-haptic illusion. The illusion consisted of lifting a movable tile placed in front of the participants and using its height to raise the floor level in virtual reality, so that participants experienced the perception of touching the floor through both tactile and virtual visual afference. The distance between the fingertips and the floor was measured before, immediately after, and 5 minutes after the different manoeuvres. A larger difference in anterior trunk flexion was found for G++ compared with the other groups, although it was significant only compared with G0. This result highlights the feasibility of the present protocol for future research on people with mobility limitations (e.g., low back pain or kinesiophobia) and the potential role of a visual-haptic illusion in modifying trunk-flexion performance.
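The mismatch reported in the feet-orientation study is the difference between the perceived (virtual) and real inter-feet angles. A minimal sketch of that computation, assuming feet orientations are available as 2-D direction vectors (the function names and sample vectors are illustrative, not taken from the study):

```python
import math

def foot_angle(left_dir, right_dir):
    """Unsigned angle in degrees between two 2-D foot direction vectors."""
    dot = left_dir[0] * right_dir[0] + left_dir[1] * right_dir[1]
    norm = math.hypot(*left_dir) * math.hypot(*right_dir)
    # Clamp to guard against floating-point drift outside acos's domain.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def orientation_mismatch(alpha_vr, alpha_r):
    """Perceived-minus-real inter-feet angle (the study's α-VR vs α-R)."""
    return alpha_vr - alpha_r

# Feet pointing straight ahead are parallel (0° apart); feet splayed
# 15° outward on each side are 30° apart.
s, c = math.sin(math.radians(15)), math.cos(math.radians(15))
alpha_vr = foot_angle((0.0, 1.0), (0.0, 1.0))  # perceived as parallel
alpha_r = foot_angle((-s, c), (s, c))          # actually divergent
mismatch = orientation_mismatch(alpha_vr, alpha_r)
```

A negative mismatch here corresponds to the paper's finding that participants perceived their divergent feet as parallel.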

    Virtual Distance Estimation in a CAVE

    Past studies have shown consistent underestimation of distances in virtual reality, though the exact causes remain unclear. Many virtual distance cues have been investigated, but past work has failed to account for the possible addition of cues from the physical environment. We describe two studies that assess users' performance and strategies when judging horizontal and vertical distances in a CAVE. Results indicate that users attempt to leverage cues from the physical environment when available and, if allowed, use a locomotion interface to move the virtual viewpoint to facilitate this.
    FUI, in the framework of the Callisto project
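The underestimation this abstract refers to is typically quantified as a signed error relative to the true distance. A minimal sketch under that convention (the trial values are illustrative, not data from the study):

```python
def estimation_error(judged, actual):
    """Signed distance-estimation error as a fraction of the true distance.
    Negative values indicate underestimation, the pattern reported in VR."""
    return (judged - actual) / actual

# Illustrative trials (metres): judged vs. actual target distance.
trials = [(3.2, 4.0), (1.8, 2.0), (4.5, 5.0)]
errors = [estimation_error(j, a) for j, a in trials]
mean_error = sum(errors) / len(errors)  # negative => overall underestimation
```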