
    The Effects of Visuomotor Calibration to the Perceived Space and Body, through Embodiment in Immersive Virtual Reality

    We easily adapt to changes in the environment that involve cross-sensory discrepancies (e.g., between vision and proprioception). Adaptation can lead to changes in motor commands so that the experienced sensory consequences are appropriate for the new environment (e.g., we program a movement differently while wearing prisms that shift our visual space). In addition to these motor changes, perceptual judgments of space can also be altered (e.g., how far can I reach with my arm?). However, in previous studies that assessed perceptual judgments of space after visuomotor adaptation, the manipulation was always a planar spatial shift, and changes in body perception could not be assessed directly. In this study, we investigated the effects of velocity-dependent (spatiotemporal) and spatial scaling distortions of arm movements on space and body perception, taking advantage of immersive virtual reality. Exploiting the perceptual illusion of embodiment in an entire virtual body, we endowed subjects with new spatiotemporal or spatial 3D mappings between motor commands and their sensory consequences. The results imply that spatiotemporal manipulations that make movements 2 and 4 times faster can significantly change participants’ proprioceptive judgments of a virtual object’s size without affecting perceived body ownership, although they did affect the sense of agency over the movements. Equivalent spatial manipulations of 11 and 22 degrees of angular offset also had a significant effect on the perceived size of the virtual object; however, the mismatched information affected neither the sense of body ownership nor agency. We conclude that adaptation to spatial and spatiotemporal distortions can similarly change our perception of space, although spatiotemporal distortions are more easily detected.
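    As a rough illustration of the two kinds of manipulation described above, the virtual hand can be driven by a transformed copy of the tracked real-hand position. The sketch below is only a minimal, assumed implementation of such a remapping (the function names, the choice of reference point, and the rotation axis are ours); only the gain values (2x, 4x) and angular offsets (11 and 22 degrees) come from the abstract.

```python
import numpy as np

def spatiotemporal_distortion(real_positions, gain=2.0):
    # Velocity-dependent (spatiotemporal) distortion: displacements of the
    # real hand from its starting position are amplified, so the virtual
    # hand moves `gain` times faster. The abstract uses gains of 2 and 4.
    real_positions = np.asarray(real_positions, dtype=float)
    start = real_positions[0]
    return start + gain * (real_positions - start)

def spatial_distortion(real_position, offset_deg=11.0):
    # Spatial distortion: the virtual hand is rotated about the vertical
    # axis by a fixed angular offset relative to the real hand. The
    # abstract uses offsets of 11 and 22 degrees.
    theta = np.radians(offset_deg)
    rot_y = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
                      [ 0.0,           1.0, 0.0          ],
                      [-np.sin(theta), 0.0, np.cos(theta)]])
    return rot_y @ np.asarray(real_position, dtype=float)
```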

    Measuring Cognitive Conflict in Virtual Reality with Feedback-Related Negativity

    As virtual reality (VR) emerges as a mainstream platform, designers have started to experiment with new interaction techniques to enhance the user experience. This is a challenging task because designers not only strive to provide designs with good performance but must also take care not to disrupt users' immersive experience. There is a dire need for a new evaluation tool that extends beyond traditional quantitative measurements to assist designers in the design process. We propose an EEG-based experiment framework that evaluates interaction techniques in VR by measuring intentionally elicited cognitive conflict. Through the analysis of the feedback-related negativity (FRN), as well as other quantitative measurements, this framework allows designers to evaluate the effect of the variables of interest. We studied the framework by applying it to the fundamental task of 3D object selection using direct 3D input, i.e., a tracked hand in VR. The cognitive conflict is intentionally elicited by manipulating the selection radius of the target object. Our first behavioral experiment validated the framework, showing conflict-induced behavioral adjustments in line with those reported in classical psychology experimental paradigms. Our second, EEG-based experiment examined the effect of the appearance of virtual hands. We found that the amplitude of the FRN correlates with the level of realism of the virtual hands, which concurs with the Uncanny Valley theory.
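    For readers unfamiliar with how the FRN is quantified, a common approach (not necessarily the exact pipeline used in this paper) is to epoch the EEG around feedback onset and compare mean amplitudes between conflict and no-conflict trials at a fronto-central site. The channel, time window, and difference-wave measure below are typical choices from the ERP literature, assumed here purely for illustration.

```python
import numpy as np

def mean_window_amplitude(epochs, times, window=(0.23, 0.30)):
    # Mean feedback-locked amplitude in a post-feedback window at one
    # fronto-central channel (e.g. FCz). `epochs` is (n_trials, n_samples),
    # `times` is in seconds with 0 at feedback onset.
    mask = (times >= window[0]) & (times <= window[1])
    return epochs[:, mask].mean()

def frn_amplitude(conflict_epochs, congruent_epochs, times):
    # FRN quantified as the conflict-minus-congruent difference in mean
    # amplitude: more negative values indicate a stronger conflict response.
    return (mean_window_amplitude(conflict_epochs, times)
            - mean_window_amplitude(congruent_epochs, times))
```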

    The Bodily Illusion in Adverse Conditions: Virtual Arm Ownership During Visuomotor Mismatch

    Classically, body ownership illusions are triggered by cross-modal synchronous stimulations and hampered by multisensory inconsistencies. Nonetheless, the boundaries of such illusions have been proven to be highly plastic. In this immersive virtual reality study, we explored whether it is possible to induce a sense of body ownership over a virtual body part during visuomotor inconsistencies, with or without the aid of concomitant visuo-tactile stimulations. From a first-person perspective, participants watched a virtual tube moving or an avatar’s arm moving, with or without concomitant synchronous visuo-tactile stimulations on their hand. Three different virtual arm/tube speeds were also investigated, while all participants kept their real arms still. The subjective reports show that synchronous visuo-tactile stimulations effectively counteract the effect of visuomotor inconsistencies, but that at slow arm movements a feeling of body ownership can be successfully induced even without concomitant multisensory correspondences. Possible therapeutic implications of these findings are discussed.

    A Virtual Reality Application of the Rubber Hand Illusion Induced by Ultrasonic Mid-Air Haptic Stimulation

    Ultrasonic mid-air haptic technologies, which provide haptic feedback through airwaves produced using ultrasound, could be employed to investigate the sense of body ownership and immersion in virtual reality (VR) by inducing the virtual hand illusion (VHI). Ultrasonic mid-air haptic perception has so far been investigated only for glabrous (hairless) skin, which has higher tactile sensitivity than hairy skin. In contrast, the VHI paradigm typically targets hairy skin without comparisons to glabrous skin. The aim of this article was to investigate illusory body ownership, the applicability of ultrasonic mid-air haptics, and perceived immersion in VR using the VHI. Fifty participants viewed a virtual hand being stroked by a feather synchronously and asynchronously with ultrasonic stimulation applied to the glabrous skin on the palmar surface and the hairy skin on the dorsal surface of their hands. Questionnaire responses revealed that synchronous stimulation induced a stronger VHI than asynchronous stimulation. In synchronous conditions, the VHI was stronger for palmar stimulation than for dorsal stimulation. The ultrasonic stimulation was also perceived as more intense on the palmar surface than on the dorsal surface. Perceived immersion was not related to illusory body ownership per se but was enhanced by the provision of synchronous stimulation.

    Examining the Effects of Altered Avatars on Perception-Action in Virtual Reality

    In virtual reality, avatars are animated graphical representations of a person embedded in a virtual environment. Previous research has illustrated the benefits of having an avatar when perceiving aspects of virtual reality. We studied the effect that a non-faithful, or altered, avatar had on the perception of one's action capabilities in VR. In Experiment 1, one group of participants acted with a normal, or faithful, avatar and the other group of participants used an avatar with an extended arm, all in virtual reality. In Experiment 2, the same methodology and procedure were used as in Experiment 1, except that only the calibration phase occurred in VR, while the remaining reaches were completed in the real world. All participants performed reaches to various distances. The results of these studies show that calibration to altered dimensions of avatars is possible after receiving feedback while acting with the altered avatar. Further, calibration occurred more quickly when feedback was initially used to transition from a normal avatar to an altered avatar than when later transitioning from the altered avatar arm back to the normal avatar arm without feedback. The implications of these findings for training in virtual reality simulations and transfer back to the real world are also discussed.
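    One way to think about the calibration effect reported above is as an error-driven update of the perceived action boundary. The following sketch is purely an illustrative model, not the authors' analysis: the variable names, learning rates, and distances are assumptions; only the qualitative pattern (faster recalibration with feedback than without) reflects the reported findings.

```python
def recalibrate_reach(estimate, avatar_reach, trials=20, feedback=True):
    # Illustrative error-driven recalibration of perceived maximum reach
    # (in metres). With feedback the estimate drifts quickly toward the
    # reach actually afforded by the (normal or extended) avatar arm;
    # without feedback the drift is assumed to be much slower, consistent
    # with the slower transition reported above. Rates are made up.
    rate = 0.3 if feedback else 0.05
    history = []
    for _ in range(trials):
        estimate += rate * (avatar_reach - estimate)
        history.append(estimate)
    return history

# Example: calibrate to an extended avatar arm with feedback, then switch
# back to the normal arm without feedback.
extended = recalibrate_reach(estimate=0.70, avatar_reach=0.95, feedback=True)
normal = recalibrate_reach(estimate=extended[-1], avatar_reach=0.70, feedback=False)
```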

    Embodiment Sensitivity to Movement Distortion and Perspective Taking in Virtual Reality

    Despite recent technological improvements in immersive technologies, Virtual Reality suffers from severe intrinsic limitations, in particular the immateriality of the visible 3D environment. Typically, any simulation and manipulation in a cluttered environment would ideally require providing feedback of collisions to every body part (arms, legs, trunk, etc.), and not only to the hands, as was originally explored with haptic feedback. This thesis addresses these limitations by relying on a cross-modal perception and cognitive approach instead of haptic or force feedback. We base our design on scientific knowledge of bodily self-consciousness and embodiment. It is known that the instantaneous experience of embodiment emerges from the coherent multisensory integration of bodily signals taking place in the brain, and that altering this mechanism can temporarily change how one perceives properties of one's own body. This mechanism is at play during a VR simulation, and this thesis explores new avenues of interaction design based on these fundamental scientific findings about the embodied self. In particular, we explore the use of a third-person perspective (3PP) instead of permanently offering the traditional first-person perspective (1PP), and we manipulate the user-avatar motor mapping to achieve a broader range of interactions while maintaining embodiment. We are guided by two principles: to explore the extent to which we can enhance VR interaction through the manipulation of bodily aspects, and to identify the extent to which a given manipulation affects the embodiment of a virtual body. Our results provide new evidence supporting strong embodiment of a virtual body even when viewed from 3PP, and in particular that voluntarily alternating point of view between 1PP and 3PP is not detrimental to the experience of ownership over the virtual body. Moreover, detailed analysis of movement quality shows highly similar reaching behavior in both perspective conditions, with only obvious advantages or disadvantages of each perspective depending on the situation (e.g., occlusion of the target by the body in 3PP, limited field of view in 1PP). We also show that subjects are insensitive to visuo-proprioceptive movement distortions when the nature of the distortion is not made explicit, and that subjects are biased toward self-attributing distorted movements that make the task easier.

    Design, Development and Testing of an Immersive Virtual Reality Environment for Learning

    This thesis presents the results of the research activities carried out during the PhD course in Philosophy, Epistemology and History of Culture at the University of Cagliari. The doctorate is part of the "Innovative Doctorates with industrial characterization" programme (PON-RI, XXXIII cycle), aimed at promoting and strengthening higher education in line with the needs of the national production system and with the National Strategy of Intelligent Specialization 2014-2020 approved by the European Commission. The research project mainly concerned the study of perception-action cycles in immersive virtual reality (VR) environments, in order to understand how to better design virtual environments for learning and teaching, testing the collaboration between psychology, education and computer science. During the doctoral period, through close collaboration between the research group of the Department of Pedagogy, Psychology and Philosophy of the University of Cagliari, the Lawrence Technological University of Detroit and the software house "Infora", it was possible to design and develop a laboratory for immersive virtual reality called VirtuaLab (VLab), with the aim of analysing the impact on procedural learning of different combinations of structural and functional characteristics and implementable scenarios. The thesis is structured in three parts. The first part examines the theoretical aspects related to the use of virtual reality in areas ranging from learning to the health professions, including entertainment and industrial training: the first chapter is dedicated to defining the concept of virtual reality and describing in detail the different types of virtual reality available to the broad public. The following chapter contains a reasoned bibliographic review of the studies that have examined the different applications and objectives of virtual reality over time, with regard to virtual environments dedicated to learning and professional training. The second part introduces the VirtuaLab software (VLab) developed for the thesis work and its main characteristics. The third part presents two experiments carried out with the designed software. The first experiment, conducted with an initial version of the VLab software, Virtual Kitchen 1.0, focused on the effects of different approaches to the presentation and execution of a procedure (implicit vs. explicit feedback), in order to evaluate their effectiveness for procedural learning in immersive virtual reality environments. The next chapter describes the main changes made to the VLab environment on the basis of the analysis of the user experience of the participants in the VK 1.0 experiment; a second experiment, carried out with the modified software (Virtual Kitchen 2.0, or VK 2.0), then verified the effects of two different tutorial modes (textual vs. visual) on performance and procedural learning in immersive VR environments. At the end of the thesis, the results of the experiments are presented and possible future developments of the VirtuaLab software are discussed.