
    Investigating Embodied Interaction in Near-Field Perception-Action Re-Calibration on Performance in Immersive Virtual Environments

    Immersive Virtual Environments (IVEs) are becoming more accessible and more widely utilized for training. Previous research has shown that the matching of visual and proprioceptive information is important for calibration. Many state-of-the-art Virtual Reality (VR) systems are created for training users in tasks that require accurate manual dexterity. Unfortunately, these systems can suffer from technical limitations that may force de-coupling of visual and proprioceptive information due to interference, latency, and tracking error. It has also been suggested that closed-loop feedback of travel and locomotion in an IVE can overcome compression of visually perceived depth at medium-field distances in the virtual world [33, 47]. Very few experiments have examined the carryover effects of multi-sensory feedback in IVEs during manually dexterous 3D user interaction in overcoming distortions in near-field or interaction-space depth perception, or the relative importance of visual and proprioceptive information in calibrating users' distance judgments.
In the first part of this work, we examined the recalibration of movements when the visually reached distance is scaled differently than the physically reached distance. We present an empirical evaluation of how visually distorted movements affect users' reach to near-field targets in an IVE. In a between-subjects design, participants provided manual reaching distance estimates during three sessions: a baseline measure without feedback (open-loop distance estimation), a calibration session with visual and proprioceptive feedback (closed-loop distance estimation), and a post-interaction session without feedback (open-loop distance estimation). Subjects were randomly assigned to one of three visual feedback conditions in the closed-loop session, during which they reached to the target while holding a tracked stylus: i) a Minus condition (-20% gain), in which the visual stylus appeared at 80% of the distance of the physical stylus; ii) a Neutral condition (0% or no gain), in which the visual stylus was co-located with the physical stylus; and iii) a Plus condition (+20% gain), in which the visual stylus appeared at 120% of the distance of the physical stylus. In all conditions there was evidence of visuo-motor calibration, in that users' accuracy in physically reaching to the target locations improved over trials. Scaled visual feedback was shown to calibrate distance judgments within an IVE, with estimates being farthest in the post-interaction session after calibrating to visual information appearing nearer (Minus condition), and nearest after calibrating to visual information appearing farther (Plus condition). The same pattern was observed during closed-loop physical reach responses: participants generally tended to reach physically farther in the Minus condition and closer in the Plus condition to the perceived location of the targets, as compared to the Neutral condition, in which participants' physical reach to the perceived location of the target was more accurate. We then characterized the properties of human reach motion in the presence or absence of visuo-haptic feedback in real environments and IVEs within a participant's maximum arm reach. Our goal was to understand how physical reaching actions to the perceived location of targets, in the presence or absence of visuo-haptic feedback, differ between real and virtual viewing conditions.
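As a minimal sketch of how such a gain manipulation can be applied (the abstract does not describe the system at the code level, so all names and the choice of reach origin here are hypothetical), the rendered stylus can be displaced along the reach axis so that it appears at (1 + gain) times the physical reach distance:

```python
import numpy as np

# Hypothetical sketch of the gain manipulation described above: the visual
# stylus is scaled about a reach origin (e.g. the shoulder) so it appears
# at (1 + gain) times the physical reach distance.
GAINS = {"minus": -0.20, "neutral": 0.0, "plus": +0.20}

def scaled_stylus_position(physical_pos, origin, gain):
    """Scale the stylus position about the reach origin.

    gain = -0.2 renders the stylus at 80% of the physical distance,
    gain = +0.2 at 120%, and gain = 0.0 leaves it co-located.
    """
    reach_vector = physical_pos - origin
    return origin + (1.0 + gain) * reach_vector

# Example: a 0.5 m physical reach straight ahead under the Plus condition
origin = np.array([0.0, 0.0, 0.0])
physical = np.array([0.0, 0.0, 0.5])
print(scaled_stylus_position(physical, origin, GAINS["plus"]))  # -> 0.6 m
```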
Typically, participants reach to the perceived location of objects in the 3D environment to perform selection and manipulation actions during 3D interaction in applications such as virtual assembly or rehabilitation. In these tasks, participants typically have distorted perceptual information in the IVE as compared to the real world, in part due to technological limitations such as a restricted visual field of view, resolution, latency, and jitter. In an empirical evaluation, we asked the following questions: i) how do the perceptual differences between the virtual and real world affect our ability to accurately reach to the locations of 3D objects, and ii) how do the motor responses of participants differ between the presence and absence of visual and haptic feedback? We examined factors such as the velocity and distance of physical reaching behavior between the real world and the IVE, both in the presence and absence of visuo-haptic information. The results suggest that physical reach responses vary systematically between real and virtual environments, especially in situations involving the presence or absence of visuo-haptic feedback. The implications of our study provide a methodological framework for the analysis of reaching motions for selection and manipulation with novel 3D interaction metaphors, and for characterizing visuo-haptic versus non-visuo-haptic physical reaches in virtual and real-world situations.
While research has demonstrated that self-avatars can enhance one's sense of presence and improve distance perception, the effects of self-avatar fidelity on near-field distance estimation have yet to be investigated. Thus, we investigated the effect of the visual fidelity of the self-avatar on the user's depth judgments, reach boundary perception, and properties of physical reach motion. Previous research has demonstrated that a self-avatar representation of the user enhances the sense of presence [37], and even a static notion of an avatar can improve distance estimation at far distances [59, 48]. In this study, performance with a virtual avatar was also compared to real-world performance. Three levels of fidelity were tested: 1) an immersive self-avatar with realistic limbs, 2) a low-fidelity self-avatar showing only joint locations, and 3) an end-effector only. There were four primary hypotheses. First, we hypothesized that the mere existence of a self-avatar or end-effector position would calibrate users' interaction-space depth perception in an IVE; therefore, participants' distance judgments would improve after the calibration phase regardless of the self-avatar's visual fidelity. Second, the magnitude of the changes from pre-test to post-test would differ significantly based on the visual detail of the self-avatar presented to the participants (self-avatar vs. low-fidelity self-avatar and end-effector). Third, we predicted that distance estimation accuracy would be highest in the immersive self-avatar condition and lowest in the end-effector condition. Fourth, we predicted that the properties of physical reach responses would vary systematically between the different visual fidelity conditions. The results suggest that reach estimations become more accurate as the visual fidelity of the avatar increases, with accuracy for high-fidelity avatars approaching real-world performance, as compared to the low-fidelity and end-effector conditions. There was also an effect of phase: reach estimates became more accurate after receiving feedback in the calibration phase.
Overall, in all conditions, reach estimations became more accurate after receiving feedback during the calibration phase. Lastly, we examined factors such as the path length, time to complete the task, and average velocity and acceleration of the physical reach motion, and compared all IVE conditions with the real world. The results suggest that physical reach responses vary systematically between the VR viewing conditions and the real world.
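The reach-motion measures named above (path length, completion time, average velocity and acceleration) can be derived from a time-stamped tracker trajectory. The sketch below is illustrative only: it assumes the study's data can be represented as sampled 3D positions, which is not specified in the abstract.

```python
import numpy as np

def reach_metrics(positions, timestamps):
    """Compute basic reach-motion metrics from a tracked trajectory.

    positions:  (N, 3) array of stylus/hand positions in metres
    timestamps: (N,) array of sample times in seconds
    """
    positions = np.asarray(positions, dtype=float)
    timestamps = np.asarray(timestamps, dtype=float)
    steps = np.diff(positions, axis=0)           # per-sample displacement
    dt = np.diff(timestamps)                     # per-sample duration
    path_length = np.linalg.norm(steps, axis=1).sum()
    duration = timestamps[-1] - timestamps[0]
    speeds = np.linalg.norm(steps, axis=1) / dt  # instantaneous speed
    accels = np.diff(speeds) / dt[1:]            # finite-difference acceleration
    return {
        "path_length_m": path_length,
        "duration_s": duration,
        "mean_velocity_mps": speeds.mean(),
        "mean_acceleration_mps2": np.abs(accels).mean(),
    }
```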

    Exploring Visuo-haptic Feedback Congruency in Virtual Reality

    Visuo-haptic feedback is an important aspect of virtual reality experiences, and several previous works have investigated its benefits and effects. A key aspect of this domain is the congruency of crossmodal feedback and how it affects users. However, an important sub-domain that has received surprisingly little focus is visuo-haptic congruency in an interactive multisensory setting. This is especially important given that multisensory integration is crucial to player immersion in the context of virtual reality video games. In this paper, we attempt to address this lack of research. To achieve this, a total of 50 participants played a virtual reality racing game with either congruent or incongruent visuo-haptic feedback. Specifically, these users engaged in a driving simulator with physical gear-shift interfaces, with one treatment group using a stick-shift gearbox and the other using a paddle-shift setup. The virtual car they drove (a Formula Rookie race car) was only visually congruent with the stick-shift setup. A motion simulator was also used to provide synchronous vestibular cues and diversify the range of modalities in multisensory integration. The racing simulator used was Project CARS 2, one of the world's most popular commercial racing simulators. Our findings showed no significant differences between the groups in measures of user presence or in-game performance, counter to previous work on visuo-haptic congruency. However, the Self-Evaluation of Performance subscale of the PQ was notably close to significance. Our results can be used to better inform games and simulation developers, especially those targeting virtual reality.
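As a hedged illustration of how such a between-groups comparison might be run (the study's actual analysis pipeline is not described; the data and variable names below are hypothetical), an independent-samples t-test on a PQ subscale score could look like this:

```python
from scipy import stats

# Hypothetical PQ subscale scores for the two treatment groups
# (25 participants per group in the actual study; shortened here).
congruent_scores = [5.1, 6.0, 4.8, 5.5, 5.9]
incongruent_scores = [4.6, 5.2, 4.9, 4.4, 5.0]

# Welch's t-test avoids assuming equal variances between the groups.
t_stat, p_value = stats.ttest_ind(congruent_scores, incongruent_scores,
                                  equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # compare p against alpha = .05
```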

    Substitutional reality: using the physical environment to design virtual reality experiences

    Experiencing Virtual Reality in domestic and other uncontrolled settings is challenging due to the presence of physical objects and furniture that are not usually defined in the Virtual Environment. To address this challenge, we explore the concept of Substitutional Reality in the context of Virtual Reality: a class of Virtual Environments where every physical object surrounding a user is paired, with some degree of discrepancy, with a virtual counterpart. We present a model of potential substitutions and validate it in two user studies. In the first study we investigated factors that affect participants' suspension of disbelief and ease of use. We systematically altered the virtual representation of a physical object and recorded responses from 20 participants. The second study investigated users' levels of engagement as the physical proxy for a virtual object varied. From the results, we derive a set of guidelines for the design of future Substitutional Reality experiences.

    Haptic Interface to Interact with a Virtual Vehicle Cockpit

    This paper presents a device for interacting with the virtual cockpit of a vehicle with tactile feedback while driving. The design of this interface takes into account virtual reality criteria so as not to induce cognitive overload through complexity of use or the invasiveness of the device. Special attention is paid to making interaction with the elements of the cockpit as accurate as possible while ensuring high-quality tactile feedback to the user and good perception of the elements of the cockpit, and to minimizing task execution errors, disturbance to the driver while driving, and device latency. User experiments show the effectiveness of our device through visuo-haptic feedback.

    The learning curves of a validated virtual reality hip arthroscopy simulator

    Introduction: Decreases in trainees' working hours, coupled with evidence of worse outcomes when hip arthroscopies are performed by inexperienced surgeons, mandate an additional means of training. Though virtual reality simulation has been adopted by other surgical specialities, its slow uptake in arthroscopic training is due to a lack of evidence as to its benefits. These benefits can be demonstrated through the learning curves associated with simulator training, with practice reflected in increases in validated performance metrics. Methods: Twenty-five medical students with no previous experience of hip arthroscopy completed seven weekly simulated arthroscopies of a healthy virtual hip joint using a 70° arthroscope in the supine position. Twelve targets were visualised within the central compartment: six via the anterior portal, three via the anterolateral portal, and three via the posterolateral portal. Task duration, number of collisions (bone and soft tissue), and distance travelled by the arthroscope were measured by the simulator for every session of each student. Results: Learning curves were demonstrated by the students, with improvements in time taken, number of collisions (bone and soft tissue), collision length, and efficiency of movement (all p < 0.01). Improvements in time taken, efficiency of movement, and number of collisions with soft tissue were first seen in session 3, and improvements in all other parameters were seen in session 4. No differences were found after session 5 for time taken and length of soft-tissue collision. No differences in number of collisions (bone and soft tissue), length of collisions with bone, or efficiency of movement were found after session 6. Conclusions: The results of this study demonstrate learning curves for a hip arthroscopy simulator, with significant improvements seen after three sessions. All performance metrics were found to improve, demonstrating sufficient visuo-haptic consistency within the virtual environment to enable individuals to develop basic arthroscopic skills.
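As an illustration of how such learning curves can be summarized (the simulator's actual logging format is not described, so the record layout below is hypothetical), per-session averages of each metric can be traced across the seven weekly sessions:

```python
import numpy as np

# Hypothetical records mirroring the metrics the simulator reports.
# Each record: (student_id, session, time_s, collisions, path_length_mm)
records = [
    (1, 1, 412.0, 35, 5200.0),
    (1, 2, 355.0, 28, 4700.0),
    # ... sessions 3-7 for each of the 25 students
]

def session_means(records, metric_index):
    """Average a metric over students for each session to trace a learning curve."""
    by_session = {}
    for rec in records:
        by_session.setdefault(rec[1], []).append(rec[metric_index])
    return {s: float(np.mean(v)) for s, v in sorted(by_session.items())}

print(session_means(records, metric_index=2))  # mean task duration per session
```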

    Bimanual Motor Strategies and Handedness Role During Human-Exoskeleton Haptic Interaction

    Bimanual object manipulation involves multiple visuo-haptic sensory feedback signals arising from the interaction with the environment, which are managed by the central nervous system and translated into motor commands. The kinematic strategies that occur during bimanually coupled tasks are still a matter of scientific debate despite modern advances in haptics and robotics. Current technologies may have the potential to provide realistic scenarios involving the entire upper limbs during multi-joint movements, but they are not yet exploited to their full potential. The present study explores how the hands dynamically interact when manipulating a shared object, through the use of two impedance-controlled exoskeletons programmed to simulate bimanually coupled manipulation of virtual objects. We enrolled twenty-six participants (two groups: right-handed and left-handed) who were asked to use both hands to grab simulated objects across the robot workspace and place them in specific locations. The virtual objects were rendered with different dynamic properties and textures, influencing the manipulation strategies needed to complete the tasks. Results revealed that the roles of the hands are related to the movement direction, the haptic features, and handedness preference. The outcomes suggest that haptic feedback affects bimanual strategies depending on the movement direction; however, left-handers show better control of the force applied between the two hands, probably due to environmental pressure toward right-handed manipulation.
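Impedance control of the kind mentioned here is commonly rendered as a virtual spring-damper coupling the measured end-effector state to a desired state; a minimal sketch follows (the exoskeletons' actual controller and gains are not given in the abstract, so the values below are assumptions):

```python
import numpy as np

def impedance_force(x, v, x_des, v_des, stiffness, damping):
    """Classic impedance law: F = K (x_des - x) + B (v_des - v).

    Renders a virtual spring-damper between the hand and the simulated
    object, so stiffer virtual objects feel correspondingly harder to
    displace. Positions and velocities are 3D vectors.
    """
    K = np.diag(stiffness)  # per-axis stiffness (N/m)
    B = np.diag(damping)    # per-axis damping (N*s/m)
    return K @ (x_des - x) + B @ (v_des - v)

# Example: hand 2 cm behind the object's attachment point, at rest
f = impedance_force(
    x=np.array([0.0, 0.0, 0.48]), v=np.zeros(3),
    x_des=np.array([0.0, 0.0, 0.50]), v_des=np.zeros(3),
    stiffness=[800.0, 800.0, 800.0], damping=[40.0, 40.0, 40.0],
)
print(f)  # ~16 N pulling the hand toward the attachment point, along +z
```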

    Reducing Visuospatial Pseudoneglect in Healthy Subjects by Active Video Gaming

    The pseudoneglect phenomenon refers to a condition in which healthy subjects tend to perceive the left side of exactly bisected lines as slightly longer than the right one. However, behavioural data have shown that athletes practising an open-skill sport display less pseudoneglect than the general population. Given that so-called exergames (also known as active video games) are platforms designed to closely mimic sport activity, this work investigates whether and how a one-week training period with an open-skill exergame sport can produce a similar decrease in pseudoneglect. Fifteen healthy participants (non-athletes) responded to a visuospatial attention task and a control memory task in basal conditions (t0: Pre-game) and after a short period (one week, one hour/day) of tennis exergaming (t1: Post-game). In the Post-game condition, subjects from this experimental group (ExerGame group: EG) reduced leftward space overestimation and made significantly fewer leftward errors compared to the Pre-game condition. Additionally, two other experimental groups were employed: one evaluated under the same conditions as the main experiment but using a non-exergame (Non-Exergame group: NEG), and the other without any video game stimulus (Sedentary group: SE). Our findings suggest that daily training with a tennis exergame can improve visuospatial attention isotropy by reducing leftward space overestimation, whereas non-exergaming and sedentary activity do not modify subjects' performance.
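The leftward overestimation reported here is typically quantified as the signed deviation of a subject's bisection mark from the true midpoint of the line; the abstract does not give the study's scoring formula, so the convention below is a common hypothetical choice:

```python
def bisection_error(mark_x, line_left_x, line_right_x):
    """Signed bisection error as a percentage of half the line length.

    Negative values mean the mark fell left of the true midpoint,
    the leftward bias characteristic of pseudoneglect.
    """
    midpoint = (line_left_x + line_right_x) / 2.0
    half_length = (line_right_x - line_left_x) / 2.0
    return 100.0 * (mark_x - midpoint) / half_length

# Example: a mark 4 mm left of centre on a 200 mm line -> -4% bias
print(bisection_error(mark_x=96.0, line_left_x=0.0, line_right_x=200.0))
```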