    Scene-motion thresholds during head yaw for immersive virtual environments

    In order to better understand how scene motion is perceived in immersive virtual environments, we measured scene-motion thresholds under different conditions across three experiments. Thresholds were measured during quasi-sinusoidal head yaw, single left-to-right or right-to-left head yaw, different phases of head yaw, slow to fast head yaw, scene motion relative to head yaw, and two scene illumination levels. We found that across various conditions 1) thresholds are greater when the scene moves with head yaw (corresponding to a gain of 1.0), and 2) thresholds increase as head motion increases.
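
    A minimal sketch, in Python, of how a gain condition like those above relates injected scene motion to head motion; the function name and example values are illustrative, not the paper's:

        # Scene-motion "gain" condition: the scene yaws at a rate proportional
        # to the measured head-yaw rate. gain > 0 moves the scene with head
        # yaw; gain < 0 moves it against head yaw.
        def scene_yaw_rate(head_yaw_rate_deg_s: float, gain: float) -> float:
            return gain * head_yaw_rate_deg_s

        # Example: a head yawing at 50 deg/s under a gain of 0.05 has
        # 2.5 deg/s of scene motion injected in the same direction.
        print(scene_yaw_rate(50.0, 0.05))  # 2.5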

    Characterization of a hybrid tracking system

    Virtual reality (VR) is a completely immersive, computer-generated environment that allows for user interaction. A computer and associated tracking peripherals are used to generate a realistic scene and update it based on the position and orientation of the user. VR systems are unique in their ability to provide the user with a 360-degree field of view in all orientations. Since it originated over 30 years ago, VR has suffered from a problem known as lag: the system's inability to keep up with the user's actions within a virtual environment. Lag occurs for many reasons; anything from slow processing speeds of the tracker or the computer to slow data transfer between the computer and tracking peripherals will increase it. Even if the tracking peripherals could deliver their information immediately, lag would remain an issue, because the computer cannot begin generating the virtual environment until it has received that information. On average, it takes approximately 16 ms to generate a virtual environment, so even under ideal conditions there would be a 16 ms delay. Actual generation time depends on the speed of the processor and varies from system to system. For this reason, it would be advantageous to have a tracking system that could predict the user's actions beforehand. Prediction would allow the system to begin generating a new scene within the environment and display that scene at the appropriate time rather than several milliseconds after the fact. Both inertial and magnetic tracking systems are currently used in VR settings, but neither provides the speed and quality necessary to maintain a realistic experience within the environment. InterSense, a new company in the Boston area, recently released a hybrid tracking system which they claim surpasses the standard magnetic tracker on the market. This system, the IS600, combines inertial and acoustic information to maintain six degrees of freedom, reporting yaw, pitch, and roll as well as x, y, and z position. To determine the success of this system, it was necessary to characterize its performance and then integrate it into a virtual environment for perceptual testing. Characterization of the IS600 revealed failures at high and low angular velocities and a random sampling rate. The system's inertial prediction was successful and very effective for smooth motions. Thirteen subjects were tested to determine their preference for prediction within a virtual environment; they were asked to choose between environments generated with 30 ms of inertial prediction and environments generated without prediction. The results were not sufficient to conclude that prediction was effective, but this test cannot be used as an accurate measure of the system's performance: other problems, such as the random sampling rate, may be the cause of the inconclusive results. Additional testing will be necessary to determine the effectiveness of the product.
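
    The prediction idea described above can be sketched as simple dead reckoning: extrapolate orientation a fixed horizon ahead from gyroscope rates, so that scene generation (roughly 16 ms) can begin before the pose is actually reached. This Python sketch assumes a constant-rate model and is not the IS600's actual algorithm:

        import numpy as np

        PREDICTION_HORIZON_S = 0.030  # 30 ms, as in the user study above

        def predict_orientation(angles_deg: np.ndarray,
                                rates_deg_s: np.ndarray,
                                horizon_s: float = PREDICTION_HORIZON_S) -> np.ndarray:
            """First-order prediction: angle(t + h) = angle(t) + rate(t) * h."""
            return angles_deg + rates_deg_s * horizon_s

        # Example: yaw/pitch/roll of (10, 0, 0) deg while yawing at 100 deg/s
        # predicts (13, 0, 0) deg 30 ms from now.
        print(predict_orientation(np.array([10.0, 0.0, 0.0]),
                                  np.array([100.0, 0.0, 0.0])))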

    Towards Naturalistic Interfaces of Virtual Reality Systems

    Interaction plays a key role in achieving a realistic experience in virtual reality (VR). Its realization depends on interpreting the intent of human motions to give inputs to VR systems; thus, understanding human motion from a computational perspective is essential to the design of naturalistic interfaces for VR. This dissertation studied three types of human motion in the context of VR: locomotion (walking), head motion, and hand motion. For locomotion, the dissertation presented a machine learning approach for developing a mechanical repositioning technique based on a 1-D treadmill for interacting with a unique new large-scale projective display, called the Wide-Field Immersive Stereoscopic Environment (WISE). The usability of the proposed approach was assessed through a novel user study that asked participants to pursue a rolling ball at variable speed in a virtual scene. In addition, the dissertation studied the role of stereopsis in avoiding virtual obstacles while walking by asking participants to step over obstacles and gaps under both stereoscopic and non-stereoscopic viewing conditions in VR experiments. In terms of head motion, the dissertation presented a head gesture interface for interaction in VR that recognizes real-time head gestures on head-mounted displays (HMDs) using Cascaded Hidden Markov Models. Two experiments evaluated the proposed approach: the first assessed its offline classification performance, while the second estimated the latency of the algorithm in recognizing head gestures. The dissertation also conducted a user study that investigated the effects of visual and control latency on teleoperation of a quadcopter using head motion tracked by an HMD; as part of the study, a method for objectively estimating the end-to-end latency in HMDs was presented. For hand motion, the dissertation presented an approach that recognizes dynamic hand gestures to implement a hand gesture interface for VR, based on a static head gesture recognition algorithm. The proposed algorithm was evaluated offline in terms of its classification performance, and a user study compared the performance and usability of the head gesture interface, the hand gesture interface, and a conventional gamepad interface for answering Yes/No questions in VR. Overall, the dissertation makes two main contributions toward more naturalistic interaction in VR systems. First, the interaction techniques presented can be directly integrated into existing VR systems, offering end users more choices for interaction. Second, the results of the user studies serve as guidelines for VR researchers and engineers designing future VR systems.
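
    As a rough illustration of HMM-based gesture recognition in the spirit of the head gesture interface above (not the cascaded architecture itself), one can train one HMM per gesture on motion-feature sequences and classify a new sequence by maximum log-likelihood. This Python sketch assumes the hmmlearn package; all names and parameters are illustrative:

        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        def train_gesture_models(train_data):
            """train_data maps gesture label -> list of (T_i, n_features) arrays."""
            models = {}
            for label, sequences in train_data.items():
                X = np.vstack(sequences)               # stacked observations
                lengths = [len(s) for s in sequences]  # per-sequence lengths
                m = GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
                m.fit(X, lengths)
                models[label] = m
            return models

        def classify(models, sequence):
            """Pick the gesture whose HMM assigns the highest log-likelihood."""
            return max(models, key=lambda label: models[label].score(sequence))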

    New VR Navigation Techniques to Reduce Cybersickness

    In today's state-of-the-art VR environments, displayed in CAVEs or HMDs, navigation techniques may frequently induce cybersickness or VR-Induced Symptoms and Effects (VRISE), drastically limiting the comfortable use of VR environments without navigation restrictions. In two distinct experiments, we investigated acceleration VRISE thresholds for longitudinal and rotational motions and compared three different VR systems: two CAVEs and an HMD (Oculus Rift DK2). We found that VRISE occur more often and more strongly for rotational motions, and found no major difference between the CAVEs and the HMD. Based on the obtained thresholds, we developed a new "Head Lock" navigation method for rotational motions in a virtual environment in order to generate a "Pseudo AR" mode that keeps visual references to the outside world fixed. A third experiment showed that this new metaphor significantly reduces VRISE occurrences and may be a useful basis for future natural navigation techniques.
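
    One way to act on such acceleration thresholds is to clamp commanded rotational acceleration during navigation. This Python sketch is a generic comfort limiter, not the paper's "Head Lock" method, and the threshold value is a placeholder rather than a measured result:

        VRISE_ACCEL_THRESHOLD_DEG_S2 = 20.0  # placeholder comfort limit

        def limit_yaw_rate(current_rate: float, target_rate: float, dt: float,
                           max_accel: float = VRISE_ACCEL_THRESHOLD_DEG_S2) -> float:
            """Move toward the target yaw rate without exceeding max angular accel."""
            max_step = max_accel * dt
            step = max(-max_step, min(max_step, target_rate - current_rate))
            return current_rate + step

        # Example: at 60 Hz (dt = 1/60 s), a request to jump from 0 to 30 deg/s
        # is smoothed to ~0.33 deg/s on the first frame.
        print(limit_yaw_rate(0.0, 30.0, 1 / 60))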

    CHARACTERISTICS OF HEAD MOUNTED DISPLAYS AND THEIR EFFECTS ON SIMULATOR SICKNESS

    Characteristics of head-mounted displays (HMDs) and their effects on simulator sickness (SS) and presence were investigated. Update delay and wide fields of view (FOV) have often been thought to elicit SS. With the exception of Draper et al. (2001), previous research examining FOV has failed to consider image scale factor, the ratio between the physical FOV of the HMD display and the geometric field of view (GFOV) of the virtual environment (VE). The current study investigated the effects of update delay, image scale factor, and peripheral vision on SS and presence when viewing a real-world scene. Participants donned an HMD and performed active head movements to search for objects located throughout the laboratory. Seven of the first 28 participants withdrew from the study due to extreme responses, experiencing faint-like symptoms, confusion, ataxia, nausea, and tunnel vision; thereafter, a handrail was provided to give participants something to grasp while performing the experimental task. A 2×2×2 ANOVA revealed a main effect of peripheral vision, F(1,72) = 6.90, p = .01, indicating that peak Simulator Sickness Questionnaire (SSQ) scores were significantly higher when peripheral vision was occluded than when it was included. No main or interaction effects were revealed on Presence Questionnaire (PQ version 4.0) scores; however, a significant negative correlation between peak SSQ scores and PQ scores, r(77) = -.28, p = .013, was found. Participants were also placed into 'sick' and 'not-sick' groups based on a median split of SSQ scores. A chi-square analysis revealed that participants exposed to an additional update delay of ~200 ms were significantly more likely to be in the 'sick' group than those exposed to no additional update delay. To reduce the occurrence of SS, a degree of peripheral vision of the external world should be included and efforts to reduce update delay should continue; furthermore, participants should be provided with something to grasp while in an HMD VE. Future studies should investigate the critical amounts of peripheral vision and update delay necessary to elicit SS.
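
    The image scale factor discussed above is simply the ratio of the HMD's physical FOV to the geometric FOV used to render the scene; a factor of 1.0 means the imagery is neither minified nor magnified. A short Python sketch with illustrative values:

        def image_scale_factor(physical_fov_deg: float, geometric_fov_deg: float) -> float:
            """Ratio of the display's physical FOV to the rendered GFOV."""
            return physical_fov_deg / geometric_fov_deg

        # Example: a 48-degree display driven with a 60-degree GFOV minifies
        # the scene (scale factor 0.8).
        print(image_scale_factor(48.0, 60.0))  # 0.8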

    Redirected Scene Rotation for Immersive Movie Experiences

    Virtual reality (VR) allows for immersive and natural viewing experiences; however, these often expect users to be standing and able to physically turn and move easily. Seated VR applications, specifically immersive 360-degree movies, must be appropriately designed to facilitate user comfort and prevent sickness. Our research explores a scene rotation-based method for redirecting a viewer's gaze, and evaluates its feasibility and effectiveness under two parameter adjustments: rotation speed and delay/angle threshold. This work matters because the results will prove useful in the development of future immersive movie and virtual reality experiences. We conducted a controlled user study to determine how users responded to the scene rotation and which parameter values they preferred, with effectiveness judged by user comfort, sickness, and overall preference. We found that users responded favorably to the scene rotation technique, especially at the slow rotation speed. The results of this research will further the understanding of how to effectively develop content for virtual reality systems.
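
    The redirection technique above can be pictured as a per-frame rule: once gaze deviates from the movie's focus direction by more than the angle threshold, rotate the scene back toward the viewer at a fixed speed. This Python sketch uses placeholder parameter values, not the study's conditions:

        ANGLE_THRESHOLD_DEG = 30.0   # start redirecting past this gaze offset
        ROTATION_SPEED_DEG_S = 5.0   # slow rotation, as preferred in the study

        def redirect_step(gaze_offset_deg: float, dt: float) -> float:
            """Scene yaw (deg) to apply this frame to re-center the focus region."""
            if abs(gaze_offset_deg) <= ANGLE_THRESHOLD_DEG:
                return 0.0
            direction = 1.0 if gaze_offset_deg > 0 else -1.0
            step = ROTATION_SPEED_DEG_S * dt
            return direction * min(step, abs(gaze_offset_deg))  # don't overshoot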

    Perceptual Manipulations for Hiding Image Transformations in Virtual Reality

    Users of virtual reality make frequent gaze shifts and head movements to explore their surrounding environment. Saccades are rapid, ballistic, conjugate eye movements that reposition our gaze, and in doing so create large-field motion on the retina. Because of this high-speed retinal motion, the brain suppresses visual signals from the eye during saccades, a perceptual phenomenon known as saccadic suppression. These moments of visual blindness can help hide graphical updates to the display in virtual reality. In this dissertation, I investigated how the visibility of various image transformations differed during combinations of saccade and head rotation conditions. Additionally, I studied how hand and gaze interaction affected image change discrimination in an inattentional blindness task. I conducted four psychophysical experiments in desktop or head-mounted VR. In the eye tracking studies, users viewed 3D scenes and were triggered to make a vertical or horizontal saccade, during which an instantaneous translation or rotation was applied to the virtual camera used to render the scene. Participants were required to indicate the direction of these transformations after each trial. The results showed that the type and size of the image transformation affected change detectability: during horizontal or vertical saccades, rotations about the roll axis were the most detectable, while horizontal and vertical translations were least noticed. In a second, similar study, I added a constant camera motion to simulate a head rotation, and in a third study, I compared active head rotation with a simulated rotation or a static head. I found less sensitivity to transsaccadic horizontal than to vertical camera shifts during simulated or real head pan; conversely, during simulated or real head tilt, observers were less sensitive to transsaccadic vertical than horizontal camera shifts. In addition, in my multi-interactive inattentional blindness experiment, I compared sensitivity to sudden image transformations when participants used their hand and gaze to move and watch an object versus when they only watched it move. The results confirmed that when a primary task requires focus and attention with two interaction modalities (gaze and hand), a visual stimulus can be hidden more effectively than when only vision is involved. Understanding the effect of continuous head movement and attention on the visibility of a sudden transsaccadic change can help optimize the visual performance of gaze-contingent displays and improve user experience. Perceptually suppressed rotations or translations can be used to introduce imperceptible changes in virtual camera pose in applications such as networked gaming, collaborative virtual reality, and redirected walking. This dissertation suggests that such transformations can be more effective and more substantial during active or passive head motion. Moreover, inattentional blindness during an attention-demanding task provides additional opportunities for imperceptible updates to a visual display.
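
    A gaze-contingent transsaccadic update of the kind studied above can be sketched as: detect a saccade from gaze speed, then inject the hidden camera change while vision is suppressed. The detection threshold and offset in this Python sketch are placeholders, and a real system would also bound the accumulated change:

        import numpy as np

        SACCADE_VELOCITY_THRESHOLD_DEG_S = 200.0  # placeholder detection threshold

        def maybe_inject_offset(gaze_speed_deg_s: float,
                                camera_pos: np.ndarray,
                                offset_m: np.ndarray) -> np.ndarray:
            """Apply the hidden camera offset only while a saccade is in flight."""
            if gaze_speed_deg_s > SACCADE_VELOCITY_THRESHOLD_DEG_S:
                return camera_pos + offset_m
            return camera_pos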

    Magnitude estimates of angular motion: Perception of speed and spatial orientation across visual and vestibular modalities

    Both the vestibular system and the optokinetic system generate conjugate eye movements in response to movement of the head or movement of the visual surround, and both help to maintain gaze stability. While the vestibulo-ocular reflex (VOR) is most sensitive to input frequencies above 0.2 Hz, the optokinetic system helps maintain gaze stability at lower frequencies. Previous research on perceptual thresholds across the two sensory modalities shows frequency-dependent differences between vestibular and visual perception. The purpose of this study is to extend previous vestibular psychophysics work by 1) comparing magnitude estimates from vestibular stimulation to those from visual stimulation across multiple frequencies, and 2) assessing the feasibility of using virtual reality to provide an optokinetic stimulus equivalent to the rotary chair at frequencies where both systems are sensitive. Participants were exposed to 12 experimental conditions of angular rotation, of varying frequencies and peak velocities, across both sensory modalities. Vestibular stimulation was provided with a rotary chair, and equivalent visual stimulation was provided with a virtual reality headset. Participants gave magnitude estimates of their speed and spatial orientation using a visual analog scale. Results reveal that speed magnitude estimates increased with peak velocity and frequency for both modalities, while spatial orientation magnitude estimates decreased with increasing frequency and increased with increasing peak velocity; spatial orientation was underestimated under visual stimulation. Based on these results, it was concluded that at frequencies from 0.08 to 0.32 Hz, both the vestibular and visual modalities provide adequate cues for motion sensitivity, and virtual reality can be used as an optokinetic (OKN) stimulus to assess motion perception (specifically speed/intensity).
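
    The rotation stimuli above are defined by frequency and peak velocity; for a sinusoidal velocity v(t) = V_peak * sin(2*pi*f*t), the displacement amplitude is V_peak / (2*pi*f), so higher frequencies sweep smaller angles at the same peak velocity. A Python sketch of such a profile, with illustrative sampling parameters:

        import numpy as np

        def angular_velocity_profile(freq_hz: float, peak_vel_deg_s: float,
                                     duration_s: float, sample_rate_hz: float = 100.0):
            """Sinusoidal angular-velocity stimulus for a rotary chair or a
            matched optokinetic scene: v(t) = V_peak * sin(2*pi*f*t)."""
            t = np.arange(0.0, duration_s, 1.0 / sample_rate_hz)
            return t, peak_vel_deg_s * np.sin(2 * np.pi * freq_hz * t)

        # Example: 0.08 Hz at 60 deg/s sweeps about +/-119 deg of displacement,
        # while 0.32 Hz at the same peak velocity sweeps only about +/-30 deg.
        t, v = angular_velocity_profile(0.08, 60.0, 12.5)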