24 research outputs found

    A head coupled cursor for 2D selection in virtual reality

    No full text
    We present a head-coupled cursor to support 2D selection in HMDs. The head-coupled cursor can be used with any 2DOF input source and, unlike other head-based selection methods, can be moved independently within the screen plane of the HMD while remaining in the HMD's field of view (FOV). We propose an experiment to evaluate the head-coupled cursor in future work.
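    A minimal sketch of how such a cursor might be driven, assuming hypothetical FOV half-extents and a sensitivity parameter that the abstract does not specify:

```python
# Assumed FOV half-extents of the head-fixed screen plane, in degrees.
FOV_H, FOV_V = 45.0, 40.0

def update_cursor(cursor, dx, dy, sensitivity=0.5):
    """Move a 2D cursor within the head-fixed screen plane.

    cursor  -- (x, y) position in degrees relative to the view centre
    dx, dy  -- raw deltas from any 2DOF input source
    """
    x = cursor[0] + dx * sensitivity
    y = cursor[1] + dy * sensitivity
    # Clamp so the cursor always stays inside the HMD's field of view.
    x = max(-FOV_H, min(FOV_H, x))
    y = max(-FOV_V, min(FOV_V, y))
    return (x, y)
```

    Because the plane is head-fixed, head rotation carries the cursor with it, so the clamp alone keeps the cursor visible regardless of where the user looks.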

    VR collide! Comparing collision-avoidance methods between colocated virtual reality users

    No full text
    We present a pilot study comparing visual feedback mechanisms for preventing physical collisions between co-located VR users. These include Avatar (a 3D avatar co-located with the other user), BoundingBox (similar to HTC's "chaperone"), and CameraOverlay (a live video feed overlaid on the virtual environment). Using a simulated second user, we found that CameraOverlay and Avatar had the fastest travel time around an obstacle, but BoundingBox had the fewest collisions at 0.07 collisions/trial versus 0.2 collisions/trial for Avatar and 0.4 collisions/trial for CameraOverlay. However, subjective participant impressions strongly favoured Avatar and CameraOverlay over BoundingBox. Based on these results, we propose future studies on hybrid methods combining the best aspects of Avatar (speed, user preference) and BoundingBox (safety).

    Camera-based selection with cardboard HMDs

    No full text
    We present a study of selection techniques for low-cost mobile VR devices, such as Google Cardboard, using the outward-facing camera on modern smartphones. We compared three selection techniques: air touch, head ray, and finger ray. Initial evaluation indicates that hand-based selection (air touch) was the worst. A ray cast using the tracked finger position offered much higher selection performance. Our results suggest that camera-based mobile tracking is feasible with ray-based techniques.
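    Ray-based selection of this kind typically reduces to a ray-sphere intersection test against each target. A minimal sketch of the standard quadratic-root form (function names are illustrative, not from the paper):

```python
import math

def _dot(u, v):
    """Dot product of two 3D vectors given as tuples."""
    return sum(a * b for a, b in zip(u, v))

def ray_hits_sphere(origin, direction, center, radius):
    """True if the ray origin + t*direction (t >= 0) intersects the sphere."""
    # Normalize the ray direction.
    norm = math.sqrt(_dot(direction, direction))
    d = tuple(x / norm for x in direction)
    # Solve |origin + t*d - center|^2 = radius^2 for t.
    oc = tuple(o - c for o, c in zip(origin, center))
    b = _dot(oc, d)
    c = _dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return False                      # ray misses the sphere entirely
    # Accept if the farther intersection lies in front of the ray origin.
    return (-b + math.sqrt(disc)) >= 0
```

    A finger-ray technique would set `origin` and `direction` from the tracked fingertip pose each frame and test against every selectable target.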

    EZCursorVR: 2D selection with virtual reality head-mounted displays

    No full text
    We present an evaluation of a new selection technique for virtual reality (VR) systems presented on head-mounted displays. The technique, dubbed EZCursorVR, presents a 2D cursor that moves in a head-fixed plane, simulating 2D desktop-like cursor control for VR. The cursor can be controlled by any 2DOF input device, but also works with 3/6DOF devices using appropriate mappings. We conducted an experiment based on ISO 9241-9, comparing the effectiveness of EZCursorVR using a mouse, a joystick in both velocity-control and position-control mappings, a 2D-constrained ray-based technique, a standard 3D ray, and finally selection via head motion. Results indicate that the mouse offered the highest performance in terms of throughput, movement time, and error rate, while the position-control joystick was worst. The 2D-constrained ray-casting technique proved an effective alternative to the mouse when performing selections using EZCursorVR, offering better performance than standard ray-based selection.
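    The ISO 9241-9 throughput measure used in such comparisons is the effective index of difficulty divided by mean movement time. A sketch of the standard computation (variable names are ours):

```python
import math
import statistics

def throughput(distance, movement_times, endpoints_x):
    """Effective (Fitts' law) throughput per ISO 9241-9, in bits/s.

    distance       -- nominal target distance D
    movement_times -- per-trial movement times, in seconds
    endpoints_x    -- selection coordinates projected onto the task axis
    """
    # Effective width: 4.133 x the standard deviation of endpoints.
    we = 4.133 * statistics.stdev(endpoints_x)
    ide = math.log2(distance / we + 1)      # effective index of difficulty, bits
    mt = statistics.mean(movement_times)    # mean movement time, seconds
    return ide / mt
```

    Computing throughput from effective (observed) rather than nominal width lets devices with different accuracy profiles be compared on a single figure of merit.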

    Viewpoint snapping to reduce cybersickness in virtual reality

    No full text
    Cybersickness in virtual reality (VR) is an ongoing problem, despite recent advances in technology. In this paper, we propose a method for reducing the likelihood of cybersickness onset when using stationary (e.g., seated) VR setups. Our approach relies on reducing optic flow via inconsistent displacement: the viewpoint is "snapped" during fast movement that would otherwise induce cybersickness. We compared our technique, which we call viewpoint snapping, to a control condition without viewpoint snapping, in a custom-developed VR first-person shooter game. We measured participant cybersickness levels via the Simulator Sickness Questionnaire (SSQ), user-reported levels of nausea and presence, and objective error rate. Overall, our results indicate that viewpoint snapping significantly reduced SSQ-reported cybersickness levels by about 40% and resulted in a reduction in participant nausea levels, especially with longer VR exposure. Presence levels and error rate were not significantly different between the viewpoint snapping and the control condition.
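    One way such snapping could work is to rotate smoothly below a speed threshold and replace fast rotation with discrete jumps, suppressing the intermediate optic flow. A minimal sketch with assumed step size and threshold (the paper's actual parameters are not given in the abstract):

```python
SNAP_STEP = 22.5          # degrees per snap (assumed increment)
SPEED_THRESHOLD = 90.0    # deg/s above which smooth rotation is replaced

def snapped_yaw(current_yaw, angular_velocity, dt):
    """Advance the viewpoint yaw, snapping during fast rotation.

    Below the speed threshold the viewpoint rotates smoothly; above it,
    the frame's displacement is quantized to whole SNAP_STEP jumps so the
    user never sees the intermediate optic flow.
    """
    delta = angular_velocity * dt
    if abs(angular_velocity) < SPEED_THRESHOLD:
        return current_yaw + delta              # smooth rotation
    steps = round(delta / SNAP_STEP)            # quantize the displacement
    return current_yaw + steps * SNAP_STEP
```

    A production version would also carry the quantization remainder forward between frames so that no rotation is lost over time.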

    Head vs. Eye-based selection in virtual reality

    No full text
    This demo presents a VR system for comparing eye- and head-based selection performance using the recently released FOVE. The system presents a virtual environment modelled after the ISO 9241-9 reciprocal selection task, with targets presented at varying depths. We have used the system to compare eye-based selection and head-based selection (i.e., gaze direction) in isolation, as well as a third condition that used both eye tracking and head tracking at once.

    Player performance with different input devices in virtual reality first-person shooter games

    No full text
    First-person shooter (FPS) games are a competitive game genre. Players of these games commonly try to maximize their performance by using better input devices. Numerous previous studies have analyzed different game controllers (see e.g., [1]). Tracked input devices such as the Hydra offer some advantages over desktop input devices in VR FPS games. We thus hypothesize that VR controllers will offer substantially better performance than both the mouse and gamepad in first-person shooter targeting, due to the improved naturalness of control. Our study compared 3D selection performance between the mouse, 3D tracker, and game controller in a head-mounted display VR context.

    Serious mods: A case for modding in serious games pedagogy

    No full text
    In this paper, we present a case study for the use of modding as a pedagogical practice in a humanities-based game design course. In particular, this approach has been extremely beneficial as it allows students to sidestep technological barriers. In our case study, we show how two different mods of the same platformer game can allow students to engage with game design in order to explore the relationship between mechanics and meaningful play.

    The eyes don’t have it: An empirical comparison of head-based and eye-based selection in virtual reality

    No full text
    We present a study comparing selection performance between three eye/head interaction techniques using the recently released FOVE head-mounted display (HMD). The FOVE offers an integrated eye tracker, which we use as an alternative to the potentially fatiguing and uncomfortable head-based selection used with other commercial devices. Our experiment was modelled after the ISO 9241-9 reciprocal selection task, with targets presented at varying depths in a custom virtual environment. We compared eye-based selection and head-based selection (i.e., gaze direction) in isolation, as well as a third condition that used both eye tracking and head tracking at once. Results indicate that eye-only selection offered the worst performance in terms of error rate, selection times, and throughput. Head-only selection offered significantly better performance.