
    Exploring the Front Touch Interface for Virtual Reality Headsets

    In this paper, we propose a new interface for virtual reality headsets: a touchpad on the front of the headset. To demonstrate the feasibility of the front touch interface, we built a prototype device, explored the expanded VR UI design space, and performed various user studies. We started with preliminary tests to see how intuitively and accurately people can interact with the front touchpad. We then experimented with various user interfaces, such as a binary selection, a typical menu layout, and a keyboard. Two-Finger and Drag-n-Tap were also explored to find an appropriate selection technique. As a low-cost, lightweight, and low-power technology, a touch sensor can make an ideal interface for mobile headsets. Moreover, the front touch area can be large enough to allow a wide range of interaction types, such as multi-finger interactions. With this novel front touch interface, we pave the way to new virtual reality interaction methods.
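
    The abstract does not describe the mechanics of the Drag-n-Tap selection technique. The sketch below is one plausible reading, assuming a normalized touchpad coordinate system and a horizontal menu; the class, event names, and thresholds are hypothetical, not the paper's API.

```python
# Hypothetical sketch of a "Drag-n-Tap" selection: the user drags to
# highlight a menu item, then taps to confirm it. Everything here is an
# illustrative assumption about how such a technique could work.

from dataclasses import dataclass

@dataclass
class MenuItem:
    label: str
    x0: float  # left edge in normalized touchpad coordinates [0, 1]
    x1: float  # right edge

class DragNTapSelector:
    def __init__(self, items):
        self.items = items
        self.highlighted = None

    def on_drag(self, x, y):
        # Highlight whichever item the finger is currently over.
        for item in self.items:
            if item.x0 <= x < item.x1:
                self.highlighted = item
                return item
        self.highlighted = None

    def on_tap(self):
        # A tap confirms the currently highlighted item, if any.
        return self.highlighted

# Example: a three-item horizontal menu spanning the front touchpad.
menu = [MenuItem("Back", 0.0, 0.33), MenuItem("Select", 0.33, 0.66),
        MenuItem("Menu", 0.66, 1.0)]
selector = DragNTapSelector(menu)
selector.on_drag(0.5, 0.5)      # finger over "Select"
print(selector.on_tap().label)  # -> Select
```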

    INVESTIGATING MIDAIR VIRTUAL KEYBOARD INPUT USING A HEAD MOUNTED DISPLAY

    Until recently, text entry in virtual reality has been limited to hand-held controllers. Such techniques are feasible only for entering short texts like usernames and passwords. But recent improvements in virtual reality devices have paved the way to varied interactions in virtual environments, and many of these tasks, such as annotation and text messaging, require an effective way of entering text in virtual reality. We present an interactive midair text entry system in virtual reality which allows users to type with one or both hands. Our system also allows users to enter text on a split keyboard using their two hands. We investigated user performance in these three conditions and found that users were slightly faster when using both hands, with a mean entry rate of 16.4 words-per-minute (wpm). Using one hand, the entry rate was 16.1 wpm, and using the split keyboard it was 14.7 wpm. The character error rates (CER) in these conditions were 0.74%, 0.79%, and 1.41%, respectively. We also examined the extent to which users can enter text without any visual feedback of a keyboard, i.e. on an invisible keyboard in the virtual environment. While some found it difficult, results were promising for a subset of 15 of the 22 participants, who achieved a mean entry rate of 10.0 wpm and a mean error rate of 2.98%.
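
    The abstract reports entry rates and character error rates but does not give the formulas used. The sketch below follows the common text-entry conventions (one "word" = five characters; CER = edit distance divided by reference length), which may differ in detail from the study's exact method.

```python
# Minimal sketch of the two metrics the study reports: entry rate in
# words-per-minute (wpm) and character error rate (CER). Conventions are
# assumed, not taken from the paper: 1 word = 5 characters,
# CER = Levenshtein distance / |reference| * 100.

def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def wpm(transcribed: str, seconds: float) -> float:
    return (len(transcribed) / 5) / (seconds / 60)

def cer(reference: str, transcribed: str) -> float:
    return 100 * levenshtein(reference, transcribed) / len(reference)

print(round(wpm("the quick brown fox", 14.0), 1))            # ~16.3 wpm
print(round(cer("the quick brown fox", "the quick brwn fox"), 2))  # ~5.26
```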

    Typing the Future: Designing Multimodal AR Keyboards

    Recent demonstrations of AR showcase engaging spatial features while avoiding text input. This is not because text input is becoming less relevant, but because no satisfactory solution for text input in a comprehensive AR system is available yet. Any novel technological device requires rethinking the way we interact with it, including text input. With their variety of sensors, AR devices offer numerous possibilities for uni- and multimodal interaction. However, it is essential to evaluate the actual problem space before suggesting solutions. In our design science research project, we aim to create design knowledge about the learnability and performance of AR keyboards. Based on transfer-of-learning theory and HCI literature on virtual keyboards, we propose meta-requirements and initial design principles that serve as the basis for developing a multimodal AR keyboard prototype.

    Fast and precise touch-based text entry for head-mounted augmented reality with variable occlusion

    We present the VISAR keyboard: an augmented reality (AR) head-mounted display (HMD) system that supports text entry via a virtualised input surface. Users select keys on the virtual keyboard by imitating the process of single-hand typing on a physical touchscreen display. Our system uses a statistical decoder to infer users' intended text and to provide error-tolerant predictions. There is also a high-precision fall-back mechanism to support users in indicating which keys should be unmodified by the auto-correction process. A unique advantage of leveraging the well-established touch input paradigm is that our system enables text entry with minimal visual clutter on the see-through display, thus preserving the user's field-of-view. We iteratively designed and evaluated our system and show that the final iteration supports a mean entry rate of 17.75 wpm with a mean character error rate of less than 1%. This performance represents a 19.6% improvement relative to the state-of-the-art baseline investigated: a gaze-then-gesture text entry technique derived from the system keyboard on the Microsoft HoloLens. Finally, we validate that the system is effective in supporting text entry in a fully mobile usage scenario likely to be encountered in industrial applications of AR HMDs. Per Ola Kristensson was supported in part by a Google Faculty research award and EPSRC grants EP/N010558/1 and EP/N014278/1. Keith Vertanen was supported in part by a Google Faculty research award. John Dudley was supported by the Trimble Fund.
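
    The abstract describes a statistical decoder that infers intended text from noisy touches, without giving its internals. Below is a minimal, hypothetical sketch of the general idea: each candidate word is scored by a language-model prior plus per-touch Gaussian likelihoods around key centers. The key layout, noise model, and toy lexicon are illustrative assumptions; the actual VISAR decoder is more sophisticated and is not specified here.

```python
# Sketch of statistical touch decoding: noisy touch points are combined
# with a word prior so the most probable word, not the literal key
# sequence, is output. All names and values below are assumptions.

import math

# Hypothetical key centers in keyboard coordinates (unit = one key width).
KEYS = {"q": (0, 0), "w": (1, 0), "e": (2, 0), "r": (3, 0), "t": (4, 0),
        "a": (0.3, 1), "s": (1.3, 1), "d": (2.3, 1), "f": (3.3, 1)}

LEXICON = {"wet": 0.004, "set": 0.006, "war": 0.003}  # toy unigram priors

def log_touch_likelihood(point, key, sigma=0.5):
    # Isotropic Gaussian around the key center (unnormalized log-density).
    (x, y), (kx, ky) = point, KEYS[key]
    return -((x - kx) ** 2 + (y - ky) ** 2) / (2 * sigma ** 2)

def decode(touches):
    # Score = log prior + sum of per-touch log likelihoods.
    best, best_score = None, -math.inf
    for word, prior in LEXICON.items():
        if len(word) != len(touches) or any(c not in KEYS for c in word):
            continue
        score = math.log(prior) + sum(
            log_touch_likelihood(p, c) for p, c in zip(touches, word))
        if score > best_score:
            best, best_score = word, score
    return best

# Three sloppy touches near w-e-t still decode to "wet".
print(decode([(1.2, 0.3), (2.1, -0.2), (3.8, 0.1)]))
```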

    Text Entry in Immersive Head-Mounted Display-Based Virtual Reality Using Standard Keyboards

    We study the performance and user experience of two popular mainstream text entry devices, desktop keyboards and touchscreen keyboards, for use in Virtual Reality (VR) applications. We discuss the limitations arising from limited visual feedback and examine the efficiency of different strategies of use. We analyze a total of 24 hours of typing data in VR from 24 participants and find that novice users are able to retain about 60% of their typing speed on a desktop keyboard and about 40-45% of their typing speed on a touchscreen keyboard. We also find no significant learning effects, indicating that users can transfer their typing skills into VR quickly. Besides investigating baseline performance, we study the position in which keyboards and hands are rendered in space. We find that this does not adversely affect performance for desktop keyboard typing and results in a performance trade-off for touchscreen keyboard typing.

    Mid-Air Haptics for Control Interfaces

    Control interfaces and interactions based on touch-less gesture tracking devices have become a prevalent research topic in both industry and academia. Touch-less devices offer a unique interaction immediateness that makes them ideal for applications where direct contact with a physical controller is not desirable. On the other hand, these controllers inherently lack active or passive haptic feedback to inform users about the results of their interaction. Mid-air haptic interfaces, such as those using focused ultrasound waves, can close the feedback loop and provide new tools for the design of touch-less, un-instrumented control interactions. The goal of this workshop is to bring together the growing mid-air haptics research community to identify and discuss future challenges in control interfaces and their application in AR/VR, automotive, music, robotics, and teleoperation.

    Predictive text-entry in immersive environments

    Virtual Reality (VR) has progressed significantly since its conception, enabling previously impossible applications such as virtual prototyping, telepresence, and augmented reality. However, text entry remains a difficult problem for immersive environments (Bowman et al., 2001b; Mine et al., 1997). Wearing a head-mounted display (HMD) and datagloves affords a wealth of new interaction techniques; however, users no longer have access to traditional input devices such as a keyboard. Although VR allows for more natural interfaces, there is still a need for simple, yet effective, data-entry techniques. Examples include communicating in a collaborative environment, accessing system commands, or leaving an annotation for a designer in an architectural walkthrough (Bowman et al., 2001b). This thesis presents the design, implementation, and evaluation of a predictive text-entry technique for immersive environments which combines 5DT datagloves, a graphically represented keyboard, and a predictive spelling paradigm. It evaluates the fundamental factors affecting the use of such a technique, including keyboard layout, prediction accuracy, gesture recognition, and interaction techniques. Finally, it details the results of user experiments and provides a set of recommendations for the future use of such a technique in immersive environments.
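
    As a rough illustration of the predictive spelling paradigm the thesis describes, the sketch below ranks dictionary completions of the current prefix by frequency, so a word can be committed before it is fully glove-typed. The word list, counts, and function names are illustrative assumptions, not the thesis's implementation.

```python
# Hypothetical sketch of prefix-based word prediction: after each typed
# character, propose the most frequent dictionary words that complete the
# current prefix. Frequencies here are made up for illustration.

FREQ = {"annotation": 120, "annotate": 80, "and": 5000,
        "architecture": 95, "architectural": 60}

def predictions(prefix: str, k: int = 3):
    # Rank all dictionary words starting with the prefix by frequency.
    matches = [w for w in FREQ if w.startswith(prefix)]
    return sorted(matches, key=lambda w: -FREQ[w])[:k]

print(predictions("an"))    # ['and', 'annotation', 'annotate']
print(predictions("arch"))  # ['architecture', 'architectural']
```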

    TapGazer: Text Entry with finger tapping and gaze-directed word selection

    While using VR, efficient text entry is a challenge: users cannot easily locate standard physical keyboards, and keys are often out of reach, e.g. when standing. We present TapGazer, a text entry system where users type by tapping their fingers in place. Users can tap anywhere, as long as the identity of each tapping finger can be detected with sensors. Ambiguity between different possible input words is resolved by selecting target words with gaze. If gaze tracking is unavailable, ambiguity is resolved by selecting target words with additional taps. We evaluated TapGazer for seated and standing VR: seated novice users using touchpads as tap surfaces reached 44.81 words per minute (WPM), 79.17% of their QWERTY typing speed. Standing novice users tapped on their thighs with touch-sensitive gloves, reaching 45.26 WPM (71.91%). We analyze TapGazer with a theoretical performance model and discuss its potential for text input in future AR scenarios.
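
    To make the ambiguity TapGazer resolves concrete, the sketch below maps each letter to the finger that would type it and collects all lexicon words sharing an observed tap sequence; in TapGazer, gaze (or extra taps) would then pick among the candidates. The finger-to-letter mapping and lexicon are illustrative assumptions, not the paper's actual layout.

```python
# Only the identity of the tapping finger is sensed, so each tap narrows
# the next letter to a *group*, and several words can share one tap
# sequence. The mapping below is a hypothetical touch-typing assignment.

FINGER_GROUPS = {
    "L4": "qaz", "L3": "wsx", "L2": "edc", "L1": "rfvtgb",
    "R1": "yhnujm", "R2": "ik", "R3": "ol", "R4": "p",
}
LETTER_TO_FINGER = {c: f for f, cs in FINGER_GROUPS.items() for c in cs}

LEXICON = ["fun", "run", "can", "ran"]  # toy lexicon

def tap_sequence(word):
    return tuple(LETTER_TO_FINGER[c] for c in word)

def candidates(taps):
    # All lexicon words whose finger sequence matches the observed taps;
    # the user would then disambiguate by gaze or additional taps.
    return [w for w in LEXICON if tap_sequence(w) == tuple(taps)]

print(candidates(["L1", "R1", "R1"]))  # ['fun', 'run'] -> gaze picks one
```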