
    ReconViguRation: Reconfiguring Physical Keyboards in Virtual Reality.

    Physical keyboards are common peripherals for personal computers and are efficient standard text entry devices. Recent research has investigated how physical keyboards can be used in immersive head-mounted display-based Virtual Reality (VR). So far, the physical layout of keyboards has typically been transplanted into VR to replicate the typing experience of a standard desktop environment. In this paper, we explore how to fully leverage the immersiveness of VR to change the input and output characteristics of physical keyboard interaction within a VR environment. This allows individual physical keys to be reconfigured to the same or different actions, and visual output to be distributed in various ways across the VR representation of the keyboard. We explore a set of input and output mappings for reconfiguring the virtual presentation of physical keyboards and probe the resulting design space by designing, implementing and evaluating nine VR-relevant applications: emojis, languages and special characters, application shortcuts, virtual text processing macros, a window manager, a photo browser, a whack-a-mole game, secure password entry and a virtual touch bar. We investigate the feasibility of the applications in a user study with 20 participants and find that, among other things, they are usable in VR. We discuss the limitations and possibilities of remapping the input and output characteristics of physical keyboards in VR based on our empirical findings and analysis, and suggest future research directions in this area.
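The core idea of the abstract above — reconfiguring what each physical key does and displays by swapping the active virtual mapping — can be illustrated with a minimal sketch. All names here (`LAYERS`, `ReconfigurableKeyboard`) are hypothetical and are not from the paper's implementation; the emoji layer mirrors one of the nine applications the paper describes.

```python
# Minimal sketch (hypothetical names, not the paper's code): a physical key
# press is mapped to a virtual output via the currently active layer, so the
# same key can produce a letter, an emoji, or a shortcut depending on context.
LAYERS = {
    "qwerty": {"a": "a", "s": "s", "d": "d"},
    "emoji":  {"a": "😀", "s": "👍", "d": "🎉"},  # reconfigured outputs
}

class ReconfigurableKeyboard:
    def __init__(self, layer="qwerty"):
        self.layer = layer

    def set_layer(self, name):
        """Switch the active input/output mapping (e.g. on app change)."""
        if name not in LAYERS:
            raise KeyError(f"unknown layer: {name}")
        self.layer = name

    def press(self, physical_key):
        """Map a physical key press to its current virtual output;
        unmapped keys pass through unchanged."""
        return LAYERS[self.layer].get(physical_key, physical_key)
```

In VR, the same lookup would also drive the visual relabeling of the rendered keycaps, which is what distinguishes this from ordinary desktop key remapping.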

    TapGazer: Text Entry with finger tapping and gaze-directed word selection


    Facilitating Keyboard Use While Wearing a Head-Mounted Display

    Virtual reality (VR) headsets are becoming more common and will require evolving input mechanisms to support a growing range of applications. Because VR devices require users to wear head-mounted displays, accommodations must be made to support specific input devices. One such device, the keyboard, serves as a useful tool for text entry, but many users will require assistance to use a keyboard while wearing a head-mounted display. Developers have explored new mechanisms to overcome the challenges of text entry for virtual reality. Several games have toyed with the idea of using motion controllers as a text entry mechanism; however, few investigations have been made into how to assist users in using a physical keyboard while wearing a head-mounted display. As an alternative to controller-based text input, I propose that a software tool could facilitate the use of a physical keyboard in virtual reality. Using computer vision, a user's hands could be projected into the virtual world. With the ability to see the location of their hands relative to the keyboard, users will be able to type despite the obstruction caused by the head-mounted display (HMD). The viability of this approach was tested and the tool released as a plugin for the Unity development platform. The potential uses for the plugin go beyond text entry, and the project can be expanded to include many physical input devices.

    Interaction Methods for Smart Glasses: A Survey

    Since the launch of Google Glass in 2014, smart glasses have mainly been designed to support micro-interactions. The ultimate goal for them to become an augmented reality interface has not yet been attained due to an encumbrance of controls. Augmented reality involves superimposing interactive computer graphics images onto physical objects in the real world. This survey reviews current research issues in the area of human-computer interaction for smart glasses. The survey first studies the smart glasses available in the market and afterwards investigates the interaction methods proposed in the wide body of literature. The interaction methods can be classified into hand-held, touch, and touchless input. This paper mainly focuses on the touch and touchless input. Touch input can be further divided into on-device and on-body, while touchless input can be classified into hands-free and freehand. Next, we summarize the existing research efforts and trends, in which touch and touchless input are evaluated by a total of eight interaction goals. Finally, we discuss several key design challenges and the possibility of multi-modal input for smart glasses.
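The survey's classification of interaction methods can be sketched as a small nested structure (the data layout and `categories` helper are illustrative assumptions, not from the survey itself):

```python
# Minimal sketch (hypothetical structure): the survey's taxonomy of smart
# glasses interaction methods, with touch split into on-device/on-body and
# touchless into hands-free/freehand.
TAXONOMY = {
    "hand-held": {},                 # e.g. external controllers
    "touch": {
        "on-device": {},             # e.g. a touchpad on the temple
        "on-body": {},               # e.g. palm or forearm input
    },
    "touchless": {
        "hands-free": {},            # e.g. voice or head gestures
        "freehand": {},              # e.g. mid-air hand gestures
    },
}

def categories(tax, prefix=""):
    """Flatten the taxonomy into 'parent/child' category paths."""
    out = []
    for name, sub in tax.items():
        path = f"{prefix}{name}"
        out.append(path)
        out.extend(categories(sub, path + "/"))
    return out
```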

    Detecting Surface Interactions via a Wearable Microphone to Improve Augmented Reality Text Entry

    This thesis investigates whether surface interaction events such as tapping or swiping can be detected and distinguished using a microphone worn on the head. It also asks what advantages new text entry methods offer, such as tapping with two fingers simultaneously to enter capital letters and punctuation. For this purpose, we conducted a remote study to collect audio and video of three different ways people might interact with a surface, and built a CNN classifier to detect taps. Our results show that we can detect and distinguish between surface interaction events such as taps or swipes via a wearable microphone on the user's head.
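The detection pipeline the abstract describes — convolutional features over audio distinguishing a sharp tap transient from a more sustained swipe — can be sketched in miniature. This is not the thesis's CNN; it is a hand-rolled, assumption-laden stand-in (edge-detector kernel plus a pooling step and a hypothetical peak-to-mean threshold) over synthetic signals, just to make the signal-processing idea concrete.

```python
# Minimal sketch (not the thesis's classifier): distinguish a short click-like
# transient ("tap") from sustained broadband noise ("swipe") in one audio frame.
import numpy as np

def conv1d(x, kernel):
    """Valid-mode 1-D convolution."""
    return np.convolve(x, kernel, mode="valid")

def classify_frame(audio):
    """Label a frame 'tap' if its energy is concentrated in a brief
    transient, else 'swipe'. The 5x peak-to-mean threshold is a
    hypothetical choice for this synthetic example."""
    edges = np.abs(conv1d(audio, np.array([1.0, -1.0])))  # abrupt changes
    smoothed = conv1d(edges, np.ones(32) / 32)            # pooling step
    return "tap" if smoothed.max() > 5 * smoothed.mean() else "swipe"

# Synthetic 200 ms frames at 16 kHz: a ~5 ms click vs. continuous noise.
rng = np.random.default_rng(0)
sr = 16000
tap = np.zeros(sr // 5)
tap[100:180] = rng.normal(0, 1, 80)       # brief transient
swipe = rng.normal(0, 0.3, sr // 5)       # sustained broadband signal
```

A trained CNN replaces the fixed kernel and threshold with learned filters and a learned decision boundary, but operates on the same kind of framed audio input.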

    Text Entry Performance and Situation Awareness of a Joint Optical See-Through Head-Mounted Display and Smartphone System

    Optical see-through head-mounted displays (OST HMDs) are a popular output medium for mobile Augmented Reality (AR) applications. To date, they lack efficient text entry techniques. Smartphones are a major text entry medium in mobile contexts, but attentional demands can contribute to accidents while typing on the go. Mobile multi-display ecologies, such as combined OST HMD-smartphone systems, promise performance and situation awareness benefits over single-device use. We study the joint performance of text entry on mobile phones with text output on optical see-through head-mounted displays. A series of five experiments with a total of 86 participants indicate that, as of today, the challenges in such a joint interactive system outweigh the potential benefits. (To appear in IEEE Transactions on Visualization and Computer Graphics.)

    Can I Borrow Your ATM? Using Virtual Reality for (Simulated) In Situ Authentication Research

    In situ evaluations of novel authentication systems, where the system is evaluated in its intended usage context, are often infeasible due to ethical and legal constraints. Consequently, researchers evaluate their authentication systems in the lab, which questions the ecological validity. In this work, we explore how VR can overcome the shortcomings of authentication studies conducted in the lab and contribute towards more realistic authentication research. We built a highly realistic automated teller machine (ATM) and a VR replica to investigate through a user study (N=20) the impact of in situ evaluations on an authentication system's usability results. We evaluated and compared: lab studies in the real world, lab studies in VR, in situ studies in the real world, and in situ studies in VR. Our findings highlight 1) VR's great potential to circumvent potential restrictions researchers experience when evaluating authentication schemes and 2) the impact of the context on an authentication system's usability evaluation results. In situ ATM authentications took longer (+24.71% in the real world, +14.17% in VR) than authentications in a traditional lab environment and elicited a higher sense of being part of an ATM authentication scenario compared to a real-world and VR-based evaluation in the lab. Our quantitative findings, along with participants' qualitative feedback, provide first evidence of increased authentication realism when using VR for in situ authentication research. We provide researchers with a novel research approach to conduct (simulated) in situ authentication research, discuss our findings in the light of prior works, and conclude with three key lessons to support researchers in deciding when to use VR for in situ authentication research.