
    Design and Evaluation of Controller-based Raycasting Methods for Efficient Alphanumeric and Special Character Entry in Virtual Reality

    Alphanumeric and special characters are essential during text entry. Text entry in virtual reality (VR) is usually performed on a virtual Qwerty keyboard to minimize the need to learn new layouts. As such, entering capitals, symbols, and numbers in VR is often a direct migration from a physical/touchscreen Qwerty keyboard, that is, using mode-switching keys to switch between different types of characters and symbols. However, there are inherent differences between a keyboard in VR and a physical/touchscreen keyboard, so a direct adaptation of mode switching via switch keys may not be suitable for VR. The high flexibility afforded by VR opens up more possibilities for entering alphanumeric and special characters using the Qwerty layout. In this work, we designed two controller-based raycasting text entry methods for alphanumeric and special character input (Layer-ButtonSwitch and Key-ButtonSwitch) and compared them with two other methods (Standard Qwerty Keyboard and Layer-PointSwitch) derived from physical and soft Qwerty keyboards. We explored the performance and user preference of these four methods via two user studies (one short-term and one prolonged use), in which participants were instructed to input text containing alphanumeric and special characters. Our results show that Layer-ButtonSwitch led to the best performance, with statistically significant differences, followed by Key-ButtonSwitch and Standard Qwerty Keyboard, while Layer-PointSwitch had the slowest speed. With continued practice, participants' performance with Key-ButtonSwitch reached that of Layer-ButtonSwitch. Further, the results show that the key-level layout used in Key-ButtonSwitch led users to parallelize mode-switching and character-input operations, because this layout shows all characters on one layer. We distill three recommendations from these results that can help guide the design of text entry techniques for alphanumeric and special characters in VR.
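    The methods above are described only at the design level. As a rough illustration of the difference between layer-level and key-level mode switching, the sketch below contrasts a keyboard that swaps layers on a dedicated controller button press with one that keeps every character visible and lets a button press pick among the variants of the hovered key; all class, method, and type names are hypothetical and are not taken from the paper.

```typescript
// Hypothetical sketch of layer-level vs. key-level mode switching for a
// raycast VR Qwerty keyboard; names and structure are illustrative only.

type Layer = "lowercase" | "uppercase" | "numbers" | "symbols";

// Layer-level switching: one layer is visible at a time, and a dedicated
// controller button cycles layers instead of the ray hitting a shift/symbol key.
class LayerButtonSwitchKeyboard {
  private layer: Layer = "lowercase";
  private readonly order: Layer[] = ["lowercase", "uppercase", "numbers", "symbols"];

  onModeButton(): void {
    const next = (this.order.indexOf(this.layer) + 1) % this.order.length;
    this.layer = this.order[next];
  }

  // The ray selects a key slot; the committed character depends on the active layer.
  characterAt(slot: number, layout: Record<Layer, string[]>): string {
    return layout[this.layer][slot];
  }
}

// Key-level switching: every key displays all of its character variants at once,
// so pointing and mode switching can proceed in parallel; a button press picks
// which variant of the currently hovered key is committed.
class KeyButtonSwitchKeyboard {
  commit(hoveredKeyVariants: string[], modeButtonPresses: number): string {
    return hoveredKeyVariants[modeButtonPresses % hoveredKeyVariants.length];
  }
}
```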

    XAIR: A Framework of Explainable AI in Augmented Reality

    Explainable AI (XAI) has established itself as an important component of AI-driven interactive systems. With Augmented Reality (AR) becoming more integrated into daily life, the role of XAI also becomes essential in AR because end-users will frequently interact with intelligent services. However, it is unclear how to design effective XAI experiences for AR. We propose XAIR, a design framework that addresses "when", "what", and "how" to provide explanations of AI output in AR. The framework was based on a multi-disciplinary literature review of XAI and HCI research, a large-scale survey probing 500+ end-users' preferences for AR-based explanations, and three workshops with 12 experts collecting their insights about XAI design in AR. XAIR's utility and effectiveness were verified via a study with 10 designers and another study with 12 end-users. XAIR can provide guidelines for designers, inspiring them to identify new design opportunities and achieve effective XAI designs in AR.
    Comment: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
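    XAIR is presented as a design framework rather than an API, so the sketch below is only a hypothetical data structure capturing the "when", "what", and "how" dimensions named in the abstract; every field and value set beyond those three dimensions is an assumption for illustration.

```typescript
// Hypothetical data structure for the "when / what / how" dimensions that XAIR
// addresses; all field names and value sets below are assumptions, not the framework itself.

interface ExplanationDecision {
  when: {
    trigger: "user-requested" | "system-initiated"; // when an explanation is surfaced
    show: boolean;
  };
  what: {
    content: "why" | "why-not" | "confidence" | "data-source"; // what the explanation conveys
  };
  how: {
    modality: "text" | "visual-overlay" | "audio"; // how it is presented in AR
    placement: "world-anchored" | "head-locked";
  };
}

// Example: a system-initiated confidence explanation rendered as a world-anchored overlay.
const example: ExplanationDecision = {
  when: { trigger: "system-initiated", show: true },
  what: { content: "confidence" },
  how: { modality: "visual-overlay", placement: "world-anchored" },
};
```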
