5,380 research outputs found

    Using Pinch Gloves(TM) for both Natural and Abstract Interaction Techniques in Virtual Environments

    Usable three-dimensional (3D) interaction techniques are difficult to design, implement, and evaluate. One reason for this is a poor understanding of the advantages and disadvantages of the wide range of 3D input devices, and of the mapping between input devices and interaction techniques. We present an analysis of Pinch Gloves™ and their use as input devices for virtual environments (VEs). We have developed a number of novel and usable interaction techniques for VEs using the gloves, including a menu system, a technique for text input, and a two-handed navigation technique. User studies have indicated the usability and utility of these techniques.

    Mapping Tasks to Interactions for Graph Exploration and Graph Editing on Interactive Surfaces

    Graph exploration and editing are still mostly considered independently, and existing systems are not designed for today's interactive surfaces such as smartphones, tablets, or tabletops. When developing a system for those modern devices that supports both graph exploration and graph editing, it is necessary to 1) identify the basic tasks that need to be supported, 2) determine which interactions can be used, and 3) map these tasks to interactions. This technical report provides a list of basic interaction tasks for graph exploration and editing as a result of an extensive system review. Moreover, different interaction modalities of interactive surfaces are reviewed according to their interaction vocabulary, and further degrees of freedom that can be used to make interactions distinguishable are discussed. Beyond the scope of graph exploration and editing, we provide an approach for finding and evaluating a mapping from tasks to interactions that is generally applicable. Thus, this work acts as a guideline for developing a system for graph exploration and editing that is specifically designed for interactive surfaces.
    Comment: 21 pages, minor corrections (typos etc.)

    Nomadic input on mobile devices: the influence of touch input technique and walking speed on performance and offset modeling

    In everyday life people use their mobile phones on-the-go, with different walking speeds and with different touch input techniques. Unfortunately, much of the published research in mobile interaction does not quantify the influence of these variables. In this paper, we analyze the influence of walking speed, gait pattern, and input technique on commonly used performance parameters like error rate, accuracy, and tapping speed, and we compare the results to the static condition. We examine the influence of these factors on the machine-learned offset model used to correct user input, and we make design recommendations. The results show that all performance parameters degraded when the subject started to move, for all input techniques. Index-finger pointing techniques demonstrated overall better performance than thumb-pointing techniques. The influence of gait phase on tap event likelihood and accuracy was demonstrated for all input techniques and all walking speeds. Finally, it was shown that the offset model built on static data did not perform as well as models inferred from dynamic data, which indicates the speed-specific nature of the models. Also, models identified using specific input techniques did not perform well when tested in other conditions, demonstrating the limited validity of offset models beyond a particular input technique. The model was therefore calibrated using data recorded with the appropriate input technique, at 75% of preferred walking speed, which is the speed to which users spontaneously slow down when they use a mobile device and which presents a trade-off between accuracy and usability. This led to an increase in accuracy compared to models built on static data. The error rate was reduced by between 0.05% and 5.3% for landscape-based methods and by between 5.3% and 11.9% for portrait-based methods.
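The offset-model idea described above can be sketched as a regression that maps recorded tap positions to corrections. This is a minimal illustration with synthetic data, not the authors' implementation: the linear model form and the systematic tap shift are assumptions, and real models would be fit per walking speed and input technique as the abstract notes.

```python
import numpy as np

def fit_offset_model(taps, targets):
    """Fit a linear offset model: predicted target = [x, y, 1] @ A.

    A simple stand-in for a machine-learned touch offset model; the
    abstract's models are condition-specific (walking speed, technique).
    """
    X = np.hstack([taps, np.ones((len(taps), 1))])  # add bias column
    A, *_ = np.linalg.lstsq(X, targets, rcond=None)  # least-squares fit
    return A

def correct(taps, A):
    """Apply the fitted correction to raw tap positions."""
    X = np.hstack([taps, np.ones((len(taps), 1))])
    return X @ A

# Synthetic data: taps land systematically shifted from intended targets
rng = np.random.default_rng(0)
targets = rng.uniform(0, 100, size=(200, 2))
taps = targets + np.array([3.0, -5.0]) + rng.normal(0, 0.5, size=(200, 2))

A = fit_offset_model(taps, targets)
corrected = correct(taps, A)
err_before = np.mean(np.linalg.norm(taps - targets, axis=1))
err_after = np.mean(np.linalg.norm(corrected - targets, axis=1))
print(f"mean error before: {err_before:.2f}, after: {err_after:.2f}")
```

On this synthetic data the correction removes the systematic shift, leaving only sensor noise; the study's finding is that such a model degrades when applied to a walking speed or input technique it was not trained on.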

    Human-computer interaction in ubiquitous computing environments

    Purpose – The purpose of this paper is to explore characteristics of human-computer interaction when the human body and its movements become input for interaction and interface control in pervasive computing settings. Design/methodology/approach – The paper quantifies the performance of human movement based on Fitts' Law and discusses some of the human factors and technical considerations that arise in trying to use human body movements as an input medium. Findings – The paper finds that new interaction technologies utilising human movements may provide more flexible, naturalistic interfaces and support the ubiquitous or pervasive computing paradigm. Practical implications – In pervasive computing environments the challenge is to create intuitive and user-friendly interfaces. Application domains that may utilize human body movements as input are surveyed here, and the paper addresses issues such as culture, privacy, security, and ethics raised by interaction styles based on the movement of a user's body. Originality/value – The paper describes the utilization of human body movements as input for interaction and interface control in pervasive computing settings.

    Can Virtual Reality Protect Users from Keystroke Inference Attacks?

    Virtual Reality (VR) has gained popularity by providing immersive and interactive experiences without geographical limitations. It also provides a sense of personal privacy through physical separation. In this paper, we show that despite assumptions of enhanced privacy, VR is unable to shield its users from side-channel attacks that steal private information. Ironically, this vulnerability arises from VR's greatest strength, its immersive and interactive nature. We demonstrate this by designing and implementing a new set of keystroke inference attacks in shared virtual environments, where an attacker (VR user) can recover the content typed by another VR user by observing their avatar. While the avatar displays noisy telemetry of the user's hand motion, an intelligent attacker can use that data to recognize typed keys and reconstruct typed content, without knowing the keyboard layout or gathering labeled data. We evaluate the proposed attacks using IRB-approved user studies across multiple VR scenarios. For 13 out of 15 tested users, our attacks accurately recognize 86%-98% of typed keys, and the recovered content retains up to 98% of the meaning of the original typed content. We also discuss potential defenses.
    Comment: Accepted by USENIX 202

    Multi-touch 3D Exploratory Analysis of Ocean Flow Models

    Modern ocean flow simulations are generating increasingly complex, multi-layer 3D ocean flow models. However, most researchers are still using traditional 2D visualizations to visualize these models one slice at a time. Properly designed 3D visualization tools can be highly effective for revealing the complex, dynamic flow patterns and structures present in these models. However, the transition from visualizing ocean flow patterns in 2D to 3D presents many challenges, including occlusion and depth ambiguity. Further complications arise from the interaction methods required to navigate, explore, and interact with these 3D datasets. We present a system that employs a combination of stereoscopic rendering, to best reveal and illustrate 3D structures and patterns, and multi-touch interaction, to allow for natural and efficient navigation and manipulation within the 3D environment. Exploratory visual analysis is facilitated through the use of a highly interactive toolset which leverages a smart particle system. Multi-touch gestures allow users to quickly position dye-emitting tools within the 3D model. Finally, we illustrate the potential applications of our system through examples of real-world significance.

    Text Entry in Immersive Head-Mounted Display-Based Virtual Reality Using Standard Keyboards

    We study the performance and user experience of two popular mainstream text entry devices, desktop keyboards and touchscreen keyboards, for use in Virtual Reality (VR) applications. We discuss the limitations arising from limited visual feedback, and examine the efficiency of different strategies of use. We analyze a total of 24 hours of typing data in VR from 24 participants and find that novice users are able to retain about 60% of their typing speed on a desktop keyboard and about 40-45% of their typing speed on a touchscreen keyboard. We also find no significant learning effects, indicating that users can quickly transfer their typing skills into VR. Besides investigating baseline performance, we study the position in which keyboards and hands are rendered in space. We find that this does not adversely affect performance for desktop keyboard typing, and results in a performance trade-off for touchscreen keyboard typing.

    Ability-Based Methods for Personalized Keyboard Generation

    This study introduces an ability-based method for personalized keyboard generation, wherein an individual's own movement and human-computer interaction data are used to automatically compute a personalized virtual keyboard layout. Our approach integrates a multidirectional point-select task to characterize cursor control over time, distance, and direction. The characterization is automatically employed to develop a computationally efficient keyboard layout that prioritizes each user's movement abilities through capturing directional constraints and preferences. We evaluated our approach in a study involving 16 participants using inertial sensing and facial electromyography as an access method, resulting in significantly increased communication rates using the personalized keyboard (52.0 bits/min) when compared to a generically optimized keyboard (47.9 bits/min). Our results demonstrate the ability to effectively characterize an individual's movement abilities to design a personalized keyboard for improved communication. This work underscores the importance of integrating a user's motor abilities when designing virtual interfaces.
    Comment: 20 pages, 7 figure
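The bits/min figures reported above can be understood via a common information-transfer metric for selection interfaces: each selection among N equally likely targets conveys log2(N) bits. The function below is a simplified sketch of that metric, not the study's exact formulation, which may weight targets by frequency or account for errors; the key counts and durations in the example are invented for illustration.

```python
import math

def communication_rate_bpm(n_selections, n_targets, duration_s):
    """Communication rate in bits/min, assuming each selection among
    n_targets equally likely keys conveys log2(n_targets) bits.
    A simplified metric; real formulations often adjust for target
    frequency and selection errors."""
    bits = n_selections * math.log2(n_targets)
    return bits * 60.0 / duration_s

# Hypothetical session: 30 selections on a 28-key layout in 3 minutes
rate = communication_rate_bpm(n_selections=30, n_targets=28, duration_s=180)
print(f"{rate:.1f} bits/min")
```

Under this simple model, raising the rate from 47.9 to 52.0 bits/min, as the personalized layout does, corresponds to conveying the same information in about 8% less time.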