5,104 research outputs found

    Nomadic input on mobile devices: the influence of touch input technique and walking speed on performance and offset modeling

    In everyday life, people use their mobile phones on the go, at different walking speeds and with different touch input techniques. Unfortunately, much of the published research in mobile interaction does not quantify the influence of these variables. In this paper, we analyze the influence of walking speed, gait pattern, and input technique on commonly used performance parameters such as error rate, accuracy, and tapping speed, and we compare the results to the static condition. We examine the influence of these factors on the machine-learned offset model used to correct user input, and we make design recommendations. The results show that all performance parameters degraded when the subject started to move, for all input techniques. Index-finger pointing techniques demonstrated better overall performance than thumb pointing techniques. The influence of gait phase on tap-event likelihood and accuracy was demonstrated for all input techniques and all walking speeds. Finally, the offset model built on static data did not perform as well as models inferred from dynamic data, indicating that the models are speed-specific. Likewise, models identified using one input technique did not perform well when tested under other conditions, demonstrating that an offset model is only valid for a particular input technique. The model was therefore calibrated using data recorded with the appropriate input technique at 75% of preferred walking speed, the speed to which users spontaneously slow down when they use a mobile device and which presents a tradeoff between accuracy and usability. This led to an increase in accuracy compared to models built on static data: the error rate was reduced by between 0.05% and 5.3% for landscape-based methods and between 5.3% and 11.9% for portrait-based methods.
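
    The abstract does not reproduce the model itself, but a touch-offset model of this kind is typically a regression from recorded tap coordinates to the offset between the tap and the intended target. A minimal sketch of that idea, with illustrative data and scikit-learn assumed (the paper's actual model and features may differ):

```python
# Minimal sketch of a touch-offset model: a regression that predicts the
# (dx, dy) offset between a recorded tap and the intended target, so the
# prediction can be subtracted from future taps. Data and names are
# illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression

# Training data collected under one condition (input technique + walking speed).
taps = np.array([[102.0, 340.0], [98.0, 338.0], [210.0, 120.0]])      # recorded (x, y)
targets = np.array([[100.0, 330.0], [100.0, 330.0], [205.0, 115.0]])  # intended (x, y)

offsets = taps - targets                      # systematic error to learn
model = LinearRegression().fit(taps, offsets)

def correct(tap_xy):
    """Subtract the predicted offset from a raw tap position."""
    tap = np.asarray(tap_xy, dtype=float).reshape(1, -1)
    return (tap - model.predict(tap)).ravel()

print(correct([101.0, 339.0]))  # corrected tap, closer to the intended target
```

    The abstract's transfer finding then corresponds to fitting `model` on taps from one condition (e.g., standing still) and evaluating `correct` on taps from another (e.g., walking), where the learned offsets no longer apply.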

    Multi-Touch for General-Purpose Computing: An Examination of Text Entry

    In recent years, multi-touch has been heralded as a revolution in human-computer interaction. Multi-touch provides features such as gestural interaction, tangible interfaces, pen-based computing, and interface customization – features embraced by an increasingly tech-savvy public. However, multi-touch platforms have not been adopted as everyday computer interaction devices; that is, multi-touch has not been applied to general-purpose computing. The questions this thesis seeks to address are: Will the general public adopt these systems as their chief interaction paradigm? Can multi-touch provide such a compelling platform that it displaces the desktop mouse and keyboard? Is multi-touch truly the next revolution in human-computer interaction? As a first step toward answering these questions, we observe that general-purpose computing relies on text input, and ask: Can multi-touch, without a text entry peripheral, provide a platform for efficient text entry? And, by extension, is such a platform viable for general-purpose computing? We investigate these questions through four user studies that collected objective and subjective data for text entry and word processing tasks. The first of these studies establishes a benchmark for text entry performance on a multi-touch platform across a variety of input modes. The second study attempts to improve this performance by examining an alternate input technique. The third and fourth studies include mouse-style interaction for formatting rich text on a multi-touch platform, in the context of a word processing task. These studies establish a foundation for future efforts in general-purpose computing on a multi-touch platform. Furthermore, this work details deficiencies in tactile feedback with modern multi-touch platforms and describes an exploration of audible feedback. Finally, the thesis conveys a vision for a general-purpose multi-touch platform, its design, and its rationale.
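
    The abstract does not state which measures its text entry benchmark uses, but such studies conventionally report words per minute and an edit-distance-based error rate. A sketch of those standard definitions (Soukoreff & MacKenzie-style formulas assumed, not taken from the thesis):

```python
# Conventional text-entry metrics: words per minute (a "word" fixed at five
# characters) and uncorrected error rate via minimum string distance.

def wpm(transcribed: str, seconds: float) -> float:
    """Words per minute over the trial duration."""
    return ((len(transcribed) - 1) / seconds) * 60.0 / 5.0

def msd(a: str, b: str) -> int:
    """Minimum string distance (Levenshtein) between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def error_rate(presented: str, transcribed: str) -> float:
    """Uncorrected error rate: edit distance over the longer string's length."""
    return msd(presented, transcribed) / max(len(presented), len(transcribed))

print(wpm("the quick brown fox", 12.0))      # 18.0 WPM
print(error_rate("the quick", "teh quick"))  # 2 edits / 9 chars ~= 0.22
```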

    Facilitating Keyboard Use While Wearing a Head-Mounted Display

    Virtual reality (VR) headsets are becoming more common and will require evolving input mechanisms to support a growing range of applications. Because VR devices require users to wear head-mounted displays, accommodations must be made to support specific input devices. One such device, the keyboard, is a useful tool for text entry, but many users need assistance to use a keyboard while wearing a head-mounted display. Developers have explored new mechanisms to overcome the challenges of text entry in virtual reality. Several games have toyed with the idea of using motion controllers as a text entry mechanism; however, few investigations have examined how to assist users in using a physical keyboard while wearing a head-mounted display. As an alternative to controller-based text input, I propose a software tool to facilitate the use of a physical keyboard in virtual reality. Using computer vision, a user's hands can be projected into the virtual world. With the ability to see the location of their hands relative to the keyboard, users can type despite the obstruction caused by the head-mounted display (HMD). The viability of this approach was tested, and the tool was released as a plugin for the Unity development platform. The potential uses for the plugin go beyond text entry, and the project can be expanded to cover many physical input devices.
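
    The plugin itself targets Unity, but the core idea, segmenting the user's hands out of a camera frame and compositing them over the rendered scene, can be sketched independently. A minimal OpenCV example assuming simple color-threshold segmentation (the plugin's actual method may differ):

```python
# Minimal sketch of the compositing idea: segment the hands from a camera
# frame and overlay them on the rendered virtual scene, so the user can see
# their hands relative to the keyboard. Color-threshold segmentation is an
# assumption made here for brevity; a production tool would use a more
# robust hand segmenter.
import cv2
import numpy as np

def overlay_hands(camera_frame: np.ndarray, scene_frame: np.ndarray) -> np.ndarray:
    """Return scene_frame with skin-colored regions of camera_frame pasted in."""
    ycrcb = cv2.cvtColor(camera_frame, cv2.COLOR_BGR2YCrCb)
    # Rough skin-tone range in YCrCb; would need per-user calibration in practice.
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    out = scene_frame.copy()
    out[mask > 0] = camera_frame[mask > 0]
    return out

# Usage: run each camera frame and the current VR render through the overlay.
cam = np.zeros((480, 640, 3), np.uint8)    # stand-ins for real frames
scene = np.zeros((480, 640, 3), np.uint8)
composited = overlay_hands(cam, scene)
```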

    TapGazer: Text Entry with Finger Tapping and Gaze-Directed Word Selection

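    No abstract accompanies this entry, but the title describes ambiguous finger-tap input resolved by gaze-directed word selection. A speculative sketch of the tap-to-candidate step, with invented finger-to-letter groups and a toy lexicon (not the paper's actual layout or dictionary):

```python
# Speculative sketch of ambiguous tap-based text entry: each finger covers a
# group of letters, so one tap sequence matches several words, and the user
# would pick the intended word (in TapGazer, via gaze). Groups and lexicon
# below are invented for illustration only.
GROUPS = {"a": 1, "b": 1, "c": 1, "d": 1, "e": 1, "f": 1,
          "g": 2, "h": 2, "i": 2, "j": 2, "k": 2, "l": 2,
          "m": 3, "n": 3, "o": 3, "p": 3, "q": 3, "r": 3, "s": 3,
          "t": 4, "u": 4, "v": 4, "w": 4, "x": 4, "y": 4, "z": 4}
LEXICON = ["the", "toe", "tie", "ten", "this", "than"]

def tap_code(word: str) -> tuple:
    """The finger-group sequence a word's letters would produce."""
    return tuple(GROUPS[c] for c in word)

def candidates(taps: tuple) -> list:
    """All lexicon words whose tap codes match the observed tap sequence."""
    return [w for w in LEXICON if tap_code(w) == taps]

print(candidates(tap_code("the")))  # ['the', 'tie']: the set gaze would resolve
```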

    The Effect of Tactile and Audio Feedback in Handheld Mobile Text Entry

    The effects of tactile and audio feedback are examined in the context of touchscreen and mobile use. Prior experimental research is graphically summarized by task type (handheld text entry, tabletop text entry, non-text input), tactile feedback type (active, passive), and significant findings, revealing a research gap in evaluating passive tactile feedback in handheld text entry (a.k.a. texting). A passive custom tactile overlay is evaluated in a new experiment in which 24 participants perform a handheld text entry task on an iPhone under four tactile and audio feedback conditions, with measures of text entry speed and accuracy. Results indicate that audio feedback produces better performance, while the tactile overlay degrades performance, consistent with the reviewed literature. Contrary to previous findings, the combined feedback condition did not produce improved performance. Findings are discussed in light of the skill-based behavior and feed-forward control principles described by Gibson (1966) and Rasmussen (1983).
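
    For a concrete picture of how results from such a 2x2 feedback design are typically summarized per condition, here is a small sketch with placeholder numbers (pandas assumed; the study's actual data and analysis are not reproduced in the abstract):

```python
# Illustrative per-condition summary for a 2x2 design like the one described:
# tactile (on/off) x audio (on/off), with speed and error rate per trial.
# The numbers below are placeholders, not the study's data.
import pandas as pd

trials = pd.DataFrame({
    "tactile": ["off", "off", "on", "on"] * 2,
    "audio":   ["off", "on", "off", "on"] * 2,
    "wpm":     [31.0, 34.5, 29.2, 30.1, 30.4, 35.2, 28.8, 31.0],
    "error":   [0.041, 0.032, 0.048, 0.044, 0.039, 0.030, 0.051, 0.042],
})

# Mean speed and error rate for each of the four feedback conditions.
summary = trials.groupby(["tactile", "audio"])[["wpm", "error"]].mean()
print(summary)
```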

    Efficient tongue-computer interfacing for people with upper-limb impairments
