
    Nomadic input on mobile devices: the influence of touch input technique and walking speed on performance and offset modeling

    In everyday life, people use their mobile phones on the go, at different walking speeds and with different touch input techniques. Unfortunately, much of the published research in mobile interaction does not quantify the influence of these variables. In this paper, we analyze the influence of walking speed, gait pattern, and input technique on commonly used performance parameters such as error rate, accuracy, and tapping speed, and we compare the results to the static condition. We examine the influence of these factors on the machine-learned offset model used to correct user input, and we make design recommendations. The results show that all performance parameters degraded when the subject started to move, for all input techniques. Index-finger pointing techniques demonstrated better overall performance than thumb-pointing techniques. The influence of gait phase on tap event likelihood and accuracy was demonstrated for all input techniques and all walking speeds. Finally, it was shown that the offset model built on static data did not perform as well as models inferred from dynamic data, indicating that the models are speed-specific. Likewise, models identified using one input technique did not perform well when tested in other conditions, demonstrating that the validity of an offset model is limited to a particular input technique. The model was therefore calibrated using data recorded with the appropriate input technique at 75% of preferred walking speed, the speed to which users spontaneously slow down when they use a mobile device and which presents a tradeoff between accuracy and usability. This led to an increase in accuracy compared to models built on static data: the error rate was reduced by between 0.05% and 5.3% for landscape-based methods and between 5.3% and 11.9% for portrait-based methods.
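    Concretely, the offset-model idea above can be illustrated in a few lines of code. This is a minimal sketch, not the paper's actual model: it assumes a plain per-axis linear regression from recorded tap positions to tap offsets (published offset models are often Gaussian-process based), and the calibration points shown are invented for the example.

```python
# Minimal sketch of a touch-offset model: learn a correction from recorded
# tap positions to the positions the user intended, then apply it at runtime.
# Illustrative only; the paper's features and model family are not specified.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical calibration data: raw taps and the targets the user was
# actually aiming for, collected at 75% of preferred walking speed.
recorded = np.array([[105.0, 212.0], [340.0, 518.0], [98.0, 230.0]])
intended = np.array([[100.0, 220.0], [335.0, 530.0], [95.0, 240.0]])

# Fit the systematic offset (intended minus recorded) per axis.
model = LinearRegression().fit(recorded, intended - recorded)

def correct(tap_xy):
    """Shift a raw tap by its predicted systematic offset."""
    tap_xy = np.asarray(tap_xy, dtype=float).reshape(1, -1)
    return (tap_xy + model.predict(tap_xy)).ravel()

print(correct([110.0, 215.0]))  # corrected tap position
```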

    Typing performance of blind users: an analysis of touch behaviors, learning effect, and in-situ usage

    Non-visual text entry for people with visual impairments has focused mostly on comparing input techniques and reporting performance measures such as accuracy and speed. While researchers have established that non-visual input is slow and error-prone, there is little understanding of how to improve it. To develop a richer characterization of typing performance, we conducted a longitudinal study with five novice blind users. For eight weeks, we collected in-situ usage data and conducted weekly laboratory assessment sessions. This paper presents a thorough analysis of typing performance that goes beyond traditional aggregated measures of text entry and reports on character-level errors and touch measures. Our findings show that users improve over time, albeit at a slow rate (0.3 WPM per week). Substitutions are the most common type of error and have a significant impact on entry rates. In addition to text input data, we analyzed touch behaviors, looking at touch contact points, exploration movements, and lift positions. We provide insights on why and how performance improvements and errors occur. Finally, we derive implications that should inform the design of future virtual keyboards for non-visual input.
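    The character-level error analysis referred to above is typically done by aligning the presented and transcribed strings. The sketch below is one standard way to do it (the paper's exact tooling is not specified): an edit-distance table with a backtrace that classifies each edit as a substitution, insertion, or deletion.

```python
# Count character-level errors between presented and transcribed text
# via a Levenshtein alignment with backtrace. Illustrative sketch.
def char_errors(presented: str, transcribed: str) -> dict:
    m, n = len(presented), len(transcribed)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if presented[i - 1] == transcribed[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    # Walk back through the table to classify each edit operation.
    counts, i, j = {"sub": 0, "ins": 0, "del": 0}, m, n
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + (presented[i - 1] != transcribed[j - 1]):
            counts["sub"] += presented[i - 1] != transcribed[j - 1]
            i, j = i - 1, j - 1
        elif j > 0 and d[i][j] == d[i][j - 1] + 1:
            counts["ins"] += 1  # extra character in the transcription
            j -= 1
        else:
            counts["del"] += 1  # character missing from the transcription
            i -= 1
    return counts

print(char_errors("hello world", "hwllo wrld"))  # {'sub': 1, 'ins': 0, 'del': 1}
```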

    Modelling and correcting for the impact of the gait cycle on touch screen typing accuracy

    Walking and typing on a smartphone is an extremely common interaction. Previous research has shown that error rates are higher when walking than when stationary. In this paper we analyse the acceleration data logged in an experiment in which users typed whilst walking, and extract the gait phase angle. We find statistically significant relationships between tapping time, error rate and gait phase angle. We then use the gait phase as an additional input to an offset model, and show that this allows more accurate touch interaction for walking users than a model which considers only the recorded tap position.
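    As an illustration of the gait-phase idea, the sketch below extracts a continuous phase angle from a vertical-acceleration signal, assuming one common pipeline (a band-pass filter around the step frequency followed by a Hilbert transform); the paper's exact method may differ, and the signal here is synthetic.

```python
# Extract a gait phase angle from vertical acceleration: band-pass around
# typical step frequencies, then take the analytic signal's angle.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 100.0                       # accelerometer sample rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
acc_z = np.sin(2 * np.pi * 2.0 * t) + 0.3 * np.random.randn(t.size)  # fake 2 Hz gait

# Band-pass roughly 1-3 Hz, covering normal step frequencies.
b, a = butter(2, [1.0 / (fs / 2), 3.0 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, acc_z)

# Continuous gait phase in [-pi, pi]; each tap event can then be tagged
# with the phase at which it occurred and fed to the offset model.
phase = np.angle(hilbert(filtered))
print(phase[:5])
```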

    TapGazer: Text Entry with finger tapping and gaze-directed word selection


    Improving the Accuracy of Mobile Touchscreen QWERTY Keyboards

    In this thesis we explore alternative keyboard layouts in hopes of finding one that increases the accuracy of text input on mobile touchscreen devices. In particular, we investigate whether a single swap of two keys can significantly improve accuracy on mobile touchscreen QWERTY keyboards. We do so by carefully considering the placement of keys, exploiting a specific vulnerability of a keyboard layout: the placement of particular keys next to others may increase errors when typing. We simulate the act of typing on a mobile touchscreen QWERTY keyboard, beginning with a model of the typographical errors that can occur. We then construct a simple autocorrector using Bayesian methods, describing how we can autocorrect user input and evaluate the keyboard's ability to output the correct text. Using these models, we provide methods of testing and define a metric, the WAR rating, which gives us a way of comparing the accuracy of keyboard layouts. After running our tests on all 325 2-key-swap layouts against the original QWERTY layout, we show that more than one 2-key swap increases the accuracy of the current QWERTY layout, and that the best swap is i ↔ t, increasing accuracy by nearly 0.18 percent.
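    The Bayesian autocorrection scheme described above amounts to scoring each candidate word w by P(typed | w) · P(w) and picking the maximum. The snippet below is a toy illustration, not the thesis's model: the vocabulary, priors, key-adjacency table, and hit probability are all invented for the example.

```python
# Toy Bayesian autocorrector: the likelihood assumes each character is hit
# correctly with high probability and confused with an adjacent key otherwise.
ADJACENT = {"t": "ryfg", "i": "uok", "h": "gjyn"}   # tiny QWERTY neighborhood
VOCAB = {"hit": 0.4, "hot": 0.3, "hat": 0.3}        # word -> prior P(w)

def likelihood(typed: str, word: str, p_hit: float = 0.9) -> float:
    if len(typed) != len(word):
        return 0.0  # toy model: only same-length candidates
    p = 1.0
    for t_ch, w_ch in zip(typed, word):
        if t_ch == w_ch:
            p *= p_hit
        elif t_ch in ADJACENT.get(w_ch, ""):
            p *= (1 - p_hit) / max(len(ADJACENT.get(w_ch, "")), 1)
        else:
            p *= 1e-6  # non-adjacent confusions are very unlikely
    return p

def autocorrect(typed: str) -> str:
    # Maximum a posteriori word: argmax over P(typed | w) * P(w).
    return max(VOCAB, key=lambda w: likelihood(typed, w) * VOCAB[w])

print(autocorrect("hut"))  # 'u' neighbors 'i' on QWERTY -> "hit"
```

    Swapping two keys in such a simulation simply changes the adjacency table, which is what lets all 325 possible 2-key swaps be compared automatically.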

    Touchscreen Software Keyboard for Finger Typing


    Using Hover to Compromise the Confidentiality of User Input on Android

    We show that the new hover (floating touch) technology, available in a number of today's smartphone models, can be abused by any Android application running with the common SYSTEM_ALERT_WINDOW permission to record all touchscreen input into other applications. Leveraging this attack, a malicious application running on the system is able to profile the user's behavior and capture sensitive input such as passwords and PINs, as well as record all of the user's social interactions. To evaluate our attack we implemented Hoover, a proof-of-concept malicious application that runs in the background and records all input to foreground applications. We evaluated Hoover with 40 users, across two different Android devices and two input methods, stylus and finger. In the case of touchscreen input by finger, Hoover estimated the positions of users' clicks within an error of 100 pixels and keyboard input with an accuracy of 79%. Hoover captured users' input by stylus even more accurately, estimating users' clicks within 2 pixels and keyboard input with an accuracy of 98%. We discuss ways of mitigating this attack and show that this cannot be done by simply restricting access to permissions or imposing additional cognitive load on users, since this would significantly constrain the intended use of the hover technology.

    Multi-touch 3D Exploratory Analysis of Ocean Flow Models

    Modern ocean flow simulations are generating increasingly complex, multi-layer 3D ocean flow models. However, most researchers still use traditional 2D visualizations, viewing these models one slice at a time. Properly designed 3D visualization tools can be highly effective for revealing the complex, dynamic flow patterns and structures present in these models. However, the transition from visualizing ocean flow patterns in 2D to 3D presents many challenges, including occlusion and depth ambiguity. Further complications arise from the interaction methods required to navigate, explore, and interact with these 3D datasets. We present a system that employs a combination of stereoscopic rendering, to best reveal and illustrate 3D structures and patterns, and multi-touch interaction, to allow natural and efficient navigation and manipulation within the 3D environment. Exploratory visual analysis is facilitated through a highly interactive toolset that leverages a smart particle system. Multi-touch gestures allow users to quickly position dye-emitting tools within the 3D model. Finally, we illustrate the potential applications of our system through examples of real-world significance.
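    To make the "smart particle" dye mechanism concrete, the sketch below advects a puff of dye particles through a flow field with a Runge-Kutta step, the core operation such a toolset performs each frame. The flow function and parameters are illustrative assumptions, not the system's implementation (which would sample the 3D ocean-model grid).

```python
# Advect dye particles through a velocity field with RK4 integration.
# Illustrative 2D sketch with a toy swirling flow.
import numpy as np

def flow(p):
    """Toy 2D swirling flow; a real system samples the ocean-model grid."""
    x, y = p[:, 0], p[:, 1]
    return np.stack([-y, x], axis=1)

def advect(particles, dt=0.01, steps=100):
    """Step particle positions forward with 4th-order Runge-Kutta."""
    for _ in range(steps):
        k1 = flow(particles)
        k2 = flow(particles + 0.5 * dt * k1)
        k3 = flow(particles + 0.5 * dt * k2)
        k4 = flow(particles + dt * k3)
        particles = particles + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return particles

emitter = np.array([[1.0, 0.0]]) + 0.01 * np.random.randn(50, 2)  # dye puff
print(advect(emitter).mean(axis=0))  # puff center after advection
```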