
    Nomadic input on mobile devices: the influence of touch input technique and walking speed on performance and offset modeling

    In everyday life, people use their mobile phones on the go, at different walking speeds and with different touch input techniques. Unfortunately, much of the published research in mobile interaction does not quantify the influence of these variables. In this paper, we analyze the influence of walking speed, gait pattern, and input technique on commonly used performance parameters such as error rate, accuracy, and tapping speed, and we compare the results to the static condition. We examine the influence of these factors on the machine-learned offset model used to correct user input, and we make design recommendations. The results show that all performance parameters degraded once the subject started to move, for all input techniques. Index-finger pointing techniques demonstrated better overall performance than thumb-pointing techniques. The influence of gait phase on tap-event likelihood and accuracy was demonstrated for all input techniques and all walking speeds. Finally, the offset model built on static data did not perform as well as models inferred from dynamic data, indicating that the models are speed-specific. Likewise, models identified using one input technique did not perform well when tested in other conditions, demonstrating that offset models are only valid for the input technique they were trained on. The model was therefore calibrated using data recorded with the appropriate input technique at 75% of preferred walking speed, which is the speed to which users spontaneously slow down when they use a mobile device and which presents a tradeoff between accuracy and usability. This led to an increase in accuracy compared to models built on static data: the error rate was reduced by between 0.05% and 5.3% for landscape-based methods and between 5.3% and 11.9% for portrait-based methods.
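    The abstract does not specify the form of the offset model; a minimal sketch of one common formulation, a linear regression from raw tap coordinates to the tap-to-target offset, fitted separately per walking-speed and input-technique condition, with all data values hypothetical:

```python
import numpy as np

# Hypothetical calibration data: recorded tap positions and the centers of
# the targets the user was aiming for (screen coordinates in pixels),
# collected under one walking-speed / input-technique condition.
taps = np.array([[105.0, 210.0], [300.5, 412.0], [512.0, 640.0], [220.0, 180.0]])
targets = np.array([[100.0, 200.0], [298.0, 405.0], [505.0, 632.0], [216.0, 174.0]])

# The model predicts how far a tap lands from the intended target.
offsets = targets - taps

# Linear offset model: offset = [x, y, 1] @ W, fitted by least squares.
X = np.hstack([taps, np.ones((len(taps), 1))])
W, *_ = np.linalg.lstsq(X, offsets, rcond=None)

def correct(tap_xy):
    """Apply the learned offset to a raw tap to estimate the intended point."""
    x, y = tap_xy
    return np.asarray(tap_xy) + np.array([x, y, 1.0]) @ W
```

    The paper's finding that condition-specific models generalize poorly implies one such model per input technique and walking speed, calibrated at 75% of preferred walking speed.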

    Does emotion influence the use of auto-suggest during smartphone typing?

    Typing-based interfaces are common across many mobile applications, especially messaging apps. To reduce the difficulty of typing on the space-constrained keyboards of smartphones and smartwatches, several techniques such as auto-complete and auto-suggest have been implemented. Although helpful, these techniques place additional cognitive load on the user. Hence, beyond improving the word recommendations themselves, it is useful to understand the patterns of auto-suggest use during typing. Among the several factors that may influence use of auto-suggest, the role of emotion has been mostly overlooked, often due to the difficulty of unobtrusively inferring emotion. With advances in affective computing and the ability to infer users' emotional states accurately, it is imperative to investigate how auto-suggest can be guided by emotion-aware decisions. In this work, we investigate correlations between user emotion and usage of auto-suggest, i.e., whether users prefer to use auto-suggest in specific emotional states. We developed an Android keyboard application that records auto-suggest usage and collects emotion self-reports from users in a 3-week in-the-wild study. Analysis of the dataset reveals a relationship between users' reported emotional states and their use of auto-suggest. We used the data to train personalized models for predicting use of auto-suggest in specific emotional states. The models predict use of auto-suggest with an average AUCROC of 82%, showing the feasibility of emotion-aware auto-suggestion.
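    The paper's feature set and learner are not detailed in this abstract, so the following is only a sketch of the general shape of such a personalized model: a per-user logistic-regression classifier over hypothetical emotion self-report features, evaluated with AUCROC as in the study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical per-user data: one row per typing session, with self-reported
# emotion (valence, arousal) and a session feature (typing speed); the label
# is whether the user accepted auto-suggestions in that session.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                       # [valence, arousal, speed]
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)        # one model per user
print(f"AUCROC: {roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]):.2f}")
```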

    Improving everyday computing tasks with head-mounted displays

    The proliferation of consumer-affordable head-mounted displays (HMDs) has brought a rash of entertainment applications for this burgeoning technology, but relatively little research has been devoted to exploring its potential home and office productivity applications. Can the unique characteristics of HMDs be leveraged to improve users' ability to perform everyday computing tasks? My work strives to explore this question. One significant obstacle to using HMDs for everyday tasks is the fact that the real world is occluded while wearing them. Physical keyboards remain the most performant devices for text input, yet using a physical keyboard is difficult when the user cannot see it. I developed a system for aiding users typing on physical keyboards while wearing HMDs and performed a user study demonstrating the efficacy of my system. Building on this foundation, I developed a window manager optimized for use with HMDs and conducted a user survey to gather feedback. This survey provided evidence that HMD-optimized window managers can provide advantages that are difficult or impossible to achieve with standard desktop monitors. Participants also provided suggestions for improvements and extensions to future versions of this window manager. I explored the issue of distance compression, wherein users tend to underestimate distances in virtual environments relative to the real world; this could be problematic for window managers or other productivity applications seeking to leverage the depth dimension through stereoscopy. I also investigated a mitigation technique for distance compression called minification. I conducted multiple user studies, providing evidence that minification makes users' distance judgments in HMDs more accurate without causing detrimental perceptual side effects; this work also provided valuable insight into the human perceptual system. Taken together, this work represents valuable steps toward leveraging HMDs for everyday home and office productivity applications: I developed functioning software for this purpose, demonstrated its efficacy through multiple user studies, and gathered feedback for future directions by having participants use this software in simulated productivity tasks.
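    Minification, as studied in the distance-compression literature, renders the scene with a geometric field of view wider than the display's physical field of view, so objects subtend smaller visual angles. A minimal sketch assuming an OpenGL-style projection; the minification factor shown is illustrative, not the value evaluated in this work:

```python
import math

def perspective(fovy_deg, aspect, near, far):
    """OpenGL-style perspective projection matrix (row-major)."""
    f = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
    return [[f / aspect, 0.0, 0.0, 0.0],
            [0.0, f, 0.0, 0.0],
            [0.0, 0.0, (far + near) / (near - far), 2 * far * near / (near - far)],
            [0.0, 0.0, -1.0, 0.0]]

display_fov = 60.0   # the HMD's physical (display) vertical field of view
minification = 1.2   # hypothetical factor; values > 1 shrink apparent size

# Rendering with a geometric FOV wider than the display FOV minifies the
# imagery, counteracting the tendency to underestimate distances.
geometric_fov = 2 * math.degrees(
    math.atan(minification * math.tan(math.radians(display_fov / 2))))
projection = perspective(geometric_fov, aspect=1.0, near=0.1, far=100.0)
```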

    WiseType: a tablet keyboard with color-coded visualization and various editing options for error correction

    To address the problem of improving text entry accuracy on mobile devices, we present a new tablet keyboard that offers both immediate and delayed feedback on language quality through auto-correction, prediction, and grammar checking. We combine different visual representations for grammar and spelling errors, accepted predictions, and auto-corrections, and also support interactive swiping/tapping features and improved interaction with previous errors, predictions, and auto-corrections. Additionally, we added smart error-correction features to the system to reduce both the overhead of correcting errors and the number of operations required. We designed our new input method with an iterative, user-centered approach through multiple pilot studies. We conducted a lab-based study with a refined experimental methodology and found that WiseType outperforms a standard keyboard in terms of both text entry speed and error rate. The study shows that color-coded text background highlighting and underlining of potential mistakes, in combination with fast correction methods, can improve both writing speed and accuracy.
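    The exact visual design is not specified in this abstract; a sketch of the underlying idea, mapping each feedback category to a distinct background color and underline style over annotated text spans, with all concrete colors hypothetical:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Feedback(Enum):
    SPELLING_ERROR = auto()
    GRAMMAR_ERROR = auto()
    ACCEPTED_PREDICTION = auto()
    AUTO_CORRECTION = auto()

# Illustrative style table: the paper combines background highlighting with
# underlining per category; these particular colors are placeholders.
STYLES = {
    Feedback.SPELLING_ERROR:      {"background": "#ffd6d6", "underline": "wavy"},
    Feedback.GRAMMAR_ERROR:       {"background": "#d6e4ff", "underline": "wavy"},
    Feedback.ACCEPTED_PREDICTION: {"background": "#d6ffd6", "underline": None},
    Feedback.AUTO_CORRECTION:     {"background": "#fff3c4", "underline": "solid"},
}

@dataclass
class Span:
    start: int       # character offset into the text buffer
    end: int
    kind: Feedback

def render_styles(spans):
    """Resolve each annotated span to its visual treatment."""
    return [(s.start, s.end, STYLES[s.kind]) for s in spans]
```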

    Human factors in instructional augmented reality for intravehicular spaceflight activities, and how gravity influences the setup of interfaces operated by direct object selection

    In human spaceflight, advanced user interfaces are becoming an interesting means of facilitating human-machine interaction, both enhancing intravehicular space operations and helping ensure their correct sequence. Efforts to ease such operations have shown strong interest in novel human-computer interaction techniques such as Augmented Reality (AR). The work presented in this thesis is directed towards a user-driven design for AR-assisted space operations, iteratively solving issues arising from the problem space, which also includes consideration of the effect of altered gravity on handling such interfaces.

    Predicting and Reducing the Impact of Errors in Character-Based Text Entry

    This dissertation focuses on the effect of errors in character-based text entry techniques. The effect of errors is targeted from theoretical, behavioral, and practical standpoints. The document starts with a review of the existing literature. It then presents the results of a user study that investigated the effect of different error-correction conditions on popular text entry performance metrics. Results showed that the way errors are handled has a significant effect on all frequently used error metrics. The outcomes also provided an understanding of how users notice and correct errors. Building on this, the dissertation presents a new high-level, method-agnostic model for predicting the cost of error correction with a given text entry technique. Unlike existing models, it accounts for both human and system factors and is general enough to be used with most character-based techniques. A user study verified the model by measuring the effects of a faulty keyboard on text entry performance. The work then explores potential user adaptation to a gesture recognizer's misrecognitions in two user studies. Results revealed that users gradually adapt to misrecognition errors by replacing the erroneous gestures with alternative ones, if available, and that users adapt to a frequently misrecognized gesture faster if it occurs more often than other error-prone gestures. Finally, this work presents a new hybrid approach to simulate pressure detection on standard touchscreens, combining the existing touch-point- and time-based methods. Results of two user studies showed that it can simulate pressure detection reliably for at least two pressure levels: regular (~1 N) and extra (~3 N). A new pressure-based text entry technique is then presented that does not require tapping outside the virtual keyboard to reject an incorrect or unwanted prediction; instead, the user applies extra pressure for the tap on the next target key. The performance of the new technique was compared with the conventional technique in a user study. Results showed that for inputting short English phrases with 10% non-dictionary words, the new technique increases entry speed by 9% and decreases error rates by 25%, and most users (83%) favor the new technique over the conventional one. Together, the research presented in this dissertation gives more insight into how errors affect text entry and presents improved text entry methods.
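    The hybrid pressure-simulation approach combines the two signal families named in the abstract, touch point (contact size) and time (dwell). A minimal sketch with hypothetical thresholds and a conjunctive decision rule; the dissertation's actual rule and calibration may differ:

```python
from dataclasses import dataclass

@dataclass
class Tap:
    touch_size: float    # normalized contact-area reading from the touchscreen
    duration_ms: float   # time from touch-down to touch-up

# Hypothetical thresholds; in practice these would be calibrated per user.
SIZE_THRESHOLD = 0.45
DURATION_THRESHOLD_MS = 180.0

def pressure_level(tap: Tap) -> str:
    """Classify a tap as 'regular' (~1 N) or 'extra' (~3 N) pressure.

    Treats a tap as extra pressure only when both the touch-point and the
    time signal agree: a harder press typically yields a larger contact
    area and a longer dwell on a standard capacitive touchscreen.
    """
    hard = (tap.touch_size >= SIZE_THRESHOLD
            and tap.duration_ms >= DURATION_THRESHOLD_MS)
    return "extra" if hard else "regular"
```

    Under the proposed text entry technique, an "extra" tap on the next target key rejects the pending prediction, removing the need to tap outside the keyboard.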

    Making Spatial Information Accessible on Touchscreens for Users who are Blind and Visually Impaired

    Touchscreens have become a de facto standard of input for mobile devices, as they make optimal use of the limited input and output space imposed by their form factor. In recent years, people who are blind and visually impaired have been increasing their usage of smartphones and touchscreens. Although basic access is available, many accessibility issues remain to be addressed before this population is fully included. One of the important challenges lies in accessing and creating spatial information on touchscreens. The work presented here provides three new techniques, using three different modalities, for accessing spatial information on touchscreens. The first system makes geometry and diagram creation accessible on a touchscreen through the use of text-to-speech and gestural input; its design is informed by a qualitative study of how people who are blind and visually impaired currently access and create graphs and diagrams. The second system makes directions through maps accessible using multiple vibration motors, without any sound or visual output. The third system investigates the use of binaural sound on a touchscreen to make various types of applications accessible, such as physics simulations, astronomy applications, and video games.
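    As one illustration of the third system's idea, audio can be spatialized so that a touch's horizontal position is heard between the ears. The sketch below uses simple constant-power panning as a stand-in for true binaural (HRTF-based) rendering; everything here is illustrative rather than the thesis's implementation:

```python
import math

def pan_gains(x_norm: float) -> tuple[float, float]:
    """Constant-power stereo gains for a touch at normalized x in [0, 1].

    0.0 pans fully to the left ear, 1.0 fully to the right, so a user can
    sweep a finger across the screen and hear where a target lies.
    """
    angle = x_norm * math.pi / 2
    return math.cos(angle), math.sin(angle)   # (left_gain, right_gain)

left, right = pan_gains(0.25)                 # touch in the left half
```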

    Source Code Interaction on Touchscreens

    Direct interaction with touchscreens has become a primary way of using a device. This work seeks to devise interaction methods for editing textual source code on touch-enabled devices. With the advent of the “Post-PC Era”, touch-centric interaction has received considerable attention in both research and development. However, various limitations have impeded the widespread adoption of programming environments on modern touch platforms. Previous attempts have mainly succeeded by simplifying or constraining conventional programming, but have only insufficiently supported source code written in mainstream programming languages. This work comprises the design, development, and evaluation of techniques for editing, selecting, and creating source code on touchscreens. The results contribute to text editing and entry methods by taking the syntax and structure of programming languages into account while exploiting the advantages of gesture-driven control. Furthermore, this work presents the design and software architecture of a mobile development environment incorporating touch-enabled modules for typical software development tasks.
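    One concrete way to take syntax and structure into account when selecting code on a touchscreen is to expand a tap to the smallest enclosing syntactic unit. A sketch using Python's own ast module; the thesis targets mainstream languages generally, and this helper is hypothetical:

```python
import ast

def enclosing_span(source: str, line: int, col: int):
    """Span of the smallest AST node containing the tapped position.

    Instead of selecting a word at the touch point, syntax-aware selection
    snaps to a syntactic unit (expression, statement, ...), which suits
    coarse finger input better than character-precise selection.
    """
    best = None
    for node in ast.walk(ast.parse(source)):
        if not hasattr(node, "lineno"):
            continue
        inside = ((node.lineno, node.col_offset) <= (line, col)
                  <= (node.end_lineno, node.end_col_offset))
        # Among enclosing nodes, keep the most deeply nested one.
        if inside and (best is None
                       or (node.lineno, node.col_offset) >= (best.lineno, best.col_offset)):
            best = node
    if best is None:
        return None
    return (best.lineno, best.col_offset), (best.end_lineno, best.end_col_offset)

# Tapping inside "(1 + tax_rate)" selects that sub-expression's span.
span = enclosing_span("total = price * (1 + tax_rate)", line=1, col=19)
```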