455 research outputs found

    Development of a typing behaviour recognition mechanism on Android

    Get PDF
    This paper proposes a biometric authentication system that combines password-based authentication with a behavioural trait (typing behaviour) to establish a user's identity on a mobile phone. The proposed system runs on current smartphone platforms: the mobile device captures the user's keystroke data and transmits it to a web server, where an authentication engine determines whether the user is genuine or fraudulent. In addition, a multiplier of the standard deviation, “α”, is defined, which aims to balance security and usability. Experimental results indicate that the developed authentication system is highly reliable and secure, with an equal error rate below 7.5%.
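The α multiplier described in this abstract can be illustrated with a minimal sketch (the names, toy timing data, and the per-feature acceptance rule are assumptions for illustration, not the paper's implementation): an enrolled profile stores the mean and standard deviation of each keystroke-timing feature, and a login attempt is accepted only if every feature falls within α standard deviations of the enrolled mean.

```python
# Illustrative sketch of keystroke-dynamics verification with an
# alpha * std-dev acceptance band. All names and thresholds are
# assumptions, not the paper's actual system.
from statistics import mean, stdev

def enroll(samples):
    """Build a per-feature profile (mean, std) from enrollment typing samples."""
    features = list(zip(*samples))  # one tuple of values per timing feature
    return [(mean(f), stdev(f)) for f in features]

def authenticate(profile, attempt, alpha=2.0):
    """Genuine if every timing feature lies within alpha std devs of its mean.

    A larger alpha admits more natural variation (usability); a smaller
    alpha is stricter (security).
    """
    return all(abs(x - m) <= alpha * s for (m, s), x in zip(profile, attempt))

# Example: inter-key latencies (ms) for the same password typed four times.
profile = enroll([[120, 95, 150], [130, 100, 160], [125, 90, 155], [128, 98, 148]])
print(authenticate(profile, [126, 97, 152], alpha=2.0))  # close to profile -> True
print(authenticate(profile, [200, 40, 300], alpha=2.0))  # far from profile -> False
```

Tuning α trades off the false-reject rate against the false-accept rate, which is how a balance point such as an equal error rate is reached.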

    Mobile text entry behaviour in lab and in-the-wild studies : is it different?

    Get PDF
    Text entry on smartphones remains a critical element of mobile HCI. It has been widely studied in lab settings, primarily using transcription tasks, and to a far lesser extent through in-the-wild (field) experiments. So far it has remained unknown how well user behaviour during lab transcription tasks approximates real use. In this paper, we present a study providing evidence that lab text entry behaviour is clearly distinguishable from real-world use. Using machine learning techniques, we show that it is possible to accurately identify the type of study in which text entry sessions took place. The implications of our findings relate to the design of future text entry studies aiming to support input with virtual smartphone keyboards.
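The classification idea can be sketched in miniature (the session features, toy numbers, and the nearest-centroid rule are assumptions for illustration; the paper's actual features and classifier may differ): summarize each typing session as a feature vector and label it by the closer class centroid.

```python
# Toy nearest-centroid classifier separating lab from in-the-wild
# typing sessions. Features and data are illustrative assumptions.
def centroid(rows):
    """Component-wise mean of a list of feature vectors."""
    return [sum(col) / len(col) for col in zip(*rows)]

def classify(session, lab_centroid, wild_centroid):
    """Label a session by its nearest class centroid (squared Euclidean)."""
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(session, c))
    return "lab" if dist(lab_centroid) < dist(wild_centroid) else "wild"

# Assumed per-session features: [mean inter-key interval (ms), error rate]
lab_sessions = [[250, 0.02], [240, 0.03], [260, 0.02]]
wild_sessions = [[380, 0.08], [400, 0.10], [370, 0.09]]
lab_c, wild_c = centroid(lab_sessions), centroid(wild_sessions)
print(classify([255, 0.02], lab_c, wild_c))  # -> lab
print(classify([390, 0.09], lab_c, wild_c))  # -> wild
```

That such a simple decision boundary is even conceivable reflects the paper's point: lab and in-the-wild sessions differ systematically in measurable behaviour.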

    Investigating error injection to enhance the effectiveness of mobile text entry studies of error behaviour

    Get PDF
    During lab studies of text entry methods it is typical to observe very few errors in participants' typing: users tend to type very carefully in labs. This is a problem when investigating methods to support error awareness or correction, as the support mechanisms go untested. We designed a novel evaluation method based on injecting errors into the user's typing stream and report two user studies on the effectiveness of this technique. Injection allowed us to observe a larger number of instances and more diverse types of error correction behaviour than would normally be possible in a single study, without having a significant impact on key input behaviour characteristics. Qualitative feedback from both studies suggests that our injection algorithm was successful in creating errors that appeared realistic to participants. The use of error injection shows promise for the investigation of error correction behaviour in text entry studies.
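The injection idea can be sketched as follows (the adjacency map, rate, and function names are illustrative assumptions, not the authors' algorithm): with some probability, replace a typed character with a neighbouring key before it reaches the text field, so the participant sees a realistic-looking error to notice and correct.

```python
# Minimal sketch of injecting substitution errors into a typing stream.
# The QWERTY-neighbour map and error rate are toy assumptions.
import random

ADJACENT = {"a": "s", "e": "r", "t": "r", "o": "p", "h": "j"}  # tiny sample

def inject_errors(keystrokes, rate=0.1, rng=None):
    """Replace eligible characters with an adjacent key at the given rate."""
    rng = rng or random.Random()
    out = []
    for ch in keystrokes:
        if ch in ADJACENT and rng.random() < rate:
            out.append(ADJACENT[ch])  # substitute a neighbouring key
        else:
            out.append(ch)            # pass the keystroke through unchanged
    return "".join(out)

rng = random.Random(42)  # seed for a reproducible demonstration
print(inject_errors("the quick brown fox", rate=0.3, rng=rng))
```

Because injected errors mimic plausible slips (adjacent-key substitutions), participants treat them as their own mistakes, which is what makes the elicited correction behaviour representative.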

    Predicting and Reducing the Impact of Errors in Character-Based Text Entry

    Get PDF
    This dissertation focuses on the effect of errors in character-based text entry techniques. The effect of errors is targeted from theoretical, behavioral, and practical standpoints. This document starts with a review of the existing literature. It then presents results of a user study that investigated the effect of different error correction conditions on popular text entry performance metrics. Results showed that the way errors are handled has a significant effect on all frequently used error metrics. The outcomes also provided an understanding of how users notice and correct errors. Building on this, the dissertation then presents a new high-level and method-agnostic model for predicting the cost of error correction with a given text entry technique. Unlike the existing models, it accounts for both human and system factors and is general enough to be used with most character-based techniques. A user study verified the model through measuring the effects of a faulty keyboard on text entry performance. Subsequently, the work then explores the potential user adaptation to a gesture recognizer’s misrecognitions in two user studies. Results revealed that users gradually adapt to misrecognition errors by replacing the erroneous gestures with alternative ones, if available. Also, users adapt to a frequently misrecognized gesture faster if it occurs more frequently than the other error-prone gestures. Finally, this work presents a new hybrid approach to simulate pressure detection on standard touchscreens. The new approach combines the existing touch-point- and time-based methods. Results of two user studies showed that it can simulate pressure detection more reliably for at least two pressure levels: regular (~1 N) and extra (~3 N). Then, a new pressure-based text entry technique is presented that does not require tapping outside the virtual keyboard to reject an incorrect or unwanted prediction. 
Instead, the technique requires users to apply extra pressure for the tap on the next target key. The performance of the new technique was compared with the conventional technique in a user study. Results showed that for inputting short English phrases with 10% non-dictionary words, the new technique increases entry speed by 9% and decreases error rates by 25%. Also, most users (83%) favor the new technique over the conventional one. Together, the research presented in this dissertation gives more insight into how errors affect text entry and also presents improved text entry methods.
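The hybrid pressure-simulation idea can be sketched as follows (thresholds, units, and the scoring rule are illustrative assumptions, not the dissertation's calibrated model): standard capacitive touchscreens report no force, so "pressure" is approximated by combining the touch-point cue (contact area) with the time cue (dwell duration).

```python
# Rough sketch of two-level pressure simulation on a standard
# touchscreen by fusing contact-area and dwell-time cues.
# Reference values below are toy assumptions.
def pressure_level(contact_area_mm2, dwell_ms,
                   area_ref=55.0, dwell_ref=180.0):
    """Classify a tap as 'regular' (~1 N) or 'extra' (~3 N) pressure.

    Each cue is normalized by a reference value; a combined score of
    2.0 means the cues jointly reach (or trade off around) their
    references, so either a very firm or a firm-and-long tap counts.
    """
    score = contact_area_mm2 / area_ref + dwell_ms / dwell_ref
    return "extra" if score >= 2.0 else "regular"

# In the proposed text entry technique, an "extra" tap on the next key
# rejects the pending word prediction without leaving the keyboard.
print(pressure_level(60, 200))  # firm, long press -> extra
print(pressure_level(40, 120))  # light, quick tap -> regular
```

Fusing both cues is what lets two levels be distinguished reliably: either signal alone (a fat finger, or a slow deliberate tap) is too noisy on its own.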

    Spatial model personalization in Gboard

    Full text link
    We introduce a framework for adapting a virtual keyboard to individual user behavior by modifying a Gaussian spatial model to use personalized key-center offset means and, optionally, learned covariances. Through numerous real-world studies, we determine the importance of training data quantity and weights, as well as the number of clusters into which to group keys to avoid overfitting. While past research has shown the potential of this technique using artificially simple virtual keyboards and games or fixed typing prompts, we demonstrate effectiveness using the highly tuned Gboard app with a representative set of users and their real typing behaviors. Across a variety of top languages, we achieve small but significant improvements in both typing speed and decoder accuracy. (17 pages; to be published in the Proceedings of the 24th International Conference on Mobile Human-Computer Interaction, MobileHCI 2022.)
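The core mechanism can be illustrated in miniature (coordinates, the sigma value, and all names are illustrative assumptions, not Gboard internals): the decoder scores each touch against a per-key 2D Gaussian, and personalization moves a key's mean toward the centroid of the user's own touches on that key.

```python
# Minimal sketch of Gaussian spatial-model personalization: shift a
# key's mean to the user's observed touch centroid and score touches
# by Gaussian log-likelihood. Numbers are toy assumptions.
import math

def personalized_center(user_touches):
    """Personalized key center: the centroid of the user's touch points."""
    xs, ys = zip(*user_touches)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def log_likelihood(touch, center, sigma=4.0):
    """Log-likelihood of a touch under an isotropic 2D Gaussian key model."""
    dx, dy = touch[0] - center[0], touch[1] - center[1]
    return (-(dx * dx + dy * dy) / (2 * sigma * sigma)
            - math.log(2 * math.pi * sigma * sigma))

default = (100.0, 50.0)                      # nominal key center (px), assumed
touches = [(103, 47), (104, 46), (102, 48)]  # this user's taps are offset
center = personalized_center(touches)
print(center)  # -> (103.0, 47.0)
print(log_likelihood((103, 47), center) > log_likelihood((103, 47), default))  # -> True
```

The abstract's caveats map directly onto this sketch: too few touches make the centroid noisy (hence training-data quantity and weights matter), and fitting a separate offset per key risks overfitting (hence grouping keys into clusters).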

    Improving the Accuracy of Mobile Touchscreen QWERTY Keyboards

    Get PDF
    In this thesis we explore alternative keyboard layouts in the hope of finding one that increases the accuracy of text input on mobile touchscreen devices. In particular, we investigate whether a single swap of two keys can significantly improve accuracy on mobile touchscreen QWERTY keyboards. We do so by carefully considering the placement of keys, exploiting a specific vulnerability of a keyboard layout: the placement of particular keys next to others may increase errors when typing. We simulate the act of typing on a mobile touchscreen QWERTY keyboard, beginning with modeling the typographical errors that can occur. We then construct a simple autocorrector using Bayesian methods, describing how we can autocorrect user input and evaluate the keyboard's ability to output the correct text. Then, using our models, we provide methods of testing and define a metric, the WAR rating, which gives us a way of comparing the accuracy of keyboard layouts. After running our tests on all 325 2-key-swap layouts against the original QWERTY layout, we show that there exists more than one 2-key swap that increases the accuracy of the current QWERTY layout, and that the best 2-key swap is i ↔ t, increasing accuracy by nearly 0.18 percent.
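A Bayesian autocorrector of the kind described picks the dictionary word w maximizing P(w) · P(typed | w). A minimal sketch (the tiny dictionary, priors, and adjacency-based error model are toy assumptions, not the thesis's models):

```python
# Toy Bayesian autocorrection: score each candidate word by its prior
# times a per-character likelihood that favours adjacent-key slips.
# All probabilities and the dictionary are illustrative assumptions.
PRIORS = {"the": 0.05, "tie": 0.001, "toe": 0.001}     # word priors P(w)
ADJACENT = {"h": {"j", "n", "g", "y", "u"}, "i": {"o", "u", "k"}, "o": {"i", "p"}}

def char_likelihood(typed_ch, intended_ch, p_correct=0.9, p_adjacent=0.08):
    """P(typed character | intended character) under an adjacency error model."""
    if typed_ch == intended_ch:
        return p_correct
    if typed_ch in ADJACENT.get(intended_ch, set()):
        return p_adjacent
    return 0.001  # small floor for any other substitution

def autocorrect(typed):
    """Return the dictionary word with the highest posterior score."""
    def posterior(word):
        if len(word) != len(typed):
            return 0.0
        p = PRIORS[word]
        for t, w in zip(typed, word):
            p *= char_likelihood(t, w)
        return p
    return max(PRIORS, key=posterior)

print(autocorrect("tje"))  # 'j' neighbours 'h' on QWERTY -> the
```

The layout enters through the adjacency sets: swapping two keys changes which slips are likely, which is why a metric over simulated typing can rank all 325 possible two-key swaps.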

    A glimpse of mobile text entry errors and corrective behaviour in the wild

    Get PDF
    Research in mobile text entry has long focused on speed and input errors during lab studies. However, little is known about how input errors emerge in real-world situations or how users deal with them. We present findings from an in-the-wild study of everyday text entry and discuss their implications for future studies.

    Text Entry Performance and Situation Awareness of a Joint Optical See-Through Head-Mounted Display and Smartphone System

    Full text link
    Optical see-through head-mounted displays (OST HMDs) are a popular output medium for mobile Augmented Reality (AR) applications. To date, they lack efficient text entry techniques. Smartphones are a major text entry medium in mobile contexts, but their attentional demands can contribute to accidents while typing on the go. Mobile multi-display ecologies, such as combined OST HMD-smartphone systems, promise performance and situation-awareness benefits over single-device use. We study the joint performance of text entry on mobile phones with text output on optical see-through head-mounted displays. A series of five experiments with a total of 86 participants indicates that, as of today, the challenges in such a joint interactive system outweigh the potential benefits. (To appear in IEEE Transactions on Visualization and Computer Graphics.)