
    Predicting and Reducing the Impact of Errors in Character-Based Text Entry

    This dissertation focuses on the effect of errors in character-based text entry techniques. The effect of errors is examined from theoretical, behavioral, and practical standpoints. The document starts with a review of the existing literature. It then presents results of a user study that investigated the effect of different error correction conditions on popular text entry performance metrics. Results showed that the way errors are handled has a significant effect on all frequently used error metrics. The outcomes also provided an understanding of how users notice and correct errors. Building on this, the dissertation presents a new high-level, method-agnostic model for predicting the cost of error correction with a given text entry technique. Unlike existing models, it accounts for both human and system factors and is general enough to be used with most character-based techniques. A user study verified the model by measuring the effects of a faulty keyboard on text entry performance. The work then explores potential user adaptation to a gesture recognizer’s misrecognitions in two user studies. Results revealed that users gradually adapt to misrecognition errors by replacing the erroneous gestures with alternative ones, if available. Users also adapt to a frequently misrecognized gesture faster if it occurs more often than the other error-prone gestures. Finally, this work presents a new hybrid approach to simulate pressure detection on standard touchscreens. The new approach combines the existing touch-point- and time-based methods. Results of two user studies showed that it can simulate pressure detection more reliably for at least two pressure levels: regular (~1 N) and extra (~3 N). A new pressure-based text entry technique is then presented that does not require tapping outside the virtual keyboard to reject an incorrect or unwanted prediction; instead, users apply extra pressure on the tap for the next target key. The performance of the new technique was compared with the conventional technique in a user study. Results showed that for inputting short English phrases with 10% non-dictionary words, the new technique increases entry speed by 9% and decreases error rates by 25%. Most users (83%) also favored the new technique over the conventional one. Together, the research presented in this dissertation gives more insight into how errors affect text entry and presents improved text entry methods.
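
    The hybrid pressure-simulation idea lends itself to a short illustration. The sketch below classifies a tap as regular (~1 N) or extra (~3 N) pressure by combining a touch-point signal (reported contact size) with a time-based signal (dwell time). The feature names, thresholds, and combination rule are illustrative assumptions, not values or logic from the dissertation.

```python
# Hypothetical sketch: classifying a tap as "regular" (~1 N) or "extra" (~3 N)
# pressure by combining a touch-point signal (reported contact size) with a
# time-based signal (dwell time). Thresholds below are invented for illustration.

from dataclasses import dataclass


@dataclass
class Tap:
    contact_size: float  # normalized touch size reported by the touchscreen (0..1)
    dwell_ms: float      # time between finger-down and finger-up, in milliseconds


# Illustrative thresholds; a real system would calibrate these per user and device.
SIZE_THRESHOLD = 0.45
DWELL_THRESHOLD_MS = 250.0


def classify_pressure(tap: Tap) -> str:
    """Return 'extra' if either signal suggests a harder press, else 'regular'.

    Combining the two signals with an OR lets a firm-but-quick press (large
    contact size) and a deliberate long press (long dwell) both count as
    'extra', which is one plausible way to hybridize the two methods.
    """
    touch_point_says_extra = tap.contact_size >= SIZE_THRESHOLD
    time_says_extra = tap.dwell_ms >= DWELL_THRESHOLD_MS
    return "extra" if touch_point_says_extra or time_says_extra else "regular"


if __name__ == "__main__":
    print(classify_pressure(Tap(contact_size=0.3, dwell_ms=120)))  # regular
    print(classify_pressure(Tap(contact_size=0.6, dwell_ms=110)))  # extra
```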

    A practical review of energy saving technology for ageing populations

    Fuel poverty is a critical issue for a globally ageing population. Longer heating/cooling requirements combine with declining incomes to create a problem in need of urgent attention. One solution is to deploy technology to help elderly users feel informed about their energy use, and empowered to take steps to make it more cost effective and efficient. This study subjects a broad cross section of energy monitoring and home automation products to a formal ergonomic analysis. A high-level task analysis was used to guide a product walkthrough, and a toolkit approach was used thereafter to draw out further insights. The findings reveal a number of serious usability issues which prevent these products from reaching an important target demographic and from delivering the associated energy saving and fuel poverty outcomes. Design principles and examples are distilled from the research to enable practitioners to translate the underlying research into high-quality design-engineering solutions.

    Empowering LLM to use Smartphone for Intelligent Task Automation

    Mobile task automation is an attractive technique that aims to enable voice-based, hands-free user interaction with smartphones. However, existing approaches suffer from poor scalability due to limited language understanding ability and the non-trivial manual effort required from developers or end-users. The recent advances of large language models (LLMs) in language understanding and reasoning inspire us to rethink the problem from a model-centric perspective, where task preparation, comprehension, and execution are handled by a unified language model. In this work, we introduce AutoDroid, a mobile task automation system that can handle arbitrary tasks on any Android application without manual effort. The key insight is to combine the commonsense knowledge of LLMs and the domain-specific knowledge of apps through automated dynamic analysis. The main components include a functionality-aware UI representation method that bridges the UI with the LLM, exploration-based memory injection techniques that augment the app-specific domain knowledge of the LLM, and a multi-granularity query optimization module that reduces the cost of model inference. We integrate AutoDroid with off-the-shelf LLMs, including online GPT-4/GPT-3.5 and on-device Vicuna, and evaluate its performance on a new benchmark for memory-augmented Android task automation with 158 common tasks. The results demonstrate that AutoDroid precisely generates actions with an accuracy of 90.9% and completes tasks with a success rate of 71.3%, outperforming the GPT-4-powered baselines by 36.4% and 39.7%. The demo, benchmark suites, and source code of AutoDroid will be released at https://autodroid-sys.github.io/.
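
    One way to picture the role of a functionality-aware UI representation is sketched below: the screen's elements are serialized into a compact numbered text form, a prompt is built for the task at hand, and the model's chosen element index is parsed back. The class names, prompt format, and the query_llm placeholder are assumptions for illustration, not the actual AutoDroid code or prompts.

```python
# Illustrative sketch of bridging a UI with an LLM via a text representation.
# Element classes, prompt wording, and query_llm are hypothetical stand-ins.

from dataclasses import dataclass
from typing import List


@dataclass
class UIElement:
    index: int
    widget_type: str   # e.g. "Button", "EditText"
    text: str          # visible label or hint
    clickable: bool


def render_ui(elements: List[UIElement]) -> str:
    """Turn the current screen into numbered one-line descriptions the LLM can reference."""
    lines = []
    for e in elements:
        action = "clickable" if e.clickable else "static"
        lines.append(f"[{e.index}] {e.widget_type} '{e.text}' ({action})")
    return "\n".join(lines)


def build_prompt(task: str, elements: List[UIElement]) -> str:
    return (
        f"Task: {task}\n"
        f"Current screen elements:\n{render_ui(elements)}\n"
        "Reply with the index of the element to tap next."
    )


def query_llm(prompt: str) -> str:
    """Placeholder for any LLM backend (online or on-device); stubbed for this example."""
    return "1"


if __name__ == "__main__":
    screen = [
        UIElement(0, "EditText", "Search settings", True),
        UIElement(1, "Button", "Wi-Fi", True),
        UIElement(2, "TextView", "Network & internet", False),
    ]
    chosen = int(query_llm(build_prompt("Turn on Wi-Fi", screen)))
    print(f"LLM chose element {chosen}: {screen[chosen].text}")
```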

    Enhancing Usability and Security through Alternative Authentication Methods

    With the expanding popularity of various Internet services, online users have become more vulnerable to malicious attacks as more of their private information is accessible on the Internet. The primary defense protecting private information is user authentication, which currently relies on less-than-ideal methods such as text passwords and PINs. Alternative methods such as graphical passwords and behavioral biometrics have been proposed, but with too many limitations to replace current methods. However, with enhancements to overcome these limitations and harden existing methods, alternative authentication may become viable for future use. This dissertation aims to enhance the viability of alternative authentication systems. In particular, our research focuses on graphical passwords, biometrics that depend, directly or indirectly, on anthropometric data, and user authentication enhancements using touch screen features on mobile devices. In the study of graphical passwords, we develop a new cued-recall graphical password system called GridMap by exploring (1) the use of grids with variable input entered through the keyboard, and (2) the use of maps as background images. As a result, GridMap is able to achieve a large key space and resistance to shoulder surfing attacks. To validate the efficacy of GridMap in practice, we conduct a user study with 50 participants. Our experimental results show that GridMap works well in domains in which a user logs in on a regular basis, and provides a memorability benefit if the chosen map has personal significance to the user. In the study of anthropometric biometrics through the use of mouse dynamics, we present a method for choosing metrics based on empirical evidence of natural differences between the genders. In particular, we develop a novel gender classification model and evaluate the model’s accuracy based on data collected from a group of 94 users. Temporal, spatial, and accuracy metrics are recorded from kinematic and spatial analyses of 256 mouse movements performed by each user. The effectiveness of our model is validated through the use of binary logistic regressions. Finally, we propose enhanced authentication schemes through redesigned input, along with the use of anthropometric biometrics on mobile devices. We design a novel scheme called Triple Touch PIN (TTP) that improves traditional PIN-based authentication with a greatly enlarged key space. We evaluate TTP on a group of 25 participants. Our evaluation results show that TTP is robust against dictionary attacks and achieves acceptable usability for users. We also assess anthropometric biometrics by attempting to differentiate user fingers through the readings of the sensors in the touch screen. We validate the viability of this biometric approach on 33 users, and observe that it is feasible to distinguish the fingers with the largest anthropometric differences, the thumb and pinkie fingers.
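
    As a rough illustration of the kind of binary logistic regression used to relate mouse-dynamics metrics to a binary label, the sketch below fits a scikit-learn model to synthetic temporal, spatial, and accuracy features. The features, labels, and data are invented for demonstration and do not reproduce the metrics or the 94-user dataset from the dissertation.

```python
# Minimal sketch: binary logistic regression over synthetic mouse-dynamics metrics.
# Feature definitions and data are illustrative assumptions only.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_users = 200

# Hypothetical per-user features: mean movement time (s), mean path length (px),
# and click accuracy (fraction of clicks landing inside the target).
X = np.column_stack([
    rng.normal(0.8, 0.2, n_users),     # temporal metric
    rng.normal(450.0, 80.0, n_users),  # spatial metric
    rng.uniform(0.7, 1.0, n_users),    # accuracy metric
])
# Synthetic binary labels loosely correlated with the first feature, purely for demonstration.
y = (X[:, 0] + rng.normal(0, 0.2, n_users) > 0.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Standardize the metrics, then fit the logistic regression.
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)

print("Held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("Coefficients per metric:", model.named_steps["logisticregression"].coef_[0])
```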