
    Mobile text entry behaviour in lab and in-the-wild studies: is it different?

    Text entry on smartphones remains a critical element of mobile HCI. It has been widely studied in lab settings, primarily using transcription tasks, and to a far lesser extent through in-the-wild (field) experiments. So far it has remained unknown how well user behaviour during lab transcription tasks approximates real use. In this paper, we present a study providing evidence that lab text entry behaviour is clearly distinguishable from real-world use. Using machine learning techniques, we show that it is possible to accurately identify the type of study in which text entry sessions took place. The implications of our findings relate to the design of future text entry studies aiming to support input with virtual smartphone keyboards.
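    A minimal sketch of the classification setup the abstract describes, assuming hypothetical per-session features (entry rate, error rate, inter-key timing) and a random forest; the paper does not disclose its exact feature set or classifier here, so every name and value below is illustrative.

```python
# Sketch: distinguishing lab transcription sessions from in-the-wild typing.
# Feature names and the random-forest choice are illustrative assumptions;
# the paper's actual features and model may differ.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# One row per typing session: [words per minute, uncorrected error rate,
# mean inter-key interval (ms), std of inter-key interval (ms)].
X = rng.normal(loc=[35.0, 0.02, 280.0, 120.0],
               scale=[8.0, 0.01, 60.0, 40.0],
               size=(200, 4))
y = rng.integers(0, 2, size=200)  # 0 = lab session, 1 = in-the-wild session

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"Mean AUC across folds: {scores.mean():.2f}")
```

    With the random placeholder labels above, AUC stays near 0.5; on real lab versus in-the-wild sessions, a clearly higher score is what would support the paper's claim that the two are distinguishable.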

    How do people type on mobile devices? Observations from a study with 37,000 volunteers

    This paper presents a large-scale dataset on mobile text entry, collected via a web-based transcription task performed by 37,370 volunteers. The average typing speed was 36.2 WPM with 2.3% uncorrected errors. The scale of the data enables powerful statistical analyses of the correlation between typing performance and various factors, such as demographics, finger usage, and use of intelligent text entry techniques. We report effects of age and finger usage on performance that correspond to previous studies. We also find evidence of relationships between performance and use of intelligent text entry techniques: auto-correct usage correlates positively with entry rates, whereas word prediction usage has a negative correlation. To aid further work on modeling, machine learning, and design improvements in mobile text entry, we make the code and dataset openly available.
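    The headline numbers (36.2 WPM, 2.3% uncorrected errors) can be reproduced from raw transcription logs with the standard metrics of the text entry literature. The sketch below uses the conventional five-characters-per-word WPM formula and a minimum-string-distance error rate; it is not the paper's released analysis code.

```python
# Sketch: standard text entry metrics for a transcription task.
# WPM treats a "word" as five characters, timed from first to last
# keystroke; the uncorrected error rate compares the presented and
# transcribed strings by minimum string distance.

def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, and substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def wpm(transcribed: str, seconds: float) -> float:
    return ((len(transcribed) - 1) / 5.0) / (seconds / 60.0)

def uncorrected_error_rate(presented: str, transcribed: str) -> float:
    return levenshtein(presented, transcribed) / max(len(presented), len(transcribed))

print(wpm("the quick brown fox", 6.0))                       # ~36 WPM
print(uncorrected_error_rate("hello world", "helo world"))   # ~0.09
```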

    WiseType: a tablet keyboard with color-coded visualization and various editing options for error correction

    To address the problem of improving text entry accuracy on mobile devices, we present a new tablet keyboard that offers both immediate and delayed feedback on language quality through auto-correction, prediction, and grammar checking. We combine different visual representations for grammar and spelling errors, accepted predictions, and auto-corrections, and also support interactive swiping/tapping features and improved interaction with previous errors, predictions, and auto-corrections. Additionally, we added smart error correction features to the system to reduce the overhead of correcting errors and the number of operations required. We designed our new input method with an iterative user-centered approach through multiple pilot studies. We conducted a lab-based study with a refined experimental methodology and found that WiseType outperforms a standard keyboard in terms of text entry speed and error rate. The study shows that color-coded text background highlighting and underlining of potential mistakes, in combination with fast correction methods, can improve both writing speed and accuracy.
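    A minimal sketch of the kind of annotation-to-style mapping such color-coded feedback requires. The categories mirror those named in the abstract, but the concrete colors and the `Token` structure are assumptions, not WiseType's actual design.

```python
# Sketch: mapping language-quality annotations to visual highlights, in the
# spirit of WiseType's color-coded feedback. Colors and data structures are
# illustrative assumptions, not the system's implementation.
from dataclasses import dataclass
from enum import Enum

class Annotation(Enum):
    SPELLING_ERROR = "spelling_error"
    GRAMMAR_ERROR = "grammar_error"
    AUTO_CORRECTION = "auto_correction"
    ACCEPTED_PREDICTION = "accepted_prediction"

# Underline for potential mistakes, background highlight for accepted
# suggestions (hypothetical palette).
STYLE = {
    Annotation.SPELLING_ERROR: {"underline": "red"},
    Annotation.GRAMMAR_ERROR: {"underline": "blue"},
    Annotation.AUTO_CORRECTION: {"background": "#fff3b0"},
    Annotation.ACCEPTED_PREDICTION: {"background": "#d4f7d4"},
}

@dataclass
class Token:
    text: str
    annotation: Annotation | None = None

    def style(self) -> dict:
        return STYLE.get(self.annotation, {})

print(Token("teh", Annotation.SPELLING_ERROR).style())  # {'underline': 'red'}
```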

    Does emotion influence the use of auto-suggest during smartphone typing?

    Typing-based interfaces are common across many mobile applications, especially messaging apps. To reduce the difficulty of typing with keyboard applications on smartphones and smartwatches, which offer restricted space, several techniques such as auto-complete and auto-suggest have been implemented. Although helpful, these techniques add cognitive load on the user. Hence, beyond improving the word recommendations themselves, it is useful to understand the patterns of auto-suggestion use during typing. Among the several factors that may influence use of auto-suggest, the role of emotion has been mostly overlooked, often due to the difficulty of unobtrusively inferring emotion. With advances in affective computing and the ability to infer users' emotional states accurately, it is imperative to investigate how auto-suggest can be guided by emotion-aware decisions. In this work, we investigate correlations between user emotion and usage of auto-suggest, i.e. whether users prefer to use auto-suggest in specific emotional states. We developed an Android keyboard application that records auto-suggest usage and collects emotion self-reports from users in a three-week in-the-wild study. Analysis of the dataset reveals a relationship between users' reported emotional states and their use of auto-suggest. We used the data to train personalized models for predicting use of auto-suggest in specific emotional states. The models can predict use of auto-suggest with an average accuracy (AUC-ROC) of 82%, showing the feasibility of emotion-aware auto-suggestion.
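    A sketch of the per-user prediction task described above: a personalized model mapping an emotion self-report to whether auto-suggest will be used, evaluated with AUC-ROC as in the paper. The one-hot emotion encoding, the extra session-length feature, and the logistic-regression choice are assumptions; the study's models may differ.

```python
# Sketch: predicting auto-suggest use from self-reported emotion for one
# user. Feature encoding and model choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# One row per typing session: one-hot emotion self-report (e.g. happy, sad,
# stressed, relaxed) plus session length in seconds. Label: used auto-suggest?
emotions = rng.integers(0, 4, size=300)
X = np.column_stack([np.eye(4)[emotions], rng.normal(60, 15, size=300)])
y = (emotions == 0).astype(int) ^ (rng.random(300) < 0.2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"AUC-ROC: {roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]):.2f}")
```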

    Behaviour-aware mobile touch interfaces

    Mobile touch devices have become ubiquitous everyday tools for communication and information, as well as for capturing, storing, and accessing personal data. They are often seen as personal devices, linked to individual users, who access the digital part of their daily lives via hand-held touchscreens. This personal use and the importance of the touch interface motivate the main assertion of this thesis: mobile touch interaction can be improved by enabling user interfaces to assess and take into account how the user performs these interactions. This thesis introduces the new term "behaviour-aware" to characterise such interfaces. Behaviour-aware interfaces aim to improve interaction by utilising behaviour data: since users perform touch interactions for their main tasks anyway, inferring extra information from said touches may, for example, save users' time and reduce distraction, compared to explicitly asking them for this information (e.g. user identity, hand posture, further context). Behaviour-aware user interfaces may utilise this information in different ways, in particular to adapt to users and contexts. Important questions for this research thus concern understanding behaviour details and influences, modelling said behaviour, and integrating inference and (re)action into the user interface. In several studies covering both analyses of basic touch behaviour and a set of specific prototype applications, this thesis addresses these questions and explores three application areas and goals:

    1) Enhancing input capabilities: modelling users' individual touch targeting behaviour to correct future touches and increase touch accuracy (a minimal offset-correction sketch follows this abstract). The research reveals challenges and opportunities of behaviour variability arising from factors including target location, size and shape, hand and finger, stylus use, mobility, and device size. The work further informs modelling and inference based on targeting data, and presents approaches for simulating touch targeting behaviour and detecting behaviour changes.

    2) Facilitating privacy and security: observing touch targeting and typing behaviour patterns to implicitly verify user identity or distinguish multiple users during use. The research shows and addresses mobile-specific challenges, in particular changing hand postures. It also reveals that touch targeting characteristics provide useful biometric value both in the lab and in everyday typing. Influences of common evaluation assumptions are assessed and discussed as well.

    3) Increasing expressiveness: enabling interfaces to pass behaviour variability on from input to output space, studied with a keyboard that dynamically alters the font based on current typing behaviour. Results show that with these fonts users can distinguish basic contexts as well as individuals. Users can also explicitly control font influences for personal communication with creative effects.

    This thesis further contributes concepts and implemented tools for collecting touch behaviour data, analysing and modelling touch behaviour, and creating behaviour-aware and adaptive mobile touch interfaces. Together, these contributions support researchers and developers in investigating and building such user interfaces. Overall, this research shows how variability in mobile touch behaviour can be addressed and exploited for the benefit of users. The thesis also discusses opportunities for transfer and reuse of touch behaviour models and information across applications and devices, for example to address tradeoffs of privacy/security and usability. Finally, the work concludes by reflecting on the general role of behaviour-aware user interfaces, proposing to view them as a way of embedding expectations about user input into interactive artefacts.
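    As referenced in application area 1) above, here is a minimal sketch of touch-targeting correction: learn a user's systematic 2D offset between intended target centers and actual touch points, then shift future touches accordingly. A single global offset is a simplifying assumption; the thesis models far richer variability (target location, size and shape, hand and finger, stylus use, mobility, device size).

```python
# Sketch: simplest possible touch-targeting correction. Learn the user's
# mean (dx, dy) offset from intended target centers, then subtract it from
# incoming touches. A single global offset is an illustrative assumption.
import numpy as np

def fit_offset(touches: np.ndarray, targets: np.ndarray) -> np.ndarray:
    """Mean (dx, dy) of touch point minus intended target center."""
    return (touches - targets).mean(axis=0)

def correct(touch: np.ndarray, offset: np.ndarray) -> np.ndarray:
    return touch - offset

rng = np.random.default_rng(2)
targets = rng.uniform(0, 100, size=(500, 2))
true_offset = np.array([1.8, -3.2])            # e.g. systematic thumb overshoot
touches = targets + true_offset + rng.normal(0, 2.0, size=(500, 2))

offset = fit_offset(touches, targets)
print(offset)                                               # close to [1.8, -3.2]
print(correct(np.array([50.0, 50.0]) + true_offset, offset))  # ~[50, 50]
```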

    Making Spatial Information Accessible on Touchscreens for Users who are Blind and Visually Impaired

    Touchscreens have become a de facto standard of input for mobile devices, as they make optimal use of the limited input and output space imposed by their form factor. In recent years, people who are blind and visually impaired have been increasing their usage of smartphones and touchscreens. Although basic access is available, many accessibility issues remain to be dealt with before this population achieves full inclusion. One important challenge lies in accessing and creating spatial information on touchscreens. The work presented here provides three new techniques, using three different modalities, for accessing spatial information on touchscreens. The first system makes geometry and diagram creation accessible on a touchscreen through the use of text-to-speech and gestural input; it is informed by a qualitative study of how people who are blind and visually impaired currently access and create graphs and diagrams. The second system makes map directions accessible using multiple vibration motors, without any sound or visual output. The third system investigates the use of binaural sound on a touchscreen to make various types of applications accessible, such as physics simulations, astronomy, and video games.
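    A rough sketch of spatialized audio feedback in the spirit of the third system: mapping a touch's horizontal position to stereo pan. True binaural rendering would apply head-related transfer functions rather than simple equal-power gain panning, so this is a stand-in, not the system's implementation.

```python
# Sketch: map a touch's x-position to a stereo-panned tone. Equal-power
# panning is a crude stand-in for binaural (HRTF-based) rendering.
import numpy as np

def panned_tone(x: float, width: float, freq: float = 440.0,
                seconds: float = 0.2, rate: int = 44100) -> np.ndarray:
    """Return a (samples, 2) stereo buffer; leftmost touch -> left channel."""
    t = np.linspace(0.0, seconds, int(rate * seconds), endpoint=False)
    tone = 0.5 * np.sin(2 * np.pi * freq * t)
    pan = np.clip(x / width, 0.0, 1.0)          # 0 = left edge, 1 = right edge
    # Equal-power panning keeps perceived loudness constant across the sweep.
    left, right = np.cos(pan * np.pi / 2), np.sin(pan * np.pi / 2)
    return np.column_stack([tone * left, tone * right])

buf = panned_tone(x=80.0, width=320.0)          # touch at 1/4 of screen width
print(buf.shape)                                # (8820, 2)
```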

    Spatial model personalization in Gboard

    We introduce a framework for adapting a virtual keyboard to individual user behavior by modifying a Gaussian spatial model to use personalized key center offset means and, optionally, learned covariances. Through numerous real-world studies, we determine the importance of training data quantity and weights, as well as the number of clusters into which to group keys to avoid overfitting. While past research has shown the potential of this technique using artificially simple virtual keyboards and games or fixed typing prompts, we demonstrate effectiveness using the highly tuned Gboard app with a representative set of users and their real typing behaviors. Across a variety of top languages, we achieve small but significant improvements in both typing speed and decoder accuracy. (17 pages; to appear in the Proceedings of the 24th International Conference on Mobile Human-Computer Interaction, MobileHCI 2022.)
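    A minimal sketch of the spatial-model personalization idea: shift each key's Gaussian center by the user's observed mean touch offset, shrunk toward the default when data is scarce, and optionally learn a covariance. The shrinkage weighting and the use of scipy here are assumptions; Gboard's actual training-data weighting, key clustering, and decoder integration are not described in this abstract.

```python
# Sketch: personalizing a Gaussian spatial model for one virtual key.
# The prior-weight shrinkage is an illustrative assumption, not Gboard's
# training scheme.
import numpy as np
from scipy.stats import multivariate_normal

def personalized_center(key_center, touches, prior_weight=20.0):
    """Shift the key center by the user's mean offset, shrunk by a prior."""
    offsets = touches - key_center
    n = len(offsets)
    shrink = n / (n + prior_weight)     # little data -> stay near the default
    return key_center + shrink * offsets.mean(axis=0)

def key_log_likelihood(touch, center, cov):
    """Score a touch against one key's personalized Gaussian."""
    return multivariate_normal(mean=center, cov=cov).logpdf(touch)

rng = np.random.default_rng(3)
default_center = np.array([100.0, 40.0])
touches = default_center + np.array([4.0, -2.0]) + rng.normal(0, 3, size=(50, 2))

center = personalized_center(default_center, touches)
cov = np.cov((touches - center).T)      # optional learned covariance
print(center, key_log_likelihood(np.array([103.0, 38.0]), center, cov))
```

    In a full decoder, this likelihood would be combined with a language model over candidate words, which is why the abstract reports decoder accuracy alongside typing speed.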

    Human-powered smartphone assistance for blind people

    Mobile devices are fundamental tools for inclusion and independence. Yet there are still many open research issues in smartphone accessibility for blind people (Grussenmeyer and Folmer 2017). Currently, learning how to use a smartphone is non-trivial, especially considering that the need to learn new apps and accommodate updates never ceases. When first transitioning from a basic feature phone, people have to adapt to new paradigms of interaction. Where feature phones had a finite set of applications and functions, smartphone users can extend the possible functions and uses of the device by installing third-party applications. Moreover, the interconnectivity of these applications means that users can explore a seemingly endless set of workflows across applications. In addition, the fragmented nature of development on these devices means users need to create different mental models for each application. These characteristics make smartphone adoption a demanding task, as we found in our eight-week longitudinal study of smartphone adoption by blind people. We conducted multiple studies to characterize the smartphone challenges that blind people face, and found that people often require synchronous, co-located assistance from family, peers, friends, and even strangers to overcome the different barriers they face. However, help is not always available, especially given the variation in barriers, individual support networks, and current location. In this dissertation we investigated if and how in-context human-powered solutions can be leveraged to improve current smartphone accessibility and ease of use. Building on a comprehensive knowledge of the smartphone challenges faced and coping mechanisms employed by blind people, we explored how human-powered assistive technologies can facilitate use. The thesis of this dissertation is: human-powered smartphone assistance by non-experts is effective and impacts perceptions of self-efficacy.