    FlexType: Flexible Text Input with a Small Set of Input Gestures

    In many situations, it may be impractical or impossible to enter text by selecting precise locations on a physical or touchscreen keyboard. We present an ambiguous keyboard with four character groups that has potential applications for eyes-free text entry, as well as text entry using a single switch or a brain-computer interface. We develop a procedure for optimizing these character groupings based on a disambiguation algorithm that leverages a long-span language model. We produce both alphabetically-constrained and unconstrained character groups in an offline optimization experiment and compare them in a longitudinal user study. Our results did not show a significant difference between the constrained and unconstrained character groups after four hours of practice. As expected, participants had significantly more errors with the unconstrained groups in the first session, suggesting a higher barrier to learning the technique. We therefore recommend the alphabetically-constrained character groups, where participants were able to achieve an average entry rate of 12.0 words per minute with a 2.03% character error rate using a single hand and with no visual feedback.
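
    As an illustration of the disambiguation step described above, the following sketch assumes a hypothetical alphabetically-constrained four-group letter assignment and a toy word-frequency table standing in for the paper's long-span language model: each keystroke selects a group, and every vocabulary word matching the resulting code is retrieved and ranked.

```python
# Sketch of ambiguous-keyboard disambiguation. The four-group letter
# assignment and the tiny vocabulary below are hypothetical; the paper
# optimizes its groups offline and ranks candidates with a long-span
# language model, approximated here by simple word frequencies.
from collections import defaultdict

# Hypothetical alphabetically-constrained grouping into four groups (0-3).
GROUPS = ["abcdef", "ghijklm", "nopqrs", "tuvwxyz"]
CHAR_TO_GROUP = {c: g for g, chars in enumerate(GROUPS) for c in chars}

# Toy stand-in for a language model: word -> relative frequency.
WORD_FREQ = {"the": 0.90, "tie": 0.20, "vie": 0.05, "this": 0.40, "thus": 0.10}

def code_of(word):
    """Encode a word as the sequence of group indices its letters fall into."""
    return tuple(CHAR_TO_GROUP[c] for c in word)

# Index the vocabulary by its ambiguous code for fast lookup.
CODE_TO_WORDS = defaultdict(list)
for w in WORD_FREQ:
    CODE_TO_WORDS[code_of(w)].append(w)

def disambiguate(group_sequence):
    """Return candidate words for a typed group sequence, most likely first."""
    candidates = CODE_TO_WORDS.get(tuple(group_sequence), [])
    return sorted(candidates, key=WORD_FREQ.get, reverse=True)

# "the", "tie", and "vie" share the same group code; the model orders them.
print(disambiguate(code_of("the")))  # -> ['the', 'tie', 'vie']
```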

    Towards Location-Independent Eyes-Free Text Entry

    We propose an interface for eyes-free text entry using an ambiguous technique and conduct a preliminary user study. We find that users are able to enter text at 19.09 words per minute (WPM) with a 2.08% character error rate (CER) after eight hours of practice. We explore ways to optimize the ambiguous groupings to reduce the number of disambiguation errors, both with and without familiarity constraints. We find that it is feasible to reduce the number of ambiguous groups from six to four. Finally, we explore a technique for presenting word suggestions to users using simultaneous audio feedback. We find that accuracy is quite poor when the words are played fully simultaneously, but improves when a slight delay is added before each voice.
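
    For context, the entry rate and error rate quoted in these abstracts are standard text-entry metrics. The sketch below assumes the common formulations (five characters per word for WPM, and minimum edit distance between presented and transcribed text for CER); the exact variants the authors used are not specified here.

```python
# Sketch of the standard text-entry metrics reported above: entry rate in
# words per minute (WPM) and character error rate (CER). The formulas follow
# common conventions in text-entry research (one "word" = five characters;
# error rate based on minimum edit distance); the exact variants the authors
# used are an assumption here.

def words_per_minute(transcribed: str, seconds: float) -> float:
    """Entry rate; one character is dropped because timing usually starts
    with the first keystroke."""
    return (len(transcribed) - 1) / 5.0 * (60.0 / seconds)

def edit_distance(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, and substitutions (Levenshtein)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def character_error_rate(presented: str, transcribed: str) -> float:
    """CER in percent: edit distance normalized by the longer string."""
    return 100.0 * edit_distance(presented, transcribed) / max(len(presented), len(transcribed))

print(words_per_minute("the quick brown fox", 12.0))                      # 18.0 WPM
print(character_error_rate("the quick brown fox", "the quick brwn fox"))  # ~5.3%
```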

    AUGMENTED TOUCH INTERACTIONS WITH FINGER CONTACT SHAPE AND ORIENTATION

    Touchscreen interactions are far less expressive than the range of touch that human hands are capable of - even considering technologies such as multi-touch and force-sensitive surfaces. Recently, some touchscreens have added the capability to sense the actual contact area of a finger on the touch surface, which provides additional degrees of freedom - the size and shape of the touch, and the finger's orientation. These additional sensory capabilities hold promise for increasing the expressiveness of touch interactions - but little is known about whether users can successfully use the new degrees of freedom. To provide this baseline information, we carried out a study with a finger-contact-sensing touchscreen, and asked participants to produce a range of touches and gestures with different shapes and orientations, with both one and two fingers. We found that people are able to reliably produce two touch shapes and three orientations across a wide range of touches and gestures - a result that was confirmed in another study that used the augmented touches for a screen lock application.
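
    The sketch below illustrates one plausible way to discretize such contact data into two shapes and three orientations; the thresholds are hypothetical and are not taken from the study.

```python
# Sketch of discretizing a finger contact into two shapes and three
# orientations, the categories the study found users could produce reliably.
# The aspect-ratio cutoff and orientation bins are hypothetical; the only
# assumption about the hardware is that it reports a contact ellipse
# (major/minor axis and an orientation angle), as many touch APIs do.
import math

def classify_touch(major_mm: float, minor_mm: float, angle_rad: float):
    """Map a contact ellipse to (shape, orientation) labels."""
    # Shape: a roughly round contact suggests an upright fingertip; an
    # elongated one suggests a flatter, rolled-down finger.
    shape = "flat" if major_mm / minor_mm > 1.5 else "tip"

    # Orientation: fold the angle into [0, 180) degrees, then bin three ways.
    deg = math.degrees(angle_rad) % 180.0
    if deg < 60.0:
        orientation = "left"
    elif deg < 120.0:
        orientation = "up"
    else:
        orientation = "right"
    return shape, orientation

print(classify_touch(9.0, 8.5, math.radians(95)))   # ('tip', 'up')
print(classify_touch(14.0, 7.0, math.radians(20)))  # ('flat', 'left')
```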

    A Study on User-Adaptive Software Keyboards (ユーザ適応型ソフトウェアキーボードに関する研究)

    筑波大学 (University of Tsukuba), 201

    Braille text entry on smartwatches : an evaluation of methods for composing the Braille cell

    Smartwatches are gaining popularity on the market, offering a set of features comparable to smartphones in a wearable device. This novel technology brings new interaction paradigms and challenges for blind users, who have difficulties dealing with touchscreens. Among the variety of tasks that must be studied, text entry is analyzed here, considering that existing solutions may be unsatisfactory (such as voice input) or even unfeasible (such as typing on tiny QWERTY keyboards) for a blind user. More specifically, this paper presents a study of possible solutions for composing a Braille cell on smartwatches. Five prototypes were developed and different feedback features were proposed. These were evaluated with seven specialists, resulting in a qualitative analysis of which strategies can be most useful for blind users for Braille text entry.
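
    For reference, the Braille cell that these prototypes compose is a pattern of up to six raised dots that maps to a character. The sketch below shows that underlying data structure; the touch interaction for raising each dot is exactly what the prototypes differ on and is left abstract here.

```python
# Sketch of the six-dot Braille cell the prototypes compose. The interaction
# used to toggle each dot (taps, chords, regions of the watch face) is what
# the evaluated prototypes vary and is not modeled here; the table covers
# only a few Grade 1 letters.

# Dots are numbered 1-3 down the left column and 4-6 down the right column.
BRAILLE_LETTERS = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",
}

class BrailleCell:
    """A six-dot cell whose dots are toggled before the cell is committed."""
    def __init__(self):
        self.raised = set()

    def toggle(self, dot: int):
        if not 1 <= dot <= 6:
            raise ValueError("Braille cells have dots 1-6")
        self.raised ^= {dot}  # symmetric difference flips the dot

    def commit(self) -> str:
        """Decode the current dot pattern, or '?' if it is not a known letter."""
        return BRAILLE_LETTERS.get(frozenset(self.raised), "?")

cell = BrailleCell()
for dot in (1, 4, 5):
    cell.toggle(dot)
print(cell.commit())  # -> 'd'
```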

    Human-powered smartphone assistance for blind people

    Mobile devices are fundamental tools for inclusion and independence. Yet, there are still many open research issues in smartphone accessibility for blind people (Grussenmeyer and Folmer 2017). Currently, learning how to use a smartphone is non-trivial, especially when we consider that the need to learn new apps and accommodate to updates never ceases. When first transitioning from a basic feature-phone, people have to adapt to new paradigms of interaction. Where feature phones had a finite set of applications and functions, users can extend the possible functions and uses of a smartphone by installing new 3rd party applications. Moreover, the interconnectivity of these applications means that users can explore a seemingly endless set of workflows across applications. Furthermore, the fragmented nature of development on these devices results in users needing to create different mental models for each application. These characteristics make smartphone adoption a demanding task, as we found from our eight-week longitudinal study on smartphone adoption by blind people. We conducted multiple studies to characterize the smartphone challenges that blind people face, and found people often require synchronous, co-located assistance from family, peers, friends, and even strangers to overcome the different barriers they face. However, help is not always available, especially given the variability across barriers, individual support networks, and current locations. In this dissertation we investigated if and how in-context human-powered solutions can be leveraged to improve current smartphone accessibility and ease of use. Building on a comprehensive knowledge of the smartphone challenges faced and coping mechanisms employed by blind people, we explored how human-powered assistive technologies can facilitate use. The thesis of this dissertation is: Human-powered smartphone assistance by non-experts is effective and impacts perceptions of self-efficacy.

    Ubiquitous text interaction

    Computer-based interactions increasingly pervade our everyday environments. Be it on a mobile device, a wearable device, a wall-sized display, or an augmented reality device, interactive systems often rely on the consumption, composition, and manipulation of text. The focus of this workshop is on exploring the problems and opportunities of text interactions that are embedded in our environments, available all the time, and used by people who may be constrained by device, situation, or disability. This workshop welcomes all researchers interested in interactive systems that rely on text input or output. Participants should submit a short position statement outlining their background, past work, and future plans, and suggesting a use-case they would like to explore in-depth during the workshop. During the workshop, small teams will form around common or compelling use-cases. Teams will spend time brainstorming, creating low-fidelity prototypes, and discussing their use-case with the group. Participants may optionally submit a technical paper for presentation as part of the workshop program. The workshop serves to sustain and build the community of text entry researchers who attend CHI. It provides an opportunity for new members to join this community, soliciting feedback from experts in a small and supportive environment.