654 research outputs found

    FlexType: Flexible Text Input with a Small Set of Input Gestures

    In many situations, it may be impractical or impossible to enter text by selecting precise locations on a physical or touchscreen keyboard. We present an ambiguous keyboard with four character groups that has potential applications for eyes-free text entry, as well as text entry using a single switch or a brain-computer interface. We develop a procedure for optimizing these character groupings based on a disambiguation algorithm that leverages a long-span language model. We produce both alphabetically-constrained and unconstrained character groups in an offline optimization experiment and compare them in a longitudinal user study. Our results did not show a significant difference between the constrained and unconstrained character groups after four hours of practice. As expected, participants had significantly more errors with the unconstrained groups in the first session, suggesting a higher barrier to learning the technique. We therefore recommend the alphabetically-constrained character groups, with which participants were able to achieve an average entry rate of 12.0 words per minute with a 2.03% character error rate using a single hand and with no visual feedback.
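    The core disambiguation idea described in this abstract can be sketched in a few lines: each letter maps to one of four groups, a gesture sequence selects a sequence of groups, and a language model ranks the words consistent with that sequence. The four-group assignment and the tiny unigram list below are illustrative stand-ins, not the paper's optimized groups or long-span language model.

```python
# Hypothetical alphabetically-constrained four-group assignment
# (for illustration only; not the paper's optimized grouping).
GROUPS = ["abcdef", "ghijklm", "nopqrs", "tuvwxyz"]
CHAR_TO_GROUP = {c: g for g, chars in enumerate(GROUPS) for c in chars}

# Toy unigram frequencies standing in for a long-span language model.
UNIGRAMS = {"the": 0.9, "tie": 0.05, "toe": 0.04, "vie": 0.01}

def encode(word):
    """Map a word to its sequence of group indices."""
    return tuple(CHAR_TO_GROUP[c] for c in word)

def disambiguate(group_sequence, lexicon=UNIGRAMS):
    """Return lexicon words matching the group sequence, most likely first."""
    matches = [w for w in lexicon if encode(w) == tuple(group_sequence)]
    return sorted(matches, key=lambda w: -lexicon[w])
```

    With this grouping, "the", "tie", and "vie" all produce the same gesture sequence (3, 1, 0), and the language model breaks the tie in favor of the most frequent candidate.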

    WearPut : Designing Dexterous Wearable Input based on the Characteristics of Human Finger Motions

    Department of Biomedical Engineering (Human Factors Engineering)
    Powerful microchips for computing and networking have allowed a wide range of wearable devices to be miniaturized with high fidelity and availability. In particular, commercially successful smartwatches worn on the wrist drive market growth by taking on roles in smartphone interaction and health management. Emerging Head Mounted Displays (HMDs) for Augmented Reality (AR) and Virtual Reality (VR) also impact application areas such as video games, education, simulation, and productivity tools. However, these powerful wearables face interaction challenges due to the inevitably limited space for input and output imposed by form factors specialized to fit body parts. To complement the constrained interaction experience, many wearable devices still rely on other large form factor devices (e.g., smartphones or hand-held controllers). Despite their usefulness, these additional devices can constrain the viability of wearables in many usage scenarios by tethering users' hands to physical devices. This thesis argues that developing novel human-computer interaction techniques for specialized wearable form factors is vital for wearables to become reliable standalone products. It seeks to address the issue of constrained interaction experience with novel interaction techniques that exploit finger motions during input on the specialized form factors of wearable devices. Several characteristics of finger input motions promise to increase the expressiveness of input on the physically limited input space of wearable devices. First, finger-based input techniques are prevalent on many large form factor devices (e.g., touchscreens or physical keyboards) due to their fast, accurate performance and high familiarity. Second, many commercial wearable products provide built-in sensors (e.g., touchscreens or hand tracking systems) to detect finger motions.
This enables the implementation of novel interaction systems without any additional sensors or devices. Third, the specialized form factors of wearable devices create unique input contexts as the fingers approach their locations, shapes, and components. Finally, the dexterity of the fingers, with their distinctive appearance, high degrees of freedom, and highly sensitive joint-angle perception, has the potential to widen the range of input available through various movement features on the surface and in the air. Accordingly, the general claim of this thesis is that understanding how users move their fingers during input enables increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices. This thesis demonstrates the general claim by providing evidence across wearable scenarios with smartwatches and HMDs. First, this thesis explored the comfortable range of static and dynamic angled touch input on the touchscreens of smartwatches. The results showed specific comfort ranges varying with fingers, finger regions, and poses, owing to the unique input context in which the touching hand approaches a small, fixed touchscreen within a limited range of angles. Finger region-aware systems that recognize the flat and the side of the finger were then constructed from the contact areas on the touchscreen to enhance the expressiveness of angle-based touch input. In the second scenario, this thesis revealed distinctive touch profiles of different fingers caused by the unique input context of the smartwatch touchscreen. The results led to the implementation of finger identification systems that distinguish two or three fingers. Two virtual keyboards with 12 and 16 keys showed the feasibility of touch-based finger identification, which increases the expressiveness of touch input techniques.
In addition, this thesis supports the general claim across a range of wearable scenarios by exploring finger input motions in the air. In the third scenario, this thesis investigated the motions of in-air finger stroking during unconstrained in-air typing for HMDs. The observation study revealed details of in-air finger motions during fast sequential input, such as strategies, kinematics, correlated movements, inter-finger-stroke relationships, and individual in-air keys. The in-depth analysis led to practical guidelines for developing robust in-air typing systems based on finger stroking. Lastly, this thesis examined viable locations for in-air thumb touch input on virtual targets above the palm. It was confirmed that fast and accurate sequential thumb touches can be achieved at a total of 8 key locations using the built-in hand tracking system of a commercial HMD. Final typing studies with a novel in-air thumb typing system verified increases in the expressiveness of virtual target selection on HMDs. This thesis argues that the objective and subjective results and the novel interaction techniques across these wearable scenarios support the general claim that understanding how users move their fingers during input enables increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices. Finally, this thesis concludes with its contributions, design considerations, and the scope of future research, to help future researchers and developers implement robust finger-based interaction systems on various types of wearable devices.
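The finger identification idea in the second scenario, distinguishing fingers by their touch profiles, can be illustrated with a toy threshold classifier over the contact ellipse reported by a touchscreen. The features and thresholds below are invented for illustration; the thesis builds its classifiers from measured contact data rather than fixed cutoffs.

```python
# Toy sketch of contact-profile-based finger identification.
# Thresholds are hypothetical, for illustration only.

def identify_finger(contact_major_mm, contact_minor_mm):
    """Classify a touch as thumb, index, or little finger by ellipse size."""
    area = contact_major_mm * contact_minor_mm  # crude contact-area proxy
    if area > 80:       # large, wide contacts: thumb
        return "thumb"
    if area > 40:       # medium contacts: index finger
        return "index"
    return "little"     # small contacts: little finger
```

A real system would replace the thresholds with a classifier trained per user, since contact profiles vary with hand size and touch pose.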

    An Ambiguous Technique for Nonvisual Text Entry

    Text entry is a common daily task for many people, but it can be a challenge for people with visual impairments when using virtual touchscreen keyboards that lack physical key boundaries. In this thesis, we investigate using a small number of gestures to select from groups of characters, removing most or all dependence on touch locations. We leverage a predictive language model to select the most likely characters from the selected groups once a user completes each word. Using a preliminary interface with six groups of characters based on a Qwerty keyboard, we find that users are able to enter text with no visual feedback at 19.1 words per minute (WPM) with a 2.1% character error rate (CER) after five hours of practice. We explore ways to optimize the ambiguous groups to reduce the number of disambiguation errors. We develop a novel interface named FlexType with four character groups instead of six in order to remove all remaining location dependence and enable one-handed input. We compare optimized groups with and without constraining the group assignments to alphabetical order in a user study. We find that users enter text with no visual feedback at 12.0 WPM with a 2.0% CER using the constrained groups after four hours of practice; there was no significant difference from the unconstrained groups. We improve FlexType based on user feedback and tune the recognition algorithm parameters based on the study data. We conduct an interview study with 12 blind users to assess the challenges they encounter while entering text and to solicit feedback on FlexType, and we further incorporate this feedback into the interface. We evaluate the improved interface in a longitudinal study with 12 blind participants. On average, participants entered text at 8.2 words per minute using FlexType, 7.5 words per minute using a Qwerty keyboard with VoiceOver, and 26.9 words per minute using Braille Screen Input.

    AUGMENTED TOUCH INTERACTIONS WITH FINGER CONTACT SHAPE AND ORIENTATION

    Touchscreen interactions are far less expressive than the range of touch that human hands are capable of - even considering technologies such as multi-touch and force-sensitive surfaces. Recently, some touchscreens have added the capability to sense the actual contact area of a finger on the touch surface, which provides additional degrees of freedom - the size and shape of the touch, and the finger's orientation. These additional sensory capabilities hold promise for increasing the expressiveness of touch interactions - but little is known about whether users can successfully use the new degrees of freedom. To provide this baseline information, we carried out a study with a finger-contact-sensing touchscreen, asking participants to produce a range of touches and gestures with different shapes and orientations, with both one and two fingers. We found that people are able to reliably produce two touch shapes and three orientations across a wide range of touches and gestures - a result that was confirmed in another study that used the augmented touches for a screen lock application.

    Tablet Applications for the Elderly: Specific Usability Guidelines

    While the world population is aging, technological progress is steadily advancing. Smartphones and tablets belong to a growing market, and ever more people aged 65 and above are using such touch devices. However, with advancing age, normal cognitive, sensory, perceptual, and motor changes influence psychological and physical capabilities, and therefore the way the elderly are able to use tablet applications. When designing tablet applications for the elderly, developers must be supported in understanding these capabilities. This thesis therefore provides a comprehensive compilation of usability guidelines for developing user-friendly tablet applications for older people. The development and testing of an exemplary tablet application within this thesis shows how these guidelines can be put into practice and how the result is evaluated by test users in this age group.

    TapGazer: Text Entry with Finger Tapping and Gaze-directed Word Selection

    While using VR, efficient text entry is a challenge: users cannot easily locate standard physical keyboards, and keys are often out of reach, e.g. when standing. We present TapGazer, a text entry system where users type by tapping their fingers in place. Users can tap anywhere as long as the identity of each tapping finger can be detected with sensors. Ambiguity between different possible input words is resolved by selecting target words with gaze. If gaze tracking is unavailable, ambiguity is resolved by selecting target words with additional taps. We evaluated TapGazer for seated and standing VR: seated novice users using touchpads as tap surfaces reached 44.81 words per minute (WPM), 79.17% of their QWERTY typing speed. Standing novice users tapped on their thighs with touch-sensitive gloves, reaching 45.26 WPM (71.91%). We analyze TapGazer with a theoretical performance model and discuss its potential for text input in future AR scenarios.

    Source Code Interaction on Touchscreens

    Direct interaction with touchscreens has become a primary way of using a device. This work seeks to devise interaction methods for editing textual source code on touch-enabled devices. With the advent of the "Post-PC Era", touch-centric interaction has received considerable attention in both research and development. However, various limitations have impeded widespread adoption of programming environments on modern platforms. Previous attempts have mainly succeeded by simplifying or constraining conventional programming, but have only insufficiently supported source code written in mainstream programming languages. This work includes the design, development, and evaluation of techniques for editing, selecting, and creating source code on touchscreens. The results contribute to text editing and entry methods by taking the syntax and structure of programming languages into account while exploiting the advantages of gesture-driven control. Furthermore, this work presents the design and software architecture of a mobile development environment incorporating touch-enabled modules for typical software development tasks.

    Designing Wearable Personal Assistants for Surgeons: An Egocentric Approach


    Improving the Security of Mobile Devices Through Multi-Dimensional and Analog Authentication

    Mobile devices are ubiquitous in today's society, and the usage of these devices for secure tasks like corporate email, banking, and stock trading grows by the day. The first, and often only, defense against attackers who get physical access to the device is the lock screen: the authentication task required to gain access to the device. To date, mobile devices have languished under insecure authentication scheme offerings like PINs, Pattern Unlock, and biometrics, or slow offerings like alphanumeric passwords. This work addresses the design and creation of five proof-of-concept authentication schemes that seek to increase the security of mobile authentication without compromising memorability or usability. These proof-of-concept schemes demonstrate the concept of Multi-Dimensional Authentication, a method of using data from unrelated dimensions of information, and the concept of Analog Authentication, a method utilizing continuous rather than discrete information. Security analysis will show that these schemes can be designed to exceed the security strength of alphanumeric passwords, resist shoulder-surfing in all but the worst-case scenarios, and offer significantly fewer hotspots than existing approaches. Usability analysis, including data collected from user studies of each of the five schemes, will show promising results for entry times, in some cases on par with existing PIN or Pattern Unlock approaches, and comparable qualitative ratings with existing approaches. Memorability results will demonstrate that the psychological advantages utilized by these schemes can lead to real-world improvements in recall, in some instances leading to near-perfect recall after two weeks, significantly exceeding the recall rates of similarly secure alphanumeric passwords.