
    Typing Efficiency and Suggestion Accuracy Influence the Benefits and Adoption of Word Suggestions

    Suggesting words to complete a given sequence of characters is a common feature of typing interfaces. Yet, previous studies have not found a clear benefit, some even finding it detrimental. We report on the first study to control for two important factors, word suggestion accuracy and typing efficiency. Our accuracy factor is enabled by a new methodology that builds on standard metrics of word suggestions. Typing efficiency is based on device type. Results show word suggestions are used less often in a desktop condition, with little difference between tablet and phone conditions. Very accurate suggestions do not improve entry speed on desktop, but do on tablet and phone. Based on our findings, we discuss implications for the design of automation features in typing systems.
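
    The abstract does not spell out how suggestion accuracy is operationalized, but a common way to measure it is to count how often the intended word appears among the top-k suggestions offered for each typed prefix. The sketch below illustrates that idea; the suggest function, the toy phrase set, and k=3 are illustrative assumptions, not the authors' exact metric.

```python
# Sketch of a top-k word-suggestion accuracy metric (illustrative, not the
# authors' exact methodology). For each prefix of each intended word, check
# whether the word appears among the top-k suggestions.

def top_k_accuracy(phrases, suggest, k=3):
    """Fraction of (word, prefix) pairs for which the intended word
    is among the top-k suggestions returned by suggest(prefix)."""
    hits = total = 0
    for phrase in phrases:
        for word in phrase.split():
            for i in range(1, len(word)):   # prefixes of length 1..len(word)-1
                total += 1
                if word in suggest(word[:i])[:k]:
                    hits += 1
    return hits / total if total else 0.0

# Toy suggester: proposes dictionary words that start with the typed prefix.
toy_dictionary = ["the", "this", "that", "cat", "sat"]
toy_suggest = lambda prefix: [w for w in toy_dictionary if w.startswith(prefix)]
print(top_k_accuracy(["the cat sat"], toy_suggest, k=3))
```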

    The Effect of Device When Using Smartphones and Computers to Answer Multiple-Choice and Open-Response Questions in Distance Education

    Traditionally in higher education, online courses have been designed for computer users. However, the advent of mobile learning (m-learning) and the proliferation of smartphones have created two challenges for online students and instructional designers. First, instruction designed for a larger computer screen often loses its effectiveness when displayed on a smaller smartphone screen. Second, requiring students to write remains a hallmark of higher education, but miniature keyboards might restrict how thoroughly smartphone users respond to open-response test questions. The present study addressed both challenges by featuring m-learning’s greatest strength (multimedia) and by investigating its greatest weakness (text input). The purpose of the current study was to extend previous research associated with m-learning. The first goal was to determine the effect of device (computer vs. smartphone) on performance when answering multiple-choice and open-response questions. The second goal was to determine whether computers and smartphones would receive significantly different usability ratings when used by participants to answer multiple-choice and open-response questions. The construct of usability was defined as a composite score based on ratings of effectiveness, efficiency, and satisfaction. This comparative study used a between-subjects, posttest, experimental design. The study randomly assigned 70 adults to either the computer treatment group or the smartphone treatment group. Both treatment groups received the same narrated multimedia lesson on how a solar cell works. Participants accessed the lesson using either their personal computers (computer treatment group) or their personal smartphones (smartphone treatment group) at the time and location of their choice. After viewing the multimedia lesson, all participants answered the same multiple-choice and open-response posttest questions. In the current study, computer users and smartphone users had no significant difference in their scores on multiple-choice recall questions. On open-response questions, smartphone users performed better than predicted, which resulted in no significant difference between scores of the two treatment groups. Regarding usability, participants gave computers and smartphones high usability ratings when answering multiple-choice items. However, for answering open-response items, smartphones received significantly lower usability ratings than computers.
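
    The study defines usability as a composite of effectiveness, efficiency, and satisfaction ratings without giving the scoring rule here. A minimal sketch of one plausible composite, an equal-weight average of ratings normalized to a common scale, is shown below; the 1-7 scale and equal weights are assumptions, not the study's instrument.

```python
# Minimal sketch of a composite usability score built from effectiveness,
# efficiency, and satisfaction ratings. The 1-7 rating scale and equal
# weighting are assumptions, not the study's actual scoring rule.

def composite_usability(effectiveness, efficiency, satisfaction, scale_max=7):
    """Normalize each rating to [0, 1] and return their mean."""
    ratings = (effectiveness, efficiency, satisfaction)
    return sum(r / scale_max for r in ratings) / len(ratings)

# A participant who rates a device 6, 5, and 7 gets a composite of about 0.86.
print(round(composite_usability(6, 5, 7), 2))
```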

    Optimizing Human Performance in Mobile Text Entry

    Although text entry on mobile phones is abundant, research strives to achieve desktop typing performance "on the go". But how can researchers evaluate new and existing mobile text entry techniques? How can they ensure that evaluations are conducted in a consistent manner that facilitates comparison? What forms of input are possible on a mobile device? Do the audio and haptic feedback options with most touchscreen keyboards affect performance? What influences users' preference for one feedback or another? Can rearranging the characters and keys of a keyboard improve performance? This dissertation answers these questions and more. The developed TEMA software allows researchers to evaluate mobile text entry methods in an easy, detailed, and consistent manner. Many in academia and industry have adopted it. TEMA was used to evaluate a typical QWERTY keyboard with multiple options for audio and haptic feedback. Though feedback did not have a significant effect on performance, a survey revealed that users' choice of feedback is influenced by social and technical factors. Another study using TEMA showed that novice users entered text faster using a tapping technique than with a gesture or handwriting technique. This motivated rearranging the keys and characters to create a new keyboard, MIME, that would provide better performance for expert users. Data on character frequency and key selection times were gathered and used to design MIME. A longitudinal user study using TEMA revealed an entry speed of 17 wpm and a total error rate of 1.7% for MIME, compared to 23 wpm and 5.2% for QWERTY. Although MIME's entry speed did not surpass QWERTY's during the study, it is projected to do so after twelve hours of practice. MIME's error rate was consistently low and significantly lower than QWERTY's. In addition, participants found MIME more comfortable to use, with some reporting hand soreness after using QWERTY for extended periods.
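
    For readers unfamiliar with the reported numbers, entry speed in words per minute and total error rate are standard text entry metrics that tools like TEMA log automatically. The sketch below shows the conventional formulas; it is a reference illustration, not TEMA's source code.

```python
# Standard text entry metrics, sketched for reference; this is not TEMA's
# code. Entry speed uses the conventional 5-characters-per-word definition,
# and total error rate uses corrected (IF) and uncorrected (INF) keystroke
# error counts relative to correct keystrokes (C).

def words_per_minute(transcribed_text, seconds):
    """WPM = ((|T| - 1) / seconds) * 60 / 5."""
    return (len(transcribed_text) - 1) / seconds * 60.0 / 5.0

def total_error_rate(c, inf, if_):
    """Total error rate (%) = (INF + IF) / (C + INF + IF) * 100."""
    return 100.0 * (inf + if_) / (c + inf + if_)

print(round(words_per_minute("the quick brown fox", 10.0), 1))  # 21.6 wpm
print(total_error_rate(c=95, inf=1, if_=4))                     # 5.0 %
```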

    Predicting and Reducing the Impact of Errors in Character-Based Text Entry

    This dissertation focuses on the effect of errors in character-based text entry techniques. The effect of errors is targeted from theoretical, behavioral, and practical standpoints. This document starts with a review of the existing literature. It then presents results of a user study that investigated the effect of different error correction conditions on popular text entry performance metrics. Results showed that the way errors are handled has a significant effect on all frequently used error metrics. The outcomes also provided an understanding of how users notice and correct errors. Building on this, the dissertation then presents a new high-level and method-agnostic model for predicting the cost of error correction with a given text entry technique. Unlike the existing models, it accounts for both human and system factors and is general enough to be used with most character-based techniques. A user study verified the model through measuring the effects of a faulty keyboard on text entry performance. Subsequently, the work explores potential user adaptation to a gesture recognizer’s misrecognitions in two user studies. Results revealed that users gradually adapt to misrecognition errors by replacing the erroneous gestures with alternative ones, if available. Also, users adapt to a frequently misrecognized gesture faster if it occurs more frequently than the other error-prone gestures. Finally, this work presents a new hybrid approach to simulate pressure detection on standard touchscreens. The new approach combines the existing touch-point- and time-based methods. Results of two user studies showed that it can simulate pressure detection more reliably for at least two pressure levels: regular (~1 N) and extra (~3 N). Then, a new pressure-based text entry technique is presented that does not require tapping outside the virtual keyboard to reject an incorrect or unwanted prediction. Instead, the technique requires users to apply extra pressure for the tap on the next target key. The performance of the new technique was compared with the conventional technique in a user study. Results showed that for inputting short English phrases with 10% non-dictionary words, the new technique increases entry speed by 9% and decreases error rates by 25%. Also, most users (83%) favor the new technique over the conventional one. Together, the research presented in this dissertation gives more insight into how errors affect text entry and also presents improved text entry methods.
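
    The hybrid pressure simulation combines a touch-point signal with a time-based signal to distinguish regular (~1 N) from extra (~3 N) pressure. The sketch below shows one way such a two-feature rule could be written; the contact-radius and dwell-time features, the thresholds, and the conjunction rule are illustrative assumptions rather than the dissertation's calibrated method.

```python
# Illustrative sketch of a hybrid simulated-pressure classifier that combines
# a touch-point feature (contact radius) with a time feature (dwell time) to
# label a tap as regular (~1 N) or extra (~3 N) pressure. The features,
# thresholds, and conjunction rule are assumptions, not the dissertation's
# calibrated method.

def classify_pressure(contact_radius_mm, dwell_ms,
                      radius_thresh=4.5, dwell_thresh=180):
    """Return 'extra' when both signals exceed their thresholds, else 'regular'."""
    if contact_radius_mm > radius_thresh and dwell_ms > dwell_thresh:
        return "extra"
    return "regular"

# A firm, longer press reads as extra pressure; a quick light tap does not.
print(classify_pressure(5.2, 220))  # extra -> e.g., reject the prediction
print(classify_pressure(3.8, 90))   # regular -> accept the predicted word
```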

    Two one-handed tilting-based writing techniques on a smartphone

    Text entry is a vital part of operating a mobile device, and is often done using a virtual keyboard such as QWERTY. Text entry using the virtual keyboard often faces difficulties, as the individual buttons are small and intangible, which can lead to high error rates and low text entry speed. This thesis reports a user experiment of two novel tilting-based text entry techniques with and without button press for key selection. The experiment focused on two main issues: 1) the performance of the tilting-based methods in comparison to the current commonly used reference method, the virtual QWERTY keyboard; and 2) evaluation of subjective satisfaction with the novel methods. The experiment was conducted using TEMA software running on an Android smartphone with a relatively small screen. All writing was done with one hand only. The participants were able to comprehend and learn to use the new methods without any major problems. The development of text entry skill with the new methods was clear, as the mean text entry rates improved by 63-80 percent. The reference method QWERTY remained the fastest of the three throughout the experiment. The tilting-based technique with key press for selection had the lowest total error rate at the end of the experiment, closely followed by QWERTY. Interview and questionnaire results showed that in some cases the tilting-based method was the preferred method of the three. Many of the shortcomings of tilt-based methods found during the experiment can be addressed in further development, and these methods are likely to prove competitive on devices with very small displays. Tilting has potential as part of other interaction techniques besides text entry, and could be used to increase bandwidth between the device and the user without significantly increasing the cognitive load.
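
    As a rough illustration of how tilting can drive key selection, the sketch below quantizes device pitch and roll into a small grid of character groups, with a button press (or dwell) confirming the highlighted group. The grid layout, angle range, and bucketing rule are assumptions made for illustration, not the thesis's implementation.

```python
# Illustrative tilt-to-key mapping for a tilting-based text entry technique.
# Device pitch and roll are quantized into a 3x3 grid of character groups,
# and a button press (or dwell) confirms the highlighted group. The layout,
# angle range, and bucketing rule are assumptions, not the thesis's design.

CHAR_GRID = [["abc", "def", "ghi"],
             ["jkl", "mno", "pqr"],
             ["stu", "vwx", "yz "]]

def tilt_to_group(pitch_deg, roll_deg, max_tilt=30.0):
    """Map pitch/roll in [-max_tilt, +max_tilt] degrees to a grid cell."""
    def bucket(angle):
        clamped = max(-max_tilt, min(max_tilt, angle))
        return min(2, int((clamped + max_tilt) / (2 * max_tilt / 3)))
    return CHAR_GRID[bucket(pitch_deg)][bucket(roll_deg)]

print(tilt_to_group(-25, 0))   # tilted forward, level roll -> 'def'
```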

    Tongues Tide: Translingual Directions for Technologically-Mediated Composing Platforms

    This dissertation examines the link between classroom practices, language policies, and writing technologies in a translingual framework. Specifically, in the context of higher education, I explore the ways in which English-only policies dominate the academy and discourage linguistic diversity and inclusivity. This monolingual approach is emulated by composing software like MS Word and Google Docs, which surveil and constrain the languages and discourses available to student writers. These programs take a Current-Traditionalist approach to writing that is characterized by preoccupation with error and the positioning of the teacher as disciplinarian. In doing so, they inhibit translingual teaching and learning. Drawing upon the results of my ethnographic study on the composing processes of students in ENGL 109: Introduction to Academic Writing (a course taught at the University of Waterloo), I offer suggestions for improving the design of these technologically-mediated composing platforms to better accommodate translingual users.

    A User-centered Design of Patient Safety Event Reporting Systems


    WearPut : Designing Dexterous Wearable Input based on the Characteristics of Human Finger Motions

    Powerful microchips for computing and networking allow a wide range of wearable devices to be miniaturized with high fidelity and availability. In particular, the commercially successful smartwatches placed on the wrist drive market growth by sharing the role of smartphones and supporting health management. The emerging Head Mounted Displays (HMDs) for Augmented Reality (AR) and Virtual Reality (VR) also impact various application areas in video games, education, simulation, and productivity tools. However, these powerful wearables face challenges in interaction because their specialized form factors, shaped to fit the body, inevitably limit the space available for input and output. To complement the constrained interaction experience, many wearable devices still rely on other large form factor devices (e.g., smartphones or hand-held controllers). Despite their usefulness, these additional devices can constrain the viability of wearables in many usage scenarios by tethering users' hands to physical devices. This thesis argues that developing novel human-computer interaction techniques for specialized wearable form factors is vital for wearables to be reliable standalone products. It seeks to address this constrained interaction experience with novel interaction techniques, by exploring finger motions during input on the specialized form factors of wearable devices. Several characteristics of finger input motions are promising for increasing the expressiveness of input on the physically limited input space of wearable devices. First, finger-based input techniques are prevalent on many large form factor devices (e.g., touchscreens or physical keyboards) because of their fast and accurate performance and high familiarity. Second, many commercial wearable products provide built-in sensors (e.g., a touchscreen or hand tracking system) to detect finger motions, which enables the implementation of novel interaction systems without any additional sensors or devices. Third, the specialized form factors of wearable devices can create unique input contexts as the fingers approach their locations, shapes, and components. Finally, the dexterity of fingers, with their distinctive appearance, high degrees of freedom, and high sensitivity of joint angle perception, has the potential to widen the range of input available through various movement features on the surface and in the air. Accordingly, the general claim of this thesis is that understanding how users move their fingers during input will enable increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices. The thesis demonstrates this claim by providing evidence in various wearable scenarios with smartwatches and HMDs. First, it explored the comfortable range of static and dynamic angle-based touch input on the touchscreen of smartwatches. The results showed specific comfort ranges across variations in fingers, finger regions, and poses, owing to the unique input context in which the touching hand approaches a small, fixed touchscreen within a limited range of angles. Finger region-aware systems that recognize the flat and the side of the finger were then constructed from the contact areas on the touchscreen to enhance the expressiveness of angle-based touch input.
In the second scenario, this thesis revealed distinctive touch profiles of different fingers caused by the unique input context of the smartwatch touchscreen. The results led to the implementation of finger identification systems for distinguishing two or three fingers. Two virtual keyboards with 12 and 16 keys showed the feasibility of touch-based finger identification, which enables increases in the expressiveness of touch input techniques. In addition, this thesis supports the general claim with a range of wearable scenarios by exploring finger input motions in the air. In the third scenario, it investigated the motions of in-air finger stroking during unconstrained in-air typing for HMDs. The results of the observation study revealed details of in-air finger motions during fast sequential input, such as strategies, kinematics, correlated movements, inter-finger stroke relationships, and individual in-air keys. The in-depth analysis led to a practical guideline for developing robust in-air typing systems based on finger stroking. Lastly, this thesis examined the viable locations of in-air thumb touch input to virtual targets above the palm. It was confirmed that fast and accurate sequential thumb touch can be achieved at a total of 8 key locations with the built-in hand tracking system in a commercial HMD. Final typing studies with a novel in-air thumb typing system verified increases in the expressiveness of virtual target selection on HMDs. This thesis argues that the objective and subjective results and the novel interaction techniques across these wearable scenarios support the general claim that understanding how users move their fingers during input will enable increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices. Finally, the thesis concludes with its contributions, design considerations, and the scope of future research, so that future researchers and developers can implement robust finger-based interaction systems on various types of wearable devices.
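
    As a concrete illustration of the finger region-aware idea described above, the sketch below labels a smartwatch touch as coming from the flat (pad) or the side of the finger using only the reported contact area. The single-feature rule and threshold are illustrative assumptions; the thesis builds its recognizers from richer contact-area data.

```python
# Illustrative finger region classifier: label a smartwatch touch as made
# with the flat (pad) or the side of the finger from the reported contact
# area alone. The single-feature rule and threshold are assumptions; the
# thesis's recognizers use richer contact-area measurements.

def classify_finger_region(contact_area_mm2, area_thresh=40.0):
    """Finger pads tend to produce larger contact areas than finger sides."""
    return "flat" if contact_area_mm2 >= area_thresh else "side"

print(classify_finger_region(55.0))  # -> 'flat'
print(classify_finger_region(22.0))  # -> 'side'
```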

    Technology and Testing

    From early answer sheets filled in with number 2 pencils, to tests administered by mainframe computers, to assessments wholly constructed by computers, it is clear that technology is changing the field of educational and psychological measurement. The numerous and rapid advances have immediate impact on test creators, assessment professionals, and those who implement and analyze assessments. This comprehensive new volume brings together leading experts on the issues posed by technological applications in testing, with chapters on game-based assessment, testing with simulations, video assessment, computerized test development, large-scale test delivery, model choice, validity, and error issues. Including an overview of existing literature and ground-breaking research, each chapter considers the technological, practical, and ethical considerations of this rapidly-changing area. Ideal for researchers and professionals in testing and assessment, Technology and Testing provides a critical and in-depth look at one of the most pressing topics in educational testing today.

    Effects of Interpretation Error on User Learning in Novel Input Mechanisms

    Novel input mechanisms generate signals that are interpreted as commands in computer systems. Noise from various sources can cause the system to produce errors when interpreting the signal, misrepresenting the user's intention. While research has examined how these interpretation errors affect the performance of users of novel signal-based input mechanisms, such as a brain-computer interface (BCI), little is known about how they affect user learning. Previous literature on command-based selection tasks has suggested that errors will have a negative impact on expertise development; however, the presence of errors could conversely improve a user's learning by demanding more attention from the user. This thesis begins by studying people's ability to use a novel input mechanism with a noisy input signal: a motor imagery BCI. By converting brain signals into computer commands, users could complete selection tasks using imagined movement. However, the high degree of interpretation errors caused by noise in the input signals made it difficult to differentiate the user's intent from the noise. As such, the results of the BCI study served as motivation to test the effects of interpretation errors on user learning. Two studies were conducted to determine how user performance and learning were affected by different rates of interpretation error in a novel input mechanism. The results from these two studies showed that interpretation errors led to slower task completion times, lower accuracy in memory recall, greater rates of user errors, and increased frustration. This new knowledge about the effects of interpretation errors can contribute to better design of input mechanisms and training programs for novel input systems.
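
    Studies like the two described above typically control the interpretation error rate by injecting misinterpretations into an otherwise correct command stream. The sketch below shows a minimal version of that manipulation; the command set and uniform substitution rule are illustrative assumptions, not the thesis's apparatus.

```python
# Minimal sketch of injecting interpretation errors at a controlled rate when
# simulating a noisy input mechanism. The command set and uniform substitution
# rule are illustrative assumptions, not the thesis's apparatus.

import random

COMMANDS = ["left", "right", "select", "back"]

def interpret(intended, error_rate, rng=random):
    """Return the intended command, or with probability error_rate a
    different command drawn uniformly from the remaining ones."""
    if rng.random() < error_rate:
        return rng.choice([c for c in COMMANDS if c != intended])
    return intended

random.seed(0)
print([interpret("select", 0.3) for _ in range(5)])  # mostly 'select', some errors
```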