28 research outputs found

    Adaptive Text Entry for Mobile Devices


    Beginners' Performance with MessagEase and QWERTY

    With the increased use of mobile phones, interest in text entry on them has also grown. Many new mobile phones are equipped with a QWERTY keypad, and new methods aiming to surpass QWERTY performance are also being developed. This thesis compares user performance with a virtual QWERTY keypad to performance with MessagEase. MessagEase uses 9 keys and can therefore be used even on very small touch displays: 9 characters are entered by tapping and the rest with a tap-and-slide gesture. An experiment was conducted in which 10 participants transcribed text with both text entry techniques. The experiment consisted of three sessions. In each session, the participants transcribed 30 phrases in total, 15 phrases with each text entry technique. Responses to the System Usability Scale (SUS) for each technique and informal interview data were also collected. A repeated-measures analysis of variance showed a significant effect of the text entry method on text entry rate (F(1,19) = 47.140, p < 0.0001). The effect of session (i.e., learning) was also statistically significant (F(2,18) = 3.631, p = 0.047), as was the session-by-method interaction (F(2,18) = 10.286, p = 0.001), indicating different learning rates. Average text entry speed with MessagEase was 7.43 words per minute (wpm) in the first session and 10.96 wpm in the third session. Text entry speed with the QWERTY soft keyboard was 17.75 wpm in the first session and 17.16 wpm in the third session. No significant difference was found in the error rates. Keywords: text entry method, MessagEase, QWERTY
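    As background for the rates reported above: words per minute in text entry research is conventionally computed by counting five characters (including spaces) as one word and excluding the first character, since timing starts at the first keystroke; whether this thesis uses exactly that formula is an assumption. A minimal sketch of the calculation (the phrase and timing below are made up, not taken from the thesis):

```python
# Standard text-entry WPM: a "word" is five characters, and the first
# character is excluded because timing starts at the first keystroke.
def words_per_minute(transcribed: str, seconds: float) -> float:
    return ((len(transcribed) - 1) / seconds) * 60 / 5

# Hypothetical example: a 28-character phrase transcribed in 19 seconds.
print(round(words_per_minute("the quick brown fox jumps ov", 19.0), 2))  # ~17.05
```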

    DoubleType: A wearable double bracelet concept for text entry

    Wearable devices are used for text entry on a daily basis. Nowadays, people use their fingers to type text on touchscreens. Unfortunately, the screen size is too small for comfortable typing over longer periods, as opposed to quick tasks such as checking social media posts or email. I present DoubleType, a wearable solution in which two bracelets are used together to type text. When used together, the combined display area offers the user more screen real estate: a larger software keyboard with larger keys to type on, and more room for viewing the text being edited. Three concepts were created and a paper prototype was produced for each. A video prototype was created to illustrate how the user interacts with the bracelets when entering text into the system. An online questionnaire containing images of the paper prototypes and a link to a video of the prototypes in use was published, and 34 volunteers participated. Five background questions were asked, followed by five questions about the prototypes. In general, participants did not see DoubleType as a comfortable system for typing text. The majority of participants also did not think DoubleType would help avoid neck and shoulder pain from typing, and most would not use DoubleType to type in a standing position for parts of the day to avoid long periods of sitting. Of the three concepts, participants most favored concept C, in which the device is placed on a table. The open-ended questions revealed that participants disliked the size of the bracelets. The prototype could be useful in a factory setting for technicians who need to make notes of the procedures they have completed. Future research with working prototypes is needed to find out how ergonomic and efficient DoubleType is for text entry.

    A Thumb Stroke-Based Virtual Keyboard for Sight-Free Text Entry on Touch-Screen Mobile Phones

    The use of QWERTY on most current mobile devices for text entry usually requires users’ full visual attention and both hands, which is not always possible due to situational or physical impairments. Prior research has shown that users prefer to hold and interact with a mobile device with a single hand when possible, which is challenging and poorly supported by current mobile devices. We propose a novel thumb-stroke-based keyboard called ThumbStroke, which supports both sight-free and one-handed text entry on touch-screen mobile devices. Selecting a character with ThumbStroke relies entirely on the direction of the thumb’s movement, which can be performed anywhere on the device screen. We evaluated ThumbStroke through a longitudinal lab experiment comprising 20 sessions with 13 participants. ThumbStroke showed advantages in typing accuracy and user perceptions in comparison to Escape and QWERTY, and resulted in faster typing speed than QWERTY for sight-free text entry.
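    The abstract does not detail how thumb movements map to characters. Purely as an illustration of direction-based selection in general (not ThumbStroke's actual layout or recognizer), a stroke can be binned into one of eight compass sectors from its start and end touch points:

```python
import math

# Illustrative sketch only: classify a thumb stroke into one of eight
# 45-degree sectors by the angle between its start and end touch points.
# This is not ThumbStroke's actual character layout or recognizer.
DIRECTIONS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

def stroke_direction(x0: float, y0: float, x1: float, y1: float) -> str:
    # Touchscreen y coordinates grow downward, so flip the y axis.
    angle = math.atan2(-(y1 - y0), x1 - x0)      # radians in (-pi, pi]
    sector = round(angle / (math.pi / 4)) % 8    # nearest 45-degree sector
    return DIRECTIONS[sector]

print(stroke_direction(100, 400, 180, 330))      # up-and-right swipe -> "NE"
```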

    Predicting and Reducing the Impact of Errors in Character-Based Text Entry

    This dissertation focuses on the effect of errors in character-based text entry techniques. The effect of errors is examined from theoretical, behavioral, and practical standpoints. This document starts with a review of the existing literature. It then presents results of a user study that investigated the effect of different error correction conditions on popular text entry performance metrics. Results showed that the way errors are handled has a significant effect on all frequently used error metrics. The outcomes also provided an understanding of how users notice and correct errors. Building on this, the dissertation then presents a new high-level and method-agnostic model for predicting the cost of error correction with a given text entry technique. Unlike existing models, it accounts for both human and system factors and is general enough to be used with most character-based techniques. A user study verified the model by measuring the effects of a faulty keyboard on text entry performance. Subsequently, the work explores potential user adaptation to a gesture recognizer’s misrecognitions in two user studies. Results revealed that users gradually adapt to misrecognition errors by replacing the erroneous gestures with alternative ones, if available. Also, users adapt to a frequently misrecognized gesture faster if it occurs more frequently than the other error-prone gestures. Finally, this work presents a new hybrid approach to simulate pressure detection on standard touchscreens. The new approach combines the existing touch-point- and time-based methods. Results of two user studies showed that it can simulate pressure detection reliably for at least two pressure levels: regular (~1 N) and extra (~3 N). Then, a new pressure-based text entry technique is presented that does not require tapping outside the virtual keyboard to reject an incorrect or unwanted prediction. Instead, the technique requires users to apply extra pressure for the tap on the next target key. The performance of the new technique was compared with the conventional technique in a user study. Results showed that for inputting short English phrases with 10% non-dictionary words, the new technique increases entry speed by 9% and decreases error rates by 25%. Also, most users (83%) favored the new technique over the conventional one. Together, the research presented in this dissertation gives more insight into how errors affect text entry and also presents improved text entry methods.
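    The hybrid pressure simulation is only summarized above. As a loose sketch of how a touch-point signal and a time signal might be combined into the two levels mentioned (regular vs. extra), with entirely hypothetical features, thresholds, and weights rather than the dissertation's actual method:

```python
# Hypothetical two-level pressure estimator combining a touch-point signal
# (reported contact area) with a time signal (dwell time). The features,
# thresholds, and weights are illustrative, not the dissertation's values.
def pressure_level(contact_area_mm2: float, dwell_ms: float) -> str:
    area_score = contact_area_mm2 / 60.0   # firmer presses flatten the fingertip
    time_score = dwell_ms / 250.0          # firmer presses tend to dwell longer
    combined = 0.6 * area_score + 0.4 * time_score
    return "extra" if combined >= 1.0 else "regular"

print(pressure_level(40.0, 120.0))   # light, quick tap       -> "regular"
print(pressure_level(75.0, 300.0))   # firm, lingering press  -> "extra"
```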

    Wearable computing and contextual awareness

    Thesis (Ph.D.) -- Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 1999. Includes bibliographical references (leaves 231-248). Computer hardware continues to shrink in size and increase in capability. This trend has allowed the prevailing concept of a computer to evolve from the mainframe to the minicomputer to the desktop. Just as the physical hardware changes, so does the use of the technology, tending towards more interactive and personal systems. Currently, another physical change is underway, placing computational power on the user's body. These wearable machines encourage new applications that were formerly infeasible and, correspondingly, will result in new usage patterns. This thesis suggests that the fundamental improvement offered by wearable computing is an increased sense of user context. I hypothesize that on-body systems can sense the user's context with little or no assistance from environmental infrastructure. These body-centered systems that "see" as the user sees and "hear" as the user hears, provide a unique "first-person" viewpoint of the user's environment. By exploiting models recovered by these systems, interfaces are created which require minimal directed action or attention by the user. In addition, more traditional applications are augmented by the contextual information recovered by these systems. To investigate these issues, I provide perceptually sensible tools for recovering and modeling user context in a mobile, everyday environment. These tools include a downward-facing, camera-based system for establishing the location of the user; a tag-based object recognition system for augmented reality; and several on-body gesture recognition systems to identify various user tasks in constrained environments. To address the practicality of contextually-aware wearable computers, issues of power recovery, heat dissipation, and weight distribution are examined. In addition, I have encouraged a community of wearable computer users at the Media Lab through design, management, and support of hardware and software infrastructure. This unique community provides a heightened awareness of the use and social issues of wearable computing. As much as possible, the lessons from this experience will be conveyed in the thesis. By Thad Eugene Starner. Ph.D.

    Making Spatial Information Accessible on Touchscreens for Users who are Blind and Visually Impaired

    Touchscreens have become a de facto standard of input for mobile devices, as they make the most of the limited input and output space imposed by the devices' form factor. In recent years, people who are blind and visually impaired have been increasing their usage of smartphones and touchscreens. Although basic access is available, many accessibility issues remain to be addressed in order to bring full inclusion to this population. One of the important challenges lies in accessing and creating spatial information on touchscreens. The work presented here provides three new techniques, using three different modalities, for accessing spatial information on touchscreens. The first system makes geometry and diagram creation accessible on a touchscreen through the use of text-to-speech and gestural input; it is informed by a qualitative study of how people who are blind and visually impaired currently access and create graphs and diagrams. The second system makes directions through maps accessible using multiple vibration sensors, without any sound or visual output. The third system investigates the use of binaural sound on a touchscreen to make various types of applications accessible, such as physics simulations, astronomy, and video games.
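    The binaural feedback itself is not specified in detail here. As a much-simplified stand-in for the idea (plain constant-power stereo panning rather than true binaural rendering with HRTFs), a touch's horizontal position can be conveyed through the relative loudness of the left and right channels:

```python
import math

# Simplified illustration: constant-power stereo panning driven by a touch's
# horizontal position. True binaural rendering would use HRTFs; this sketch
# only conveys left/right position.
def pan_gains(touch_x: float, screen_width: float) -> tuple[float, float]:
    pan = max(0.0, min(1.0, touch_x / screen_width))  # 0.0 = far left, 1.0 = far right
    angle = pan * (math.pi / 2)                       # sweep a quarter circle
    return math.cos(angle), math.sin(angle)           # (left gain, right gain)

left, right = pan_gains(270.0, 1080.0)                # a touch 25% of the way across
print(round(left, 3), round(right, 3))                # louder in the left channel
```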

    Chinese Text Entry with Mobile Devices

    For using computers and modern mobile phones, it is essential that there are efficient methods for providing textual input. About one fifth of the world's population, or over one billion people, speak some variety of Chinese as their native language. Chinese has unique characteristics as a logosyllabic language. For example, many Chinese characters are complex in structure and normally homophonic with some others. With keyboards and other key-based input devices, the normal approach is to use so-called pinyin input, where a Chinese character is entered via its pinyin code, consisting of several letters of the Roman alphabet. Because of homophony, this technique requires choosing the correct Chinese character from a list of possible candidates, making the input process more complicated than for languages written in the Roman alphabet. Moreover, the many varieties of the language spoken in different parts of China have to be taken into account as well. All of the above factors bring new challenges to the design and evaluation of Chinese text entry methods in computing systems. The overall objective of this dissertation is to improve the user experience of Chinese text entry on mobile devices. To achieve this goal, the author explores new interaction solutions and patterns of user behavior in the Chinese text entry process with various approaches, including empirical studies and performance modeling. The work covers four means of Chinese text entry on mobile devices: Chinese handwriting recognition, Chinese indirect text entry with a rotator, Mandarin dictation, and Chinese pinyin input methods on a 12-key keypad. New design solutions for Chinese handwriting recognition and for pinyin methods utilizing a rotator are proposed and shown in empirical studies to be well accepted by users. A Mandarin short-message dictation application for mobile phones is also presented, with two associated studies on human factors. Two further studies were carried out on Chinese pinyin input methods based on the 12-key keypad. A comparative study of five phrasal pinyin input methods led to design guidelines for the advanced feature of phrasal input, a technique that can speed up text entry. The second study of pinyin input methods produced a predictive model of users' error-free entry speeds. Based on the conclusions from the studies in this thesis, several additional research questions were identified for future work. For example, improvements are needed to user performance in the target selection process of Chinese text entry on mobile devices, and further design work and studies on stroke-based methods and Chinese-specific soft keyboards are also required.
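    As a concrete illustration of the homophony problem described above, a single pinyin syllable maps to many candidate characters from which the user must choose; the toy dictionary below is a tiny hypothetical sample, not a real input method's lexicon:

```python
# Toy pinyin lookup: one syllable maps to several homophonic characters,
# so the input method must present a candidate list for the user to pick from.
# This dictionary is a tiny hypothetical sample, not a real IME lexicon.
PINYIN_CANDIDATES = {
    "shi": ["是", "时", "十", "事", "师"],
    "ma":  ["吗", "妈", "马", "码"],
}

def candidates(pinyin: str) -> list[str]:
    return PINYIN_CANDIDATES.get(pinyin, [])

for code in ("shi", "ma"):
    print(code, "->", " ".join(candidates(code)))
```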

    Text Entry Performance and Situation Awareness of a Joint Optical See-Through Head-Mounted Display and Smartphone System

    Optical see-through head-mounted displays (OST HMDs) are a popular output medium for mobile Augmented Reality (AR) applications. To date, they lack efficient text entry techniques. Smartphones are a major text entry medium in mobile contexts, but their attentional demands can contribute to accidents while typing on the go. Mobile multi-display ecologies, such as combined OST HMD-smartphone systems, promise performance and situation awareness benefits over single-device use. We study the joint performance of text entry on mobile phones with text output on optical see-through head-mounted displays. A series of five experiments with a total of 86 participants indicates that, as of today, the challenges in such a joint interactive system outweigh the potential benefits. Comment: To appear in IEEE Transactions on Visualization and Computer Graphics, page(s) 1-17. Print ISSN: 1077-2626. Online ISSN: 1077-262