
    Calibration-free Text Entry Using Smooth Pursuit Eye Movements

    In this paper, we propose a calibration-free gaze-based text entry system that uses smooth pursuit eye movements. We report on our implementation, which improves over prior work on smooth pursuit text entry by 1) eliminating the need for calibration through motion correlation, 2) increasing the input rate from 3.34 to 3.41 words per minute, and 3) featuring text suggestions trained on a lexicon of 10,000 sentences recommended in the literature. We report on a user study (N=26) which shows that users are able to eye type at 3.41 words per minute without calibration and without user training. Qualitative feedback also indicates that users perceive the system positively. Our work is of particular benefit for disabled users and for situations in which voice and tactile input are not feasible (e.g., in noisy environments or when the hands are occupied).
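    As a rough illustration of the motion-correlation principle the abstract relies on, the sketch below correlates a window of raw gaze samples against each key's known on-screen trajectory and selects the best match. All function and parameter names are illustrative, not taken from the paper.

```python
# Illustrative motion-correlation selection (names are hypothetical).
# Each on-screen key moves along a known trajectory; the key whose
# trajectory best correlates with the raw gaze signal is selected.
import numpy as np

def pearson(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation of two equal-length 1-D signals."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom else 0.0

def select_key(gaze_xy: np.ndarray, key_trajectories: dict, threshold: float = 0.8):
    """gaze_xy: (N, 2) raw gaze samples; key_trajectories maps a key
    label to its (N, 2) target positions over the same time window.
    Returns the best-matching key, or None if no key clears threshold."""
    best_key, best_score = None, threshold
    for key, traj in key_trajectories.items():
        # Correlate x and y components separately, then average.
        score = 0.5 * (pearson(gaze_xy[:, 0], traj[:, 0]) +
                       pearson(gaze_xy[:, 1], traj[:, 1]))
        if score > best_score:
            best_key, best_score = key, score
    return best_key
```

    Because correlation compares movement patterns rather than absolute positions, and is invariant to offset and scale, the raw uncalibrated gaze signal suffices; this is what makes such approaches calibration-free.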

    Enhancing an Eye-Tracker based Human-Computer Interface with Multi-modal Accessibility Applied for Text Entry

    In the natural course of things, human beings use multiple sensory modalities to communicate effectively and to carry out day-to-day tasks efficiently. During verbal conversations, for instance, we make use of voice, eyes, and various body gestures. Effective human-computer interaction likewise involves the hands, eyes, and voice, when available. By combining multiple sensory modalities, we can therefore make the whole process more natural and ensure enhanced performance, even for disabled users. Towards this end, we have developed a multi-modal human-computer interface (HCI) by combining an eye-tracker with a soft-switch, which may be considered to represent a second modality. This multi-modal HCI is applied to text entry using a virtual keyboard designed in-house to facilitate enhanced performance. Our experimental results demonstrate that using multiple modalities for text entry through the virtual keyboard is more efficient and less strenuous than a single-modality system, and that it also solves the Midas-touch problem, which is inherent in eye-tracker-based HCI systems where dwell time alone is used to select a character.
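    A minimal sketch of the gaze-plus-switch idea described above: gaze determines which key is currently looked at, and an explicit soft-switch press commits the selection, so no dwell timer can trigger unintended input. The data structures and names here are hypothetical.

```python
# Illustrative gaze-plus-switch selection (all names hypothetical):
# gaze highlights the key under the cursor, and a soft-switch press
# commits it, avoiding dwell-time selection entirely.
from dataclasses import dataclass

@dataclass
class Key:
    label: str
    x: float
    y: float
    w: float
    h: float

    def contains(self, gx: float, gy: float) -> bool:
        return self.x <= gx < self.x + self.w and self.y <= gy < self.y + self.h

def key_under_gaze(keys, gx, gy):
    """The key currently looked at, or None (used for highlighting)."""
    return next((k for k in keys if k.contains(gx, gy)), None)

def on_switch_press(keys, gx, gy, typed):
    """Commit the gazed key only on an explicit switch press, so
    merely looking around the keyboard never types anything."""
    key = key_under_gaze(keys, gx, gy)
    if key is not None:
        typed.append(key.label)
```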

    Filteryedping: a dwell-free eye typing technique

    The ability to type using eye gaze only is extremely important for individuals with a severe motor disability. To eye type, the user currently must sequentially gaze at letters in a virtual keyboard and dwell on each desired letter for a specific amount of time to input that key. Dwell-based eye typing has two possible drawbacks: unwanted input if the dwell threshold is too short, or slow typing rates if the threshold is long. We demonstrate an eye-typing technique that does not require the user to dwell on the letters she wants to input. Our method automatically filters out unwanted letters from the sequence of letters gazed at while typing a word. It ranks candidate words based on their length and frequency and presents them to the user for confirmation. Spell correction and support for typing words not in the corpus are also included. Funding: São Paulo Research Foundation (FAPESP) grant #2012/01510-0, CAPES, and CNPq.
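    One plausible reading of the filtering-and-ranking step is subsequence matching: a lexicon word is a candidate if its letters occur, in order, within the gazed-letter sequence, and candidates are ranked by length and corpus frequency as the abstract states. The sketch below illustrates that reading; the lexicon and its frequencies are made up, and the paper's actual filter may differ.

```python
# Illustrative subsequence filter and ranker; the lexicon and its
# frequency counts are made-up stand-ins.
def is_subsequence(word: str, gazed: str) -> bool:
    """True if the letters of word appear in order within gazed."""
    it = iter(gazed)
    return all(ch in it for ch in word)

def rank_candidates(gazed: str, lexicon: dict, top_n: int = 5):
    """lexicon maps word -> corpus frequency; longer, more frequent
    candidate words are ranked first."""
    cands = [w for w in lexicon if is_subsequence(w, gazed)]
    cands.sort(key=lambda w: (-len(w), -lexicon[w]))
    return cands[:top_n]

# Example: 'q' and 'p' were gazed at on the way but get filtered out.
print(rank_candidates("qhelplo", {"hello": 900, "help": 500, "he": 10}))
# -> ['hello', 'help', 'he']
```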

    Dwell-free input methods for people with motor impairments

    Millions of individuals affected by disorders or injuries that cause severe motor impairments have difficulty performing compound manipulations using traditional input devices. This thesis first explores how effective various assistive technologies are for people with motor impairments. The following questions are studied: (1) What activities are performed? (2) What tools are used to support these activities? (3) What are the advantages and limitations of these tools? (4) How do users learn about and choose assistive technologies? (5) Why do users adopt or abandon certain tools? A qualitative study of fifteen people with motor impairments indicates that users have strong needs for efficient text entry and communication tools that are not met by existing technologies. To address these needs, this thesis proposes three dwell-free input methods, designed to improve the efficacy of target selection and text entry based on eye-tracking and head-tracking systems. They yield: (1) the Target Reverse Crossing selection mechanism, (2) the EyeSwipe eye-typing interface, and (3) the HGaze Typing interface. With Target Reverse Crossing, a user moves the cursor into a target and reverses over a goal to select it. This mechanism is significantly more efficient than dwell-time selection. Target Reverse Crossing is then adapted in EyeSwipe to delineate the start and end of a word that is eye-typed with a gaze path connecting the intermediate characters (as with traditional gesture typing). When compared with a dwell-based virtual keyboard, EyeSwipe affords higher text entry rates and a more comfortable interaction. Finally, HGaze Typing adds head gestures to gaze-path-based text entry to enable simple and explicit command activations. Results from a user study demonstrate that HGaze Typing has better performance and user satisfaction than a dwell-time method.
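    A minimal sketch of the Target Reverse Crossing mechanism as described: entering a target arms it, and selection fires only when the gaze reverses and exits back through the edge it entered, so merely passing over a target does not select it. The one-dimensional geometry here is an illustrative simplification.

```python
# Illustrative reverse-crossing detector in one dimension: entering
# the target arms it; selection fires only when the gaze reverses
# and exits through the same edge it entered.
class ReverseCrossing:
    def __init__(self, x_min: float, x_max: float):
        self.x_min, self.x_max = x_min, x_max
        self.armed_edge = None  # 'left' or 'right' once entered

    def update(self, prev_x: float, cur_x: float) -> bool:
        """Feed consecutive gaze x-positions; True means 'selected'."""
        # Detect entry and remember which edge was crossed.
        if prev_x < self.x_min <= cur_x:
            self.armed_edge = "left"
        elif cur_x <= self.x_max < prev_x:
            self.armed_edge = "right"
        # Detect exit from inside the target.
        inside_before = self.x_min <= prev_x <= self.x_max
        if self.armed_edge and inside_before:
            if self.armed_edge == "left" and cur_x < self.x_min:
                self.armed_edge = None
                return True   # reversed back over the entry edge
            if self.armed_edge == "right" and cur_x > self.x_max:
                self.armed_edge = None
                return True
            if cur_x < self.x_min or cur_x > self.x_max:
                self.armed_edge = None  # passed through: no selection
        return False

rc = ReverseCrossing(100, 200)
for prev, cur in [(90, 110), (110, 150), (150, 95)]:
    if rc.update(prev, cur):
        print("selected")  # fires on the reversal at the last step
```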

    Keyboard layout in eye gaze communication access: typical vs. ALS

    The purpose of the current investigation was to determine which of three keyboard layouts is the most efficient for typical as well as neurologically-compromised first-time users of eye gaze access. All participants (16 neurotypical, 16 with amyotrophic lateral sclerosis; ALS) demonstrated hearing and reading abilities sufficient to interact with all stimuli. Participants from each group answered questions about technology use and vision status. Participants with ALS also noted the date of their first disease-related symptoms, their initial symptoms, and the date of diagnosis. Once a speech generating device (SGD) with eye gaze access capabilities was calibrated to an individual participant's eyes, s/he practiced using the access method. Then all participants spelled words, phrases, and a longer phrase on each of three keyboard layouts (i.e., standard QWERTY, alphabetic with highlighted vowels, and frequency of occurrence). Accuracy of response, error rate, and eye typing time were determined for each participant for all layouts. Results indicated that both groups shared equivalent experience with technology. Additionally, neurotypical adults typed more accurately than the ALS group on all keyboards. The ALS group made more errors in eye typing than the neurotypical participants, but accuracy and disease status were independent of one another. Although the neurotypical group had a higher efficiency ratio (i.e., accurate keystrokes to total active task time) for the frequency layout, there were no such differences noted for the QWERTY or alphabetic keyboards. No differences were observed between the groups for either typing rate or preference ratings on any keyboard, though most participants preferred the standard QWERTY layout. No relationships were identified between preference order of the three keyboards and efficiency scores or the quantitative variables (i.e., rate, accuracy, error scores). There was no relationship between time since ALS diagnosis and preference ratings for any of the three keyboard layouts. It appears that individuals with spinal-onset ALS perform similarly to their neurotypical peers with respect to first-time use of eye gaze access for typing words and phrases on three different keyboard layouts. Ramifications of the results as well as future directions for research are discussed.
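    For concreteness, the efficiency ratio defined in this abstract (accurate keystrokes to total active task time) can be computed as below; the function name and example numbers are illustrative, not data from the study.

```python
# Illustrative computation of the abstract's efficiency ratio
# (accurate keystrokes divided by total active task time).
def efficiency_ratio(accurate_keystrokes: int, active_time_s: float) -> float:
    """Accurate keystrokes per second of active eye-typing time."""
    return accurate_keystrokes / active_time_s

# Example: 48 accurate keystrokes over 120 s of active task time.
print(f"{efficiency_ratio(48, 120.0):.2f} accurate keystrokes/s")  # 0.40
```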

    Optimizing Human Performance in Mobile Text Entry

    Although text entry on mobile phones is ubiquitous, research still strives to achieve desktop typing performance "on the go". But how can researchers evaluate new and existing mobile text entry techniques? How can they ensure that evaluations are conducted in a consistent manner that facilitates comparison? What forms of input are possible on a mobile device? Do the audio and haptic feedback options shipped with most touchscreen keyboards affect performance? What influences users' preference for one kind of feedback over another? Can rearranging the characters and keys of a keyboard improve performance? This dissertation answers these questions and more. The TEMA software developed for it allows researchers to evaluate mobile text entry methods in an easy, detailed, and consistent manner, and has been adopted by many in academia and industry. TEMA was used to evaluate a typical QWERTY keyboard with multiple options for audio and haptic feedback. Though feedback did not have a significant effect on performance, a survey revealed that users' choice of feedback is influenced by social and technical factors. Another study using TEMA showed that novice users entered text faster with a tapping technique than with a gesture or handwriting technique. This motivated rearranging the keys and characters to create a new keyboard, MIME, that would provide better performance for expert users. Data on character frequency and key selection times were gathered and used to design MIME. A longitudinal user study using TEMA revealed an entry speed of 17 wpm and a total error rate of 1.7% for MIME, compared to 23 wpm and 5.2% for QWERTY. Although MIME's entry speed did not surpass QWERTY's during the study, it is projected to do so after twelve hours of practice. MIME's error rate was consistently low and significantly lower than QWERTY's. In addition, participants found MIME more comfortable to use, with some reporting hand soreness after using QWERTY for extended periods.
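    The entry-speed and error figures reported above follow the conventional text-entry metrics; a sketch of those standard formulas is given below with illustrative numbers (TEMA's internals are not shown).

```python
# Conventional text-entry metrics from the text-entry literature.
def words_per_minute(transcribed_len: int, seconds: float) -> float:
    """WPM = ((|T| - 1) / seconds) * 60 / 5, with 5 characters per word."""
    return (transcribed_len - 1) / seconds * 60.0 / 5.0

def total_error_rate(correct: int, fixed_errors: int, unfixed_errors: int) -> float:
    """Total error rate = (IF + INF) / (C + IF + INF)."""
    errors = fixed_errors + unfixed_errors
    return errors / (correct + errors)

# Example: a 25-character phrase entered in 16 s is 18.0 wpm.
print(words_per_minute(25, 16.0))   # 18.0
print(total_error_rate(95, 3, 2))   # 0.05
```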

    Selection strategies in gaze interaction

    This thesis deals with selection strategies in gaze interaction, specifically for a context where gaze is the sole input modality for users with severe motor impairments. The goal has been to contribute to the subfield of assistive technology where gaze interaction is necessary for the user to achieve autonomous communication and environmental control. On the theoretical side, research has been done on the physiology of gaze and on eye tracking technology, and a taxonomy of existing selection strategies has been developed. Empirically, two overall approaches have been taken. First, end-user research has been conducted through interviews and observation, exploring the capabilities, requirements, and wants of the end-user. Second, several applications have been developed to explore the selection strategy of single stroke gaze gestures (SSGG) and aspects of complex gaze gestures. The main finding is that single stroke gaze gestures can successfully be used as a selection strategy. Among the findings on SSGG: horizontal single stroke gaze gestures are faster than vertical ones; completion time differs significantly with gesture length; single stroke gaze gestures can be completed without visual feedback; gaze tracking equipment has a significant effect on completion times and error rates; and the chance of making selection errors with single stroke gaze gestures is not significantly greater than with dwell selection. The overall conclusion is that the future of gaze interaction should focus on developing multi-modal interactions for mono-modal input.
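    A minimal sketch of how a single stroke gaze gesture might be classified from its start and end gaze points, distinguishing the horizontal and vertical strokes compared in the thesis; the length threshold and coordinate convention are assumptions, not the thesis's actual recognizer.

```python
# Illustrative SSGG classification: a stroke is taken from the start
# and end gaze points and mapped to one of four directions if it is
# long enough to count as a deliberate gesture.
import math

def classify_ssgg(x0, y0, x1, y1, min_len=100.0):
    """Return 'left'/'right'/'up'/'down', or None for a short stroke."""
    dx, dy = x1 - x0, y1 - y0
    if math.hypot(dx, dy) < min_len:   # too short to be a gesture
        return None
    if abs(dx) >= abs(dy):             # dominant axis decides
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"  # screen y grows downward

print(classify_ssgg(0, 0, 250, 40))   # 'right' (horizontal stroke)
print(classify_ssgg(0, 0, 30, -180))  # 'up'
```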