
    Analyzing the Impact of Cognitive Load in Evaluating Gaze-based Typing

    Gaze-based virtual keyboards provide an effective interface for text entry by eye movements. The efficiency and usability of these keyboards have traditionally been evaluated with conventional text entry performance measures such as words per minute, keystrokes per character, and backspace usage. However, in contrast to traditional text entry approaches, gaze-based typing involves natural eye movements that are highly correlated with human brain cognition. Employing eye gaze as an input can impose excessive mental demand, and in this work we argue for including cognitive load as an eye typing evaluation measure. We evaluate three variants of a gaze-based virtual keyboard that differ in the positioning of word suggestions. The conventional text entry metrics indicate no significant difference in performance across the keyboard designs. However, a short-time Fourier transform (STFT) based analysis of EEG signals reveals differences in the mental workload of participants while interacting with these designs. Moreover, the EEG analysis provides insights into how users' cognition varies across typing phases and intervals, which should be considered in order to improve eye typing usability. Comment: 6 pages, 4 figures, IEEE CBMS 201
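
    The workload analysis above rests on band power extracted from an EEG spectrogram. The Python sketch below shows the general shape of such an STFT pipeline; it runs on synthetic data, and the 2 s window, the band limits, and the theta/alpha workload index are illustrative assumptions, not the authors' exact method.

        import numpy as np
        from scipy.signal import stft

        fs = 256                        # assumed sampling rate in Hz
        rng = np.random.default_rng(0)
        eeg = rng.normal(size=60 * fs)  # synthetic stand-in for one EEG channel

        # Short-time Fourier transform: 2 s Hann windows with 50% overlap.
        freqs, seg_times, Z = stft(eeg, fs=fs, nperseg=2 * fs, noverlap=fs)
        power = np.abs(Z) ** 2          # spectrogram: (frequencies, segments)

        def band_power(lo, hi):
            """Mean power in [lo, hi) Hz for each STFT segment."""
            mask = (freqs >= lo) & (freqs < hi)
            return power[mask].mean(axis=0)

        theta = band_power(4, 8)    # frontal theta tends to rise with workload
        alpha = band_power(8, 13)   # parietal alpha tends to fall with workload

        # One common (illustrative) workload index: the theta/alpha ratio,
        # tracked over time so different typing phases can be compared.
        workload = theta / alpha
        print(workload[:5])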

    Ambiguous keyboards for AAC

    Purpose – “Ambiguous keyboards” and “disambiguation processes” have become universally recognised through the popularisation of “predictive text messaging” on mobile phones. As this paper shows, although these terms and techniques originated in the AT and AAC fields, they no longer appear to be widely understood or adopted by practitioners or users. The purpose of this paper is to introduce these techniques, discuss the research and theory around them, and suggest them as AT and AAC strategies for practitioners and users to consider. Design/methodology/approach – This is a conceptual paper that describes the use of ambiguous keyboards and disambiguation. Its hypothesis is that ambiguous keyboards and disambiguation processes offer the potential to increase the efficiency and effectiveness of AAC and should thus be considered further in research and practice. Findings – The two broad methods for removing the ambiguity from the output of an ambiguous keyboard are presented, a summary of the literature on disambiguation processes is provided, and the use of disambiguation processes for AAC is discussed. Originality/value – This paper suggests that ambiguity should be adopted as a characteristic of an AAC keyboard, as should a method of removing that ambiguity: either coding or a disambiguation process.
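
    A dictionary-based disambiguation process of the kind discussed above can be sketched in a few lines of Python. The 9-key layout is the familiar phone keypad, but the word list and frequency counts are invented for illustration:

        from collections import defaultdict

        # Classic 9-key ambiguous layout: one key covers several letters.
        KEYPAD = {
            '2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
            '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz',
        }
        LETTER_TO_KEY = {ch: key for key, letters in KEYPAD.items() for ch in letters}

        def encode(word):
            """Map a word to its ambiguous key sequence, e.g. 'home' -> '4663'."""
            return ''.join(LETTER_TO_KEY[ch] for ch in word.lower())

        def build_index(lexicon):
            """Group dictionary words by their shared key sequence."""
            index = defaultdict(list)
            for word, frequency in lexicon:
                index[encode(word)].append((frequency, word))
            return index

        def disambiguate(index, keys):
            """Return candidate words for a key sequence, most frequent first."""
            return [w for _, w in sorted(index.get(keys, []), reverse=True)]

        # Tiny illustrative lexicon with made-up frequency counts.
        lexicon = [('home', 120), ('good', 90), ('gone', 60), ('hood', 10)]
        index = build_index(lexicon)
        print(disambiguate(index, '4663'))  # ['home', 'good', 'gone', 'hood']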

    Enhancing an Eye-Tracker based Human-Computer Interface with Multi-modal Accessibility Applied for Text Entry

    In the natural course of communication, human beings make use of multiple sensory modalities; for instance, during verbal conversations we use voice, eyes, and various body gestures. Effective human-computer interaction likewise involves hands, eyes, and voice, where available. By combining multiple modalities we can therefore make the whole process more natural and ensure enhanced performance, even for disabled users. Towards this end, we have developed a multi-modal human-computer interface (HCI) by combining an eye tracker with a soft-switch, which serves as an additional input modality. This multi-modal HCI is applied to text entry using a virtual keyboard designed in-house to facilitate enhanced performance. Our experimental results demonstrate that multi-modal text entry through the virtual keyboard is more efficient and less strenuous than a single-modality system, and that it also solves the Midas-touch problem inherent in eye-tracker based HCI systems in which dwell time alone is used to select a character.
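
    The Midas-touch contrast is easy to see in code. The Python sketch below compares dwell-only selection with gaze-plus-switch selection; the sample format and dwell threshold are assumptions for illustration, not the interface described in the paper.

        from dataclasses import dataclass

        @dataclass
        class Sample:
            key: str       # virtual key under the gaze point ('' if none)
            t: float       # timestamp in seconds
            switch: bool   # True while the soft-switch is pressed

        DWELL_TIME = 0.8   # assumed dwell threshold in seconds

        def dwell_select(samples):
            """Dwell-only selection: every sufficiently long fixation types a
            character, which is exactly the Midas-touch problem."""
            current, start = None, 0.0
            for s in samples:
                if s.key != current:
                    current, start = s.key, s.t
                elif current and s.t - start >= DWELL_TIME:
                    yield current
                    start = s.t      # re-arm so the same key can repeat

        def gaze_plus_switch_select(samples):
            """Multi-modal selection: gaze points, the switch confirms, so
            looking around the keyboard never types by accident."""
            pressed = False
            for s in samples:
                if s.switch and not pressed and s.key:
                    yield s.key      # select on the press edge only
                pressed = s.switch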

    Fast and precise touch-based text entry for head-mounted augmented reality with variable occlusion

    We present the VISAR keyboard: an augmented reality (AR) head-mounted display (HMD) system that supports text entry via a virtualised input surface. Users select keys on the virtual keyboard by imitating the process of single-hand typing on a physical touchscreen display. Our system uses a statistical decoder to infer users’ intended text and to provide error-tolerant predictions. There is also a high-precision fall-back mechanism to support users in indicating which keys should be left unmodified by the auto-correction process. A unique advantage of leveraging the well-established touch input paradigm is that our system enables text entry with minimal visual clutter on the see-through display, thus preserving the user’s field of view. We iteratively designed and evaluated our system and show that the final iteration supports a mean entry rate of 17.75 wpm with a mean character error rate below 1%. This performance represents a 19.6% improvement relative to the state-of-the-art baseline investigated: a gaze-then-gesture text entry technique derived from the system keyboard on the Microsoft HoloLens. Finally, we validate that the system is effective in supporting text entry in a fully mobile usage scenario likely to be encountered in industrial applications of AR HMDs. Per Ola Kristensson was supported in part by a Google Faculty research award and EPSRC grants EP/N010558/1 and EP/N014278/1. Keith Vertanen was supported in part by a Google Faculty research award. John Dudley was supported by the Trimble Fund.
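
    A statistical decoder of the kind mentioned above is often framed as a noisy channel: candidate words are ranked by a spatial likelihood of the observed touch points combined with a language-model prior. The Python sketch below is a minimal single-word version of that idea; the key coordinates, touch-noise parameter, and word probabilities are invented, and VISAR's actual decoder is more sophisticated.

        import math

        # Illustrative key centres on a unit grid (a real decoder would use
        # the full keyboard geometry and a much stronger language model).
        KEY_CENTERS = {'c': (3.0, 2.0), 'v': (4.0, 2.0), 'a': (0.5, 1.0),
                       't': (4.5, 0.0), 'r': (3.5, 0.0)}
        SIGMA = 0.5  # assumed std. dev. of touch scatter around a key centre

        # Unigram "language model" with made-up probabilities.
        LM = {'cat': 0.6, 'car': 0.3, 'vat': 0.1}

        def touch_log_likelihood(word, touches):
            """log p(touch sequence | word) under an isotropic Gaussian per key."""
            if len(word) != len(touches):
                return float('-inf')
            ll = 0.0
            for ch, (x, y) in zip(word, touches):
                kx, ky = KEY_CENTERS[ch]
                ll += -((x - kx) ** 2 + (y - ky) ** 2) / (2 * SIGMA ** 2)
            return ll

        def decode(touches):
            """Rank candidate words by log p(touches | word) + log p(word)."""
            scores = {w: touch_log_likelihood(w, touches) + math.log(p)
                      for w, p in LM.items()}
            return sorted(scores, key=scores.get, reverse=True)

        # The first tap lands slightly nearer 'v' than 'c', yet the language
        # model pulls the decode back to 'cat': error-tolerant prediction.
        print(decode([(3.6, 2.0), (0.5, 1.1), (4.4, 0.1)]))  # ['cat', 'vat', 'car']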

    Evaluation of head-free eye tracking as an input device for air traffic control

    The purpose of this study was to investigate the possibility of integrating a free head motion eye-tracking system as an input device in air traffic control (ATC) activity. Sixteen participants used an eye tracker to select targets displayed on a screen as quickly and accurately as possible. We assessed the impact of the presence of visual feedback about gaze position and of the method of target selection on selection performance, under difficulty levels induced by variations in target size and target-to-target separation. The combined use of gaze dwell-time selection and continuous eye-gaze feedback appeared to be the best condition: it fits naturally with gaze displacement over the ATC display and frees the controller's hands, despite a small cost in selection speed. In addition, target size had a greater impact on accuracy and selection time than target distance. These findings provide guidelines for further integration of eye tracking into everyday ATC activity.
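
    The size/separation manipulation is conventionally summarised by Fitts' index of difficulty, as in the Python sketch below; the condition values are illustrative and do not reproduce the study's actual sizes and separations. That target size mattered more than distance suggests gaze selection departs from this manual pointing model, since fixation precision rather than travel distance is the bottleneck.

        import math

        def index_of_difficulty(distance, width):
            """Shannon formulation of Fitts' index of difficulty, in bits."""
            return math.log2(distance / width + 1)

        # Illustrative size x separation conditions in degrees of visual angle.
        for width in (1.0, 2.0):              # target size
            for distance in (5.0, 10.0):      # target-to-target separation
                bits = index_of_difficulty(distance, width)
                print(f"W={width:.0f}  D={distance:.0f}  ID={bits:.2f} bits")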