
    Analyzing the Impact of Cognitive Load in Evaluating Gaze-based Typing

    Gaze-based virtual keyboards provide an effective interface for text entry by eye movements. The efficiency and usability of these keyboards have traditionally been evaluated with conventional text entry performance measures such as words per minute, keystrokes per character, backspace usage, etc. However, compared with traditional text entry approaches, gaze-based typing involves natural eye movements that are highly correlated with human cognition. Employing eye gaze as an input can impose excessive mental demand, and in this work we argue for including cognitive load as an eye-typing evaluation measure. We evaluate three variations of gaze-based virtual keyboards, which differ in the positioning of word suggestions. The conventional text entry metrics indicate no significant difference in performance across the keyboard designs. However, analysis of EEG signals based on the Short-Time Fourier Transform (STFT) indicates differences in the mental workload of participants while interacting with these designs. Moreover, the EEG analysis provides insights into how the user's cognitive state varies across typing phases and intervals, which should be considered in order to improve eye-typing usability.
    Comment: 6 pages, 4 figures, IEEE CBMS 201
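    A minimal Python sketch of the kind of STFT-based band-power analysis referred to above; the sampling rate, window length, band edges, and the frontal-theta / parietal-alpha workload index are illustrative assumptions, not the authors' exact pipeline.

        # Sketch: estimate EEG band power over time with an STFT (assumptions noted above).
        import numpy as np
        from scipy.signal import stft

        def band_power_over_time(eeg, fs=256, band=(4.0, 8.0), nperseg=256):
            """Mean power in `band` (Hz) for each STFT time bin of a single channel."""
            freqs, times, Z = stft(eeg, fs=fs, nperseg=nperseg)
            power = np.abs(Z) ** 2
            mask = (freqs >= band[0]) & (freqs <= band[1])
            return times, power[mask].mean(axis=0)

        def workload_index(frontal, parietal, fs=256):
            """A common proxy for mental workload: frontal theta power over parietal alpha power."""
            t, theta = band_power_over_time(frontal, fs, band=(4.0, 8.0))
            _, alpha = band_power_over_time(parietal, fs, band=(8.0, 12.0))
            return t, theta / (alpha + 1e-12)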

    Smooth-pursuit performance during eye-typing from memory indicates mental fatigue

    Mental fatigue is known to occur as a result of activities in domains such as transportation, health care and military operations. Gaze tracking has wide-ranging applications, with the technology becoming more compact and requiring less processing power. Although numerous techniques have been applied to measure mental fatigue using gaze tracking, smooth-pursuit movement, the natural eye movement generated when following a moving object with the gaze, has not been explored in relation to mental fatigue. In this paper, we report the results of a smooth-pursuit-based eye-typing experiment with varying task difficulty to generate cognitive load, performed in the morning and afternoon by 36 participants. We investigated the effects of time-on-task and time of day on mental fatigue using self-reported questionnaires and smooth-pursuit performance measures extracted from the gaze data. Self-reported mental fatigue increased with time-on-task, but time of day had no effect. The results show that smooth-pursuit performance declined with time-on-task, with increased error in gaze position and an inability to match the speed of the moving object. The findings demonstrate the feasibility of detecting mental fatigue from smooth-pursuit movements during an interactive eye-typing task.
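    A minimal Python sketch of the two smooth-pursuit performance measures mentioned above, positional error relative to the moving target and pursuit gain (gaze speed over target speed); the input format and sampling rate are assumptions.

        import numpy as np

        def pursuit_metrics(gaze_xy, target_xy, fs=120.0):
            """gaze_xy, target_xy: (N, 2) arrays of screen coordinates sampled at fs Hz."""
            gaze_xy = np.asarray(gaze_xy, float)
            target_xy = np.asarray(target_xy, float)

            # Mean Euclidean distance between gaze and the moving target.
            position_error = np.linalg.norm(gaze_xy - target_xy, axis=1).mean()

            # Pursuit gain: how well gaze speed matches target speed (1.0 = perfect match).
            gaze_speed = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1) * fs
            target_speed = np.linalg.norm(np.diff(target_xy, axis=0), axis=1) * fs
            moving = target_speed > 1e-6
            gain = (gaze_speed[moving] / target_speed[moving]).mean()

            return position_error, gain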

    Keystroke dynamics as signal for shallow syntactic parsing

    Keystroke dynamics have been used extensively in psycholinguistic and writing research to gain insights into cognitive processing. But do keystroke logs contain actual signal that can be used to learn better natural language processing models? We postulate that keystroke dynamics contain information about syntactic structure that can inform shallow syntactic parsing. To test this hypothesis, we explore labels derived from keystroke logs as an auxiliary task in a multi-task bidirectional Long Short-Term Memory (bi-LSTM) model. We obtain promising results on two shallow syntactic parsing tasks, chunking and CCG supertagging. Our approach is simple, has the advantage that data can come from distinct sources, and produces models that are significantly better than models trained on the text annotations alone.
    Comment: In COLING 201
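    A minimal PyTorch sketch of a multi-task bi-LSTM tagger of the kind described: a shared bidirectional encoder with one head for the main task (chunking or CCG supertagging) and one head for keystroke-derived auxiliary labels. Layer sizes, label inventories, and the summed loss are assumptions rather than the paper's exact configuration.

        import torch.nn as nn

        class MultiTaskBiLSTM(nn.Module):
            def __init__(self, vocab_size, n_main_tags, n_aux_tags, emb_dim=100, hidden=100):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, emb_dim)
                self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
                self.main_head = nn.Linear(2 * hidden, n_main_tags)  # chunk tags / supertags
                self.aux_head = nn.Linear(2 * hidden, n_aux_tags)    # keystroke-derived labels

            def forward(self, token_ids):
                states, _ = self.encoder(self.embed(token_ids))
                return self.main_head(states), self.aux_head(states)

        # Training sums the per-task losses, e.g.:
        #   loss = ce(main_logits.transpose(1, 2), main_tags) + ce(aux_logits.transpose(1, 2), aux_tags)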

    Eye tracking and the translation process: reflections on the analysis and interpretation of eye-tracking data

    Eye tracking has become increasingly popular as a quantitative research method in translation research. This paper discusses some of the major methodological issues involved in the use of eye tracking in translation research. It focuses specifically on challenges in the analysis and interpretation of eye-tracking data as reflections of cognitive processes during translation. Four types of methodological issues are discussed in the paper. The first part discusses the preparatory steps that precede the actual recording of eye-tracking data. The second part examines critically the general assumptions linking eye movements to cognitive processing in the context of translation research. The third part of the paper discusses two popular eye-tracking measures often used in translation research, fixations and pupil size, while the fourth part proposes a method to evaluate the quality of eye-tracking data.
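    As an illustration of the data-quality question raised in the fourth part, the Python sketch below computes two common quality indicators, RMS sample-to-sample precision and the proportion of lost samples; the input format (gaze coordinates with NaN for invalid samples) is an assumption, and this is not the specific method proposed in the paper.

        import numpy as np

        def data_quality(gaze_xy):
            """gaze_xy: (N, 2) gaze coordinates, NaN where the tracker lost the eye."""
            gaze_xy = np.asarray(gaze_xy, float)
            lost = np.isnan(gaze_xy).any(axis=1)
            data_loss = lost.mean()                      # fraction of invalid samples

            valid = gaze_xy[~lost]
            steps = np.diff(valid, axis=0)               # sample-to-sample displacement
            rms_precision = np.sqrt((np.linalg.norm(steps, axis=1) ** 2).mean())

            return rms_precision, data_loss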

    Tracking Eye Movements in Sight Translation – the comprehension process in interpreting

    While the three components of interpreting have been identified as comprehension, reformulation, and production, the process by which these components occur has remained relatively unexplored. The present study employed eye tracking to investigate the process of sight translation, a mode of interpreting in which the input is written rather than oral. The research focused especially on the comprehension component of sight translation, addressed the validity of the horizontal and vertical perspectives on interpreting, and ascertained whether reading ahead occurs in sight translation. Eye movements of 18 interpreting students were recorded during silent reading of a Chinese speech, reading aloud of a Chinese speech, and Chinese-to-English sight translation. Since silent reading involves only the comprehension component while reading aloud involves both comprehension and production, the two tasks served as a basis of comparison for investigating comprehension in sight translation. The findings suggested that sight translation and silent reading did not differ in the initial stage of reading, as reflected by similar first fixation duration, single fixation duration, gaze duration, fixation probability, and refixation probability. Sight translation only began to differ from silent reading after first-pass reading, as shown by longer rereading time and a higher rereading rate. Also, reading ahead occurred in 72.8% of cases in this experiment, indicating an overlap between reading and oral production in Chinese-to-English sight translation. The results supported the vertical perspective on interpreting as well as the claim of reading ahead. The implications for interpreter training are to attach more importance to paraphrasing skills and to focus more on the similarities between sight translation and simultaneous interpreting.
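    The first-pass and second-pass reading measures compared in this study (first fixation duration, gaze duration, rereading time) can be computed from a chronological fixation sequence along the lines of the Python sketch below; the input structure is an assumption.

        def reading_measures(fixations, region):
            """fixations: list of (region_id, duration_ms) in chronological order."""
            first_fix = gaze_dur = reread = 0
            entered = exited = False
            for reg, dur in fixations:
                if reg == region:
                    if not entered:
                        entered = True
                        first_fix = dur        # first fixation duration
                    if not exited:
                        gaze_dur += dur        # first-pass reading time (gaze duration)
                    else:
                        reread += dur          # second-pass / rereading time
                elif entered:
                    exited = True              # the first pass over this region has ended
            return first_fix, gaze_dur, reread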

    Using Variable Dwell Time to Accelerate Gaze-Based Web Browsing with Two-Step Selection

    In order to avoid the "Midas Touch" problem, gaze-based interfaces for selection often introduce a dwell time: a fixed amount of time the user must fixate upon an object before it is selected. Past interfaces have used a uniform dwell time across all objects. Here, we propose a gaze-based browser using a two-step selection policy with variable dwell time. In the first step, a command, e.g. "back" or "select", is chosen from a menu using a dwell time that is constant across the different commands. In the second step, if the "select" command is chosen, the user selects a hyperlink using a dwell time that varies between different hyperlinks. We assign shorter dwell times to more likely hyperlinks and longer dwell times to less likely hyperlinks. In order to infer the likelihood that each hyperlink will be selected, we have developed a probabilistic model of natural gaze behavior while surfing the web. We have evaluated a number of heuristic and probabilistic methods for varying the dwell times using both simulation and experiment. Our results demonstrate that varying dwell time improves the user experience in comparison with fixed dwell time, resulting in fewer errors and increased speed. While all of the methods for varying dwell time resulted in improved performance, the probabilistic models yielded much greater gains than the simple heuristics. The best-performing model reduces error rate by 50% compared to a 100 ms uniform dwell time while maintaining a similar response time. It reduces response time by 60% compared to a 300 ms uniform dwell time while maintaining a similar error rate.
    Comment: This is an Accepted Manuscript of an article published by Taylor & Francis in the International Journal of Human-Computer Interaction on 30 March, 2018, available online: http://www.tandfonline.com/10.1080/10447318.2018.1452351 . For an eprint of the final published article, please access: https://www.tandfonline.com/eprint/T9d4cNwwRUqXPPiZYm8Z/ful
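    The Python sketch below illustrates the core idea of mapping estimated hyperlink selection probabilities to per-link dwell times, so that likely links get short dwells and unlikely ones get long dwells; the linear mapping and the bounds are illustrative assumptions, not the paper's fitted probabilistic model of natural gaze behavior.

        def dwell_times(link_probs, min_dwell=0.1, max_dwell=1.0):
            """link_probs: dict mapping link id -> estimated selection probability."""
            total = sum(link_probs.values()) or 1.0
            times = {}
            for link, p in link_probs.items():
                p_norm = p / total
                # More likely links get dwell times closer to min_dwell.
                times[link] = max_dwell - (max_dwell - min_dwell) * p_norm
            return times

        # Example: a link judged very likely from recent gaze history gets ~0.28 s,
        # while unlikely links stay near the 1.0 s ceiling.
        print(dwell_times({"news": 0.8, "about": 0.15, "login": 0.05}))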