32 research outputs found

    Eye gaze interaction with expanding targets

    Recent evidence on the performance benefits of expanding targets during manual pointing raises a provocative question: can a similar effect be expected for eye gaze interaction? We present two experiments that examine the benefits of target expansion during an eye-controlled selection task. The second experiment also tested the efficiency of a “grab-and-hold algorithm” to counteract inherent eye jitter. Results confirm the benefits of target expansion in both pointing speed and accuracy. Additionally, the grab-and-hold algorithm affords a dramatic 57% reduction in error rates overall; the reduction is as much as 68% for targets subtending 0.35 degrees of visual angle. However, there is a cost, which surfaces as a slight (10%) increase in movement time. These findings indicate that target expansion, coupled with additional measures to accommodate eye jitter, has the potential to make eye gaze a more suitable input modality.
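The abstract does not describe the grab-and-hold algorithm's internals, but the idea of stabilizing jittery gaze on a selected target can be sketched. The following is a hypothetical illustration only (the function name, parameters, and release rule are assumptions, not the authors' implementation): once gaze lands inside a target, the cursor is snapped to that target and held there until gaze drifts beyond an enlarged tolerance zone.

```python
import math

def grab_and_hold(samples, targets, hold_radius=1.0):
    """Hypothetical gaze stabilizer (illustrative, not the paper's algorithm).

    samples: list of (x, y) gaze points in degrees of visual angle.
    targets: list of (cx, cy, r) circular targets.
    hold_radius: extra margin (degrees) gaze may wander before release.
    Returns the stabilized cursor path.
    """
    held = None  # target currently "grabbed", or None
    path = []
    for x, y in samples:
        if held is not None:
            cx, cy, r = held
            # release only if gaze drifts beyond the enlarged hold zone
            if math.hypot(x - cx, y - cy) > r + hold_radius:
                held = None
        if held is None:
            # grab the first target the raw gaze point falls inside
            for cx, cy, r in targets:
                if math.hypot(x - cx, y - cy) <= r:
                    held = (cx, cy, r)
                    break
        # while held, snap the cursor to the target centre; otherwise pass through
        path.append((held[0], held[1]) if held else (x, y))
    return path
```

Under this sketch, small jittery excursions around a grabbed 0.35-degree target no longer move the cursor, which is one plausible way a hold mechanism could trade a small movement-time cost for a large error-rate reduction.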

    The Effect of Different Text Presentation Formats on Eye Movement Metrics in Reading

    Eye movement data were collected and analyzed from 16 participants while they read text from a computer screen. Several text presentation formats were compared, including sentences as part of a full paragraph, sentences presented one by one, sentences presented in chunks of at most 30 characters at a predefined rate, and line-by-line presentation fitting the width of the computer screen. The goal of the experiment was to study how these different text presentation modes affect eye movement metrics (fixation duration, fixations per minute, regressions, etc.). A one-way repeated-measures ANOVA revealed that differences in presentation format have a significant effect on fixation duration, number of fixations per minute, and number of regressions.
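The three metrics named in the abstract can be computed from a fixation sequence in a straightforward way. A minimal sketch, assuming fixations are given as (horizontal position, duration in ms) pairs in reading order, and treating any fixation that lands left of its predecessor as a regression (a simplification that ignores line breaks):

```python
def reading_metrics(fixations, duration_s):
    """Compute mean fixation duration (ms), fixations per minute, and
    regression count from a list of (x_position, duration_ms) fixations
    recorded over duration_s seconds. Illustrative simplification:
    real regression detection must also handle return sweeps at line ends.
    """
    durations = [d for _, d in fixations]
    mean_duration = sum(durations) / len(durations)
    per_minute = len(fixations) / (duration_s / 60.0)
    # a regression is a fixation landing left of the previous fixation
    regressions = sum(
        1 for (x0, _), (x1, _) in zip(fixations, fixations[1:]) if x1 < x0
    )
    return mean_duration, per_minute, regressions
```

Per-condition values of these metrics are what a repeated-measures ANOVA would then compare across the presentation formats.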

    Visualizing the Reading Activity of People Learning to Read

    Several popular visualizations of gaze data, such as scanpaths and heatmaps, can be used independently of the viewing task. For a specific task, such as reading, more informative visualizations can be created. We have developed several such techniques, some dynamic and some static, to communicate the reading activity of children to primary school teachers. The goal of the visualizations was to highlight reading skills for teachers with no background in the theory of eye movements or in eye tracking technology. Evaluations of the techniques indicate that, as intended, they serve different purposes and were appreciated differently by the teachers. Dynamic visualizations give teachers a good understanding of how individual students read. Static visualizations provide a simple overview of how the children read as a group and of their active vocabulary.

    Haptic feedback in eye typing

    Proper feedback is essential in gaze-based interfaces, where the same modality is used for both perception and control. We measured how vibrotactile feedback, a form of haptic feedback, compares with the commonly used visual and auditory feedback in eye typing. Haptic feedback was found to produce results close to those of auditory feedback; both were easy to perceive, and participants liked both the auditory “click” and the tactile “tap” of the selected key. Implementation details (such as the placement of the haptic actuator) were also found to be important.

    Mid-Air Gestural Interaction with a Large Fogscreen

    Projected walk-through fogscreens have been built, but there is little research evaluating interaction performance with them. The present study investigated mid-air hand gestures for interaction with a large fogscreen. Participants (N = 20) selected objects from a fogscreen using tapping and dwell-based gestural techniques, with and without vibrotactile/haptic feedback. In terms of Fitts’ law, the throughput was about 1.4 bps to 2.6 bps, suggesting that gestural interaction with a large fogscreen is a suitable and effective input method. Our results also suggest that tapping without haptic feedback performs well and has potential for interaction with a fogscreen, and that tactile feedback is not necessary for effective mid-air interaction. These findings have implications for the design of gestural interfaces for fogscreens.
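The throughput figures quoted above come from the standard Fitts' law framework. A minimal sketch of the calculation, using the Shannon formulation of the index of difficulty (the abstract does not specify which formulation the study used, so this is an assumption):

```python
import math

def throughput(distance, width, movement_time_s):
    """Fitts' law throughput in bits per second.

    distance: movement amplitude to the target (any length unit).
    width: target width (same unit as distance).
    movement_time_s: mean selection time in seconds.
    Uses the Shannon formulation: ID = log2(D/W + 1).
    """
    index_of_difficulty = math.log2(distance / width + 1)  # bits
    return index_of_difficulty / movement_time_s           # bits per second
```

For example, a 256 mm reach to a 64 mm target gives an index of difficulty of log2(5) ≈ 2.32 bits, so completing such a selection in about one second would yield a throughput near the upper end of the 1.4–2.6 bps range reported.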

    Processing gaze responses to dynamic 2D stimuli

    The huge amounts of data collected in experiments on gaze behavior require efficient tools for analysis and visualization to facilitate interpretation. Although a number of tools available today support observation of static 2D stimuli, handling dynamic stimuli remains a major challenge. The traditional approach analyzes the data frame by frame, which is tedious and time consuming. We propose a simplified and intuitive way to visualize gaze responses to dynamic 2D stimuli. The graphical user interfaces are presented in detail, along with comprehensive explanations of how to take best advantage of the functionality provided by the built-in analysis and visualization tools.