273 research outputs found

    Calibration-free gaze interfaces based on linear smooth pursuit

    Since smooth pursuit eye movements can be used without calibration in spontaneous gaze interaction, the intuitive design of gaze interfaces has been a topic of great interest in the human-computer interaction field. However, because most related research focuses on curved smooth-pursuit trajectories, the design issues of linear trajectories are poorly understood. This study therefore evaluated user performance with gaze interfaces based on linear smooth pursuit eye movements. We conducted an experiment to investigate how the number of objects (6, 8, 10, 12, or 15) and the object moving speed (7.73 °/s vs. 12.89 °/s) affect user performance in a gaze-based interface. Results show that both the number and the speed of the displayed objects influence users' performance with the interface. The number of objects significantly affected the correct and false detection rates when selecting objects in the display: participants' performance was highest on interfaces containing 6 and 8 objects and decreased for interfaces with 10, 12, and 15 objects. Detection rates and orientation error were significantly influenced by the moving speed of the displayed objects: the faster speed (12.89 °/s) resulted in higher detection rates and smaller orientation errors than the slower speed (7.73 °/s). Our findings can help enable calibration-free, accessible interaction with gaze interfaces. Funding: DFG, 414044773, Open Access Publizieren 2019-2020 / Technische Universität Berlin.
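    As a rough illustration of why pursuit-based selection can work without calibration, the sketch below correlates a gaze trace with each object's linear trajectory and selects the best-matching object. The function name, the correlation threshold, and the data layout are illustrative assumptions, not details from the paper.

```python
import numpy as np

def pursuit_select(gaze_xy, object_paths, min_r=0.8):
    """Select the moving object whose linear trajectory best matches the gaze.

    gaze_xy      : (T, 2) array of raw, uncalibrated gaze samples
    object_paths : dict mapping object id -> (T, 2) array of on-screen positions
    min_r        : minimum mean correlation to accept a selection (assumed value)
    """
    best_id, best_score = None, -1.0
    for obj_id, path in object_paths.items():
        scores = []
        for axis in (0, 1):
            # Skip axes along which the object does not move (zero variance).
            if np.std(path[:, axis]) > 1e-6:
                scores.append(np.corrcoef(gaze_xy[:, axis], path[:, axis])[0, 1])
        score = float(np.mean(scores)) if scores else -1.0
        if score > best_score:
            best_id, best_score = obj_id, score
    return best_id if best_score >= min_r else None
```

    Because Pearson correlation is invariant to constant offsets and scaling, a match can be found even when the gaze signal is not mapped accurately to screen coordinates, which is what makes the interaction calibration-free.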

    Work, aging, mental fatigue, and eye movement dynamics


    Evaluation of head-free eye tracking as an input device for air traffic control

    The purpose of this study was to investigate the possibility of integrating a free-head-motion eye-tracking system as an input device in air traffic control (ATC) activity. Sixteen participants used an eye tracker to select targets displayed on a screen as quickly and accurately as possible. We assessed the impact of the presence of visual feedback about gaze position and of the method of target selection on selection performance under different difficulty levels induced by variations in target size and target-to-target separation. The combined use of gaze dwell-time selection and continuous eye-gaze feedback appears to be the best condition, as it fits naturally with gaze displacement over the ATC display and frees the controller's hands, despite a small cost in selection speed. In addition, target size had a greater impact on accuracy and selection time than target distance. These findings provide guidelines for possible further implementation of eye tracking in everyday ATC activity.
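    For context, dwell-time selection of the kind compared in this study can be sketched as a small state machine that accumulates fixation time on a target and fires once a threshold is reached. The class, the 500 ms dwell threshold, and the circular target geometry below are illustrative assumptions, not the study's actual implementation.

```python
import time

class DwellSelector:
    """Minimal dwell-time selection loop (assumed 500 ms dwell threshold).

    `targets` maps a target id to its (x, y, radius) on the display;
    gaze coordinates would be supplied by the eye tracker on each frame.
    """
    def __init__(self, targets, dwell_s=0.5):
        self.targets = targets
        self.dwell_s = dwell_s
        self.current = None       # target currently being fixated
        self.enter_time = None    # when the gaze entered that target

    def update(self, gx, gy, now=None):
        now = time.monotonic() if now is None else now
        hit = next((tid for tid, (x, y, r) in self.targets.items()
                    if (gx - x) ** 2 + (gy - y) ** 2 <= r ** 2), None)
        if hit != self.current:           # gaze moved to a new target (or off-target)
            self.current, self.enter_time = hit, now
            return None
        if hit is not None and now - self.enter_time >= self.dwell_s:
            self.enter_time = now         # re-arm so the selection does not repeat
            return hit                    # dwell complete: select this target
        return None
```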

    Modulation of Saccadic Curvature by Spatial Memory and Associative Learning

    The path the eye travels during a saccade typically does not follow a straight line but instead shows some curvature. Converging empirical evidence has demonstrated that curvature results from conflicting saccade goals when multiple stimuli in the visual periphery compete for selection as the saccade target (Van der Stigchel, Meeter, & Theeuwes, 2006). Curvature away from a competing stimulus has been proposed to result from the inhibitory deselection of the motor program representing the saccade towards that stimulus (Sheliga, Riggio, & Rizzolatti, 1994; Tipper, Howard, & Houghton, 2000). For example, if participants are instructed to perform a saccade towards a defined target stimulus and to ignore a simultaneously presented nearby distractor stimulus, a saccade landing on the target typically exhibits curvature away from the distractor (e.g., Doyle & Walker, 2001). The present thesis reports how trajectories of saccadic eye movements are affected by spatial memory and associative learning. The final objective was to explore whether the curvature effect can be used to investigate associative learning in an experimental paradigm where competing saccade targets are retrieved from associative memory rather than being sensory events. The thesis incorporates manuscripts on the following working steps towards this objective: the first manuscript presents the computer software that was written to derive measures of saccadic curvature from the recorded eye movement traces; the second manuscript replicates and extends prior reports on the effect of (non-associative) spatial working memory on saccade deviations (Theeuwes, Olivers, & Chizk, 2005); the third manuscript uses a novel associative learning task to demonstrate that changes in saccadic curvature during associative learning comply with the acquisition and extinction of competing associations as predicted by the Rescorla-Wagner model (Rescorla & Wagner, 1972), originally put forward to explain classical conditioning in animals.
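    One common way to quantify saccadic curvature, in the spirit of the software described in the first manuscript, is the peak signed deviation of the saccade path from the straight start-to-end line, normalised by saccade amplitude. The sketch below implements this generic measure; it is not necessarily the exact metric used in the thesis.

```python
import numpy as np

def saccade_curvature(xy):
    """Curvature of one saccade: signed peak deviation of the path from the
    straight start-to-end line, normalised by saccade amplitude.

    xy : (N, 2) array of gaze samples from saccade onset to offset.
    """
    xy = np.asarray(xy, dtype=float)
    start, end = xy[0], xy[-1]
    direction = end - start
    amplitude = np.linalg.norm(direction)
    if amplitude == 0:
        return 0.0
    unit = direction / amplitude
    rel = xy - start
    # Signed perpendicular distance of every sample from the straight line
    # (z-component of the 2-D cross product rel x unit).
    deviation = rel[:, 0] * unit[1] - rel[:, 1] * unit[0]
    peak = deviation[np.argmax(np.abs(deviation))]
    return peak / amplitude   # sign indicates the side towards which the path bends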

    Nineteenth Annual Conference on Manual Control

    No abstract available.

    SMOOVS: Towards calibration-free text entry by gaze using smooth pursuit movements

    Gaze-based text spellers have proved useful for people with severe motor diseases, but lack acceptance in general human-computer interaction. In order to use gaze spellers for public displays, they need to be robust and provide an intuitive interaction concept. However, traditional dwell- and blink-based systems need accurate calibration, which conflicts with fast and intuitive interaction. We developed the first gaze speller explicitly utilizing smooth pursuit eye movements and their particular characteristics. The speller achieves sufficient accuracy with a one-point calibration and does not require extensive training. Its interface consists of character elements which move apart from each other in two stages. As each element has a unique track, gaze following this track can be detected by an algorithm that does not rely on exact gaze coordinates and compensates for latency-based artefacts. In a user study, 24 participants tested four speed levels of moving elements to determine an optimal interaction speed. At 300 px/s, users showed the highest overall performance of 3.34 WPM (without training). Subjective ratings support the finding that this pace is superior.
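    A calibration-tolerant way to detect which moving element a user is following, loosely in the spirit of the approach described above, is to compare the relative gaze displacement with each element's movement direction rather than absolute positions. The function below is a simplified sketch; the lag compensation and similarity threshold are assumed values, not the speller's actual algorithm.

```python
import numpy as np

def match_element_track(gaze_xy, element_tracks, lag_samples=5, min_cos=0.9):
    """Match relative gaze movement to element tracks, ignoring absolute offset.

    gaze_xy        : (T, 2) gaze samples during one movement stage
    element_tracks : dict mapping character -> (T, 2) element positions
    lag_samples    : crude compensation for pursuit latency (assumed)
    min_cos        : minimum cosine similarity to accept a match (assumed)
    """
    # Net gaze displacement, shifted by a fixed lag as a rough latency compensation.
    gaze_disp = np.diff(gaze_xy[lag_samples:], axis=0).sum(axis=0)
    best_char, best_cos = None, -1.0
    for char, track in element_tracks.items():
        elem_disp = np.diff(track[:len(track) - lag_samples], axis=0).sum(axis=0)
        denom = np.linalg.norm(gaze_disp) * np.linalg.norm(elem_disp)
        if denom == 0:
            continue
        cos = float(gaze_disp @ elem_disp) / denom
        if cos > best_cos:
            best_char, best_cos = char, cos
    return best_char if best_cos >= min_cos else None
```

    Working on displacements rather than positions removes any constant calibration offset, which matches the abstract's claim that detection does not rely on exact gaze coordinates.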

    A review of rapid serial visual presentation-based brain-computer interfaces

    Rapid serial visual presentation (RSVP) combined with the detection of event-related brain responses facilitates the selection of relevant information contained in a stream of images presented rapidly to a human. Event-related potentials (ERPs) measured non-invasively with electroencephalography (EEG) can be associated with infrequent targets amongst a stream of images. Human-machine symbiosis may be augmented by enabling human interaction with a computer without overt movement, and/or by enabling optimization of image/information sorting processes involving humans. Features of the human visual system impact the success of the RSVP paradigm, but pre-attentive processing supports the identification of target information after presentation by assessing the co-occurrence of time-locked EEG potentials. This paper presents a comprehensive review and evaluation of the limited but significant literature on research in RSVP-based brain-computer interfaces (BCIs). Applications that use RSVP-based BCIs are categorized based on display mode and protocol design, whilst a range of factors influencing ERP evocation and detection is analyzed. Guidelines for using RSVP-based BCI paradigms are recommended, with a view to further standardizing methods and enhancing the inter-relatability of experimental designs to support future research and the use of RSVP-based BCIs in practice.
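    A typical processing step behind RSVP-based BCIs is to epoch the EEG around each image onset and classify target versus non-target responses. The sketch below illustrates this generic pipeline; the window lengths, sampling rate, and variable names are illustrative assumptions rather than values recommended by the review.

```python
import numpy as np

def epoch_eeg(eeg, onsets, fs, tmin=0.0, tmax=0.8):
    """Cut fixed-length epochs around RSVP image onsets.

    eeg    : (n_channels, n_samples) continuous EEG
    onsets : iterable of image-onset sample indices
    fs     : sampling rate in Hz
    Returns an (n_epochs, n_channels * n_window) feature matrix.
    """
    w0, w1 = int(tmin * fs), int(tmax * fs)
    epochs = [eeg[:, s + w0:s + w1] for s in onsets if s + w1 <= eeg.shape[1]]
    return np.stack([e.reshape(-1) for e in epochs])

# Hypothetical usage: labels mark which images were targets (1) vs standards (0).
# from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
# X_train = epoch_eeg(eeg, onsets, fs=256)
# clf = LinearDiscriminantAnalysis().fit(X_train, labels)
# target_scores = clf.decision_function(epoch_eeg(eeg_new, onsets_new, fs=256))
```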

    Oculomotor responses and 3D displays

    This thesis investigated some of the eye movement factors related to the development and use of eye pointing devices with three-dimensional displays (stereoscopic and linear perspective). In order for eye pointing to be used as a successful device for input control of a 3D display, it is necessary to characterise the accuracy and speed with which the binocular point of foveation can locate a particular point in 3D space. Linear perspective was found to be insufficient to elicit a change in the depth of the binocular point of fixation except under optimal conditions (monocular viewing, accommodative loop open, and a constant display paradigm). Comparison of the oculomotor responses made to a stereoscopic 'virtual' display and a 'real' display showed there were no differences with regard to target fixational accuracy. With one exception, subjects showed the same degree of fixational accuracy with respect to target direction and depth. However, close target proximity (in terms of direction) affected the accuracy of fixation with respect to depth (but not direction). No differences were found between the fixational accuracy of large and small targets under either display condition. The visual conditions eliciting fast changes in the location of the binocular point of foveation, i.e. saccade disconjugacy, were investigated. Target-directed saccade disconjugacy was confirmed, in some cases, between targets presented at different depths on a stereoscopic display. However, in general, the direction of saccade disconjugacy was best predicted by the horizontal direction of the target. Leftward saccade disconjugacy was more divergent than rightward. This asymmetry was overlaid on a disconjugacy response which, when considered in relative terms, was appropriate for the level of vergence demand. Linear perspective depth cues did not elicit target-directed disconjugate saccades.
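    Saccade disconjugacy, as studied here, is commonly quantified as the difference between the horizontal displacements of the two eyes over a saccade, i.e. the change in vergence across the movement. The helper below is a minimal sketch under that definition; whether a given sign corresponds to a convergent or divergent change depends on the tracker's coordinate convention, which is not specified in the abstract.

```python
import numpy as np

def saccade_disconjugacy(left_x, right_x):
    """Horizontal disconjugacy of one binocular saccade.

    left_x, right_x : 1-D arrays of horizontal eye position (degrees) for the
    left and right eye over the same saccade interval. The returned value is
    the difference between the two eyes' displacements, i.e. the change in
    the vergence-related angle across the saccade.
    """
    left_x = np.asarray(left_x, dtype=float)
    right_x = np.asarray(right_x, dtype=float)
    d_left = left_x[-1] - left_x[0]     # left-eye displacement
    d_right = right_x[-1] - right_x[0]  # right-eye displacement
    return d_left - d_right
```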

    An end-to-end review of gaze estimation and its interactive applications on handheld mobile devices

    In recent years we have witnessed an increasing number of interactive systems on handheld mobile devices which utilise gaze as a single or complementary interaction modality. This trend is driven by the enhanced computational power of these devices, the higher resolution and capacity of their cameras, and the improved gaze estimation accuracy obtained from advanced machine learning techniques, especially deep learning. As the literature is progressing fast, there is a pressing need to review the state of the art, delineate the boundary, and identify the key research challenges and opportunities in gaze estimation and interaction. This paper aims to serve this purpose by presenting an end-to-end holistic view of this area, from gaze capturing sensors, to gaze estimation workflows, to deep learning techniques, and to gaze interactive applications.
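    To make the appearance-based estimation pipeline concrete, the sketch below shows a toy convolutional regressor mapping an eye-region crop to a 2-D point of regard. It is a minimal PyTorch stand-in under assumed input dimensions, not any specific architecture surveyed in the paper.

```python
import torch
import torch.nn as nn

class GazeNet(nn.Module):
    """Minimal appearance-based gaze estimator: eye-region crop -> 2-D gaze point.

    Input is assumed to be a 3x64x96 RGB eye crop; the output is a normalised
    (x, y) point of regard on the device screen.
    """
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(), nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2),
        )

    def forward(self, eye_crop):            # (B, 3, 64, 96)
        return self.regressor(self.features(eye_crop))

# model = GazeNet()
# xy = model(torch.randn(1, 3, 64, 96))    # -> tensor of shape (1, 2)
```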