
    Dummy eye measurements of microsaccades: testing the influence of system noise and head movements on microsaccade detection in a popular video-based eye tracker

    Whereas early studies of microsaccades predominantly relied on custom-built eye trackers and manual tagging of microsaccades, more recent work tends to use video-based eye tracking and automated detection algorithms. While data from these newer studies suggest that microsaccades can be reliably detected with video-based systems, this has not been systematically evaluated. I here present a method and data examining microsaccade detection in an often-used video-based system (the EyeLink II) and a commonly used detection algorithm (Engbert & Kliegl, 2003; Engbert & Mergenthaler, 2006). Recordings from human participants were compared with recordings from a pair of dummy eyes mounted on glasses and worn either by a human participant (i.e., with head motion) or by a dummy head (no head motion). Three experiments were conducted. The first suggests that when measurements use the pupil-only detection mode, microsaccade detections in the absence of eye movements are sparse without head movements but frequent with head movements (despite the use of a chin rest). The second demonstrates that measurements combining corneal-reflection and pupil detection largely avoid false microsaccade detections, as long as a binocular criterion is used. The third examines whether past results may have been affected by incorrect detections due to small head movements; it shows that, despite the many detections caused by head movements, the typical modulation of microsaccade rate after stimulus onset is found only when recording from the participants' eyes.
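    The velocity-threshold detection algorithm cited above (Engbert & Kliegl, 2003) and the binocular criterion lend themselves to a compact sketch. The window length, λ multiplier, and function names below are choices of this illustration, not the paper's exact code:

```python
import numpy as np

def velocity(x, dt):
    # 5-point moving-window velocity estimate (Engbert & Kliegl, 2003)
    v = np.zeros_like(x)
    v[2:-2] = (x[4:] + x[3:-1] - x[1:-3] - x[:-4]) / (6.0 * dt)
    return v

def detect_microsaccades(x, y, dt, lam=6.0):
    """Return (start, end) sample indices of supra-threshold velocity runs."""
    vx, vy = velocity(x, dt), velocity(y, dt)
    # median-based velocity SD: robust against the saccadic outliers themselves
    sx = np.sqrt(np.median(vx ** 2) - np.median(vx) ** 2)
    sy = np.sqrt(np.median(vy ** 2) - np.median(vy) ** 2)
    over = (vx / (lam * sx)) ** 2 + (vy / (lam * sy)) ** 2 > 1.0
    events, start = [], None
    for i, flag in enumerate(over):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            events.append((start, i - 1))
            start = None
    if start is not None:
        events.append((start, len(over) - 1))
    return events

def binocular(left, right):
    # keep only events that overlap in time in both eyes
    return [(max(a, c), min(b, d))
            for a, b in left for c, d in right if a <= d and c <= b]
```

    The binocular filter keeps only events present simultaneously in both eyes, which is the kind of criterion the second experiment found necessary to suppress spurious detections.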

    The new man and the new world the influence of Renaissance humanism on the explorers of the Italian era of discovery

    In contemporary research, microsaccade detection is typically performed using the calibrated gaze-velocity signal acquired from a video-based eye tracker. To generate this signal, the pupil and corneal reflection (CR) signals are subtracted from each other and a differentiation filter is applied, both of which may prevent small microsaccades from being detected due to signal distortion and noise amplification. We propose a new algorithm in which microsaccades are detected directly from the uncalibrated pupil and CR signals. It is based on detrending followed by windowed correlation between the pupil and CR signals. The proposed algorithm outperforms the most commonly used algorithm in the field (Engbert & Kliegl, 2003), in particular for small-amplitude microsaccades that are difficult to see in the velocity signal even with the naked eye. We argue that it is advantageous to consider the most basic output of the eye tracker, i.e., the pupil and CR signals, when detecting small microsaccades.
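    A minimal sketch of the detrend-then-correlate idea might look as follows; the window lengths, correlation threshold, and all function names are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

def detrend(sig, k=25):
    # subtract a k-sample moving-average baseline to remove slow drift
    return sig - np.convolve(sig, np.ones(k) / k, mode="same")

def windowed_correlation(pupil, cr, win=11):
    # Pearson correlation between pupil and CR inside a sliding window;
    # during a (micro)saccade both signals move together and r approaches +1,
    # while noise-dominated fixation samples stay near 0
    half = win // 2
    r = np.zeros(len(pupil))
    for i in range(half, len(pupil) - half):
        p = pupil[i - half:i + half + 1]
        c = cr[i - half:i + half + 1]
        if p.std() > 0 and c.std() > 0:
            r[i] = np.corrcoef(p, c)[0, 1]
    return r

def detect(pupil, cr, win=11, thresh=0.8):
    # boolean mask of samples whose local pupil/CR correlation exceeds thresh
    return windowed_correlation(detrend(pupil), detrend(cr), win) > thresh
```

    The appeal of this scheme is that shared motion in the two raw signals is evidence of eye rotation even when the differentiated, calibrated velocity signal would bury a small microsaccade in noise.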

    Investigating the relationship between microsaccades and oscillations in the human visual cortex

    Neural oscillations play important roles in vision and attention. Most studies of oscillations use visual fixation to control the visual input. Small eye movements, called microsaccades, occur involuntarily ~1-2 times per second during fixation, and they too are thought to play important roles in vision and attention. The aim of the work described in this thesis was to explore the relationship between microsaccades and oscillations in the human visual cortex. In Chapter 2, I describe how remote video eye tracking can be used to detect and characterize microsaccades during MEG recordings. Tracking based on the pupil position only, without corneal reflection, and with the participant's head immobilized in the MEG dewar, resulted in high-precision gaze tracking and enabled the following investigations. In Chapter 3, I investigated the relationship between induced visual gamma oscillations and microsaccades in a simple visual stimulation paradigm. I did not find evidence for such a relationship. This finding supports the view that sustained gamma oscillations reflect local processing in cortical columns. In addition, the early transient gamma response had a reduced amplitude on trials with microsaccades; however, the exact nature of this effect will have to be determined in future studies. In Chapter 4, I investigated the relationship between alpha oscillations and microsaccades in covert spatial attention. I did not find evidence for a relationship between hemispheric lateralization of the alpha amplitude and the directional bias of microsaccades. I propose that microsaccades and alpha oscillations represent two independent attentional mechanisms, the former related to early attention shifting and the latter to maintaining sustained attention. In Chapter 5, I recorded, for the first time, microsaccade-related spectral responses. Immediately after their onset, microsaccades increased the amplitude in the theta and beta bands, and this effect was modulated by stimulus type. Moreover, microsaccades reduced the alpha amplitude ~0.3 s after their onset, and this effect was independent of stimulus type. These results have important implications for the interpretation of classical oscillatory effects in the visual cortex, as well as for the role of microsaccades in vision and attention.
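    The microsaccade-locked spectral analysis of Chapter 5 can be illustrated generically: band-limit the signal, take its amplitude envelope, and average epochs around microsaccade onsets. Everything below (names, band edges, epoch limits) is an assumption of this sketch, not the thesis's MEG pipeline:

```python
import numpy as np

def band_amplitude(signal, fs, f_lo, f_hi):
    # amplitude envelope in [f_lo, f_hi]: FFT band mask, then analytic signal
    n = len(signal)
    freqs = np.fft.fftfreq(n, 1.0 / fs)
    band = np.fft.ifft(np.fft.fft(signal) *
                       ((np.abs(freqs) >= f_lo) & (np.abs(freqs) <= f_hi))).real
    h = np.fft.fft(band)          # analytic signal (scipy.signal.hilbert style)
    h[1:n // 2] *= 2.0
    h[n // 2 + 1:] = 0.0
    return np.abs(np.fft.ifft(h))

def event_locked_average(amplitude, onsets, fs, pre=0.2, post=0.5):
    # average the envelope over epochs from -pre to +post s around each onset
    a, b = int(pre * fs), int(post * fs)
    epochs = [amplitude[i - a:i + b] for i in onsets
              if i - a >= 0 and i + b <= len(amplitude)]
    return np.mean(epochs, axis=0)
```

    Comparing such event-locked envelopes across bands (theta, beta, alpha) is one way to expose the post-onset amplitude changes described above.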

    Deep into the Eyes: Applying Machine Learning to improve Eye-Tracking

    Eye-tracking has been an active research area with applications in personal and behavioral studies, medical diagnosis, virtual reality, and mixed reality applications. Improving the robustness, generalizability, accuracy, and precision of eye trackers while maintaining privacy is crucial. Unfortunately, many existing low-cost portable commercial eye trackers suffer from signal artifacts and a low signal-to-noise ratio. These trackers are highly dependent on low-level features such as pupil edges or diffused bright spots in order to precisely localize the pupil and corneal reflection. As a result, they are not reliable for studying eye movements that require high precision, such as microsaccades, smooth pursuit, and vergence. Additionally, these methods suffer from reflective artifacts and occlusion of the pupil boundary by the eyelid, and often require a manual update of person-dependent parameters to identify the pupil region. In this dissertation, I demonstrate (I) a new method to improve precision while maintaining the accuracy of head-fixed eye trackers by combining velocity information from iris textures across frames with position information, (II) a generalized semantic segmentation framework for identifying eye regions with a further extension to identify ellipse fits on the pupil and iris, (III) a data-driven rendering pipeline to generate a temporally contiguous synthetic dataset for use in many eye-tracking applications, and (IV) a novel strategy to preserve privacy in eye videos captured as part of the eye-tracking process. My work also provides the foundation for future research by addressing critical questions like the suitability of using synthetic datasets to improve eye-tracking performance in real-world applications, and ways to improve the precision of future commercial eye trackers with improved camera specifications.
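    Contribution (I), fusing a low-noise velocity estimate with an absolute position signal, can be illustrated with a simple complementary filter. This is a generic sketch under assumed inputs, not the dissertation's method:

```python
import numpy as np

def fuse_position_velocity(pos, vel, dt, alpha=0.02):
    # complementary filter: integrate the precise velocity signal for
    # sample-to-sample detail, and let the noisier absolute position
    # slowly correct the accumulated drift (alpha = correction gain)
    fused = np.empty_like(pos)
    fused[0] = pos[0]
    for i in range(1, len(pos)):
        predicted = fused[i - 1] + vel[i] * dt
        fused[i] = (1.0 - alpha) * predicted + alpha * pos[i]
    return fused
```

    With velocity derived from iris texture and position from a conventional pupil/CR signal, the fused trace can inherit the precision of the former and the long-term accuracy of the latter.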

    Motion tracking of iris features to detect small eye movements

    The inability of current video-based eye trackers to reliably detect very small eye movements has led to confusion about the prevalence, or even the existence, of monocular microsaccades (small, rapid eye movements that occur in only one eye at a time). Because current methods often rely on precisely localizing the pupil and/or corneal reflection on successive frames, current microsaccade-detection algorithms often suffer from signal artifacts and a low signal-to-noise ratio. We describe a new video-based eye-tracking methodology that can reliably detect small eye movements larger than 0.2 degrees (12 arcmin) with very high confidence. Our method tracks the motion of iris features to estimate velocity rather than position, yielding a better record of microsaccades. We provide a more robust, detailed record of miniature eye movements by relying on more stable, higher-order features (such as local features of iris texture) instead of lower-order features (such as the pupil center and corneal reflection), which are sensitive to noise and drift.
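    The core idea, estimating displacement from iris texture rather than from pupil/CR positions, can be sketched in one dimension with normalized cross-correlation between texture strips from successive frames (a toy stand-in for the actual 2-D feature tracking):

```python
import numpy as np

def estimate_shift(prev, curr, max_shift=10):
    # integer displacement (in pixels) of a 1-D texture strip between frames,
    # chosen to maximize the normalized cross-correlation over candidate shifts
    p = (prev - prev.mean()) / (prev.std() + 1e-12)
    best, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        c = np.roll(curr, -s)          # circular shift; fine for a toy example
        c = (c - c.mean()) / (c.std() + 1e-12)
        score = float(np.dot(p, c))
        if score > best_score:
            best, best_score = s, score
    return best

def velocity_trace(frames, dt, px_per_deg):
    # frame-to-frame displacement converted to degrees of visual angle per second
    shifts = [estimate_shift(frames[i - 1], frames[i])
              for i in range(1, len(frames))]
    return np.array(shifts) / px_per_deg / dt
```

    Because the whole textured strip votes on the displacement, the estimate is far less sensitive to localization noise on any single feature than a pupil-center fit.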

    Detecting rare but relevant events in systems neuroscience

    Animals actively move their sensory organs, often in a rhythmic manner, to gather information from the external environment. The movements performed to sense the world are often very subtle and hard to detect with recording devices. For instance, in the visual domain, eye movements with amplitudes smaller than a degree of visual angle can occur. These tiny movements, called microsaccades, are at the threshold of the resolution of most recording techniques, and one could be tempted to ignore them when studying vision. Yet they might play an important role in visual processing. My thesis shows that microsaccades should not be ignored, that an algorithm can detect them accurately, and that the same algorithm can be used to detect other seemingly “petty” events that deserve to be detected among noisy signals. In the first part, we demonstrated that microsaccades have a long-lasting impact on visual processing. We designed behavioral experiments to probe visual detectability and reaction time for stimuli presented at various moments relative to microsaccade onset. By probing behavioral performance at multiple time points, we could reconstruct a signal that revealed oscillations occurring during visual processing. These oscillations occurred in the beta and alpha range and were synchronized to microsaccade generation. Moreover, the oscillations were sequential, occurring as two pulses, one in each hemifield, depending on the direction of the microsaccade. We also found that microsaccades are associated with a long-lasting increase in contrast sensitivity for stimuli presented in the same hemifield as their direction. These discoveries were important because they demonstrated that visuomotor processing is almost never exempt from the impact of subtle, seemingly irrelevant movement behaviors. The results therefore established the need for accurate detection of microsaccades and other potentially significant events in brain activity and behavior.
    We thus designed, in a second study, a deep neural network that performs human-level eye movement detection even in noisy eye traces. Our algorithm outperformed the state-of-the-art algorithm for eye movement detection, as well as many commonly used algorithms. In a third study, we showed that our algorithm generalizes to other types of signals by detecting complex spikes in extracellular recordings of cerebellar Purkinje cells. We demonstrated human-level detection of complex spikes, outperforming commonly used online algorithms. Furthermore, our approach also accurately estimated the duration of complex spikes, which provides important information about the coding of error in the cerebellum. Putting all of the above together, this thesis argues for careful control of exploratory movements when studying sensory processing. It also provides the tools necessary to approach a problem that is common to many different fields of neuroscience: the detection of an event of interest in a noisy signal.
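    As a toy stand-in for the trained detector, per-sample event probabilities can be produced by a single convolutional layer with a logistic readout. The real network is deeper and learns its weights from labeled data; the kernel and weights here are hand-picked for illustration:

```python
import numpy as np

def conv1d(x, w, b):
    # 'same'-padded 1-D cross-correlation followed by a ReLU nonlinearity
    pad = len(w) // 2
    xp = np.pad(x, pad)
    out = np.array([np.dot(xp[i:i + len(w)], w) for i in range(len(x))]) + b
    return np.maximum(out, 0.0)

def event_probability(velocity, kernel, bias, w_out, b_out):
    # one conv layer + a per-sample logistic readout -> p(event) per sample
    h = conv1d(velocity, kernel, bias)
    return 1.0 / (1.0 + np.exp(-(w_out * h + b_out)))
```

    Framing detection as per-sample probability output is what lets the same architecture transfer from eye traces to complex spikes: only the training labels change.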

    Multimodality during fixation – Part II: Evidence for multimodality in spatial precision-related distributions and impact on precision estimates

    This paper is a follow-on to our earlier paper (Friedman, Lohr, Hanson, & Komogortsev, 2021), which focused on the multimodality of angular offsets. This paper applies the same analysis to the measurement of spatial precision. Following the literature, we refer to these measurements as estimates of device precision, but, in fact, subject characteristics clearly affect the measurements. One typical measure of the spatial precision of an eye-tracking device is the standard deviation (SD) of the position signals (horizontal and vertical) during a fixation. The SD is a highly interpretable measure of spread if the underlying error distribution is unimodal and normal. However, in the context of an underlying multimodal distribution, the SD is less interpretable. We present evidence that the majority of such distributions are multimodal (68-70% strongly multimodal); only 21-23% of position distributions were unimodal. We present an alternative method for measuring precision that is appropriate for both unimodal and multimodal distributions. This alternative method produces precision estimates that are substantially smaller than classic measures. We present illustrations of both unimodality and multimodality with either drift or a microsaccade present during fixation. At present, these observations apply only to the EyeLink 1000 and the subjects evaluated herein.
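    One way to realize a mode-aware precision estimate is to pool the spread within modes instead of across them. The two-means split below is a stand-in for whatever mode-assignment procedure the paper actually uses:

```python
import numpy as np

def two_means_1d(x, iters=50):
    # minimal 1-D k-means (k=2) to split a bimodal sample into two modes
    c = np.array([x.min(), x.max()], dtype=float)
    for _ in range(iters):
        labels = np.abs(x[:, None] - c[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = x[labels == k].mean()
    return labels

def pooled_within_mode_sd(x):
    # precision estimate that measures the spread *within* each mode and
    # ignores the separation *between* modes (unlike the overall SD)
    labels = two_means_1d(x)
    ss = sum(((x[labels == k] - x[labels == k].mean()) ** 2).sum()
             for k in (0, 1))
    return np.sqrt(ss / (len(x) - 2))
```

    For a fixation whose samples cluster around two positions (e.g., before and after a microsaccade), the overall SD mostly reflects the distance between the clusters, while the pooled within-mode SD reflects the jitter the precision measure is meant to capture.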

    Eye tracking: empirical foundations for a minimal reporting guideline

    In this paper, we present a review of how the various aspects of any study using an eye tracker (such as the instrument, methodology, environment, and participant) affect the quality of the recorded eye-tracking data and the obtained eye-movement and gaze measures. We take this review to represent the empirical foundation for reporting guidelines of any study involving an eye tracker. We compare this empirical foundation to five existing reporting guidelines and to a database of 207 published eye-tracking studies. We find that reporting guidelines vary substantially and do not match actual reporting practices. We end by deriving a minimal, flexible reporting guideline based on empirical research (Section "Empirically based minimal reporting guideline").