In search of oculomotor capture during film viewing: implications for the balance of top-down and bottom-up control in the saccadic system
In the laboratory, the abrupt onset of a visual distractor can generate an involuntary orienting
response: this robust oculomotor capture effect has been reported in a large number of studies
(e.g. Theeuwes, Kramer, Hahn, & Irwin, 1998; Ludwig & Gilchrist, 2002) suggesting it may
be a ubiquitous part of more natural visual behaviour. However, the visual stimuli used in
these experiments have tended to be static, lacking the complexity and dynamism of
more natural visual environments. In addition, the primary task in the laboratory (typically
visual search) can be tedious, with participants losing interest, becoming
stimulus-driven, and growing more easily distracted. Both of these factors may have led to
an overestimation of the extent to which oculomotor capture occurs and of the importance of
this phenomenon in everyday visual behaviour. To address this issue, in the current series of
studies we presented abrupt and highly salient visual distractors away from fixation while
participants watched a film. No evidence of oculomotor capture was found. However, the
distractor does affect fixation duration: we find an increase in fixation duration analogous to
the remote distractor effect (Walker, Deubel, Schneider, & Findlay, 1997). These results
suggest that during dynamic scene perception, the oculomotor system may be under far more
top-down control than traditional laboratory-based tasks have previously suggested.
Multimodality with Eye tracking and Haptics: A New Horizon for Serious Games?
The goal of this review is to illustrate the emerging use of multimodal virtual reality that can benefit learning-based games. The review begins with an introduction to multimodal virtual reality in serious games, and we provide a brief discussion of why the cognitive processes involved in learning and training are enhanced in immersive virtual environments. We first outline studies that have used eye tracking and haptic feedback independently in serious games, and then review some innovative applications that have combined eye tracking and haptic devices in order to provide applicable multimodal frameworks for learning-based games. Finally, some general conclusions are identified and clarified in order to advance current understanding of multimodal serious game production, as well as to explore possible areas for new applications.
Putting culture under the spotlight reveals universal information use for face recognition
Background: Eye movement strategies employed by humans to identify conspecifics are not universal. Westerners predominantly fixate the eyes during face recognition, whereas Easterners fixate more on the nose region, yet recognition accuracy is comparable. However, natural fixations do not unequivocally represent information extraction, so the question of whether humans universally use identical facial information to recognize faces remains unresolved. Methodology/Principal Findings: We monitored eye movements during face recognition of Western Caucasian (WC) and East Asian (EA) observers with a novel technique in face recognition that parametrically restricts information outside central vision. We used ‘Spotlights’ with Gaussian apertures of 2°, 5° or 8° dynamically centered on observers’ fixations. Strikingly, in the constrained Spotlight conditions (2°, 5°), observers of both cultures actively fixated the same facial information: the eyes and mouth. When information from both the eyes and mouth was simultaneously available while fixating the nose (8°), EA observers, as expected, shifted their fixations towards this region. Conclusions/Significance: Social experience and cultural factors shape the strategies used to extract information from faces, but these results suggest that such external forces do not modulate information use. Human beings rely on identical facial information to recognize conspecifics, a universal law that might be dictated by the evolutionary constraints of nature and not nurture.
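The gaze-contingent ‘Spotlight’ manipulation can be sketched as a Gaussian weighting mask re-centered on each fixation. The function below is an illustrative reconstruction, not the authors’ stimulus code: treating the nominal aperture diameter as the Gaussian’s full width at half maximum, and the pixels-per-degree value, are both assumptions.

```python
import numpy as np

def gaussian_aperture(shape, gaze_xy, diameter_deg, px_per_deg):
    """Weight mask (0..1) for a gaze-contingent Gaussian 'Spotlight'.

    shape        : (height, width) of the stimulus in pixels
    gaze_xy      : current fixation (x, y) in pixels
    diameter_deg : nominal aperture size (e.g. 2, 5 or 8 degrees)
    px_per_deg   : display resolution in pixels per degree

    Assumption: the nominal diameter is taken as the full width at
    half maximum (FWHM); the paper's exact aperture profile may differ.
    """
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    gx, gy = gaze_xy
    fwhm_px = diameter_deg * px_per_deg
    sigma = fwhm_px / 2.355            # FWHM = 2*sqrt(2*ln 2) * sigma
    d2 = (xx - gx) ** 2 + (yy - gy) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Reveal a 5-degree window around a fixation at (320, 240), 40 px/deg
mask = gaussian_aperture((480, 640), (320, 240), 5, 40)
# visible = face_image * mask  (re-applied each frame as gaze moves)
```

In a real experiment this mask would be recomputed from the eye tracker’s latest sample on every display refresh.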
Using eye movements to detect visual field loss: a pragmatic assessment using simulated scotoma.
Glaucoma is a leading cause of irreversible sight-loss and has been shown to affect natural eye-movements. These changes may provide a cheap and easy-to-obtain biomarker for improving disease detection. Here, we investigated whether these changes are large enough to be clinically useful. We used a gaze-contingent simulated visual field (VF) loss paradigm, in which participants experienced a variable magnitude of simulated VF loss based on longitudinal data from a real glaucoma patient (thereby controlling for other variables, such as age and general health). Fifty-five young participants with healthy vision were asked to view two short videos and three pictures, either with: (1) no VF loss, (2) moderate VF loss, or (3) advanced VF loss. Eye-movements were recorded using a remote eye tracker. Key eye-movement parameters were computed, including saccade amplitude, the spread of saccade endpoints (bivariate contour ellipse area), location of saccade landing positions, and similarity of fixation locations among participants (quantified using kernel density estimation). The simulated VF loss caused some statistically significant effects on the eye-movement parameters. Yet, these effects were not capable of consistently identifying simulated VF loss, despite it being of a magnitude likely to be easily detectable by standard automated perimetry.
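One of the parameters above, the bivariate contour ellipse area (BCEA), has a standard closed form: BCEA = 2πk·σx·σy·√(1−ρ²), with k = −ln(1−P) for coverage proportion P. The sketch below is a minimal illustration of that formula on synthetic saccade endpoints, not the study’s analysis code; the 68% coverage level is one common convention.

```python
import numpy as np

def bcea(x, y, p=0.68):
    """Bivariate contour ellipse area: the area (in squared units of
    x and y) of the ellipse containing proportion p of the points,
    assuming a bivariate normal spread of endpoints.
    """
    k = -np.log(1.0 - p)                      # scaling for coverage p
    sx, sy = np.std(x, ddof=1), np.std(y, ddof=1)
    rho = np.corrcoef(x, y)[0, 1]
    return 2.0 * np.pi * k * sx * sy * np.sqrt(1.0 - rho ** 2)

# Illustrative use on synthetic saccade endpoints (degrees):
# more scattered landing positions give a larger BCEA.
rng = np.random.default_rng(0)
pts = rng.normal(scale=[1.0, 0.5], size=(500, 2))
print(bcea(pts[:, 0], pts[:, 1]))
```

Because the formula is multiplicative in the two standard deviations, doubling the spread of the endpoints in both axes quadruples the reported area.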
Sampling rate influences saccade detection in mobile eye tracking of a reading task
The purpose of this study was to compare saccade detection characteristics in two mobile eye trackers with different sampling rates in a natural task. Gaze data of 11 participants were recorded in one 60 Hz and one 120 Hz mobile eye tracker and compared directly to the saccades detected by a 1000 Hz stationary tracker while a reading task was performed. Saccades and fixations were detected using a velocity-based algorithm and their properties were analyzed. Results showed that there was no significant difference in the number of detected fixations, but mean fixation durations differed between the 60 Hz mobile and the stationary eye tracker. The 120 Hz mobile eye tracker showed a significant increase in the detection rate of saccades and an improved estimation of the mean saccade duration, compared to the 60 Hz eye tracker. To conclude, for the detection and analysis of fast eye movements, such as saccades, it is better to use a 120 Hz mobile eye tracker.
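A velocity-based classifier of the kind described (often called I-VT) can be sketched in a few lines. The 30 deg/s threshold and the synthetic trace below are illustrative assumptions, not the study’s parameters.

```python
import numpy as np

def detect_saccades(x, y, fs, vel_threshold=30.0):
    """Classify samples as saccade (True) or fixation (False) with a
    simple velocity-threshold (I-VT) rule.

    x, y          : gaze position in degrees of visual angle
    fs            : sampling rate in Hz
    vel_threshold : deg/s; 30 deg/s is a common illustrative choice
    """
    # Point-to-point velocity. Lower sampling rates smear a fast
    # saccade across fewer samples, which is one reason short
    # saccades can be missed at 60 Hz but caught at 120 or 1000 Hz.
    vx = np.gradient(x) * fs
    vy = np.gradient(y) * fs
    speed = np.hypot(vx, vy)
    return speed > vel_threshold

# Synthetic 1000 Hz trace: fixation, a fast 5-degree saccade, fixation
fs = 1000
x = np.concatenate([np.zeros(50), np.linspace(0, 5, 20), np.full(50, 5.0)])
y = np.zeros_like(x)
is_saccade = detect_saccades(x, y, fs)
print(int(is_saccade.sum()))   # number of saccade-classified samples
```

Published algorithms add refinements (noise filtering, minimum-duration criteria, merging of nearby events), but the thresholding step above is the core of velocity-based detection.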
Change blindness: eradication of gestalt strategies
Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research, 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) objects may be stored in and retrieved from a pre-attentional store during this task.
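The spoke manipulation, displacing each rectangle by ±1° along the line from central fixation through its position, amounts to adding a signed step along the radial unit vector. A minimal sketch, assuming positions are expressed in degrees relative to fixation:

```python
import numpy as np

def shift_along_spoke(pos_deg, delta_deg):
    """Displace a stimulus along the imaginary spoke running from
    central fixation (the origin) through the stimulus.

    pos_deg   : (x, y) position in degrees relative to fixation
    delta_deg : signed radial displacement (e.g. +1 or -1 degree)
    """
    pos = np.asarray(pos_deg, dtype=float)
    r = np.hypot(*pos)
    # Step of length delta_deg along the radial unit vector pos / r
    return pos + delta_deg * pos / r

# A rectangle 5 degrees from fixation, pushed 1 degree outward:
# direction is preserved, eccentricity grows from 5 to 6 degrees.
new_pos = shift_along_spoke((3.0, 4.0), +1.0)
print(new_pos)
```

A negative `delta_deg` moves the rectangle the same distance inward along the spoke, so the ±1° conditions are symmetric about the original eccentricity.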
An Augmentative Gaze Directing Framework for Multi-Spectral Imagery
Modern digital imaging techniques have made the task of imaging more prolific than ever, and the volume of images and data available through multi-spectral imaging methods for exploitation exceeds what can be processed by human beings alone. The researchers proposed and developed a novel eye-movement-contingent framework and display system through adaptation of the demonstrated technique of subtle gaze direction, presenting modulations within the displayed image. The system sought to augment visual search task performance on aerial imagery by incorporating multi-spectral image processing algorithms to determine potential regions of interest within an image. The exploratory work conducted was to study the feasibility of visual gaze direction with the specific intent of extending this application to geospatial image analysis without the need for overt cueing to areas of potential interest, thereby maintaining the benefits of an undirected and unbiased search by an observer.
An informatics system for exploring eye movements in reading
Eye tracking techniques have been widely used in many research areas, including
cognitive science, psychology, human-computer interaction, marketing research,
and medical research. Many computer programs have emerged to help
researchers design experiments, present visual stimuli and process the large
quantity of numerical data produced by the eye tracker. However, most applications,
especially commercial products, are designed for a particular tracking device
and tend to be general purpose. Few of them are designed specifically for
reading research. This can be inconvenient when dealing with complex experimental
design, multi-source data collection, and text-based data analysis, which together
cover almost every aspect of a reading study lifecycle.
A flexible and powerful system that manages the lifecycle of different reading
studies is required to fulfill these demands. Therefore, we created an informatics
system with two major software suites: Experiment Executor and EyeMap. It
is a system designed specifically for reading research. Experiment Executor
helps reading researchers build complex experimental environments, which can
rapidly present display changes and support the co-registration of eye tracking
information with other data collection devices such as EEG (electroencephalography)
amplifiers. The EyeMap component helps researchers visualize and analyze
a wide range of writing systems, including spaced and unspaced scripts, which
can be presented in proportional or non-proportional font types. The aim of the
system is to accelerate the life cycle of a reading experiment from design through
analysis.
Several experiments were conducted on this system. These experiments
confirmed the effectiveness and the capability of the system with several new
reading research findings concerning the visual information processing stages of reading.
Attentional Window Set by Expected Relevance of Environmental Signals
The existence of an attentional window—a limited region in visual space at which attention is directed—has been invoked to explain why sudden visual onsets may or may not capture overt or covert attention. Here, we test the hypothesis that observers voluntarily control the size of this attentional window to regulate whether or not environmental signals can capture attention. We used a novel approach to test this: participants’ eye movements were tracked while they performed a search task that required dynamic gaze shifts. During the search task, abrupt onsets were presented that cued the target positions at different levels of congruency, and participants were informed of these levels. We determined oculomotor capture efficiency for onsets that appeared at different viewing eccentricities, and from this derived each participant’s attentional window size as a function of onset congruency. We find that the window was small during the presentation of low-congruency onsets, but increased monotonically in size with the expected congruency of the onsets. This indicates that the attentional window is under voluntary control and is set according to the expected relevance of environmental signals for the observer’s momentary behavioral goals. Moreover, our approach provides a new and exciting method to directly measure the size of the attentional window.
Eye movements in children during reading : a review
Over the last decades, the analysis of eye movements has proven very useful for investigating the cognitive processes underlying reading. However, this technique has so far hardly been used from a developmental perspective to better understand children’s acquisition of reading. This chapter presents a brief review of the studies comparing the eye-movement patterns observed in children with those observed in adult readers. It first describes the differences and similarities in eye-movement patterns between these two groups, and then presents the results of studies attempting to explain these differences in terms of oculomotor, visual and linguistic constraints.