
    Time Course and Hazard Function: A Distributional Analysis of Fixation Duration in Reading

    Reading processes affect not only the mean of fixation duration but also its distribution function. This paper introduces a set of hypotheses that link the timing and strength of a reading process to the hazard function of a fixation duration distribution. Analyses based on large corpora of reading eye movements show a surprisingly robust hazard function across languages, age, individual differences, and a number of processing variables. The data suggest that eye movements are generated stochastically based on a stereotyped time course that is independent of reading variables. High-level reading processes, however, modulate eye movement programming by increasing or decreasing the momentary saccade rate during a narrow time window. Implications for theories and analyses of reading eye movements are discussed.
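    The link between a fixation duration distribution and its hazard function can be made concrete with a short sketch. The Python snippet below estimates an empirical hazard function, i.e. the momentary probability that a fixation ends in a given time bin given that it has lasted until then; the function name, bin width and the simulated log-normal durations are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def empirical_hazard(durations_ms, bin_width=25, t_max=600):
    """Estimate the hazard function h(t) of fixation durations.

    h(t) ~ P(fixation ends in [t, t + bin) | it is still ongoing at t),
    computed as (# ending in the bin) / (# still ongoing at the bin start).
    """
    edges = np.arange(0, t_max + bin_width, bin_width)
    counts, _ = np.histogram(durations_ms, bins=edges)
    # Fixations still "at risk" (ongoing) at the start of each bin.
    at_risk = len(durations_ms) - np.concatenate(([0], np.cumsum(counts)[:-1]))
    with np.errstate(divide="ignore", invalid="ignore"):
        return edges[:-1], np.where(at_risk > 0, counts / at_risk, np.nan)

# Illustrative use with simulated log-normal fixation durations (in ms).
rng = np.random.default_rng(0)
durations = rng.lognormal(mean=5.4, sigma=0.35, size=5000)
t, h = empirical_hazard(durations)
print(np.round(h[:8], 3))
```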

    Individual Topology Structure of Eye Movement Trajectories

    Traditionally, extracting patterns from eye movement data relies on statistics of different macro-events such as fixations and saccades. This requires an additional preprocessing step to separate the eye movement subtypes, often with a number of parameters on which the classification results depend. In addition, such macro-events are defined differently by different researchers. We propose applying a new class of features to the quantitative analysis of the structure of personal eye movement trajectories. This class of features, based on algebraic topology, allows patterns to be extracted from different modalities of gaze data, such as time series of coordinates and amplitudes, heatmaps, and point clouds, in a unified way at all scales from micro to macro. We experimentally demonstrate that the new class of features is competitive with traditional ones and shows significant synergy when used together with them for a person authentication task on a recently published eye movement trajectories dataset.
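    As a rough illustration of how topological features can be computed directly from gaze data without first segmenting fixations and saccades, the sketch below extracts 0-dimensional persistence lifetimes from a gaze point cloud, using the fact that H0 persistence of a Vietoris-Rips filtration coincides with the merge distances of single-linkage clustering. The function name, parameters and synthetic scanpath are assumptions; the paper's actual feature set is broader (heatmaps, amplitude series, higher-dimensional homology).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

def h0_persistence_features(gaze_xy, n_features=10):
    """Fixed-length topological summary of a 2-D gaze point cloud.

    Each single-linkage merge kills one connected component of the
    Vietoris-Rips filtration; the merge height is that component's
    lifetime. The longest lifetimes form the feature vector.
    """
    merges = linkage(gaze_xy, method="single")
    lifetimes = np.sort(merges[:, 2])[::-1]          # longest-lived first
    features = np.zeros(n_features)
    k = min(n_features, len(lifetimes))
    features[:k] = lifetimes[:k]
    return features

# Illustrative use: a synthetic scanpath with two fixation clusters (pixels).
rng = np.random.default_rng(1)
cluster_a = rng.normal([200, 300], 5, size=(60, 2))
cluster_b = rng.normal([600, 350], 5, size=(60, 2))
print(np.round(h0_persistence_features(np.vstack([cluster_a, cluster_b])), 1))
# The single large lifetime reflects the two well-separated clusters.
```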

    Learning from Teacher's Eye Movement: Expertise, Subject Matter and Video Modeling

    How teachers' eye movements can be used to understand and improve education is the central focus of the present paper. Three empirical studies were carried out to understand the nature of teachers' eye movements in natural settings and how they might be used to promote learning. The studies explored 1) the relationship between teacher expertise and eye movements in the course of teaching, 2) how individual differences and the demands of different subjects affect teachers' eye movements during literacy and mathematics instruction, and 3) whether including an expert's eye movements and hand information in instructional videos can promote learning. Each study looked at the nature and use of teacher eye movements from a different angle, but collectively they converge on answering the question: what can we learn from teachers' eye movements? The paper also contains an independent methodology chapter dedicated to reviewing and comparing methods of representing eye movements, in order to determine a suitable statistical procedure for capturing the richness of current and similar eye tracking data. Results show that there are considerable differences between expert and novice teachers' eye movements in a real teaching situation, replicating patterns revealed by past studies on expertise and gaze behavior in athletics and other fields. The paper also identifies the mix of person-specific and subject-specific eye movement patterns that occur when the same teacher teaches different topics to the same children. The final study reports evidence that eye movements can be useful in teaching, showing increased learning when learners saw an expert model's eye movements in a video modeling example. The implications of these studies for teacher education and instruction are discussed.
    PhD thesis, Education & Psychology, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/145853/1/yizhenh_1.pd

    Comparing infrared and webcam eye tracking in the Visual World Paradigm

    Visual World eye tracking is a temporally fine-grained method of monitoring attention, making it a popular tool in the study of online sentence processing. Recently, while lab-based infrared eye tracking was largely unavailable, various web-based experiment platforms rapidly developed webcam eye tracking functionalities, which are now in urgent need of testing and evaluation. We replicated a recent Visual World study on the incremental processing of verb aspect in English using ‘out of the box’ webcam eye tracking software (jsPsych; de Leeuw, 2015) and crowdsourced participants, and fully replicated both the offline and online results of the original study. We furthermore discuss factors influencing the quality and interpretability of webcam eye tracking data, particularly with regard to temporal and spatial resolution, and conclude that remote webcam eye tracking can serve as an affordable and accessible alternative to lab-based infrared eye tracking, even for questions probing the time course of language processing.
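    For readers unfamiliar with the online measure at stake here, the sketch below shows one common way of turning sample-level gaze data into fixation-proportion curves over time in the Visual World Paradigm. The column names, bin width and fabricated data are assumptions for illustration only and do not reproduce the replicated study's analysis.

```python
import numpy as np
import pandas as pd

def fixation_proportions(samples, bin_ms=50):
    """Proportion of gaze samples on each interest area per time bin.

    `samples` is assumed to have one row per gaze sample with columns
    'time_ms' (relative to the critical word onset) and 'roi'
    ('target', 'competitor' or 'other'); the names are illustrative.
    """
    samples = samples.copy()
    samples["bin"] = (samples["time_ms"] // bin_ms) * bin_ms
    counts = samples.groupby(["bin", "roi"]).size().unstack(fill_value=0)
    return counts.div(counts.sum(axis=1), axis=0)

# Illustrative use with fabricated data: 20 trials, samples every 20 ms.
rng = np.random.default_rng(2)
demo = pd.DataFrame({
    "trial": np.repeat(np.arange(20), 40),
    "time_ms": np.tile(np.arange(0, 800, 20), 20),
    "roi": rng.choice(["target", "competitor", "other"], size=800,
                      p=[0.5, 0.3, 0.2]),
})
print(fixation_proportions(demo).head())
```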

    Event-based neuromorphic stereo vision


    A review of rapid serial visual presentation-based brain-computer interfaces

    Rapid serial visual presentation (RSVP) combined with the detection of event-related brain responses facilitates the selection of relevant information contained in a stream of images presented rapidly to a human. Event-related potentials (ERPs) measured non-invasively with electroencephalography (EEG) can be associated with infrequent targets amongst a stream of images. Human-machine symbiosis may be augmented by enabling human interaction with a computer without overt movement, and/or by enabling optimization of image/information sorting processes involving humans. Features of the human visual system impact the success of the RSVP paradigm, but pre-attentive processing supports the identification of target information after presentation by assessing co-occurring or time-locked EEG potentials. This paper presents a comprehensive review and evaluation of the limited but significant literature on RSVP-based brain-computer interfaces (BCIs). Applications that use RSVP-based BCIs are categorized based on display mode and protocol design, whilst a range of factors influencing ERP evocation and detection are analyzed. Guidelines for using RSVP-based BCI paradigms are recommended, with a view to further standardizing methods and enhancing the comparability of experimental designs to support future research and the use of RSVP-based BCIs in practice.
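    The core detection problem behind an RSVP-based BCI, telling target epochs from non-target epochs by their event-related response, can be sketched with simulated data. The snippet below is a minimal example using window-averaged amplitudes and a shrinkage LDA classifier; the simulated EEG, feature choice and classifier are assumptions and are not drawn from any particular study in the review.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_epochs, n_channels, n_times = 400, 8, 128      # 1 s epochs at 128 Hz
y = rng.integers(0, 2, n_epochs)                 # 1 = target, 0 = non-target

# Simulated single-trial EEG: noise plus a small P300-like bump around
# 400 ms on target epochs only.
X = rng.normal(0.0, 1.0, (n_epochs, n_channels, n_times))
p300 = np.exp(-0.5 * ((np.arange(n_times) / 128 - 0.4) / 0.05) ** 2)
X[y == 1] += 0.5 * p300

# Features: mean amplitude in consecutive 62.5 ms windows, concatenated
# over channels (a simple, commonly used ERP feature set).
features = X.reshape(n_epochs, n_channels, -1, 8).mean(axis=-1)
features = features.reshape(n_epochs, -1)

clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_score(clf, features, y, cv=5, scoring="roc_auc")
print(f"Target vs. non-target AUC: {scores.mean():.2f}")
```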

    Eye Movement and Pupil Measures: A Review

    Our subjective visual experience involves complex interaction between our eyes, our brain, and the surrounding world. This interaction gives us the sense of sight, color, stereopsis, distance, pattern recognition, motor coordination, and more. The increasing ubiquity of gaze-aware technology brings with it the ability to track gaze and pupil measures with varying degrees of fidelity. With this in mind, a review that considers the various gaze measures becomes increasingly relevant, especially given our ability to make sense of these signals under different spatio-temporal sampling capacities. In this paper, we selectively review prior work on eye movement and pupil measures. We first describe the main oculomotor events studied in the literature and the characteristics of these events exploited by different measures. Next, we review various eye movement and pupil measures from the prior literature. Finally, we discuss our observations based on applications of these measures, the benefits and practical challenges of using them, and our recommendations for future eye-tracking research directions.
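    As a concrete example of the oculomotor-event classification that most of the reviewed measures presuppose, the sketch below implements a basic velocity-threshold (I-VT) rule that labels gaze samples as fixation or saccade. The 30 deg/s threshold and the simulated scanpath are illustrative assumptions; practical pipelines usually add filtering and minimum-duration checks.

```python
import numpy as np

def classify_ivt(x_deg, y_deg, sample_rate_hz, velocity_threshold=30.0):
    """Label each gaze sample as 'fixation' or 'saccade' with the I-VT rule.

    Point-to-point gaze velocity (deg/s) above `velocity_threshold` is
    treated as a saccade sample, everything else as a fixation sample.
    """
    dx = np.diff(x_deg, prepend=x_deg[0])
    dy = np.diff(y_deg, prepend=y_deg[0])
    velocity = np.hypot(dx, dy) * sample_rate_hz
    return np.where(velocity > velocity_threshold, "saccade", "fixation")

# Illustrative use: a 500 Hz recording with one simulated 6-degree saccade.
rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(2.0, 0.01, 240),   # fixation 1
                    np.linspace(2.0, 8.0, 20),    # saccade (~40 ms)
                    rng.normal(8.0, 0.01, 240)])  # fixation 2
y = rng.normal(0.0, 0.01, x.size)
labels = classify_ivt(x, y, sample_rate_hz=500)
print((labels == "saccade").sum(), "saccade samples detected")
```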

    Mobile brain/body imaging of landmark-based navigation with high-density EEG.

    Coupling behavioral measures and brain imaging in naturalistic, ecological conditions is key to comprehending the neural bases of spatial navigation. This highly integrative function encompasses sensorimotor, cognitive, and executive processes that jointly mediate active exploration and spatial learning. However, most neuroimaging approaches in humans are based on static, motion-constrained paradigms and do not account for all of these processes, in particular multisensory integration. Following the Mobile Brain/Body Imaging approach, we aimed to explore the cortical correlates of landmark-based navigation in actively behaving young adults solving a Y-maze task in immersive virtual reality. EEG analysis identified a set of brain areas matching the state-of-the-art brain imaging literature on landmark-based navigation. Spatial behavior in mobile conditions additionally involved sensorimotor areas related to motor execution and proprioception, which are usually overlooked in static fMRI paradigms. As expected, we located a cortical source in or near the posterior cingulate, in line with the engagement of the retrosplenial complex in spatial reorientation. Consistent with its role in visuo-spatial processing and coding, we observed an alpha-power desynchronization while participants gathered visual information. We also hypothesized behavior-dependent modulations of the cortical signal during navigation. Despite finding few differences between the encoding and retrieval phases of the task, we identified transient time-frequency patterns attributed, for instance, to attentional demand, reflected in the alpha/gamma range, or to memory workload, in the delta/theta range. We confirm that combining mobile high-density EEG and biometric measures can help unravel the brain structures and the neural modulations subtending ecological landmark-based navigation.
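    The alpha-power desynchronization reported here is typically quantified as a relative band-power change against a baseline window. The sketch below shows the basic computation; the Welch estimator, the 8-12 Hz band and the simulated signals are assumptions for illustration and not the study's actual pipeline.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, band=(8.0, 12.0)):
    """Mean power spectral density of a 1-D EEG signal inside `band`."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def erd_percent(task_eeg, baseline_eeg, fs, band=(8.0, 12.0)):
    """Event-related desynchronization: relative band-power change (%)
    in a task window compared with a baseline window (negative = drop)."""
    p_task = band_power(task_eeg, fs, band)
    p_base = band_power(baseline_eeg, fs, band)
    return 100.0 * (p_task - p_base) / p_base

# Illustrative use: simulated 250 Hz EEG with attenuated alpha during "task".
rng = np.random.default_rng(5)
fs = 250
t = np.arange(0, 4, 1 / fs)
baseline = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)
task = 1.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)
print(f"Alpha ERD: {erd_percent(task, baseline, fs):.1f} %")
```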

    Effect of Dravidian vernacular, English and Hindi during onscreen reading text: A physiological, subjective and objective evaluation study

    Multilingualism has become an integral part of our present lifestyle. India has twenty-two registered official languages, with English and Hindi the most widely used for official activities across the nation. As both of these languages are introduced later in life, it was hypothesised that reading comprehension would be better and faster if the native medium were used. The present study therefore aimed to evaluate differences in performance when using one of four Indian Dravidian vernaculars (Tamil, Telugu, Kannada and Malayalam) versus two non-vernacular languages (English and Hindi) for an onscreen reading task. A multi-dimensional approach combining physiological (eye movement recording), subjective (Language Experience and Proficiency Questionnaire, LEAP-Q; legibility rating) and objective (reading time and word processing rate) measurements was used to quantify the effects. Forty-four Indian infantry soldiers from each of the Dravidian language groups participated in the study. Volunteers read aloud two simple story passages onscreen in their respective vernacular and in the non-vernacular languages, using both time-bound and self-paced reading modes. Reading time was lower and word processing rate was higher for the vernaculars than for the non-vernacular languages. Fixation counts in both reading modes also indicated better performance with the vernaculars, and legibility scores were higher for the Dravidian languages than for the others. Results indicated that onscreen text was read fastest in the vernacular media, followed by English and Hindi. The use of vernaculars in the onscreen text displays of high-density workstations may therefore be recommended for easier and faster comprehension.
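    The objective measures used in the study reduce to simple arithmetic over trial-level data; the sketch below computes reading time, word processing rate (words per minute) and fixation counts for a vernacular versus non-vernacular contrast. All column names and values are hypothetical and are not data from the study.

```python
import pandas as pd

# Hypothetical per-trial results; values are made up for illustration.
trials = pd.DataFrame({
    "language":       ["Tamil", "Tamil", "English", "English", "Hindi", "Hindi"],
    "category":       ["vernacular"] * 2 + ["non-vernacular"] * 4,
    "n_words":        [180, 180, 180, 180, 180, 180],
    "reading_time_s": [62.0, 58.5, 74.2, 79.8, 88.1, 92.4],
    "fixation_count": [155, 149, 198, 205, 231, 240],
})

# Word processing rate (words per minute) per trial, then the mean per
# category, mirroring the vernacular vs. non-vernacular comparison.
trials["wpm"] = trials["n_words"] / (trials["reading_time_s"] / 60.0)
summary = (trials.groupby("category")[["reading_time_s", "wpm", "fixation_count"]]
                 .mean().round(1))
print(summary)
```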