Journal of Eye Movement Research
    511 research outputs found

    Advancing dynamic-time warp techniques for correcting eye tracking data in reading source code

    Background: Automated eye-tracking data correction algorithms such as Dynamic-Time Warp have traditionally traded off the ability to handle regressions (jumps back) against the ability to handle distortions (fixation drift). At the same time, eye movement in code reading is characterized by non-linearity and frequent regressions. Objective: In this paper, we present a family of hybrid algorithms that aim to handle both regressions and distortions with high accuracy. Method: Through simulations with synthetic data, we replicate known eye movement phenomena to assess our algorithms against the Warp algorithm as a baseline. Furthermore, we use two real datasets to evaluate the algorithms in correcting data from reading source code, and we test whether the proposed algorithms generalize to correcting data from reading natural-language text. Results: Most of the proposed algorithms match or outperform the baseline Warp algorithm in correcting both synthetic and real data. We also show the prevalence of regressions in reading source code. Conclusion: Our results highlight the hybrid algorithms as an improvement over Dynamic-Time Warp in handling regressions.
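Dynamic-Time Warp correction of reading data aligns a recorded fixation sequence to the known layout of the stimulus text. A minimal sketch of the underlying alignment, assuming fixations are corrected by warping their y-coordinates onto the y-positions of text lines (function and variable names are illustrative, not the authors' implementation):

```python
# Sketch of classic DTW line assignment for reading fixations: align each
# fixation's vertical position to the known y-coordinate of a text line.
# This is the baseline the abstract's hybrid algorithms build on.

def dtw_align(fixation_ys, line_ys):
    """Return, for each fixation, the index of the text line it aligns to."""
    n, m = len(fixation_ys), len(line_ys)
    INF = float("inf")
    # cost[i][j]: best cumulative cost aligning first i fixations to first j lines
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(fixation_ys[i - 1] - line_ys[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stay on this line
                                 cost[i][j - 1],      # skip a line
                                 cost[i - 1][j - 1])  # advance both
    # Backtrack to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        moves = {(i - 1, j): cost[i - 1][j],
                 (i, j - 1): cost[i][j - 1],
                 (i - 1, j - 1): cost[i - 1][j - 1]}
        i, j = min(moves, key=moves.get)
    assignment = {}
    for fi, li in path:
        assignment.setdefault(fi, li)  # one line per fixation
    return [assignment[k] for k in range(n)]
```

For example, fixations at y = [100, 102, 151, 148, 200] on lines at y = [100, 150, 200] align to lines [0, 0, 1, 1, 2], snapping drifted fixations back to their most plausible line. Note this plain formulation handles drift well but not regressions, which is the trade-off the abstract addresses.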

    Analysis of risk factors associated with pre-myopia among primary school students in the Mianyang Science City Area

    Objectives: To determine the prevalence rate of pre-myopia among primary school students in the Mianyang Science City Area, analyze its associated risk factors, and thereby provide a reference for local authorities formulating policies on the prevention and control of myopia in primary school students. Methods: From September to October 2021, our research group used cluster sampling to obtain the vision levels of primary school students in the Science City Area via a diopter test. In addition, questionnaires were distributed to identify risk factors associated with pre-myopia. Through statistical analysis, we identified the main risk factors for pre-myopia and propose appropriate interventions. Results: The prevalence rate of pre-myopia among primary school students in the Science City Area was 45.27% (1020/2253): 43.82% among boys and 46.92% among girls, with no statistically significant difference between the sexes (χ² = 2.171, P = 0.141). A linear trend test showed that the prevalence rate of pre-myopia tends to decrease with increasing age (Z = 296.521, P < 0.001). Logistic regression analysis identified the main risk factors for pre-myopia as having at least one parent with myopia, spending less than 2 hours a day outdoors, using the eyes continuously for more than 1 hour, looking at electronic screens for more than 2 hours, and improper reading and writing posture. Conclusion: The Science City Area has a high prevalence rate of pre-myopia among primary school students. It is proposed that students, schools, families, and local authorities work together to increase time spent outdoors, reduce screen time, and develop healthy eye-use habits.
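The boys-versus-girls comparison above is a Pearson chi-square test on a 2×2 contingency table (pre-myopic / not pre-myopic by sex). A minimal sketch of the statistic, using hypothetical counts rather than the study's data:

```python
# Pearson chi-square statistic (no continuity correction) for a 2x2 table
#   [[a, b],
#    [c, d]]
# e.g. rows = sex, columns = pre-myopic / not pre-myopic. Counts below are
# made up for illustration; the study's per-sex counts are not given.

def chi_square_2x2(a, b, c, d):
    """Closed-form Pearson chi-square for a 2x2 contingency table."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den
```

With equal proportions in both rows the statistic is 0; larger values indicate a stronger association, and the reported P-value comes from comparing the statistic to a χ² distribution with one degree of freedom.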

    Dynamics of eye dominance behavior in virtual reality

    Prior research has shown that sighting eye dominance is a dynamic behavior dependent on horizontal viewing angle. Virtual reality (VR) offers high flexibility and control for studying eye movement and human behavior, yet eye dominance has received little attention within this domain. In this work, we replicate Khan and Crawford’s (2001) original study in VR to confirm their findings within this specific context. Additionally, this study extends its scope to examine alignment with objects presented at greater depth in the visual field. Our results align with the previous findings and remain consistent when targets are presented at greater distances in the virtual scene. Using greater target distances presents opportunities to investigate alignment with objects at varying depths, providing greater flexibility for the design of methods that infer eye dominance from interaction in VR.

    Intelligent evaluation method for design education and comparison research between visualizing heat-maps of class activation and eye-movement

    The evaluation of design results plays a crucial role in the development of design. This study presents a design-work evaluation system for design education that assists design instructors in conducting objective evaluations. An automatic design evaluation model based on convolutional neural networks (CNNs) was established, enabling intelligent evaluation of student design works. During the evaluation process, the class activation map (CAM) is obtained. Simultaneously, an eye-tracking experiment was designed to collect gaze data and generate eye-tracking heat maps. By comparing the heat maps with the CAMs, we explored the correlation between the points human evaluators attend to during design evaluation and those emphasized by the CNN-based intelligent evaluation. The experimental results indicate a degree of correlation between humans and the CNN in the key points they focus on when conducting an evaluation; however, there are significant differences in background observation. The results demonstrate that the CNN-based intelligent evaluation model can automatically evaluate product design works and effectively classify and predict design-product images. The comparison shows a correlation between artificial intelligence and the subjective evaluation of human eyes in evaluation strategy. Introducing artificial intelligence into design evaluation for education has strong potential to promote the development of design education.
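Comparing an eye-tracking heat map with a class activation map amounts to measuring the similarity of two 2-D attention maps over the same image. One common choice (an illustrative assumption here, not necessarily the study's metric) is the Pearson correlation of the flattened maps:

```python
# Pearson correlation between two equally sized 2-D maps, e.g. a gaze heat map
# and a CAM resized to the same grid. A value near 1 means the two attention
# distributions emphasize the same regions.
import math

def heatmap_correlation(map_a, map_b):
    """Pearson correlation between two 2-D maps given as lists of rows."""
    xs = [v for row in map_a for v in row]
    ys = [v for row in map_b for v in row]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Identical maps score 1.0 and perfectly inverted maps score -1.0; the "significant differences in background observation" noted above would show up as regions where the two maps disagree despite overall positive correlation.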

    Computational approaches to apply the String Edit Algorithm to create accurate visual scan paths

    Eye movement detection algorithms (e.g., I-VT) require the selection of thresholds to identify eye fixations and saccadic movements from gaze data. The choice of threshold is important, as thresholds that are too low or too high may fail to accurately identify eye fixations and saccades. An inaccurate threshold might also affect the resulting visual scan path, the time-ordered sequence of eye fixations and saccades carried out by the participant. Commonly used approaches to evaluating threshold accuracy can be manually laborious, or require information about participants’ expected visual scan paths, which might not be available. To address this issue, we propose two computational approaches, labeled “between-participants comparisons” and “within-participants comparisons.” The approaches were evaluated using the open-source GazeBase dataset, which contains a bullseye-target tracking task in which participants were instructed to follow the movements of a bullseye target. The predetermined path of the bullseye target enabled us to evaluate our proposed approaches against the expected visual scan path. The approaches identified threshold values (220°/s and 210°/s) that were 83% similar to the expected visual scan path, outperforming a 30°/s benchmark threshold (41.5%). These methods might assist researchers in identifying accurate threshold values for the I-VT algorithm and potentially other eye movement detection algorithms.
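The two building blocks named above can be sketched briefly: I-VT labels each gaze sample by comparing its angular velocity to a threshold, and the String Edit Algorithm scores how similar two resulting scan path strings are. This is an illustrative sketch, not the authors' code; the threshold and labels are made up:

```python
# I-VT: samples with angular velocity at or above the threshold (deg/s) are
# saccades ('S'), the rest are fixations ('F').
def ivt_classify(velocities, threshold):
    """Label each gaze sample 'F' (fixation) or 'S' (saccade)."""
    return ["S" if v >= threshold else "F" for v in velocities]

# String-edit (Levenshtein) similarity between two scan path strings,
# expressed as a percentage, as in the 83% vs. 41.5% figures above.
def scanpath_similarity(a, b):
    """Percent similarity between two scan path strings via edit distance."""
    n, m = len(a), len(b)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i          # deletions
    for j in range(m + 1):
        d[0][j] = j          # insertions
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # delete
                          d[i][j - 1] + 1,        # insert
                          d[i - 1][j - 1] + sub)  # substitute/match
    return 100.0 * (1 - d[n][m] / max(n, m, 1))
```

For example, scan paths "ABCD" and "ABED" (one substituted fixation target) score 75% similar; varying the I-VT threshold changes the scan path string and hence this score, which is the quantity the proposed comparisons optimize.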

    Quantifying dwell time with location-based augmented reality: Dynamic AOI analysis on mobile eye tracking data with vision transformer

    Mobile eye tracking captures egocentric vision and is well suited for naturalistic studies. However, its data is noisy, especially when acquired outdoors with multiple participants over several sessions. Area-of-interest analysis on moving targets is difficult because (a) the camera and objects move non-linearly and may disappear from and reappear in the scene, and (b) off-the-shelf analysis tools are limited to linearly moving objects. As a result, researchers resort to time-consuming manual annotation, which limits the use of mobile eye tracking in naturalistic studies. We introduce a method based on a fine-tuned Vision Transformer (ViT) model for classifying frames with overlaid gaze markers. After fine-tuning for three epochs on a manually labelled training set comprising 1.98% (7845 frames) of our entire data, our model reached 99.34% accuracy as evaluated on hold-out data. We used the method to quantify participants’ dwell time on a tablet during an outdoor user test of a mobile augmented reality application for biodiversity education. We discuss the benefits and limitations of our approach and its potential application to other contexts.
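Once each video frame carries a gaze-target label from the classifier, the dwell-time computation itself is simple frame counting: labelled frames divided by the frame rate. The sketch below abstracts the fine-tuned ViT away as a list of per-frame labels (names and label values are illustrative):

```python
# Dwell-time quantification downstream of a per-frame gaze-target classifier.
# `frame_labels` stands in for the ViT's predictions, one label per frame.

def dwell_time_seconds(frame_labels, fps, target="tablet"):
    """Total dwell time on `target`: matching-frame count / frame rate."""
    return sum(1 for lab in frame_labels if lab == target) / fps

# Hypothetical 6-frame clip at 30 fps: 3 frames with gaze on the tablet.
labels = ["tablet", "tablet", "other", "tablet", "other", "other"]
dwell = dwell_time_seconds(labels, fps=30)  # 3 frames / 30 fps = 0.1 s
```

At this stage classifier errors translate directly into dwell-time error, which is why the reported 99.34% frame accuracy matters for the downstream measure.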

    Gender selection dilemma in FMCG advertising: Insights from eye-tracking research

    Selecting the gender of a celebrity endorser for fast-moving consumer goods (FMCG) advertising presents a strategic challenge. Previous research has predominantly concentrated on comparing celebrity spokespersons with non-celebrities, frequently neglecting the distinctions in effectiveness between male and female endorsers. This study addresses that gap by combining traditional and neuromarketing methodologies. Integrating eye-tracking technology via the RealEye platform with questionnaire-based surveys, the results indicate that female celebrities are more effective at capturing visual attention, whereas male celebrities are more effective at enhancing perceived trustworthiness. These findings are relevant to both academic research and commercial strategy, as they inform the selection of celebrity gender for maximizing FMCG advertising efficacy.

    Accounting for visual field abnormalities when using eye-tracking to diagnose reading problems in neurological degeneration

    State-of-the-art eye trackers provide valuable information for diagnosing reading problems by measuring and interpreting people’s gaze paths as they read through text. Abnormal conditions such as visual field defects, however, can seriously confound most existing methods for interpreting reading gaze patterns. Our objective was to investigate how visual field defects affect reading gaze-path patterns, so that the effects of such neurological pathologies can be explicitly incorporated into more comprehensive reading-diagnosis methodologies. We designed a cross-sectional, non-randomized pilot clinical study including 45 patients with various neurologic disorders and 30 normal controls. Participants underwent ophthalmologic, neuropsychologic, and eye-tracker examinations using two reading tests, of words and of numbers. Eye-tracker results showed that patients with brain damage and an altered visual field required more time to complete a reading-text test, fixating a greater number of times (p < 0.001), with longer fixations (p = 0.03) and a greater number of saccades (p = 0.04). Our study showed objective differences in eye movement characteristics in patients with neurological diseases and an altered visual field who complained of reading difficulties. These findings should be treated as a source of bias and deserve further investigation.

    Persistence of primitive reflexes associated with asymmetries in fixation and ocular motility values

    This cross-sectional study examined eye movement performance in patients aged 4 to 16 years. Eye movements were recorded with the Visagraph™ III device before and after therapy to inhibit four primitive reflexes: the asymmetric tonic neck reflex, symmetric tonic neck reflex, tonic labyrinthine reflex, and Moro reflex. The scores for the four primitive reflexes were then compared with the results for five variables: fixation maintenance, mean saccade size (%), motility excursions, fixations during excursions, and mean duration of fixations. The comparisons showed a significant reduction in both fixation maintenance and mean saccade size following inhibition of the four primitive reflexes. There was also a significant increase in ocular motility, while fixations per saccade and average fixation duration decreased significantly. Visual balance between the values of both eyes improved in all tests. These results suggest that the oculomotor improvements reflect the involvement of other maturational processes, such as the emergence and inhibition of primitive reflexes, this whole reorganization being key to future reading and attentional processes.

    Potential of a laser pointer contact lens to improve the reliability of video-based eye-trackers in indoor and outdoor conditions

    Many video-based eye trackers rely on detecting and tracking ocular features, a task that can be negatively affected by a number of individual or environmental factors. In this context, the aim of this study was to evaluate in practice how a scleral contact lens with two integrated near-infrared lasers (denoted CLP) could improve tracking robustness in difficult lighting conditions, particularly outdoors. We assessed the ability of the CLP, mounted on an artificial model eye, to detect the lasers and deduce a gaze position with an accuracy better than 1° under four lighting conditions (1 lx, 250 lx, 50 klx, and alternating 1 lx/250 lx). These results were compared with the ability of a commercial eye tracker (Pupil Core) to detect the pupil on human eyes with a confidence score of 0.9 or greater. The CLP provided good results in all conditions (both tracking accuracy and detection rates). In comparison, the Pupil Core performed well in all indoor conditions (99% detection) but failed in outdoor conditions (9.85% detection). In conclusion, the CLP shows strong potential to improve the reliability of video-based eye trackers in outdoor conditions by providing easily trackable features.
