5 research outputs found

    SP-EyeGAN: Generating Synthetic Eye Movement Data with Generative Adversarial Networks

    Neural networks that process the raw eye-tracking signal can outperform traditional methods that operate on scanpaths preprocessed into fixations and saccades. However, the scarcity of such data poses a major challenge. We therefore present SP-EyeGAN, a neural network that generates synthetic raw eye-tracking data. SP-EyeGAN consists of Generative Adversarial Networks; it produces a sequence of gaze angles indistinguishable from human micro- and macro-movements. We demonstrate how the generated synthetic data can be used to pre-train a model using contrastive learning. This model is fine-tuned on labeled human data for the task of interest. We show that for the task of predicting reading comprehension from eye movements, this approach outperforms the previous state-of-the-art.
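The pre-training recipe the abstract describes (synthetic gaze sequences, a contrastive objective over two views, then fine-tuning on labels) can be illustrated with a toy sketch. Everything below is hypothetical: the random-walk generator stands in for the actual GAN, `encode` for the actual encoder, and the loss is a simplified in-batch NT-Xent-style contrastive loss, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_gaze(n_steps=500, saccade_rate=0.02, fix_noise=0.01, saccade_scale=5.0):
    """Toy stand-in for a GAN generator: a fixation/saccade random walk
    over horizontal/vertical gaze angles (degrees)."""
    steps = rng.normal(0.0, fix_noise, size=(n_steps, 2))      # micro-movements
    saccades = rng.random(n_steps) < saccade_rate              # occasional macro-movements
    steps[saccades] += rng.normal(0.0, saccade_scale, size=(saccades.sum(), 2))
    return np.cumsum(steps, axis=0)

def encode(seq, w):
    """Hypothetical encoder: mean of a nonlinear projection of gaze velocities."""
    vel = np.diff(seq, axis=0)
    return np.tanh(vel @ w).mean(axis=0)

def contrastive_loss(z1, z2, tau=0.1):
    """Simplified NT-Xent-style loss: each embedding's positive is the other
    view of the same sequence; all other batch members are negatives."""
    z = np.vstack([z1, z2])
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                             # exclude self-similarity
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # index of each row's positive
    logp = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -logp.mean()

w = rng.normal(size=(2, 8))                                    # toy encoder weights
batch = [synthetic_gaze() for _ in range(4)]
view1 = np.stack([encode(s[:-50], w) for s in batch])          # two temporal crops
view2 = np.stack([encode(s[50:], w) for s in batch])           # serve as the two views
loss = contrastive_loss(view1, view2)
```

After pre-training on such synthetic batches, the encoder weights would be reused and fine-tuned on the (much smaller) labeled human data, e.g. for reading-comprehension prediction.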

    Subjective Impression Prediction Using Complexity-Related Features

    Degree type: Doctoral dissertation (by coursework). Dissertation committee: (Chair) Associate Professor Toshihiko Yamasaki (University of Tokyo); Professor Kiyoharu Aizawa (University of Tokyo); Professor Shin'ichi Satoh (National Institute of Informatics); Professor Yoichi Sato (University of Tokyo); Professor Takeshi Naemura (University of Tokyo). University of Tokyo (東京大学).

    Saliency-based Bayesian Modeling of Dynamic Viewing of Static Scenes

    Abstract: Most analytic approaches for eye-tracking data focus either on identification of fixations and saccades, or on estimating saliency properties. Analyzing both aspects of visual attention simultaneously provides a more comprehensive view of the strategies used to process information. This work presents a method that incorporates both aspects in a unified Bayesian model to jointly estimate dynamic properties of scanpaths and a saliency map. Performance of the model is assessed on simulated data and on eye-tracking data from 15 children with autism spectrum disorder (ASD) and 13 typically developing (TD) control children. Saliency differences between the ASD and TD groups were found for both social and non-social images, but differences in dynamic gaze features were evident in only a subset of social images. These results are consistent with previous region-based analyses as well as previous fixation parameter models, suggesting that the new approach may provide synthesizing and statistical perspectives on eye-tracking analyses.
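One ingredient of such a model, estimating a saliency map from observed fixations with Bayesian machinery, can be illustrated with a much-simplified Dirichlet-multinomial sketch over a grid of image cells. This is a hypothetical illustration only; the paper's unified model additionally estimates scanpath dynamics jointly with the map.

```python
import numpy as np

rng = np.random.default_rng(1)

GRID = (8, 8)
alpha0 = np.ones(GRID)              # uniform Dirichlet prior over grid cells

def saliency_posterior(fixations, alpha=alpha0):
    """Posterior mean saliency map from fixation cell indices (row, col):
    Dirichlet prior + multinomial fixation counts -> Dirichlet posterior."""
    counts = np.zeros(GRID)
    for r, c in fixations:
        counts[r, c] += 1
    post = alpha + counts           # conjugate posterior parameters
    return post / post.sum()        # posterior mean; sums to 1

# Simulated fixations concentrated in one region of the image
fix = [(rng.integers(0, 3), rng.integers(0, 3)) for _ in range(40)]
smap = saliency_posterior(fix)
```

Group comparisons like those in the abstract would then contrast the posterior maps (and, in the full model, the dynamic scanpath parameters) between ASD and TD viewers.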
