
    Pre-Trained Language Models Augmented with Synthetic Scanpaths for Natural Language Understanding

    Human gaze data offer cognitive information that reflects natural language comprehension. Indeed, augmenting language models with human scanpaths has proven beneficial for a range of NLP tasks, including language understanding. However, the applicability of this approach is hampered because the abundance of text corpora is contrasted by a scarcity of gaze data. Although models for the generation of human-like scanpaths during reading have been developed, the potential of synthetic gaze data across NLP tasks remains largely unexplored. We develop a model that integrates synthetic scanpath generation with a scanpath-augmented language model, eliminating the need for human gaze data. Since the model's error gradient can be propagated throughout all parts of the model, the scanpath generator can be fine-tuned to downstream tasks. We find that the proposed model not only outperforms the underlying language model, but achieves a performance that is comparable to a language model augmented with real human gaze data. Our code is publicly available.
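    As a rough illustration of the kind of pipeline described above, the sketch below couples a toy differentiable scanpath generator with a language-model encoder and a task head, so that the downstream loss can back-propagate into the generator. The module names, dimensions, and the soft-attention formulation of the scanpath are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ScanpathGenerator(nn.Module):
    """Toy stand-in for a synthetic scanpath model: for each fixation step,
    it predicts a soft attention distribution over the input tokens."""
    def __init__(self, hidden_dim: int, num_fixations: int):
        super().__init__()
        self.query = nn.Parameter(torch.randn(num_fixations, hidden_dim))

    def forward(self, token_states: torch.Tensor) -> torch.Tensor:
        # token_states: (batch, seq_len, hidden)
        scores = torch.einsum("fh,bsh->bfs", self.query, token_states)
        return scores.softmax(dim=-1)        # soft scanpath: (batch, num_fix, seq_len)

class ScanpathAugmentedClassifier(nn.Module):
    """Encoder + soft synthetic scanpath + task head. Because the scanpath is
    differentiable, the task loss also fine-tunes the scanpath generator."""
    def __init__(self, encoder: nn.Module, hidden_dim: int,
                 num_fixations: int, num_classes: int):
        super().__init__()
        self.encoder = encoder               # e.g. a pre-trained transformer body
        self.scanpath = ScanpathGenerator(hidden_dim, num_fixations)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        states = self.encoder(token_embeddings)   # contextual token states
        attn = self.scanpath(states)              # (batch, num_fix, seq_len)
        fixated = attn @ states                   # fixation-wise token mixtures
        return self.head(fixated.mean(dim=1))     # pool over fixations, classify

# Toy usage with an identity encoder and random "token states".
model = ScanpathAugmentedClassifier(nn.Identity(), hidden_dim=64,
                                    num_fixations=8, num_classes=2)
logits = model(torch.randn(4, 20, 64))            # shape: (4, 2)
```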

    Fairness in Oculomotoric Biometric Identification

    Gaze patterns are known to be highly individual, and therefore eye movements can serve as a biometric characteristic. We explore aspects of the fairness of biometric identification based on gaze patterns. We find that while oculomotoric identification does not favor any particular gender and does not significantly favor any age range, it is unfair with respect to ethnicity. Moreover, fairness concerning ethnicity cannot be achieved by balancing the training data for the best-performing model.
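    A minimal sketch of the group-wise comparison such a fairness analysis implies: identification accuracy is computed separately per demographic group and the largest gap between groups is reported. The column names, the accuracy metric, and the random data below are purely hypothetical and not the paper's protocol.

```python
import numpy as np
import pandas as pd

def groupwise_accuracy(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Identification accuracy per demographic group.
    Expects a boolean column 'correct' (prediction == true identity)."""
    return df.groupby(group_col)["correct"].mean()

# Hypothetical results table: one row per test sample.
results = pd.DataFrame({
    "correct":   np.random.rand(1000) > 0.2,
    "gender":    np.random.choice(["f", "m"], 1000),
    "ethnicity": np.random.choice(["a", "b", "c"], 1000),
})

for col in ["gender", "ethnicity"]:
    acc = groupwise_accuracy(results, col)
    print(col, acc.to_dict(), "max gap:", round(acc.max() - acc.min(), 3))
```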

    Eyettention: An Attention-based Dual-Sequence Model for Predicting Human Scanpaths during Reading

    Eye movements during reading offer insights into both the reader's cognitive processes and the characteristics of the text that is being read. Hence, the analysis of scanpaths in reading has attracted increasing attention across fields, ranging from cognitive science and linguistics to computer science. In particular, eye-tracking-while-reading data has been argued to bear the potential to make machine-learning-based language models exhibit more human-like linguistic behavior. However, one of the main challenges in modeling human scanpaths in reading is their dual-sequence nature: the words are ordered following the grammatical rules of the language, whereas the fixations are ordered chronologically. As humans do not strictly read from left to right, but rather skip or refixate words and regress to previous words, the alignment of the linguistic and the temporal sequence is non-trivial. In this paper, we develop Eyettention, the first dual-sequence model that simultaneously processes the sequence of words and the chronological sequence of fixations. The alignment of the two sequences is achieved by a cross-sequence attention mechanism. We show that Eyettention outperforms state-of-the-art models in predicting scanpaths. We provide an extensive within- and across-dataset evaluation on different languages. An ablation study and a qualitative analysis support an in-depth understanding of the model's behavior.
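    A minimal sketch of a cross-sequence attention step, assuming a PyTorch setting in which fixation states act as queries over word states; the dimensions, module choices, and toy usage are illustrative and not taken from the Eyettention implementation.

```python
import torch
import torch.nn as nn

class CrossSequenceAttention(nn.Module):
    """Fixation states (queries) attend over word states (keys/values),
    aligning the chronological fixation sequence with the word order."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, fixation_states, word_states, word_padding_mask=None):
        # fixation_states: (batch, n_fix, dim), word_states: (batch, n_words, dim)
        aligned, weights = self.attn(
            query=fixation_states, key=word_states, value=word_states,
            key_padding_mask=word_padding_mask)
        return aligned, weights   # weights show which words each fixation attends to

# Toy usage: 8 fixations over a 12-word sentence, 64-dimensional states.
xattn = CrossSequenceAttention(dim=64)
fixations = torch.randn(2, 8, 64)
words = torch.randn(2, 12, 64)
aligned, w = xattn(fixations, words)    # aligned: (2, 8, 64), w: (2, 8, 12)
```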

    SP-EyeGAN: Generating Synthetic Eye Movement Data with Generative Adversarial Networks

    Neural networks that process the raw eye-tracking signal can outperform traditional methods that operate on scanpaths preprocessed into fixations and saccades. However, the scarcity of such data poses a major challenge. We therefore present SP-EyeGAN, a neural network that generates synthetic raw eye-tracking data. SP-EyeGAN consists of Generative Adversarial Networks; it produces a sequence of gaze angles indistinguishable from human micro- and macro-movements. We demonstrate how the generated synthetic data can be used to pre-train a model using contrastive learning. This model is fine-tuned on labeled human data for the task of interest. We show that for the task of predicting reading comprehension from eye movements, this approach outperforms the previous state of the art.
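    The sketch below outlines the two GAN components implied by the abstract: a generator mapping latent noise to a sequence of (horizontal, vertical) gaze angles and a discriminator scoring whether a sequence looks human-recorded. The recurrent architecture, sequence length, and layer sizes are assumptions rather than the SP-EyeGAN code; in a full pipeline, the generator's output would feed a contrastive pre-training stage.

```python
import torch
import torch.nn as nn

SEQ_LEN = 100   # gaze samples per generated window (assumed value)

class GazeGenerator(nn.Module):
    """Maps a latent noise vector to a sequence of 2-D gaze angles."""
    def __init__(self, latent_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.expand = nn.Linear(latent_dim, SEQ_LEN * hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)      # horizontal and vertical gaze angle

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        h = self.expand(z).view(z.size(0), SEQ_LEN, -1)
        h, _ = self.rnn(h)
        return self.out(h)                   # (batch, SEQ_LEN, 2)

class GazeDiscriminator(nn.Module):
    """Scores whether a gaze-angle sequence looks human-recorded."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.rnn = nn.GRU(2, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        _, h = self.rnn(seq)                 # final hidden state: (1, batch, hidden)
        return self.score(h.squeeze(0))      # one real/fake logit per sequence

# Toy usage: generate a batch of synthetic windows and score them.
gen, disc = GazeGenerator(), GazeDiscriminator()
fake = gen(torch.randn(8, 64))               # (8, 100, 2)
print(disc(fake).shape)                      # torch.Size([8, 1])
```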

    Selection of XAI Methods Matters: Evaluation of Feature Attribution Methods for Oculomotoric Biometric Identification

    Substantial advances in oculomotoric biometric identification have been made by deep neural networks that process non-aggregated time series data, replacing methods that operate on theoretically motivated engineered features. However, the interpretability of deep neural networks is not trivial and needs to be thoroughly investigated for future eye-tracking applications. Especially in medical or legal applications, explanations may be required alongside predictions. In this work, we apply several attribution methods to a state-of-the-art model for eye-movement-based biometric identification. To assess the quality of the generated attributions, this work focuses on the quantitative evaluation of a range of established metrics. We find that Layer-wise Relevance Propagation generates the least complex attributions, while DeepLIFT attributions are the most faithful. Due to the absence of a correlation between the attributions of these two methods, we advocate considering both for their potentially complementary attributions.
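    For readers who want to reproduce the flavour of such a comparison, the sketch below applies DeepLIFT and Layer-wise Relevance Propagation from the Captum library to a placeholder classifier over a flattened gaze window; the model, input size, identity classes, and the correlation check are all assumptions, not the paper's evaluation code.

```python
import torch
import torch.nn as nn
from captum.attr import DeepLift, LRP

# Placeholder identification model over a flattened gaze window
# (1000 samples x 2 gaze channels); 10 hypothetical identities.
model = nn.Sequential(
    nn.Linear(2000, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
).eval()

gaze = torch.randn(1, 2000)   # one flattened eye-movement sequence
target = 3                    # hypothetical identity class index

deeplift_attr = DeepLift(model).attribute(gaze, target=target)
lrp_attr = LRP(model).attribute(gaze, target=target)

# A simple comparison of the two attribution maps, e.g. their correlation.
stacked = torch.stack([deeplift_attr.detach().flatten(),
                       lrp_attr.detach().flatten()])
print("DeepLIFT vs LRP correlation:", torch.corrcoef(stacked)[0, 1].item())
```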

    Bridging the Gap: Gaze Events as Interpretable Concepts to Explain Deep Neural Sequence Models

    Recent work in XAI for eye-tracking data has evaluated the suitability of feature attribution methods to explain the output of deep neural sequence models for the task of oculomotoric biometric identification. These methods provide saliency maps to highlight important input features of a specific eye-gaze sequence. However, to date, the localization analysis of these attributions has lacked a quantitative approach across entire datasets. In this work, we employ established gaze event detection algorithms for fixations and saccades and quantitatively evaluate the impact of these events by determining their concept influence. Input features that belong to saccades are shown to be substantially more important than features that belong to fixations. By dissecting saccade events into sub-events, we are able to show that gaze samples close to the saccadic peak velocity are the most influential. We further investigate the effect of event properties such as saccadic amplitude or fixational dispersion on the resulting concept influence.
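    As a concrete illustration of what an established gaze event detection step can look like, the sketch below applies a simple velocity-threshold (I-VT) rule to label gaze samples as saccade or fixation; the sampling rate, threshold, and toy trace are assumed values, and the paper does not necessarily use this particular detector.

```python
import numpy as np

def ivt_events(x_deg: np.ndarray, y_deg: np.ndarray,
               sampling_rate_hz: float = 1000.0,
               velocity_threshold_deg_s: float = 30.0) -> np.ndarray:
    """Label each gaze sample as saccade (True) or fixation (False)
    using a simple velocity threshold (I-VT)."""
    dt = 1.0 / sampling_rate_hz
    vx = np.gradient(x_deg, dt)
    vy = np.gradient(y_deg, dt)
    speed = np.hypot(vx, vy)                 # angular velocity in deg/s
    return speed > velocity_threshold_deg_s

# Toy trace: two fixations joined by a fast saccade-like jump.
t = np.arange(0, 1.0, 0.001)
x = np.where(t < 0.5, 0.0, 8.0) + 0.05 * np.random.randn(t.size)
y = np.zeros_like(t)
is_saccade = ivt_events(x, y)
print("saccade samples:", int(is_saccade.sum()), "of", is_saccade.size)
```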

    Upgrade of the ultracold neutron source at the pulsed reactor TRIGA Mainz

    The performance of the upgraded solid deuterium ultracold neutron source at the pulsed reactor TRIGA Mainz is described. The current configuration stage comprises the installation of a He liquefier to run UCN experiments over long-term periods, the use of stainless steel neutron guides with improved transmission, as well as sputter-coated non-magnetic ⁵⁸NiMo alloy at the inside walls of the thermal bridge and the converter cup. The UCN yield was measured in a 'standard' UCN storage bottle (stainless steel) with a volume of 32 litres outside the biological shield at the experimental area, yielding UCN densities of 8.5/cm³, an increase by a factor of 3.5 compared to the former setup. The measured UCN storage curve is in good agreement with the predictions from a Monte Carlo simulation developed to model the source. The growth and formation of the solid deuterium converter during freeze-out are affected by the ortho/para ratio of the H₂ premoderator.

    High field superconducting properties of Ba(Fe1-xCox)2As2 thin films

    The investigated film grew phase-pure and highly textured, with in-plane and out-of-plane full widths at half maximum (FWHM) of 0.74° and 0.9°, respectively (Suppl. S1). The sample, however, does contain a large density of ab-planar defects, as revealed by transmission electron microscope (TEM) images of focused ion beam (FIB) cuts near the microbridges, Fig. 1. These defects are presumably stacking faults (i.e. missing FeAs layers) [20]. The reason for this defect formation (also observed on technical substrates [21]) is not fully understood. Possible reasons are a partial As loss during deposition [22] and relaxation processes in combination with the Fe buffer layer [23]. Estimating the distance between these intergrowths leads to values varying between 5 and 10 nm. Between the planar defects, an orientation contrast is visible in TEM (inset Fig. 1b), i.e. the brighter crystallites are slightly rotated either around (010) (out-of-plane spread) or around (001) (in-plane spread) and are enclosed by dislocation networks or small-angle grain boundaries (GBs). Since the crystallites are sandwiched between planar defects, an in-plane misorientation is most likely. The out-of-plane misorientation, on the other hand, is visible as a slight tilt of the ab-planar defects with respect to each other, especially in the upper part of the sample. No globular or columnar precipitates were found.