
    FACT: Learning Governing Abstractions Behind Integer Sequences

    Integer sequences are of central importance to the modeling of concepts admitting complete finitary descriptions. We introduce a novel view on the learning of such concepts and lay down a set of benchmarking tasks aimed at conceptual understanding by machine learning models. These tasks indirectly assess a model's ability to abstract, and challenge it to reason both interpolatively and extrapolatively from the knowledge gained by observing representative examples. To further aid research in knowledge representation and reasoning, we present FACT, the Finitary Abstraction Comprehension Toolkit. The toolkit comprises a large dataset of integer sequences with both organic and synthetic entries, a library for data pre-processing and generation, a set of model performance evaluation tools, and a collection of baseline model implementations, enabling future advancements with ease.
    Accepted to the 36th Conference on Neural Information Processing Systems (NeurIPS 2022) Track on Datasets and Benchmarks. 37 pages.

    Electrode clustering and bandpass analysis of EEG data for gaze estimation

    In this study, we validate the findings of previously published papers showing the feasibility of Electroencephalography (EEG)-based gaze estimation. Moreover, we extend previous research by demonstrating that, with only a slight drop in model performance, we can significantly reduce the number of electrodes, indicating that a high-density, expensive EEG cap is not necessary for EEG-based eye tracking. Using data-driven approaches, we establish which electrode clusters impact gaze estimation and how different types of EEG data preprocessing affect the models' performance. Finally, we also inspect which recorded frequencies are most important for the defined tasks.
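The electrode-reduction and band analyses described above can be sketched in a few lines. This is a minimal illustrative example using NumPy only, with a crude FFT-mask bandpass and hypothetical cluster names; it is not the authors' pipeline:

```python
import numpy as np

def bandpass(signal, fs, low, high):
    """Crude bandpass: zero out FFT bins outside [low, high] Hz."""
    freqs = np.fft.rfftfreq(signal.shape[-1], d=1.0 / fs)
    spectrum = np.fft.rfft(signal, axis=-1)
    spectrum[..., (freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spectrum, n=signal.shape[-1], axis=-1)

def cluster_power(eeg, fs, clusters, band=(8.0, 13.0)):
    """Mean band power per electrode cluster.

    eeg: (n_channels, n_samples) array; clusters: dict mapping a
    cluster name (illustrative, e.g. "frontal") to channel indices.
    Comparing cluster powers across bands is one data-driven way to
    rank which electrode groups carry task-relevant signal.
    """
    filtered = bandpass(eeg, fs, *band)
    power = np.mean(filtered ** 2, axis=-1)  # per-channel band power
    return {name: float(power[idx].mean()) for name, idx in clusters.items()}
```

In this sketch, a cluster whose band power dominates for a given frequency range is a candidate for a reduced electrode montage.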

    Predicting Gaze Position with Deep Learning of Electroencephalography Data

    The collection of eye gaze information is widely used in cognitive science and psychology. Moreover, many neuroscientific studies complement neuroimaging methods with eye-tracking technology to identify variations in attention, arousal and the participant's compliance with the task demands. To address limitations with conventional eye-tracking systems, recent studies have focused on leveraging advanced machine-learning techniques to compute gaze based on images from a webcam or images from Functional Magnetic Resonance Imaging (fMRI). While using a webcam to specify the eye gaze position requires an additional system and synchronization with auxiliary measures from the actual experiment that is even more cumbersome than in traditional eye-tracking systems, fMRI data acquisition is costly and does not provide temporal resolution at the level at which cognition takes place. In contrast, Electroencephalography (EEG) is a safe and cost-friendly method that directly measures the brain's electrical activity and enables measurement in clinical settings. However, an eye-tracking approach that offers gaze position estimation based on concurrently measured EEG is lacking. We address this shortcoming and show that gaze position can be restored by combining EEG activity and state-of-the-art machine learning. We use a dataset consisting of recordings from 400 healthy participants while they engage in tasks with varying complexity levels, resulting in EEG and EOG features for over 3 million gaze fixations. To address intersubject variability and different experimental setups, we introduce a calibration paradigm, allowing the trained model to efficiently represent each participant's fixation characteristics throughout the experiment. Including a standardized, time-efficient and straightforward protocol to calibrate future recorded data on the pre-trained algorithm will improve the model's sensitivity, accuracy and versatility.
    This work emphasizes the importance of eye tracking for the interpretation of EEG results and provides open-source software that is widely applicable in research and clinical settings.

    An Interpretable Attention-based Method for Gaze Estimation Using Electroencephalography

    Eye movements can reveal valuable insights into various aspects of human mental processes, physical well-being, and actions. Recently, several datasets have been made available that simultaneously record EEG activity and eye movements. This has triggered the development of various methods to predict gaze direction based on brain activity. However, most of these methods lack interpretability, which limits their technology acceptance. In this paper, we leverage a large dataset of simultaneously measured Electroencephalography (EEG) and eye tracking, proposing an interpretable model for gaze estimation from EEG data. More specifically, we present a novel attention-based deep learning framework for EEG signal analysis, which allows the network to focus on the most relevant information in the signal and discard problematic channels. Additionally, we provide a comprehensive evaluation of the presented framework, demonstrating its superiority over current methods in terms of accuracy and robustness. Finally, the study presents visualizations that explain the results of the analysis and highlights the potential of attention mechanisms for improving the efficiency and effectiveness of EEG data analysis in a variety of applications.
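The channel-attention idea behind such interpretability can be sketched very simply: a softmax over per-electrode scores yields weights that both pool the signal and show which channels the model relies on. In this bare-bones NumPy sketch the scores are given rather than learned, and all names are illustrative, not the paper's implementation:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def channel_attention(eeg, scores):
    """Attention-weighted pooling over EEG channels.

    eeg: (n_channels, n_samples) array; scores: one (here hand-set,
    in a real model learned) relevance score per channel.
    Returns the pooled time series and the attention weights; the
    weights can be inspected to see which electrodes dominate, and
    near-zero weights effectively discard problematic channels.
    """
    alpha = softmax(scores)   # (n_channels,) weights summing to 1
    pooled = alpha @ eeg      # (n_samples,) weighted channel mixture
    return pooled, alpha
```

Inspecting `alpha` after training is the interpretability hook: unlike a dense projection, the weights are nonnegative and sum to one, so they read directly as a relevance distribution over electrodes.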

    Using Deep Learning to Classify Saccade Direction from Brain Activity

    We present first insights into our project, which aims to develop an Electroencephalography (EEG) based eye tracker. Our approach is tested and validated on a large dataset of simultaneously recorded EEG and infrared video-based eye tracking, the latter serving as ground truth. We compared several state-of-the-art neural network architectures for time series classification, including InceptionTime and EEGNet, and investigated other architectures such as convolutional neural networks (CNNs) with Xception modules and a Pyramidal CNN. We prepared and tested these architectures on our rich dataset and, after hyperparameter tuning, obtained a remarkable accuracy of 94.8% on left/right saccade direction classification with the InceptionTime network.
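For intuition, the left/right distinction such networks learn can be caricatured with a single lateral-asymmetry feature from two channels near the eyes. The sign convention below is purely an assumption for illustration; the actual classifiers operate on the full multichannel signal:

```python
import numpy as np

def saccade_direction(ch_left, ch_right):
    """Toy left/right saccade classifier from two lateral channels.

    Compares the start and end of the left-minus-right channel
    difference; a positive drift is labelled 'left' by assumption
    (real electrode polarity conventions vary by setup).
    """
    diff = np.asarray(ch_left) - np.asarray(ch_right)
    q = max(1, diff.size // 4)
    drift = diff[-q:].mean() - diff[:q].mean()
    return "left" if drift > 0 else "right"
```

A threshold on one hand-crafted feature like this is a useful baseline to beat; the deep architectures compared in the abstract learn far richer temporal and spatial features from the raw signal.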

    A Deep Learning Approach for the Segmentation of Electroencephalography Data in Eye Tracking Applications

    The collection of eye gaze information provides a window into many critical aspects of human cognition, health and behaviour. Additionally, many neuroscientific studies complement the behavioural information gained from eye tracking with the high temporal resolution and neurophysiological markers provided by electroencephalography (EEG). One of the essential eye-tracking software processing steps is the segmentation of the continuous data stream into events relevant to eye-tracking applications, such as saccades, fixations, and blinks. Here, we introduce DETRtime, a novel framework for time-series segmentation that creates ocular event detectors that do not require an additionally recorded eye-tracking modality and rely solely on EEG data. Our end-to-end deep learning-based framework brings recent advances in Computer Vision to the forefront of time-series segmentation of EEG data. DETRtime achieves state-of-the-art performance in ocular event detection across diverse eye-tracking experiment paradigms. In addition, we provide evidence that our model generalizes well to the task of EEG sleep stage segmentation.
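Once a segmentation model has labelled every sample, turning that stream into discrete ocular events reduces to run-length encoding. A minimal sketch of that post-processing step (illustrative only; DETRtime itself predicts events end-to-end, so this is not the framework's code):

```python
def segments_from_labels(labels):
    """Collapse per-sample event labels (e.g. 'fixation', 'saccade',
    'blink') into (label, start, end) segments, with end exclusive.
    """
    segments = []
    start = 0
    for i in range(1, len(labels) + 1):
        # close the current segment at a label change or at the end
        if i == len(labels) or labels[i] != labels[start]:
            segments.append((labels[start], start, i))
            start = i
    return segments
```

Event-level metrics (how many saccades were found, how well their boundaries align with ground truth) are then computed on these segments rather than on raw per-sample accuracy, which is the evaluation granularity that matters for eye-tracking applications.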