3 research outputs found

    Classifying Numbers from EEG Data – Which Neural Network Architecture Performs Best?

    This paper presents a comparison of deep learning models for classifying P300 events, i.e., event-related potentials of the brain triggered during the human decision-making process. The evaluated models include a CNN, BiLSTM, Deep LSTM, CNN-LSTM, ConvLSTM, and LSTM with attention. The experiments were based on a large publicly available EEG dataset of school-age children performing the “Guess the number” experiment. Several hyperparameter choices were experimentally investigated, resulting in 30 different models included in the comparison. Ten models with good performance on the validation set were also automatically optimized with grid search. All models were then evaluated on held-out test data using Monte Carlo cross-validation with 30 iterations. The best-performing model was the Deep LSTM with an accuracy of 77.1%, followed by the baseline CNN at 76.1%. A significance test using the 5x2 cross-validation paired t-test showed that no model was significantly better than the baseline. We recommend experimenting with other architectures such as Inception, ResNet, and Graph Convolutional Networks.
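The 5x2 cross-validation paired t-test mentioned above is Dietterich's procedure: two classifiers are compared on five repetitions of 2-fold cross-validation, and the per-fold accuracy differences are combined into a t statistic with 5 degrees of freedom. A minimal sketch of the statistic (the `diffs` layout and function name are illustrative, not from the paper):

```python
import numpy as np

def five_by_two_cv_t(diffs):
    """Dietterich's 5x2cv paired t-test statistic.

    diffs: shape (5, 2) -- accuracy difference (model A - model B)
    for each of the two folds in each of 5 repetitions.
    Returns t; compare against a t-distribution with 5 dof.
    """
    diffs = np.asarray(diffs, dtype=float)
    mean_per_rep = diffs.mean(axis=1)                               # p-bar_i
    var_per_rep = ((diffs - mean_per_rep[:, None]) ** 2).sum(axis=1)  # s_i^2
    # numerator is the difference from the very first fold, per Dietterich
    return diffs[0, 0] / np.sqrt(var_per_rep.mean())
```

A |t| exceeding roughly 2.571 (the two-sided 5% critical value at 5 dof) would indicate a significant difference; the paper reports that no model crossed this bar against the CNN baseline.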

    Deep Learning on VR-Induced Attention

    Some evidence suggests that virtual reality (VR) approaches may lead to greater attentional focus than experiencing the same scenarios on a computer monitor. The aim of this study is to differentiate attention levels captured during a perceptual discrimination task presented on two different viewing platforms, a standard personal computer (PC) monitor and head-mounted-display (HMD) VR, using a well-described electroencephalography (EEG)-based measure (parietal P3b latency) and a deep learning-based measure (EEG features extracted by a compact convolutional neural network, EEGNet, and visualized by a gradient-based relevance attribution method, DeepLIFT). Twenty healthy young adults participated in this perceptual discrimination task, in which, according to a spatial cue, they were required to discriminate either "Target" or "Distractor" stimuli on the screen of each viewing platform. Experimental results show that the EEGNet-based classification accuracies are highly correlated with the p-values from the statistical analysis of P3b latency. Moreover, the visualized EEG features are neurophysiologically interpretable. This study provides the first visualized deep learning-based EEG features captured during an HMD-VR-based attentional task.
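The parietal P3b latency measure used above is conventionally taken as the time of the peak amplitude inside a post-stimulus search window on a parietal channel. A minimal sketch of that extraction (the window bounds and function name are illustrative assumptions, not the study's exact parameters):

```python
import numpy as np

def p3b_latency(erp, times, window=(0.25, 0.5)):
    """Peak latency of the P3b component.

    erp:    1-D averaged ERP from a parietal channel (e.g. Pz)
    times:  sample times in seconds, same length as erp
    window: hypothetical post-stimulus search window in seconds;
            the actual bounds depend on the paradigm.
    Returns the time (s) of the maximum amplitude in the window.
    """
    mask = (times >= window[0]) & (times <= window[1])
    peak_idx = np.argmax(erp[mask])
    return times[mask][peak_idx]
```

In practice, toolkits such as MNE-Python provide equivalent peak-finding on averaged evoked responses; the point here is only that the latency is a window-constrained argmax over the ERP.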

    Interpretable Convolutional Neural Networks for Decoding and Analyzing Neural Time Series Data

    Machine learning is widely adopted to decode multi-variate neural time series, including electroencephalographic (EEG) and single-cell recordings. Recent solutions based on deep learning (DL) outperformed traditional decoders by automatically extracting relevant discriminative features from raw or minimally pre-processed signals. Convolutional Neural Networks (CNNs) have been successfully applied to EEG and are the most common DL-based EEG decoders in the state-of-the-art (SOA). However, the current research is affected by some limitations. SOA CNNs for EEG decoding usually exploit deep and heavy structures with the risk of overfitting small datasets, and architectures are often defined empirically. Furthermore, CNNs are mainly validated by designing within-subject decoders. Crucially, the automatically learned features mainly remain unexplored; conversely, interpreting these features may be of great value to use decoders also as analysis tools, highlighting neural signatures underlying the different decoded brain or behavioral states in a data-driven way. Lastly, SOA DL-based algorithms used to decode single-cell recordings rely on more complex, slower-to-train, and less interpretable networks than CNNs, and the use of CNNs with these signals has not been investigated. This PhD research addresses the previous limitations, with reference to P300 and motor decoding from EEG, and motor decoding from single-neuron activity. CNNs were designed light, compact, and interpretable. Moreover, multiple training strategies were adopted, including transfer learning, which could reduce training times, promoting the application of CNNs in practice. Furthermore, CNN-based EEG analyses were proposed to study neural features in the spatial, temporal, and frequency domains, and proved to better highlight and enhance relevant neural features related to P300 and motor states than canonical EEG analyses. Remarkably, these analyses could be used, in perspective, to design novel EEG biomarkers for neurological or neurodevelopmental disorders. Lastly, CNNs were developed to decode single-neuron activity, providing a better compromise between performance and model complexity.
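The within-subject versus cross-subject distinction raised above comes down to how trials are split: a within-subject decoder trains and tests on the same person's data, while a cross-subject decoder is typically validated leave-one-subject-out, holding out every trial of one subject per fold. A minimal sketch of that split (the function name and index layout are illustrative):

```python
import numpy as np

def loso_splits(subject_ids):
    """Leave-one-subject-out splits for cross-subject decoding.

    subject_ids: one subject label per trial.
    Yields (train_idx, test_idx) pairs, holding out all trials of
    one subject per fold, so the decoder is never tested on a
    subject it has seen during training.
    """
    subject_ids = np.asarray(subject_ids)
    for s in np.unique(subject_ids):
        test = np.flatnonzero(subject_ids == s)
        train = np.flatnonzero(subject_ids != s)
        yield train, test
```

scikit-learn's `LeaveOneGroupOut` implements the same idea; the point is that cross-subject validation measures generalization to unseen people, a harder and more practically relevant test than the within-subject setting.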