
    Cross-Participant EEG-Based Assessment of Cognitive Workload Using Multi-Path Convolutional Recurrent Neural Networks

    Applying deep learning methods to electroencephalographic (EEG) data for cognitive state assessment has yielded improvements over previous modeling methods. However, research focused on cross-participant cognitive workload modeling using these techniques is underrepresented. We study the problem of cross-participant state estimation in a non-stimulus-locked task environment, where a trained model is used to make workload estimates on a new participant who is not represented in the training set. Using experimental data from the Multi-Attribute Task Battery (MATB) environment, a variety of deep neural network models are evaluated in the trade-space of computational efficiency, model accuracy, variance, and temporal specificity, yielding three important contributions: (1) The performance of ensembles of individually trained models is statistically indistinguishable from group-trained methods at most sequence lengths. These ensembles can be trained for a fraction of the computational cost compared to group-trained methods and enable simpler model updates. (2) While increasing temporal sequence length improves mean accuracy, it is not sufficient to overcome distributional dissimilarities between individuals’ EEG data, as it results in statistically significant increases in cross-participant variance. (3) Compared to all other networks evaluated, a novel convolutional-recurrent model using multi-path subnetworks and bi-directional, residual recurrent layers resulted in statistically significant increases in predictive accuracy and decreases in cross-participant variance.
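    The multi-path convolutional-recurrent idea described above can be illustrated with a short sketch: parallel temporal-convolution paths with different kernel widths feed a stack of bi-directional recurrent layers with residual connections. The PyTorch sketch below is a minimal illustration under assumed layer sizes, kernel widths, and the use of GRUs; it is not the paper's exact architecture.

```python
# Hypothetical sketch of a multi-path convolutional-recurrent EEG model.
# Layer widths, kernel sizes, and the GRU choice are illustrative assumptions.
import torch
import torch.nn as nn


class ConvPath(nn.Module):
    """One temporal-convolution path over (batch, channels, time) EEG."""
    def __init__(self, in_ch, out_ch, kernel):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel, padding=kernel // 2),
            nn.BatchNorm1d(out_ch),
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)


class ResidualBiGRU(nn.Module):
    """Bi-directional GRU whose output is added back to its input."""
    def __init__(self, dim):
        super().__init__()
        self.rnn = nn.GRU(dim, dim // 2, batch_first=True, bidirectional=True)

    def forward(self, x):
        out, _ = self.rnn(x)          # (batch, time, dim)
        return x + out                # residual connection


class MultiPathConvRNN(nn.Module):
    def __init__(self, n_eeg_ch=64, n_classes=2, kernels=(3, 7, 15), width=32):
        super().__init__()
        self.paths = nn.ModuleList(ConvPath(n_eeg_ch, width, k) for k in kernels)
        dim = width * len(kernels)
        self.rnn_stack = nn.Sequential(ResidualBiGRU(dim), ResidualBiGRU(dim))
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):                 # x: (batch, n_eeg_ch, time)
        feats = torch.cat([p(x) for p in self.paths], dim=1)
        feats = feats.transpose(1, 2)     # -> (batch, time, features)
        feats = self.rnn_stack(feats)
        return self.head(feats[:, -1])    # classify from the last time step


logits = MultiPathConvRNN()(torch.randn(8, 1 * 64, 256))  # 8 windows, 64 channels, 256 samples
```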

    Advanced Biometrics with Deep Learning

    Biometrics such as fingerprint, iris, face, handprint, hand-vein, speech, and gait recognition have become commonplace as a means of identity management across a wide range of applications. Biometric systems follow a typical pipeline composed of separate preprocessing, feature extraction, and classification stages. Deep learning, as a data-driven representation learning approach, has been shown to be a promising alternative to conventional data-agnostic, handcrafted preprocessing and feature extraction for biometric systems. Furthermore, deep learning offers an end-to-end learning paradigm that unifies preprocessing, feature extraction, and recognition, based solely on biometric data. This Special Issue has collected 12 high-quality, state-of-the-art research papers that deal with challenging issues in advanced biometric systems based on deep learning. The 12 papers can be divided into 4 categories according to biometric modality: namely, face biometrics, medical electronic signals (EEG and ECG), voice print, and others.

    Breaking Down the Barriers To Operator Workload Estimation: Advancing Algorithmic Handling of Temporal Non-Stationarity and Cross-Participant Differences for EEG Analysis Using Deep Learning

    This research focuses on two barriers to using EEG data for workload assessment: day-to-day variability and cross-participant applicability. Several signal processing techniques and deep learning approaches are evaluated in multi-task environments. These methods account for temporal, spatial, and frequential data dependencies. Variance of frequency-domain power distributions for cross-day workload classification is statistically significant. Skewness and kurtosis are not significant in an environment absent workload transitions, but are salient with transitions present. LSTMs improve day-to-day feature stationarity, decreasing error by 59% compared to previous best results. A multi-path convolutional recurrent model using bi-directional, residual recurrent layers significantly increases predictive accuracy and decreases cross-participant variance. Deep learning regression approaches are applied to a multi-task environment with workload transitions. Accounting for temporal dependence significantly reduces error and increases correlation compared to baselines. Visualization techniques for LSTM feature saliency are developed to understand EEG analysis model biases.
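    As a rough companion to the LSTM results summarized above, the sketch below shows a two-layer LSTM regressor over per-window EEG feature vectors, together with a simple input-gradient saliency map of the kind used to inspect what a recurrent workload model attends to. The feature dimensionality, window counts, and the specific gradient-based saliency choice are assumptions for illustration, not the dissertation's implementation.

```python
# Minimal sketch, assuming band-power-style feature vectors per EEG window.
import torch
import torch.nn as nn


class WorkloadLSTM(nn.Module):
    def __init__(self, n_features=128, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, 1)   # continuous workload estimate

    def forward(self, x):                 # x: (batch, windows, n_features)
        h, _ = self.lstm(x)
        return self.out(h[:, -1]).squeeze(-1)


model = WorkloadLSTM()
x = torch.randn(4, 30, 128, requires_grad=True)   # 4 sequences of 30 windows
model(x).sum().backward()
saliency = x.grad.abs().mean(dim=0)   # (windows, features): which inputs mattered most
```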

    Predicting Humans’ Identity and Mental Load from EEG: Performed by AI

    EEG-based brain machine/computer interfaces (BMIs/BCIs) have a wide range of clinical and non-clinical applications. Mental workload (MW) classification, emotion recognition, motor imagery, seizure detection, and sleep stage scoring are among the active BCI research areas. One of the relatively new BCI areas is EEG-based human subject recognition (i.e., EEG biometrics). Several challenges still need to be addressed to design a successful EEG-based biometric model applicable to real-world environments. First, there is a need for a protocol that can elicit individual-dependent EEG responses in a short period of time. A classification algorithm with high generalization power is also required to deal with the EEG signal classification task; the latter is a common challenge for all EEG-based BCI paradigms, given the non-stationary nature of EEG signals and the small size of EEG datasets. In addition, to build a stable EEG biometric model, the effects of human mental states (e.g., emotion, mental load) on model performance need to be carefully examined. In this thesis, a new protocol for EEG biometrics has been proposed. The proposed protocol, called the “N-back task”, is based on human working memory, and the experimental results obtained in this thesis show that the EEG signals elicited by the N-back task contain subject-specific features, even for very short time intervals. It has also been shown that all three load levels of the typical N-back task are capable of evoking subject-specific EEG features. As a result, the N-back task can be used as a protocol having more than one mode (i.e., a cancelable protocol), which comes with added security benefits. The EEG signals evoked by the N-back task have been used to train a compact convolutional neural network called EEGNet. A configuration of EEGNet having 16 temporal and 2 spatial filters has reached an identification accuracy of approximately 97% using data instances as short as 1.1 s for a pool of 26 subjects. To further improve the accuracy, a novel ensemble classifier has been designed in this thesis. The principle underlying the proposed ensemble is the “division and exclusion” of EEG channels guided by scalp locations. The ensemble classifier has improved the subject recognition rate from 97% to 99%, a statistically significant gain. The performance of the proposed ensemble model has also been assessed in the EEG-based MW classification paradigm, where the ensemble classifier outperformed the single EEGNet as well as a state-of-the-art classifier called WLnet in the challenging scenario of subject-independent (cross-subject) MW classification. The results suggest that the ensemble structure proposed in this thesis can generalize to different BCI paradigms. Finally, the effects of mental workload on the performance of EEG-based subject authentication models have been thoroughly explored in this thesis. The obtained results affirm that the MW of genuine and impostor subjects at the training and test phases has significant effects on both the false negative rate (FNR) and the false positive rate (FPR) of an authentication system. Different subjects have also shown different clusters of authentication behavior when affected by MW changes. This finding establishes the importance of human mental load in the design of real-world EEG authentication systems and introduces a new line of investigation for the EEG biometric community.
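    For orientation, the sketch below is an EEGNet-style compact CNN with F1 = 16 temporal filters and D = 2 spatial (depthwise) filters per temporal filter, roughly matching the configuration reported above. The kernel lengths, pooling sizes, dropout rates, channel count, and the assumed 128 Hz sampling rate (about 140 samples for a 1.1 s instance) are illustrative assumptions, not the thesis's exact settings.

```python
# EEGNet-style compact CNN sketch: F1 = 16 temporal filters, D = 2 spatial filters.
import torch
import torch.nn as nn


class EEGNetLike(nn.Module):
    def __init__(self, n_channels=32, n_samples=140, n_subjects=26,
                 f1=16, d=2, kern_len=64):
        super().__init__()
        f2 = f1 * d
        self.features = nn.Sequential(
            # temporal convolution: learns F1 frequency-selective filters
            nn.Conv2d(1, f1, (1, kern_len), padding=(0, kern_len // 2), bias=False),
            nn.BatchNorm2d(f1),
            # depthwise spatial convolution: D spatial filters per temporal filter
            nn.Conv2d(f1, f2, (n_channels, 1), groups=f1, bias=False),
            nn.BatchNorm2d(f2),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Dropout(0.25),
            # separable convolution block
            nn.Conv2d(f2, f2, (1, 16), padding=(0, 8), groups=f2, bias=False),
            nn.Conv2d(f2, f2, (1, 1), bias=False),
            nn.BatchNorm2d(f2),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
            nn.Dropout(0.25),
            nn.Flatten(),
        )
        with torch.no_grad():
            n_flat = self.features(torch.zeros(1, 1, n_channels, n_samples)).shape[1]
        self.classify = nn.Linear(n_flat, n_subjects)

    def forward(self, x):            # x: (batch, 1, channels, samples)
        return self.classify(self.features(x))


# ~1.1 s windows at an assumed 128 Hz sampling rate -> ~140 samples per instance
logits = EEGNetLike()(torch.randn(8, 1, 32, 140))
```

    The “division and exclusion” ensemble described above could then be approximated by training one such network per scalp region (each seeing only its subset of channels) and fusing their class probabilities, though the thesis's exact grouping and fusion rule are not reproduced here.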

    EEG-based emotion recognition using tunable Q wavelet transform and rotation forest ensemble classifier

    Emotion recognition by artificial intelligence (AI) is a challenging task. A wide variety of research has demonstrated the utility of audio, imagery, and electroencephalography (EEG) data for automatic emotion recognition. This paper presents a new automated emotion recognition framework that utilizes EEG signals. The proposed method is lightweight and consists of four major phases: a preprocessing phase, a feature extraction phase, a feature dimension reduction phase, and a classification phase. A discrete wavelet transform (DWT) based noise reduction method, multiscale principal component analysis (MSPCA), is utilized during the preprocessing phase, with a Symlets-4 filter used for noise reduction. A tunable Q wavelet transform (TQWT) is utilized as the feature extractor. Six different statistical methods are used for dimension reduction. In the classification step, a rotation forest ensemble (RFE) classifier is utilized with different base classification algorithms such as k-Nearest Neighbor (k-NN), support vector machine (SVM), artificial neural network (ANN), random forest (RF), and four different types of decision tree (DT) algorithms. The proposed framework achieves over 93% classification accuracy with RFE + SVM. The results clearly show that the proposed TQWT and RFE based emotion recognition framework is an effective approach for emotion recognition using EEG signals.
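    A minimal sketch of the feature side of this pipeline follows, under stated assumptions: `tqwt_decompose` is a hypothetical placeholder for a tunable Q wavelet transform implementation, the six statistics summarizing each sub-band (mean, standard deviation, skewness, kurtosis, median, RMS) are a common choice rather than the paper's enumerated set, and a scikit-learn SVM stands in for the rotation forest ensemble, which scikit-learn does not provide.

```python
# Sketch of TQWT sub-band statistics feeding a classifier; not the paper's code.
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def tqwt_decompose(signal, n_subbands=8):
    """Hypothetical placeholder: return TQWT sub-bands for one EEG channel."""
    raise NotImplementedError("Plug in a real TQWT implementation here.")


def subband_features(subbands):
    """Summarize each sub-band with six statistics (assumed feature set)."""
    feats = []
    for sb in subbands:
        feats += [np.mean(sb), np.std(sb), skew(sb), kurtosis(sb),
                  np.median(sb), np.sqrt(np.mean(sb ** 2))]
    return np.asarray(feats)


def extract_features(epochs):
    """epochs: (n_trials, n_samples) single-channel EEG segments."""
    return np.vstack([subband_features(tqwt_decompose(e)) for e in epochs])


clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# clf.fit(extract_features(train_epochs), train_labels)   # usage once TQWT is supplied
```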