Integrating EEG and MEG signals to improve motor imagery classification in brain-computer interfaces
We propose a fusion approach that combines features from simultaneously
recorded electroencephalographic (EEG) and magnetoencephalographic (MEG)
signals to improve classification performance in motor imagery-based
brain-computer interfaces (BCIs). We applied our approach to a group of 15
healthy subjects and found a significant classification performance enhancement
as compared to standard single-modality approaches in the alpha and beta bands.
Taken together, our findings demonstrate the advantage of considering
multimodal approaches as complementary tools for improving the impact of
non-invasive BCIs.
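Feature-level fusion of this kind can be sketched as follows: extract features from each modality separately, concatenate them, and train a single classifier on the joint vector. This is a minimal illustration on synthetic data, using band-power-like features and a nearest-centroid classifier; the feature definitions, classifier, and signal parameters are invented for the example and are not the authors' pipeline.

```python
import random

random.seed(0)

def band_power_features(trial, n_bands=2):
    # Illustrative stand-in: mean squared amplitude over segments of the
    # trial. A real pipeline would band-pass filter (e.g. alpha, beta) first.
    step = len(trial) // n_bands
    return [sum(x * x for x in trial[i * step:(i + 1) * step]) / step
            for i in range(n_bands)]

def fuse(eeg_trial, meg_trial):
    # Feature-level fusion: concatenate the per-modality feature vectors.
    return band_power_features(eeg_trial) + band_power_features(meg_trial)

def fit_centroids(X, y):
    cents = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        cents[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return cents

def predict(cents, x):
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return min(cents, key=lambda lab: dist(cents[lab], x))

def make_trial(scale, n=64):
    return [random.gauss(0, scale) for _ in range(n)]

# Synthetic two-class stand-in: class 1 trials carry more signal power.
X, y = [], []
for _ in range(40):
    X.append(fuse(make_trial(1.0), make_trial(1.0))); y.append(0)
    X.append(fuse(make_trial(2.0), make_trial(2.0))); y.append(1)

cents = fit_centroids(X, y)
acc = sum(predict(cents, x) == lab for x, lab in zip(X, y)) / len(X)
print(f"fused-feature training accuracy: {acc:.2f}")
```

The point of the sketch is the `fuse` step: a classifier trained on the concatenated EEG+MEG vector can exploit complementary information that either modality alone lacks.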
Learning Representations from EEG with Deep Recurrent-Convolutional Neural Networks
One of the challenges in modeling cognitive events from electroencephalogram
(EEG) data is finding representations that are invariant to inter- and
intra-subject differences, as well as to inherent noise associated with such
data. Herein, we propose a novel approach for learning such representations
from multi-channel EEG time-series, and demonstrate its advantages in the
context of a mental load classification task. First, we transform EEG activity
into a sequence of topology-preserving multi-spectral images, as opposed to
standard EEG analysis techniques that ignore such spatial information. Next, we
train a deep recurrent-convolutional network inspired by state-of-the-art video
classification to learn robust representations from the sequence of images. The
proposed approach is designed to preserve the spatial, spectral, and temporal
structure of EEG which leads to finding features that are less sensitive to
variations and distortions within each dimension. Empirical evaluation on the
cognitive load classification task demonstrated significant improvements in
classification accuracy over current state-of-the-art approaches in this field.
Comment: To be published as a conference paper at ICLR 201
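The channel-to-image transformation described above can be illustrated with a toy sketch: per-channel band powers are placed at 2D electrode coordinates, one plane per frequency band, so the resulting "image" preserves scalp topology instead of flattening channels into a vector. The montage coordinates and values below are invented for illustration; the paper itself projects 3D electrode locations onto a 2D plane and interpolates between them.

```python
# Hypothetical 5x5 scalp grid; coordinates are illustrative, not a real montage.
CHANNEL_XY = {"Fz": (2, 0), "C3": (1, 2), "Cz": (2, 2), "C4": (3, 2), "Pz": (2, 4)}
BANDS = ("theta", "alpha", "beta")

def to_image(band_power, size=5):
    """band_power: {channel: {band: power}} -> size x size x len(BANDS) image.

    Each electrode's band powers become one multi-spectral 'pixel' at its
    scalp location; empty grid cells stay zero (no interpolation here).
    """
    img = [[[0.0] * len(BANDS) for _ in range(size)] for _ in range(size)]
    for ch, (x, y) in CHANNEL_XY.items():
        for b, band in enumerate(BANDS):
            img[y][x][b] = band_power[ch][band]
    return img

powers = {ch: {"theta": 1.0, "alpha": 2.0, "beta": 0.5} for ch in CHANNEL_XY}
img = to_image(powers)
print(img[2][2])  # the Cz pixel holds that channel's three band powers
```

A sequence of such images (one per time window) is what a recurrent-convolutional network can then consume, exactly as video classifiers consume frame sequences.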
A Classification Model for Sensing Human Trust in Machines Using EEG and GSR
Today, intelligent machines \emph{interact and collaborate} with humans in a
way that demands a greater level of trust between human and machine. A first
step towards building intelligent machines that are capable of building and
maintaining trust with humans is the design of a sensor that will enable
machines to estimate human trust level in real-time. In this paper, two
approaches for developing classifier-based empirical trust sensor models are
presented that specifically use electroencephalography (EEG) and galvanic skin
response (GSR) measurements. Human subject data collected from 45 participants
is used for feature extraction, feature selection, classifier training, and
model validation. The first approach considers a general set of
psychophysiological features across all participants as the input variables and
trains a classifier-based model for each participant, resulting in a trust
sensor model based on the general feature set (i.e., a "general trust sensor
model"). The second approach considers a customized feature set for each
individual and trains a classifier-based model using that feature set,
resulting in improved mean accuracy but at the expense of an increase in
training time. This work represents the first use of real-time
psychophysiological measurements for the development of a human trust sensor.
Implications of the work, in the context of trust management algorithm design
for intelligent machines, are also discussed.
Comment: 20 page
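The paper's second, per-participant approach can be sketched as a simple feature-selection step: score each candidate psychophysiological feature by how well it separates the two trust classes for that individual, then keep the top-k and train on those. The scoring rule and data below are invented for illustration; the authors' actual feature-selection and classification methods may differ.

```python
import random

random.seed(1)

def separation_score(values, labels):
    # Crude univariate selection score: absolute difference of class means.
    a = [v for v, lab in zip(values, labels) if lab == 1]
    b = [v for v, lab in zip(values, labels) if lab == 0]
    return abs(sum(a) / len(a) - sum(b) / len(b))

def select_features(X, y, k=2):
    # "Customized feature set": keep the k most class-separating features
    # for this participant (the abstract's second approach).
    n_feat = len(X[0])
    scores = [separation_score([row[j] for row in X], y) for j in range(n_feat)]
    return sorted(range(n_feat), key=lambda j: scores[j], reverse=True)[:k]

# Hypothetical data: feature 0 (say, an EEG band power) tracks the trust
# label; features 1-2 (e.g. GSR drift terms) are pure noise.
X, y = [], []
for _ in range(60):
    label = random.randint(0, 1)
    X.append([label * 2.0 + random.gauss(0, 0.3),
              random.gauss(0, 1), random.gauss(0, 1)])
    y.append(label)

chosen = select_features(X, y, k=1)
print("selected feature indices:", chosen)
```

Repeating this selection per participant is what buys the improved mean accuracy the abstract reports, at the cost of extra per-individual training time.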
Data Augmentation for Deep-Learning-Based Electroencephalography
Background: Data augmentation (DA) has recently been shown to yield considerable performance gains for deep learning (DL): increased accuracy, improved stability, and reduced overfitting. Some electroencephalography (EEG) tasks suffer from a low samples-to-features ratio, which severely limits DL effectiveness. DA with DL therefore holds transformative promise for EEG processing, much as DL transformed computer vision.
New method: We review trends and approaches to DA for DL in EEG, addressing three questions: Which DA approaches exist, and which are common for which EEG tasks? What input features are used? And what accuracy gains can be expected?
Results: DA for DL on EEG began about five years ago and its use has grown steadily. We grouped DA techniques into noise addition, generative adversarial networks, sliding windows, sampling, Fourier transform, recombination of segmentation, and others, and EEG tasks into seizure detection, sleep stages, motor imagery, mental workload, emotion recognition, motor tasks, and visual tasks. DA efficacy varied considerably across techniques. Noise addition and sliding windows provided the highest accuracy boosts, and mental workload benefitted most from DA. Sliding-window, noise-addition, and sampling methods were most common for seizure detection, mental workload, and sleep stages, respectively.
Comparing with existing methods: The percentage of decoding accuracy attributable to DA beyond the unaugmented baseline ranged from 8% for recombination of segmentation to 36% for noise addition, and from 14% for motor imagery to 56% for mental workload, averaging 29%.
Conclusions: DA is increasingly used and has considerably improved DL decoding accuracy on EEG. Additional publications, if they adhere to our reporting guidelines, will facilitate more detailed analyses.
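Two of the augmentation families the review finds most effective, noise addition and sliding windows, are easy to sketch. The window length, stride, and noise level below are arbitrary illustrations, not recommendations from the review.

```python
import random

random.seed(0)

def add_noise(trial, sigma=0.1):
    # Noise addition: each augmented copy is the original trial plus
    # independent Gaussian noise, multiplying the effective training set.
    return [x + random.gauss(0, sigma) for x in trial]

def sliding_windows(trial, win, stride):
    # Sliding windows: carve one long trial into many overlapping
    # sub-trials, each of which becomes a training example.
    return [trial[i:i + win] for i in range(0, len(trial) - win + 1, stride)]

trial = [float(i) for i in range(10)]          # stand-in single-channel trial
augmented = [add_noise(trial) for _ in range(3)]
windows = sliding_windows(trial, win=4, stride=2)
print(len(augmented), "noisy copies,", len(windows), "windows")
```

Both tricks attack the same problem the Background paragraph names: too few labeled trials relative to the feature dimensionality of EEG.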
Breaking Down the Barriers To Operator Workload Estimation: Advancing Algorithmic Handling of Temporal Non-Stationarity and Cross-Participant Differences for EEG Analysis Using Deep Learning
This research focuses on two barriers to using EEG data for workload assessment: day-to-day variability and cross-participant applicability. Several signal processing techniques and deep learning approaches are evaluated in multi-task environments. These methods account for temporal, spatial, and frequential data dependencies. Variance of frequency-domain power distributions for cross-day workload classification is statistically significant. Skewness and kurtosis are not significant in an environment absent workload transitions, but are salient with transitions present. LSTMs improve day-to-day feature stationarity, decreasing error by 59% compared to previous best results. A multi-path convolutional recurrent model using bi-directional, residual recurrent layers significantly increases predictive accuracy and decreases cross-participant variance. Deep learning regression approaches are applied to a multi-task environment with workload transitions. Accounting for temporal dependence significantly reduces error and increases correlation compared to baselines. Visualization techniques for LSTM feature saliency are developed to understand EEG analysis model biases.
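The day-to-day variability problem can be illustrated with a much simpler baseline than the LSTM models evaluated here: z-scoring each feature within each recording day, so that shifts in baseline power between days are removed before classification. This is a hypothetical sketch of the non-stationarity issue, not the dissertation's method.

```python
from statistics import mean, stdev

def standardize_per_day(day_features):
    # day_features: list of days, each a list of [feature, ...] rows.
    # Z-score each feature column within its own day so that per-day
    # baseline shifts (a source of cross-day non-stationarity) cancel out.
    out = []
    for day in day_features:
        cols = list(zip(*day))
        mus = [mean(c) for c in cols]
        sds = [stdev(c) or 1.0 for c in cols]   # guard against zero spread
        out.append([[(v - m) / s for v, m, s in zip(row, mus, sds)]
                    for row in day])
    return out

# Two synthetic days with identical structure but a shifted power baseline.
day1 = [[10.0, 1.0], [12.0, 3.0], [14.0, 5.0]]
day2 = [[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]]
z = standardize_per_day([day1, day2])
print("days identical after per-day z-scoring:", z[0] == z[1])
```

After standardization the two days become indistinguishable, which is the stationarity property the LSTM approaches above pursue with learned, rather than fixed, transformations.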
Space Warps: I. Crowd-sourcing the Discovery of Gravitational Lenses
We describe Space Warps, a novel gravitational lens discovery service that
yields samples of high purity and completeness through crowd-sourced visual
inspection. Carefully produced colour composite images are displayed to
volunteers via a web-based classification interface, which records their
estimates of the positions of candidate lensed features. Images of simulated
lenses, as well as real images which lack lenses, are inserted into the image
stream at random intervals; this training set is used to give the volunteers
instantaneous feedback on their performance, as well as to calibrate a model of
the system that provides dynamical updates to the probability that a classified
image contains a lens. Low probability systems are retired from the site
periodically, concentrating the sample towards a set of lens candidates. Having
divided 160 square degrees of Canada-France-Hawaii Telescope Legacy Survey
(CFHTLS) imaging into some 430,000 overlapping 82 by 82 arcsecond tiles and
displaying them on the site, we were joined by around 37,000 volunteers who
contributed 11 million image classifications over the course of 8 months. This
Stage 1 search reduced the sample to 3381 images containing candidates; these
were then refined in Stage 2 to yield a sample that we expect to be over 90%
complete and 30% pure, based on our analysis of the volunteers' performance on
training images. We comment on the scalability of the Space Warps system to the
wide field survey era, based on our projection that searches of 10 images
could be performed by a crowd of 10 volunteers in 6 days.
Comment: 21 pages, 13 figures, MNRAS accepted, minor to moderate changes in this version
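The probabilistic update at the heart of such a crowd-sourced pipeline can be sketched as a Bayes-rule step: each volunteer classification moves the probability that an image contains a lens, weighted by that volunteer's hit and false-alarm rates as estimated from their answers on the inserted training images. The rates and prior below are invented for illustration, not Space Warps' calibrated values.

```python
def update_probability(p_lens, said_lens, p_true_pos, p_false_pos):
    # Bayes' rule for one classification: p_true_pos is this volunteer's
    # probability of flagging a real lens, p_false_pos their probability of
    # flagging a lens-free image (both estimated from training images).
    if said_lens:
        num = p_true_pos * p_lens
        den = p_true_pos * p_lens + p_false_pos * (1 - p_lens)
    else:
        num = (1 - p_true_pos) * p_lens
        den = (1 - p_true_pos) * p_lens + (1 - p_false_pos) * (1 - p_lens)
    return num / den

p = 0.001  # illustrative prior: genuine lenses are rare
# Three skilled volunteers in a row flag the image as containing a lens.
for _ in range(3):
    p = update_probability(p, said_lens=True, p_true_pos=0.9, p_false_pos=0.1)
print(f"posterior Pr(lens) = {p:.3f}")
```

Images whose probability stays low after enough classifications can be retired, which is how the sample concentrates toward candidates as the abstract describes.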
Decoding working memory-related information from repeated psychophysiological EEG experiments using convolutional and contrastive neural networks
Objective. Extracting reliable information from the electroencephalogram (EEG) is difficult because the low signal-to-noise ratio and significant intersubject variability seriously hinder statistical analyses. However, recent advances in explainable machine learning open a new strategy to address this problem.
Approach. The current study evaluates this approach using results from the classification and decoding of electrical brain activity associated with information retention. We designed four neural network models differing in architecture, training strategies, and input representation to classify single experimental trials of a working memory task.
Main results. Our best models achieved an accuracy (ACC) of 65.29 ± 0.76 and a Matthews correlation coefficient of 0.288 ± 0.018, outperforming the reference model trained on the same data. The highest correlation between classification score and behavioral performance was 0.36 (p = 0.0007). Using input-perturbation analysis, we estimated the importance of EEG channels and frequency bands for the task at hand. The set of essential features varied across networks, but a subset common to all models pointed to brain regions and frequency bands consistent with current neurophysiological knowledge of the processes critical to attention and working memory. Finally, we proposed sanity checks to further examine the robustness of each model's feature set.
Significance. Our results indicate that explainable deep learning is a powerful tool for decoding information from EEG signals, and that it is crucial to train and analyze a range of models to identify stable and reliable features. They also highlight the need for explainable modeling, as the model with the highest ACC appeared to rely on residual artifactual activity.
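The input-perturbation analysis mentioned above can be sketched simply: perturb (here, zero out) one input channel at a time and measure the resulting drop in accuracy; channels whose removal hurts most are deemed most important. The "model" and data below are hypothetical stand-ins, not the study's trained networks.

```python
def model_score(x):
    # Hypothetical trained "model": a fixed linear scorer over channel
    # features in which channel 0 dominates the decision.
    weights = [0.9, 0.05, 0.05]
    return sum(w * v for w, v in zip(weights, x))

def predict(x):
    return 1 if model_score(x) > 0.5 else 0

def perturbation_importance(X, y, channel):
    # Importance of a channel = accuracy drop when that channel is zeroed.
    def acc(data):
        return sum(predict(x) == lab for x, lab in zip(data, y)) / len(y)
    zeroed = [[0.0 if j == channel else v for j, v in enumerate(x)] for x in X]
    return acc(X) - acc(zeroed)

X = [[1.0, 1.0, 1.0], [0.0, 1.0, 1.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
y = [1, 0, 1, 0]   # the label follows channel 0
drops = [perturbation_importance(X, y, c) for c in range(3)]
print("accuracy drop per channel:", drops)
```

The same probing also exposes the failure mode the abstract warns about: a model leaning on residual artifacts will assign high importance to artifact-carrying channels rather than to neurophysiologically plausible ones.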