
    Assessing focus through ear-EEG: a comparative study between conventional cap EEG and mobile in- and around-the-ear EEG systems

    Introduction: As our attention is becoming a commodity that an ever-increasing number of applications are competing for, investing in modern-day tools and devices that can detect our mental states and protect them from outside interruptions holds great value. Mental fatigue and distractions impair our ability to focus and can cause workplace injuries. Electroencephalography (EEG) may reflect concentration, and if EEG equipment became wearable and inconspicuous, innovative brain-computer interfaces (BCI) could be developed to monitor mental load in daily life situations. The purpose of this study is to investigate the potential of EEG recorded inside and around the human ear to determine levels of attention and focus.
    Methods: In this study, mobile and wireless ear-EEG was recorded concurrently with conventional cap EEG systems during tasks related to focus: an N-back task to assess working memory and a mental arithmetic task to assess cognitive workload. The power spectral density (PSD) of the EEG signal was analyzed to isolate consistent differences between mental load conditions and to classify epochs using step-wise linear discriminant analysis (swLDA).
    Results and discussion: Results revealed that spectral features differed statistically between levels of cognitive load for both tasks. Classification algorithms were tested on spectral features from twelve and two selected channels for the cap and the ear-EEG. A two-channel ear-EEG model evaluated the performance of two dry in-ear electrodes specifically. Single-trial classification for both tasks revealed above-chance accuracies for all subjects, with mean accuracies of 96% (cap-EEG) and 95% (ear-EEG) for the twelve-channel models and 76% (cap-EEG) and 74% (in-ear-EEG) for the two-channel model in the N-back task, and 82% (cap-EEG) and 85% (ear-EEG) for the twelve-channel and 70% (cap-EEG) and 69% (in-ear-EEG) for the two-channel model in the arithmetic task. These results suggest that neural oscillations recorded with ear-EEG can be used to reliably differentiate between levels of cognitive workload and working memory, in particular when multi-channel recordings are available, and could, in the near future, be integrated into wearable devices.
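
    As a reading aid, the classification approach described above (band-power PSD features fed into a linear discriminant classifier) can be sketched in a few lines. This is a minimal illustration on synthetic data, not the study's code: plain LDA from scikit-learn stands in for the step-wise LDA (swLDA) used in the paper, and the sampling rate, epoch length, frequency bands, channel count, and labels are all assumptions.

```python
# Sketch only: band-power (Welch PSD) features from two "in-ear" channels,
# classified with LDA as a stand-in for the paper's step-wise LDA (swLDA).
# All data, rates, and band limits below are illustrative assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

fs = 250                                            # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
n_epochs, n_channels, n_samples = 200, 2, fs * 2    # 2 s epochs, 2 channels
X_raw = rng.standard_normal((n_epochs, n_channels, n_samples))
y = rng.integers(0, 2, n_epochs)                    # 0 = low load, 1 = high load

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(epochs):
    """Welch PSD per channel, averaged within each frequency band."""
    freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
    feats = [psd[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1)
             for lo, hi in bands.values()]
    return np.log(np.concatenate(feats, axis=-1))   # epochs x (channels * bands)

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, band_power_features(X_raw), y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```

    On real recordings, the labels would come from the task conditions (e.g., N-back level or arithmetic difficulty), and channel and band selection would be driven by the data rather than fixed as above.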

    Does Fractional Anisotropy Predict Motor Imagery Neurofeedback Performance in Healthy Older Adults?

    Motor imagery neurofeedback training has been proposed as a potential add-on therapy for motor impairment after stroke, but not everyone benefits from it. Previous work has used white matter integrity to predict motor imagery neurofeedback aptitude in healthy young adults. We set out to test this approach with motor imagery neurofeedback that is closer to that used for stroke rehabilitation and in a sample whose age is closer to that of typical stroke patients. Using shrinkage linear discriminant analysis with fractional anisotropy values in 48 white matter regions as predictors, we predicted whether each participant in a sample of 21 healthy older adults (48–77 years old) was a good or a bad performer with 84.8% accuracy. However, the regions used for prediction in our sample differed from those identified previously, and previously suggested regions did not yield significant prediction in our sample. Including demographic and cognitive variables which may correlate with motor imagery neurofeedback performance and white matter structure as candidate predictors revealed an association with age but also led to a loss of statistical significance and somewhat poorer prediction accuracy (69.6%). Our results cast doubt on the feasibility of predicting the benefit of motor imagery neurofeedback from fractional anisotropy. At the very least, such predictions should be based on data collected using the same paradigm and with subjects whose characteristics match those of the target case as closely as possible.
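
    The prediction setup lends itself to a compact sketch: shrinkage LDA (regularized covariance estimation) over 48 FA values per subject, evaluated with cross-validation. The snippet below uses random placeholder data and assumes leave-one-out validation, which the abstract does not specify; it is not the study's pipeline.

```python
# Sketch only: shrinkage LDA predicting "good vs. bad performer" from 48
# fractional-anisotropy values per subject. Data are random placeholders and
# leave-one-out validation is an assumption, not the study's reported scheme.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(1)
n_subjects, n_regions = 21, 48                             # sizes from the abstract
fa = rng.uniform(0.2, 0.7, size=(n_subjects, n_regions))   # placeholder FA values
performer = rng.integers(0, 2, n_subjects)                 # placeholder labels

# 'lsqr' with 'auto' shrinkage uses a Ledoit-Wolf covariance estimate, which is
# what keeps LDA stable when there are more predictors than subjects.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
accuracy = cross_val_score(clf, fa, performer, cv=LeaveOneOut()).mean()
print(f"Leave-one-out accuracy: {accuracy:.2f}")
```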

    A Riemannian Modification of Artifact Subspace Reconstruction for EEG Artifact Handling

    Artifact Subspace Reconstruction (ASR) is an adaptive method for the online or offline correction of artifacts in multichannel electroencephalography (EEG) recordings. It repeatedly computes a principal component analysis (PCA) on covariance matrices to detect artifacts based on their statistical properties in the component subspace. We adapted the existing ASR implementation by using Riemannian geometry for covariance matrix processing. EEG data recorded on a smartphone in both outdoor and indoor conditions were used for evaluation (N = 27). A direct comparison between the original ASR and Riemannian ASR (rASR) was conducted for three performance measures: reduction of eye-blinks (sensitivity), improvement of visual-evoked potentials (VEPs; specificity), and computation time (efficiency). Compared to ASR, our rASR algorithm performed favorably on all three measures. We conclude that rASR is suitable for the offline and online correction of multichannel EEG data acquired in laboratory and field conditions.
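
    The core idea of the Riemannian modification (treating channel covariance matrices as points on the manifold of symmetric positive-definite matrices rather than averaging them element-wise) can be illustrated with pyriemann. The snippet below is not the rASR implementation, which is distributed as an EEGLAB/MATLAB plugin; it only shows a Riemannian mean and distances used to flag atypical segments, on placeholder data with an arbitrary threshold.

```python
# Illustration only, not the rASR code: Riemannian mean of EEG covariance
# matrices and the Riemannian distance of each segment to that mean, the kind
# of geometry-aware statistics rASR substitutes for Euclidean averaging.
import numpy as np
from pyriemann.estimation import Covariances
from pyriemann.utils.mean import mean_riemann
from pyriemann.utils.distance import distance_riemann

rng = np.random.default_rng(2)
n_segments, n_channels, n_samples = 50, 8, 500      # placeholder sizes
eeg = rng.standard_normal((n_segments, n_channels, n_samples))

covs = Covariances(estimator="oas").transform(eeg)  # one SPD matrix per segment
reference = mean_riemann(covs)                      # geometric (Riemannian) mean
dists = np.array([distance_riemann(c, reference) for c in covs])

# Segments unusually far from the reference geometry are artifact candidates.
threshold = dists.mean() + 2 * dists.std()
print("Flagged segments:", np.flatnonzero(dists > threshold))
```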

    Opportunities and Limitations of Mobile Neuroimaging Technologies in Educational Neuroscience.

    Funder: European Association for Research on Learning and Instruction
    Funder: Jacobs Foundation; Id: http://dx.doi.org/10.13039/501100003986
    As the field of educational neuroscience continues to grow, questions have emerged regarding the ecological validity and applicability of this research to educational practice. Recent advances in mobile neuroimaging technologies have made it possible to conduct neuroscientific studies directly in naturalistic learning environments. We propose that embedding mobile neuroimaging research in a cycle (Matusz, Dikker, Huth, & Perrodin, 2019), involving lab-based, seminaturalistic, and fully naturalistic experiments, is well suited for addressing educational questions. With this review, we take a cautious approach by discussing the valuable insights that can be gained from mobile neuroimaging technology, including electroencephalography and functional near-infrared spectroscopy, as well as the challenges posed by bringing neuroscientific methods into the classroom. Research paradigms used alongside mobile neuroimaging technology vary considerably. To illustrate this point, studies are discussed with increasingly naturalistic designs. We conclude with several ethical considerations that should be taken into account in this unique area of research.

    Source-Modeling Auditory Processes of EEG Data Using EEGLAB and Brainstorm

    Electroencephalography (EEG) source localization approaches are often used to disentangle the spatial patterns mixed up in scalp EEG recordings. However, approaches differ substantially between experiments, may be strongly parameter-dependent, and results are not necessarily meaningful. In this paper we provide a pipeline for EEG source estimation, from raw EEG data pre-processing using EEGLAB functions up to source-level analysis as implemented in Brainstorm. The pipeline is tested using a data set of 10 individuals performing an auditory attention task. The analysis approach estimates sources of 64-channel EEG data without the prerequisite of individual anatomies or individually digitized sensor positions. First, we show advanced EEG pre-processing using EEGLAB, which includes artifact attenuation using independent component analysis (ICA). ICA is a linear decomposition technique that aims to reveal the underlying statistical sources of mixed signals and is also a powerful tool for attenuating stereotypical artifacts (e.g., eye movements or heartbeat). Data submitted to ICA are pre-processed to facilitate good-quality decompositions. Aiming at an objective approach to component identification, the semi-automatic CORRMAP algorithm is applied to identify components representing prominent and stereotypic artifacts. Second, we present a step-wise approach to estimate active sources of auditory cortex event-related processing at the single-subject level. The presented approach assumes that no individual anatomy is available; therefore the default anatomy ICBM152, as implemented in Brainstorm, is used for all individuals. Individual noise modeling in this dataset is based on the pre-stimulus baseline period. For EEG source modeling we use the OpenMEEG algorithm as the underlying forward model based on the symmetric Boundary Element Method (BEM). We then apply the method of dynamical statistical parametric mapping (dSPM) to obtain physiologically plausible EEG source estimates. Finally, we show how to perform group-level analysis in the time domain on anatomically defined regions of interest (auditory scout). The proposed pipeline needs to be tailored to the specific datasets and paradigms. However, the straightforward combination of EEGLAB and Brainstorm analysis tools may be of interest to others performing EEG source localization.
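
    A rough analogue of this EEGLAB + Brainstorm pipeline can be expressed in MNE-Python: ICA-based artifact attenuation, epoching with a pre-stimulus noise baseline, a BEM forward model, and a dSPM inverse. The sketch below runs on MNE's bundled audio-visual sample dataset and, for brevity, uses that subject's precomputed BEM instead of the ICBM152 template anatomy used in the paper (MNE's fsaverage template would be the closer analogue); component rejection uses EOG correlation rather than CORRMAP.

```python
# Rough MNE-Python analogue of the EEGLAB + Brainstorm pipeline described
# above; it runs on MNE's bundled audio-visual sample dataset, not the
# paper's data, and uses EOG correlation (not CORRMAP) to reject components.
import mne
from mne.preprocessing import ICA
from mne.minimum_norm import make_inverse_operator, apply_inverse

data_path = mne.datasets.sample.data_path()
meg_dir = data_path / "MEG" / "sample"
subj_bem = data_path / "subjects" / "sample" / "bem"

raw = mne.io.read_raw_fif(meg_dir / "sample_audvis_raw.fif", preload=True)
raw.pick(["eeg", "eog", "stim"]).filter(1.0, 40.0)

# Artifact attenuation with ICA: EOG-correlated components are removed.
ica = ICA(n_components=20, random_state=97)
ica.fit(raw)
eog_inds, _ = ica.find_bads_eog(raw)
ica.exclude = eog_inds
ica.apply(raw)

# Epochs around left-auditory stimuli; the pre-stimulus interval models noise.
events = mne.find_events(raw, stim_channel="STI 014")
epochs = mne.Epochs(raw, events, event_id={"auditory/left": 1}, tmin=-0.2,
                    tmax=0.5, baseline=(None, 0), picks="eeg", preload=True)
epochs.set_eeg_reference("average", projection=True)
noise_cov = mne.compute_covariance(epochs, tmax=0.0)

# BEM forward model and dSPM inverse. The paper uses the ICBM152 template in
# Brainstorm; for brevity this sketch reuses the sample subject's precomputed
# BEM (MNE's fsaverage template would be the closer analogue).
fwd = mne.make_forward_solution(
    epochs.info, trans=meg_dir / "sample_audvis_raw-trans.fif",
    src=subj_bem / "sample-oct-6-src.fif",
    bem=subj_bem / "sample-5120-5120-5120-bem-sol.fif", meg=False, eeg=True)
inverse_operator = make_inverse_operator(epochs.info, fwd, noise_cov)
stc = apply_inverse(epochs.average(), inverse_operator, lambda2=1.0 / 9.0,
                    method="dSPM")
print(stc)
```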

    The Sensitivity of Ear-EEG: Evaluating the Source-Sensor Relationship Using Forward Modeling

    Ear-EEG allows recording brain activity in everyday life, for example to study natural behaviour or unhindered social interactions. Compared to conventional scalp-EEG, ear-EEG uses fewer electrodes and covers only a small part of the head. Consequently, ear-EEG will be less sensitive to some cortical sources. Here, we perform realistic electromagnetic simulations to compare cEEGrid ear-EEG with 128-channel cap-EEG. We compute the sensitivity of ear-EEG for different cortical sources and quantify the expected signal loss of ear-EEG relative to cap-EEG. Our results show that ear-EEG is most sensitive to sources in the temporal cortex. Furthermore, we show how ear-EEG benefits from a multi-channel configuration (i.e., the cEEGrid). The pipelines presented here can be adapted to any arrangement of electrodes and can therefore provide an estimate of sensitivity to cortical regions, thereby increasing the chance of successful experiments using ear-EEG.
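
    One way to approximate the sensitivity analysis described here is to compare per-source leadfield strength between a full cap montage and a small channel subset. The sketch below uses MNE's precomputed sample forward model and an arbitrary eight-channel subset as a stand-in for a cEEGrid layout; it is not the authors' simulation, which used realistic cEEGrid electrode positions.

```python
# Sketch of the sensitivity idea, not the authors' simulation: compare
# per-source leadfield strength between the full EEG montage and a small
# channel subset, using MNE's precomputed sample forward model. The subset
# below is an arbitrary stand-in for a cEEGrid-style ear montage.
import numpy as np
import mne

data_path = mne.datasets.sample.data_path()
fwd = mne.read_forward_solution(
    data_path / "MEG" / "sample" / "sample_audvis-meg-eeg-oct-6-fwd.fif")
fwd = mne.convert_forward_solution(fwd, surf_ori=True, force_fixed=True)
fwd_eeg = mne.pick_types_forward(fwd, meg=False, eeg=True)

def per_source_sensitivity(fwd_subset):
    """RMS leadfield amplitude over channels, one value per cortical source."""
    gain = fwd_subset["sol"]["data"]            # (n_channels, n_sources)
    return np.sqrt((gain ** 2).mean(axis=0))

ear_channels = fwd_eeg["sol"]["row_names"][-8:]  # placeholder "ear" channel set
fwd_ear = mne.pick_channels_forward(fwd_eeg, include=ear_channels)

full = per_source_sensitivity(fwd_eeg)
ear = per_source_sensitivity(fwd_ear)
loss_db = 20 * np.log10(ear / full)              # expected signal loss per source
print(f"Median loss of the ear subset vs. the full cap: "
      f"{np.median(loss_db):.1f} dB")
```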

    Target speaker detection with concealed EEG around the ear

    Target speaker identification is essential for speech enhancement algorithms in assistive devices aimed toward helping the hearing impaired. Several recent studies have reported that target speaker identification is possible through electroencephalography (EEG) recordings. If the EEG system could be reduced to an acceptable size while retaining signal quality, hearing aids could benefit from integration with concealed EEG. To compare the performance of a multichannel around-the-ear EEG system with high-density cap EEG recordings, an envelope tracking algorithm was applied in a competing speaker paradigm. Data from 20 normal-hearing listeners were concurrently collected with a traditional state-of-the-art wired laboratory EEG system and a wireless mobile EEG system with two bilaterally placed around-the-ear electrode arrays (cEEGrids). The results show that the cEEGrid ear-EEG technology captured neural signals that allowed identification of the attended speaker above chance level, with 69.3% accuracy, while cap-EEG signals resulted in an accuracy of 84.8%. Further analyses investigated the influence of ear-EEG signal quality and revealed that the envelope tracking procedure was unaffected by variability in channel impedances. We conclude that concealed ear-EEG recordings as acquired with the cEEGrid array have the potential to be used for brain-computer interface steering of hearing aids.
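
    Envelope tracking for auditory attention decoding is commonly implemented as a backward (stimulus-reconstruction) model: a regularized linear mapping from time-lagged EEG to the speech envelope, with the attended speaker taken to be the one whose envelope correlates best with the reconstruction. The snippet below is a minimal illustration of that idea on synthetic signals, not the study's algorithm; the sampling rate, lag window, and regularization are assumptions.

```python
# Minimal illustration of envelope tracking for attention decoding, not the
# study's algorithm: a ridge "backward model" reconstructs the speech envelope
# from time-lagged EEG, and the attended speaker is the one whose envelope
# correlates best with the reconstruction. All signals are synthetic.
import numpy as np
from sklearn.linear_model import Ridge

fs, n_channels, n_lags = 64, 10, 16          # assumed: 64 Hz features, 250 ms lags
rng = np.random.default_rng(3)
n_samples = fs * 60                          # one minute of data

env_attended = np.abs(rng.standard_normal(n_samples))   # placeholder envelopes
env_ignored = np.abs(rng.standard_normal(n_samples))
# Fake EEG that weakly follows the attended envelope (zero latency, for simplicity).
eeg = 0.2 * env_attended[:, None] + rng.standard_normal((n_samples, n_channels))

def lagged(x, n_lags):
    """Stack time-lagged copies of each channel (circular shift; fine for a sketch)."""
    return np.column_stack([np.roll(x, -lag, axis=0) for lag in range(n_lags)])

X = lagged(eeg, n_lags)
half = n_samples // 2                        # first half trains, second half tests
decoder = Ridge(alpha=1.0).fit(X[:half], env_attended[:half])
reconstruction = decoder.predict(X[half:])

corr_attended = np.corrcoef(reconstruction, env_attended[half:])[0, 1]
corr_ignored = np.corrcoef(reconstruction, env_ignored[half:])[0, 1]
print("Decoded attended speaker correctly:", corr_attended > corr_ignored)
```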