
    Preparing Laboratory and Real-World EEG Data for Large-Scale Analysis: A Containerized Approach.

    Large-scale analysis of EEG and other physiological measures promises new insights into brain processes and more accurate and robust brain-computer interface models. However, the absence of standardized vocabularies for annotating events in a machine-understandable manner, the welter of collection-specific data organizations, the difficulty in moving data across processing platforms, and the unavailability of agreed-upon standards for preprocessing have prevented large-scale analyses of EEG. Here we describe a "containerized" approach and freely available tools we have developed to facilitate the process of annotating, packaging, and preprocessing EEG data collections to enable data sharing, archiving, large-scale machine learning/data mining, and (meta-)analysis. The EEG Study Schema (ESS) comprises three data "Levels," each with its own XML-document schema and file/folder convention, plus a standardized (PREP) pipeline to move raw (Data Level 1) data to a basic preprocessed state (Data Level 2) suitable for application of a large class of EEG analysis methods. Researchers can ship a study as a single unit and operate on its data using a standardized interface. ESS does not require a central database and provides all the metadata necessary to execute a wide variety of EEG processing pipelines. The primary focus of ESS is automated in-depth analysis and meta-analysis of EEG studies. However, ESS can also encapsulate meta-information for other modalities, such as eye tracking, that are increasingly used in both laboratory and real-world neuroimaging. The ESS schema and tools are freely available at www.eegstudy.org, and a central catalog of over 850 GB of existing data in ESS format is available at studycatalog.org. These tools and resources are part of a larger effort to enable data sharing at sufficient scale for researchers to engage in truly large-scale EEG analysis and data mining (BigEEG.org).
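
    A minimal sketch of how such a containerized study might be consumed programmatically, assuming a hypothetical study_description.xml manifest with <session> and <recording> elements; the published ESS schema may name its files and tags differently.

```python
# Hypothetical sketch: walk an ESS-style study folder and list its recordings.
# The manifest name and XML tags below are illustrative assumptions, not the
# published ESS schema.
from pathlib import Path
import xml.etree.ElementTree as ET

def list_recordings(study_root: str):
    """Yield (session_id, recording_path) pairs from an assumed study manifest."""
    root = Path(study_root)
    tree = ET.parse(root / "study_description.xml")   # assumed manifest file name
    for session in tree.getroot().iter("session"):
        session_id = session.get("id", "unknown")
        for rec in session.iter("recording"):
            yield session_id, root / rec.get("file", "")

if __name__ == "__main__":
    for sid, path in list_recordings("./example_ess_study"):
        print(sid, path)
```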

    Offline EEG-based driver drowsiness estimation using enhanced batch-mode active learning (EBMAL) for regression

    © 2016 IEEE. There are many important regression problems in real-world brain-computer interface (BCI) applications, e.g., driver drowsiness estimation from EEG signals. This paper considers offline analysis: given a pool of unlabeled EEG epochs recorded during driving, how do we optimally select a small number of them to label so that an accurate regression model can be built from them to label the rest? Active learning is a promising solution to this problem, but interestingly, to the best of our knowledge, it has not been used for regression problems in BCI so far. This paper proposes a novel enhanced batch-mode active learning (EBMAL) approach for regression, which improves upon a baseline active learning algorithm by increasing the reliability, representativeness, and diversity of the selected samples to achieve better regression performance. We validate its effectiveness using driver drowsiness estimation from EEG signals. However, EBMAL is a general approach that can also be applied to many other offline regression problems beyond BCI.
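
    A toy illustration of pool-based batch selection for regression in the spirit described above (representativeness and diversity obtained by clustering the unlabeled pool); this is a generic heuristic on synthetic data, not the EBMAL algorithm itself.

```python
# Toy batch-mode active learning for regression: each round, cluster the
# unlabeled pool and query the sample closest to each cluster center
# (representative and mutually diverse), then refit a ridge regressor.
# Generic heuristic on synthetic data, not the paper's EBMAL method.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge
from sklearn.metrics import pairwise_distances_argmin_min

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                              # stand-in EEG epoch features
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=500)    # stand-in drowsiness scores

labeled = list(rng.choice(500, size=5, replace=False))      # small initial labeled set
model = Ridge(alpha=1.0).fit(X[labeled], y[labeled])
for _ in range(5):                                          # five query batches
    pool = np.setdiff1d(np.arange(500), labeled)
    km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X[pool])
    closest, _ = pairwise_distances_argmin_min(km.cluster_centers_, X[pool])
    labeled.extend(pool[closest])                           # "label" the queried samples
    model = Ridge(alpha=1.0).fit(X[labeled], y[labeled])
print("labeled:", len(labeled), " R^2 on full pool:", round(model.score(X, y), 3))
```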

    True zero-training brain-computer interfacing: an online study

    Despite several approaches to realize subject-to-subject transfer of pre-trained classifiers, the full performance of a brain-computer interface (BCI) for a novel user can only be reached by presenting the BCI system with data from that user. In typical state-of-the-art BCI systems with a supervised classifier, the labeled data is collected during a calibration recording, in which the user is asked to perform a specific task. Based on the known labels of this recording, the BCI's classifier can learn to decode the individual's brain signals. Unfortunately, this calibration recording consumes valuable time. Furthermore, it is unproductive with respect to the final BCI application, e.g. text entry. Therefore, the calibration period must be reduced to a minimum, which is especially important for patients with a limited ability to concentrate. The main contribution of this manuscript is an online study on unsupervised learning in an auditory event-related potential (ERP) paradigm. Our results demonstrate that the calibration recording can be bypassed by using a classifier trained in an unsupervised manner, initialized randomly and updated during usage. Initially, the unsupervised classifier tends to make decoding mistakes, as it might not have seen enough data to build a reliable model. Using a constant re-analysis of the previously spelled symbols, these initially misspelled symbols can be rectified post hoc once the classifier has learned to decode the signals. We compare the spelling performance of our unsupervised approach and of the unsupervised post-hoc approach to the standard supervised calibration-based approach for n = 10 healthy users. To assess the learning behavior of our approach, it is trained from scratch in an unsupervised manner three times per user. Even with the relatively low SNR of an auditory ERP paradigm, the results show that after a limited number of trials (30 trials), the unsupervised approach performs comparably to a classic supervised model.
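
    A much-simplified, self-contained sketch of the "zero-training" idea on synthetic data: a linear classifier is initialized randomly and refined using only the paradigm's structure (exactly one stimulus per trial is the attended target). The published method uses a probabilistic EM formulation; this toy self-training loop merely illustrates the principle.

```python
# Toy unsupervised ERP decoding: random initial weights, pseudo-labels from the
# one-target-per-trial constraint, ridge refit, repeat. Synthetic data; not the
# authors' EM-based model.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_stim, n_feat = 200, 6, 20
true_w = rng.normal(size=n_feat)
targets = rng.integers(0, n_stim, size=n_trials)        # attended stimulus per trial
X = rng.normal(size=(n_trials, n_stim, n_feat))
X[np.arange(n_trials), targets] += 0.5 * true_w          # ERP-like response on targets

w = rng.normal(size=n_feat)                              # random initialization
for _ in range(10):                                      # unsupervised updates
    pseudo = (X @ w).argmax(axis=1)                      # guess the target per trial
    y = -np.ones((n_trials, n_stim))
    y[np.arange(n_trials), pseudo] = 1
    Xf, yf = X.reshape(-1, n_feat), y.ravel()
    w = np.linalg.solve(Xf.T @ Xf + 1e-2 * np.eye(n_feat), Xf.T @ yf)  # ridge refit

print("target selection accuracy:", ((X @ w).argmax(axis=1) == targets).mean())
```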

    Multisubject Learning for Common Spatial Patterns in Motor-Imagery BCI

    Motor-imagery-based brain-computer interfaces (BCIs) commonly use the common spatial pattern (CSP) filter as a preprocessing step before feature extraction and classification. The CSP method is a supervised algorithm and therefore needs subject-specific training data for calibration, which is very time-consuming to collect. In order to reduce the amount of calibration data needed for a new subject, one can apply multitask (from now on called multisubject) machine learning techniques to the preprocessing phase. Here, the goal of multisubject learning is to learn a spatial filter for a new subject based on that subject's own data and the data of other subjects. This paper outlines the details of the multitask CSP algorithm and shows results on two data sets. In certain subjects a clear improvement can be seen, especially when the number of training trials is relatively low.
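
    A minimal sketch of a standard CSP computation with a simple covariance-blending flavor of subject transfer: the new subject's class covariances are shrunk towards covariances pooled from other subjects before the generalized eigendecomposition. This shrinkage heuristic is only an illustration and is not the paper's multitask CSP algorithm.

```python
# CSP via generalized eigendecomposition, with the new subject's class
# covariances blended with pooled covariances from other subjects. Random data
# stands in for band-passed EEG; the blending weight lam is an illustrative
# choice, not the paper's multitask formulation.
import numpy as np
from scipy.linalg import eigh

def class_cov(epochs):
    """Mean normalized spatial covariance over epochs (epochs, channels, samples)."""
    return np.mean([e @ e.T / np.trace(e @ e.T) for e in epochs], axis=0)

def csp_filters(cov_a, cov_b, n_filters=6):
    """CSP spatial filters (channels, n_filters) from two class covariances."""
    vals, vecs = eigh(cov_a, cov_a + cov_b)      # generalized eigendecomposition
    order = np.argsort(vals)                     # extreme eigenvalues discriminate best
    pick = np.r_[order[: n_filters // 2], order[-(n_filters // 2):]]
    return vecs[:, pick]

rng = np.random.default_rng(2)
own_a, own_b = rng.normal(size=(2, 20, 16, 250))     # 20 epochs, 16 channels per class
pool_a, pool_b = rng.normal(size=(2, 100, 16, 250))  # epochs pooled from other subjects

lam = 0.5                                            # blend own vs. pooled covariance
cov_a = (1 - lam) * class_cov(own_a) + lam * class_cov(pool_a)
cov_b = (1 - lam) * class_cov(own_b) + lam * class_cov(pool_b)
W = csp_filters(cov_a, cov_b)
log_var = np.log(np.var(W.T @ own_a[0], axis=1))     # log-variance features, one epoch
print(log_var)
```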

    Decoding Working-Memory Load During n-Back Task Performance from High Channel NIRS Data

    Near-infrared spectroscopy (NIRS) can measure neural activity through blood-oxygenation changes in the brain in a wearable form factor, enabling unique applications for research in and outside the lab. NIRS has proven capable of measuring cognitive states such as mental workload, often using machine learning (ML) based brain-computer interfaces (BCIs). To date, NIRS research has largely relied on probes with under ten to several hundred channels, although recently a new class of wearable NIRS devices with thousands of channels has emerged. This poses unique challenges for ML classification, as NIRS is typically limited to few training trials, which results in severely under-determined estimation problems. So far, it is not well understood how such high-resolution data is best leveraged in practical BCIs and whether state-of-the-art (SotA) or better performance can be achieved. To address these questions, we propose an ML strategy to classify working-memory load that relies on spatio-temporal regularization and transfer learning from other subjects, in a combination that has not been used in previous NIRS BCIs. The approach can be interpreted as an end-to-end generalized linear model and allows for a high degree of interpretability using channel-level or cortical imaging approaches. We show that, using the proposed methodology, it is possible to achieve SotA decoding performance with high-resolution NIRS data. We also replicated several SotA approaches on our dataset of 43 participants wearing a NIRS device with 3198 dual channels while performing the n-back task, and show that these existing methods struggle in the high-channel regime and are largely outperformed by the proposed method. Our approach helps establish high-channel NIRS devices as a viable platform for SotA BCIs and opens new applications for this class of headset, while also enabling high-resolution model imaging and interpretation. (Comment: 29 pages, 9 figures; under review.)
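
    A much-simplified stand-in for this kind of regularized linear decoding: working-memory load is classified from flattened channel-by-time-window NIRS features with a strongly L2-penalized logistic regression. The paper's spatio-temporal regularization and cross-subject transfer are not reproduced here; the channel count, trial count, and data below are synthetic assumptions.

```python
# Few-trial, many-channel toy: average each channel's response into a few time
# windows (crude temporal smoothing), flatten, and fit an L2-penalized logistic
# regression. Synthetic data; not the paper's spatio-temporal GLM with transfer.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n_trials, n_channels, n_samples = 60, 3000, 120      # few trials, thousands of channels
X_raw = rng.normal(size=(n_trials, n_channels, n_samples))
y = rng.integers(0, 2, size=n_trials)                # 0 = low load, 1 = high load

X = X_raw.reshape(n_trials, n_channels, 4, n_samples // 4).mean(axis=3)
X = X.reshape(n_trials, -1)                          # (trials, channels * windows)

clf = make_pipeline(StandardScaler(), LogisticRegression(C=0.01, max_iter=1000))
print("5-fold CV accuracy:", round(cross_val_score(clf, X, y, cv=5).mean(), 3))
```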

    Decoding Brain Activation from Ipsilateral Cortex using ECoG Signals in Humans

    Learning from the brain is today one of the most challenging problems in many areas, and neuroscientists, computer scientists, and engineers collaborate across this broad research field. With improved techniques, brain signals can be recorded either non-invasively, for example with EEG (electroencephalography) or fMRI, or invasively, for example with ECoG (electrocorticography), field potentials (FP), and single-unit recordings. The challenge is: given the brain signals, how can we decipher them? Brain-computer interfaces, or BCIs, aim to use brain signals to control prosthetic arms or operate devices. Previously, almost all BCI research focused on decoding signals from the contralateral hemisphere to implement BCI systems. However, strokes often cause a loss of function in the contralateral cortex, resulting in a complete loss of motor function of the fingers, hands, and limbs contralateral to the damaged hemisphere. Recent studies indicate that signals from the ipsilateral cortex are relevant to the planning phase of motor movements. Therefore, it is critical to find out whether human motor movements can be decoded using signals from the ipsilateral cortex. In this thesis, we propose using ECoG signals from the ipsilateral cortex to decode finger movements. To our knowledge, this is the first work that successfully detects finger movements using signals from the ipsilateral cortex. We also investigate experiment design and the decoding of directional movements. Our results show high decoding performance. We also present an anatomical feature analysis of the ipsilateral cortex during motor-associated tasks; the identified features are consistent with previous findings. These results have promising implications for a stroke-relevant BCI.
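
    A generic sketch of the kind of pipeline such a decoding study might use: high-gamma band power per ECoG channel as features and linear discriminant analysis as the classifier. The band limits, sampling rate, channel count, and data are illustrative assumptions, not the thesis's actual setup.

```python
# Toy ECoG finger-movement decoding: band-pass into the high-gamma range,
# take log band power per channel, classify with LDA. Synthetic data and
# parameters chosen only for illustration.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

fs = 1000                                            # assumed sampling rate (Hz)
rng = np.random.default_rng(4)
epochs = rng.normal(size=(100, 32, fs))              # 100 one-second epochs, 32 channels
labels = rng.integers(0, 5, size=100)                # which finger moved (toy labels)

b, a = butter(4, [70, 150], btype="bandpass", fs=fs) # high-gamma band
hg = filtfilt(b, a, epochs, axis=2)                  # filter along the time axis
features = np.log(np.mean(hg ** 2, axis=2))          # log band power per channel

lda = LinearDiscriminantAnalysis()
print("5-fold CV accuracy:", round(cross_val_score(lda, features, labels, cv=5).mean(), 3))
```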