40,038 research outputs found

    Transfer learning with large-scale data in brain-computer interfaces

    © 2016 IEEE. Human variability in the electroencephalogram (EEG) poses significant challenges for developing practical real-world applications of brain-computer interfaces (BCIs). The intuitive solution of collecting sufficient user-specific training/calibration data can be very labor-intensive and time-consuming, hindering the practicability of BCIs. To address this problem, transfer learning (TL), which leverages existing data from other sessions or subjects, has recently been adopted by the BCI community to build a BCI for a new user with limited calibration data. However, current TL approaches still require training/calibration data for each condition, which might be difficult or expensive to obtain. This study proposed a novel TL framework that could nearly eliminate the requirement for subject-specific calibration data by leveraging large-scale data from other subjects. The efficacy of this method was validated in a passive BCI designed to detect neurocognitive lapses during driving. With the help of large-scale data, the proposed TL approach outperformed the within-subject approach while considerably reducing the amount of calibration data required from each individual (∼1.5 min of data per individual, as opposed to the 90-min pilot session used in a standard within-subject approach). This demonstration could considerably facilitate real-world applications of BCIs.
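
    The sketch below illustrates, in Python with scikit-learn on synthetic features, the general idea behind such calibration-light transfer learning: a "source" model fit on pooled data from many other subjects is blended with a model fit on a tiny target-subject calibration set. It is a generic illustration only, not the paper's actual framework or its lapse-detection features; all names and numbers are hypothetical.

    # Generic calibration-light transfer-learning sketch (not the paper's method).
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)

    def fake_subject(n_trials=200, n_features=30):
        """Synthetic stand-in for one subject's (features, labels) data."""
        X = rng.standard_normal((n_trials, n_features))
        w = rng.standard_normal(n_features)
        return X, X @ w + 0.1 * rng.standard_normal(n_trials)

    # Large-scale data from other subjects, plus a tiny target calibration set.
    source_data = [fake_subject() for _ in range(10)]
    X_cal, y_cal = fake_subject(n_trials=15)

    X_src = np.vstack([X for X, _ in source_data])
    y_src = np.concatenate([y for _, y in source_data])

    source_model = Ridge(alpha=1.0).fit(X_src, y_src)    # trained once, reused per user
    target_model = Ridge(alpha=10.0).fit(X_cal, y_cal)   # small, heavily regularized

    def predict(X, lam=0.7):
        # Convex blend of source and target predictions; lam would normally be
        # chosen by cross-validation on the calibration data.
        return lam * source_model.predict(X) + (1 - lam) * target_model.predict(X)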

    Large-scale Foundation Models and Generative AI for BigData Neuroscience

    Recent advances in machine learning have made revolutionary breakthroughs in computer games, image and natural language understanding, and scientific discovery. Foundation models and large language models (LLMs) have recently achieved human-like intelligence thanks to BigData. With the help of self-supervised learning (SSL) and transfer learning, these models may reshape the landscape of neuroscience research and make a significant impact on the field's future. Here we present a mini-review of recent advances in foundation models and generative AI models and their applications in neuroscience, including natural language and speech, semantic memory, brain-machine interfaces (BMIs), and data augmentation. We argue that this paradigm-shifting framework will open new avenues for many neuroscience research directions, and we discuss the accompanying challenges and opportunities.

    EEG-Based User Reaction Time Estimation Using Riemannian Geometry Features

    Riemannian geometry has been successfully used in many brain-computer interface (BCI) classification problems and has demonstrated superior performance. In this paper, for the first time, it is applied to BCI regression problems, an important category of BCI applications. More specifically, we propose a new feature-extraction approach for electroencephalogram (EEG)-based BCI regression problems: a spatial filter is first used to increase the signal quality of the EEG trials and to reduce the dimensionality of the covariance matrices, and Riemannian tangent space features are then extracted. We validate the performance of the proposed approach on reaction time estimation from EEG signals measured in a large-scale sustained-attention psychomotor vigilance task, and show that, compared with traditional power-band features, the tangent space features reduce the root mean square estimation error by 4.30-8.30% and increase the estimation correlation coefficient by 6.59-11.13%.
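
    As a rough illustration of the feature-extraction step described above, the following Python sketch maps per-trial EEG covariance matrices into the Riemannian tangent space at a reference matrix and feeds the resulting vectors to a ridge regressor. It uses only NumPy, SciPy, and scikit-learn, takes the arithmetic mean of the covariances as the reference point (a simplification of the Riemannian mean), omits the paper's spatial-filtering step, and runs on synthetic data, so it should be read as a sketch of the idea rather than the authors' pipeline.

    # Tangent-space feature extraction for EEG regression (illustrative sketch).
    import numpy as np
    from scipy.linalg import logm, fractional_matrix_power
    from sklearn.linear_model import Ridge

    def tangent_space_features(trials, ref=None):
        """Map per-trial covariance matrices to Riemannian tangent-space vectors.

        trials: array (n_trials, n_channels, n_samples) of EEG; ref: reference
        SPD matrix (defaults to the arithmetic mean of the trial covariances).
        """
        covs = np.array([np.cov(x) for x in trials])       # per-trial covariance
        if ref is None:
            ref = covs.mean(axis=0)
        ref_isqrt = fractional_matrix_power(ref, -0.5)     # C_ref^{-1/2}
        n = covs.shape[1]
        iu = np.triu_indices(n)
        # Weight off-diagonal entries by sqrt(2) so the Euclidean norm of the
        # vector matches the Riemannian distance to the reference matrix.
        w = np.where(np.eye(n, dtype=bool), 1.0, np.sqrt(2.0))[iu]
        feats = [(logm(ref_isqrt @ c @ ref_isqrt)[iu] * w).real for c in covs]
        return np.stack(feats)

    # Toy usage: regress reaction time on tangent-space features.
    rng = np.random.default_rng(0)
    X_eeg = rng.standard_normal((40, 8, 256))   # 40 trials, 8 channels, 256 samples
    y_rt = rng.uniform(0.3, 1.2, size=40)       # hypothetical reaction times (s)
    model = Ridge(alpha=1.0).fit(tangent_space_features(X_eeg), y_rt)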

    An Accurate EEGNet-based Motor-Imagery Brain-Computer Interface for Low-Power Edge Computing

    This paper presents an accurate and robust embedded motor-imagery brain-computer interface (MI-BCI). The proposed novel model, based on EEGNet, matches the memory-footprint and computational requirements of low-power microcontroller units (MCUs), such as the ARM Cortex-M family. Furthermore, the paper presents a set of methods, including temporal downsampling, channel selection, and narrowing of the classification window, to further scale down the model and relax memory requirements with negligible accuracy degradation. Experimental results on the Physionet EEG Motor Movement/Imagery Dataset show that standard EEGNet achieves 82.43%, 75.07%, and 65.07% classification accuracy on 2-, 3-, and 4-class MI tasks in global validation, outperforming the state-of-the-art (SoA) convolutional neural network (CNN) by 2.05%, 5.25%, and 5.48%. Our method further scales down the standard EEGNet with a negligible accuracy loss of 0.31% for a 7.6x reduction in memory footprint, and with a small accuracy loss of 2.51% for a 15x reduction. The scaled models are deployed on a commercial Cortex-M4F MCU, taking 101 ms and consuming 4.28 mJ per inference for the smallest model, and on a Cortex-M7, taking 44 ms and 18.1 mJ per inference for the medium-sized model, enabling a fully autonomous, wearable, and accurate low-power BCI.
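
    For orientation, the following PyTorch sketch builds an EEGNet-style network with the commonly published default hyperparameters (F1=8, D=2, F2=16) and shows how the scaling knobs mentioned above (fewer channels and shorter, downsampled windows) shrink the model. It is an illustrative approximation, not the authors' embedded model, quantization, or deployment code, and the channel/sample counts are hypothetical.

    # EEGNet-style CNN sketch; scaling the input shrinks parameters and activations.
    import torch
    import torch.nn as nn

    class TinyEEGNet(nn.Module):
        def __init__(self, n_classes=4, n_channels=64, n_samples=480,
                     F1=8, D=2, F2=16, dropout=0.25):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, F1, (1, 64), padding=(0, 32), bias=False),        # temporal conv
                nn.BatchNorm2d(F1),
                nn.Conv2d(F1, F1 * D, (n_channels, 1), groups=F1, bias=False), # depthwise spatial conv
                nn.BatchNorm2d(F1 * D),
                nn.ELU(),
                nn.AvgPool2d((1, 4)),
                nn.Dropout(dropout),
                nn.Conv2d(F1 * D, F1 * D, (1, 16), padding=(0, 8),
                          groups=F1 * D, bias=False),                          # separable conv: depthwise part
                nn.Conv2d(F1 * D, F2, (1, 1), bias=False),                     # separable conv: pointwise part
                nn.BatchNorm2d(F2),
                nn.ELU(),
                nn.AvgPool2d((1, 8)),
                nn.Dropout(dropout),
                nn.Flatten(),
            )
            # Infer the flattened feature size from a dummy input of the given shape.
            with torch.no_grad():
                n_feats = self.features(torch.zeros(1, 1, n_channels, n_samples)).shape[1]
            self.classifier = nn.Linear(n_feats, n_classes)

        def forward(self, x):            # x: (batch, 1, channels, samples)
            return self.classifier(self.features(x))

    # Fewer channels and a shorter, downsampled window -> smaller model.
    full = TinyEEGNet(n_channels=64, n_samples=480)
    small = TinyEEGNet(n_channels=19, n_samples=240)
    print(sum(p.numel() for p in full.parameters()),
          sum(p.numel() for p in small.parameters()))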