47 research outputs found

    A Transformer-based deep neural network model for SSVEP classification

    Steady-state visual evoked potential (SSVEP) is one of the most commonly used control signals in brain-computer interface (BCI) systems. However, conventional spatial filtering methods for SSVEP classification depend heavily on subject-specific calibration data, so methods that alleviate this demand are urgently needed. In recent years, developing methods that work in the inter-subject classification scenario has become a promising direction. The Transformer, a popular deep learning model, has shown excellent performance and has been applied to EEG signal classification tasks. In this study, we therefore propose a Transformer-based deep learning model for SSVEP classification in the inter-subject scenario, termed SSVEPformer, which is the first application of the Transformer to SSVEP classification. Inspired by previous studies, the model takes the frequency spectrum of the SSVEP data as input and exploits spectral- and spatial-domain information for classification. Furthermore, to fully utilize the harmonic information, an extended SSVEPformer based on the filter bank technique (FB-SSVEPformer) is proposed to further improve classification performance. Experiments were conducted on two open datasets (Dataset 1: 10 subjects, 12-class task; Dataset 2: 35 subjects, 40-class task) in the inter-subject scenario. The results show that the proposed models achieve better classification accuracy and information transfer rate than the baseline methods. The proposed model validates the feasibility of Transformer-based deep learning models for SSVEP classification and could help alleviate the calibration procedure in practical SSVEP-based BCI systems.
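    The information transfer rate reported above is conventionally computed with the Wolpaw formula; a minimal sketch follows (the target count, accuracy, and selection time below are hypothetical values for illustration, not results from the paper):

    ```python
    import math

    def itr_bits_per_min(n_targets: int, accuracy: float, selection_time_s: float) -> float:
        """Wolpaw information transfer rate (ITR) in bits per minute.

        n_targets: number of selectable classes (e.g. 12 or 40 above)
        accuracy: classification accuracy P, with 1/n_targets < P <= 1
        selection_time_s: time per selection, including any gaze-shift time
        """
        n, p = n_targets, accuracy
        if p >= 1.0:
            bits = math.log2(n)  # perfect accuracy yields log2(N) bits per selection
        else:
            bits = (math.log2(n)
                    + p * math.log2(p)
                    + (1 - p) * math.log2((1 - p) / (n - 1)))
        return bits * 60.0 / selection_time_s

    # hypothetical example: 40 targets, 90% accuracy, 1.5 s per selection
    print(round(itr_bits_per_min(40, 0.9, 1.5), 1))  # → 173.0 bits/min
    ```

    The formula rewards both accuracy and speed, which is why ITR is reported alongside raw accuracy when comparing SSVEP decoders.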

    On Tackling Fundamental Constraints in Brain-Computer Interface Decoding via Deep Neural Networks

    A Brain-Computer Interface (BCI) is a system that provides a communication and control medium between human cortical signals and external devices, with the primary aim of assisting patients who suffer from a neuromuscular disease. Despite significant recent progress in the area of BCI, there are numerous shortcomings associated with decoding electroencephalography-based BCI signals in real-world environments. These include, but are not limited to, the cumbersome nature of the equipment, complications in collecting large quantities of real-world data, the rigid experimentation protocol and the challenge of accurate signal decoding, especially in making a system work in real time. Hence, the core purpose of this work is to investigate how to improve the applicability and usability of BCI systems whilst preserving signal decoding accuracy. Recent advances in Deep Neural Networks (DNNs) make it possible for signal processing to automatically learn the best representation of a signal, contributing to improved performance even with a noisy input signal. Accordingly, this thesis focuses on the use of novel DNN-based approaches for tackling some of the key underlying constraints within the area of BCI. For example, recent technological improvements in acquisition hardware have made it possible to eliminate the pre-existing rigid experimentation procedure, albeit resulting in noisier signal capture. However, through the use of a DNN-based model, it is possible to preserve the accuracy of the predictions from the decoded signals. Moreover, this research demonstrates that by leveraging DNN-based image and signal understanding, it is feasible to facilitate real-time BCI applications in a natural environment. Additionally, the capability of DNNs to generate realistic synthetic data is shown to be a potential solution for reducing the requirement for costly data collection. Work is also performed in addressing the well-known issues regarding subject bias in BCI models by generating data with reduced subject-specific features. The overall contribution of this thesis is to address the key fundamental limitations of BCI systems: the unyielding traditional experimentation procedure, the mandatory extended calibration stage and the difficulty of sustaining accurate signal decoding in real time. These limitations lead to a fragile BCI system that is demanding to use and only suited for deployment in a controlled laboratory. Overall, the contributions of this research aim to improve the robustness of BCI systems and enable new applications for use in the real world.

    Enhancing the Decoding Performance of Steady-State Visual Evoked Potentials based Brain-Computer Interface

    Non-invasive Brain-Computer Interfaces (BCIs) based on steady-state visual evoked potential (SSVEP) responses are the most widely used BCIs. SSVEPs are responses elicited in the visual cortex when a user gazes at an object flickering at a certain frequency. In this thesis, we investigate different BCI system design parameters for enhancing the detection of SSVEP, such as changes in inter-stimulus distance (ISD), EEG channels, detection algorithms and training methodologies. Closely placed SSVEP stimuli compete for neural representations, which affects performance and limits the flexibility of the stimulus interface. In this thesis, we study the influence of changing ISD on the decoding performance of an SSVEP BCI. We propose: (i) a user-specific channel selection method and (ii) using complex spectrum features as input to a convolutional neural network (C-CNN) to overcome this challenge. We also evaluate the proposed C-CNN method in a user-independent (UI) training scenario, as this leads to a minimal-calibration system and provides the ability to run inference in a plug-and-play mode. The proposed methods were evaluated on a 7-class SSVEP dataset collected from 21 healthy participants (Dataset 1). The UI method was also assessed on a publicly available 12-class dataset collected from 10 healthy participants (Dataset 2). We compared the proposed methods with canonical correlation analysis (CCA) and CNN classification using magnitude spectrum features (M-CNN). We demonstrated that the user-specific channel set (UC) is robust to changes in ISD (viewing angles of 5.24°, 8.53°, and 12.23°) compared to the classic 3-channel set (3C - O1, O2, Oz) and 6-channel set (6C - PO3, PO4, POz, O1, O2, Oz). A significant improvement in accuracy of over 5% (p=0.001) and a reduction in variation of 56% (p=0.035) were achieved across ISDs using the UC set compared to the 3C and 6C sets. 
Secondly, the proposed C-CNN method obtained significantly higher classification accuracy across ISDs and window lengths than the M-CNN and CCA. For the closest ISD, the average accuracy of the C-CNN was over 12.8% higher than that of CCA and over 6.5% higher than that of the M-CNN across all window lengths. Thirdly, the C-CNN method achieved the highest accuracy in both user-dependent (UD) and UI training scenarios on both the 7-class and 12-class SSVEP datasets. The overall accuracies of the different methods for a 1 s window length on Dataset 1 were: CCA: 69.1±10.8%, UI-M-CNN: 73.5±16.1%, UI-C-CNN: 81.6±12.3%, UD-M-CNN: 87.8±7.6% and UD-C-CNN: 92.5±5%; and on Dataset 2: CCA: 62.7±21.5%, UI-M-CNN: 70.5±22%, UI-C-CNN: 81.6±18%, UD-M-CNN: 82.8±16.7%, and UD-C-CNN: 92.3±11.1%. In summary, using the complex spectrum features, the C-CNN likely learned to use both frequency- and phase-related information to classify SSVEP responses. The CNN can therefore be trained independently of the ISD, resulting in a model that generalizes to other ISDs. This suggests that the proposed methods are robust to changes in inter-stimulus distance for SSVEP detection and provide increased flexibility in the user interface design of SSVEP BCIs for commercial applications. Finally, the UI method provides a virtually calibration-free approach to SSVEP BCI.
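    The complex spectrum input described above can be sketched as follows; this is a minimal illustration assuming epochs shaped (channels, samples), and the band limits and FFT resolution are generic assumptions rather than the thesis's exact preprocessing:

    ```python
    import numpy as np

    def complex_spectrum_features(epoch: np.ndarray, fs: float,
                                  f_lo: float = 3.0, f_hi: float = 35.0) -> np.ndarray:
        """Concatenate real and imaginary FFT coefficients per channel.

        epoch: (n_channels, n_samples) single-trial EEG
        Returns (n_channels, 2 * n_bins): [Re | Im], preserving both magnitude
        and phase information, unlike magnitude-only (M-CNN style) features.
        """
        spec = np.fft.rfft(epoch, axis=1)                 # (n_channels, n_samples//2 + 1)
        freqs = np.fft.rfftfreq(epoch.shape[1], d=1.0 / fs)
        band = (freqs >= f_lo) & (freqs <= f_hi)          # keep SSVEP-relevant bins only
        return np.concatenate([spec[:, band].real, spec[:, band].imag], axis=1)

    # hypothetical usage: 3-channel, 1 s epoch at 256 Hz (1 Hz bins, 33 bins in 3-35 Hz)
    x = np.random.randn(3, 256)
    feat = complex_spectrum_features(x, fs=256.0)
    print(feat.shape)  # → (3, 66)
    ```

    Retaining the imaginary part is what gives the network access to phase, which is plausibly why the C-CNN outperforms the magnitude-spectrum variant.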

    Data Analytics in Steady-State Visual Evoked Potential-based Brain-Computer Interface: A Review

    Electroencephalography (EEG) has been widely applied in brain-computer interfaces (BCIs), which enable paralyzed people to directly communicate with and control external devices, owing to its portability, high temporal resolution, ease of use and low cost. Of the various EEG paradigms, the steady-state visual evoked potential (SSVEP)-based BCI, which uses multiple visual stimuli (such as LEDs or boxes on a computer screen) flickering at different frequencies, has been widely explored in the past decades due to its fast communication rate and high signal-to-noise ratio. In this paper, we review current research in SSVEP-based BCI, focusing on the data analytics that enables continuous, accurate detection of SSVEPs and thus a high information transfer rate. The main technical challenges, including signal pre-processing, spectrum analysis, signal decomposition, spatial filtering (in particular canonical correlation analysis and its variations), and classification techniques, are described in this paper. Research challenges and opportunities in spontaneous brain activities, mental fatigue, transfer learning, as well as hybrid BCI, are also discussed.
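    The canonical correlation analysis detector reviewed above is the classic training-free baseline for SSVEP: the EEG is correlated against sine/cosine reference signals at each candidate frequency. A minimal numpy sketch, with the stimulus frequencies, harmonic count and window length chosen purely for illustration:

    ```python
    import numpy as np

    def canonical_corr(X: np.ndarray, Y: np.ndarray) -> float:
        """Largest canonical correlation between the column spaces of X and Y."""
        Xc = X - X.mean(axis=0)
        Yc = Y - Y.mean(axis=0)
        Qx, _ = np.linalg.qr(Xc)
        Qy, _ = np.linalg.qr(Yc)
        return float(np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0])

    def ssvep_reference(freq: float, fs: float, n_samples: int,
                        n_harmonics: int = 3) -> np.ndarray:
        """Sine/cosine reference matrix (n_samples, 2 * n_harmonics) for one frequency."""
        t = np.arange(n_samples) / fs
        cols = []
        for h in range(1, n_harmonics + 1):
            cols += [np.sin(2 * np.pi * h * freq * t), np.cos(2 * np.pi * h * freq * t)]
        return np.stack(cols, axis=1)

    def cca_detect(eeg: np.ndarray, stim_freqs, fs: float) -> float:
        """Return the stimulus frequency whose reference best correlates with the EEG.

        eeg: (n_samples, n_channels) multi-channel epoch
        """
        scores = [canonical_corr(eeg, ssvep_reference(f, fs, eeg.shape[0]))
                  for f in stim_freqs]
        return stim_freqs[int(np.argmax(scores))]

    # illustrative check: a noisy 10 Hz oscillation should be detected as 10 Hz
    fs, n = 256.0, 512
    t = np.arange(n) / fs
    eeg = np.stack([np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(n)
                    for _ in range(3)], axis=1)
    print(cca_detect(eeg, [8.0, 10.0, 12.0], fs))
    ```

    Because the references are fixed analytic signals, no calibration data is needed, which is precisely the property the extensions surveyed in the paper try to improve upon.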

    EEG-based brain-computer interfaces using motor-imagery: techniques and challenges.

    Electroencephalography (EEG)-based brain-computer interfaces (BCIs), particularly those using motor-imagery (MI) data, have the potential to become groundbreaking technologies in both clinical and entertainment settings. MI data is generated when a subject imagines the movement of a limb. This paper reviews state-of-the-art signal processing techniques for MI EEG-based BCIs, with a particular focus on the feature extraction, feature selection and classification techniques used. It also summarizes the main applications of EEG-based BCIs, particularly those based on MI data, and finally presents a detailed discussion of the most prevalent challenges impeding the development and commercialization of EEG-based BCIs.
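    Band power in the mu and beta rhythms is one of the common feature-extraction techniques in MI pipelines like those reviewed here; the sketch below is a generic illustration (the band limits and plain FFT periodogram are assumed conventions, not a specific method from the paper):

    ```python
    import numpy as np

    # Typical motor-imagery rhythms (assumed here; exact bands vary by study):
    MU_BAND = (8.0, 12.0)     # mu rhythm over sensorimotor cortex
    BETA_BAND = (13.0, 30.0)  # beta rhythm

    def band_power(epoch: np.ndarray, fs: float, band: tuple) -> np.ndarray:
        """Per-channel spectral power in a frequency band via the FFT periodogram.

        epoch: (n_channels, n_samples)
        """
        spec = np.fft.rfft(epoch, axis=1)
        freqs = np.fft.rfftfreq(epoch.shape[1], d=1.0 / fs)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return (np.abs(spec[:, mask]) ** 2).sum(axis=1) / epoch.shape[1]

    def mi_features(epoch: np.ndarray, fs: float) -> np.ndarray:
        """Concatenate log mu- and beta-band power per channel (a common MI feature)."""
        return np.log(np.concatenate([band_power(epoch, fs, MU_BAND),
                                      band_power(epoch, fs, BETA_BAND)]))

    # hypothetical usage: 2-channel, 2 s epoch at 128 Hz
    x = np.random.randn(2, 256)
    print(mi_features(x, fs=128.0).shape)  # → (4,)
    ```

    Such features capture the event-related desynchronization of mu/beta rhythms during imagined movement, which a downstream classifier (e.g. LDA or SVM, as typically surveyed) can then separate.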

    Robust asynchronous control of ERP-based brain-computer interfaces using deep learning

    Background and Objective. Brain-computer interfaces (BCIs) based on event-related potentials (ERPs) are a promising technology for alternative and augmentative communication in an assistive context. However, most approaches to date are synchronous, requiring the intervention of a supervisor whenever the user wishes to turn their attention away from the BCI system. To bring these BCIs into real-life applications, robust asynchronous control of the system is required through monitoring of user attention. Despite the great importance of this limitation, which prevents the deployment of these systems outside the laboratory, it is often overlooked in research articles. This study aimed to propose a novel method to solve this problem, taking advantage of deep learning for the first time in this context to overcome the limitations of previous strategies based on hand-crafted features. Methods. The proposed method, based on EEG-Inception, a novel deep convolutional neural network, divides the problem into two stages to achieve asynchronous control: (i) the model detects the user's control state, and (ii) it decodes the command only if the user is attending to the stimuli. Additionally, we used transfer learning to reduce the calibration time, even exploring a calibration-less approach. Results. Our method was evaluated with 22 healthy subjects, analyzing the impact of the calibration time and the number of stimulation sequences on the system's performance. For the control-state detection stage, we report average accuracies above 91% using only 1 stimulation sequence and 30 calibration trials, reaching a maximum of 96.95% with 15 sequences. Moreover, our calibration-less approach also achieved suitable results, with a maximum accuracy of 89.36%, showing the benefits of transfer learning. For the overall asynchronous system, which includes both stages, the maximum information transfer rate was 35.54 bpm, a suitable value for high-speed communication. 
Conclusions. The proposed strategy achieved higher performance with fewer calibration trials and stimulation sequences than former approaches, representing a promising step forward that paves the way for more practical applications of ERP-based spellers. Funding: Ministerio de Ciencia, Innovación y Universidades - Agencia Estatal de Investigación (grants PID2020-115468RB-I00 and RTC2019-007350-1); Comisión Europea - Fondo Europeo de Desarrollo Regional (cooperation programme Interreg V-A Spain-Portugal POCTEP 2014–2020).
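    The two-stage scheme described above, control-state detection gating command decoding, can be sketched in outline; the threshold value, stub models and their interfaces below are hypothetical stand-ins for the trained EEG-Inception networks, not the paper's implementation:

    ```python
    from typing import Callable, Optional
    import numpy as np

    def asynchronous_decode(epoch: np.ndarray,
                            control_state_model: Callable[[np.ndarray], float],
                            command_model: Callable[[np.ndarray], int],
                            attention_threshold: float = 0.5) -> Optional[int]:
        """Two-stage asynchronous ERP decoding.

        Stage (i): estimate the probability that the user is attending to the stimuli.
        Stage (ii): decode a command only if that probability clears the threshold;
        otherwise return None ("no control"), so unattended EEG produces no command.
        """
        p_control = control_state_model(epoch)
        if p_control < attention_threshold:
            return None  # user not attending: suppress any output
        return command_model(epoch)

    # hypothetical stand-ins for the two trained networks
    control_stub = lambda e: float(e.std() > 1.0)    # pretend attention raises variance
    command_stub = lambda e: int(np.argmax(e.mean(axis=1)))

    idle = np.random.randn(8, 128) * 0.1             # low-variance "idle" epoch
    print(asynchronous_decode(idle, control_stub, command_stub))  # None: no command issued
    ```

    The design choice is that false positives in stage (i) are what make a speller unusable in daily life, so the gate runs before any command is ever emitted.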

    Design and Evaluation of a Hardware System for Online Signal Processing within Mobile Brain-Computer Interfaces

    Brain-Computer Interfaces (BCIs) are innovative systems that enable direct communication between the brain and external devices. These interfaces have emerged as a transformative solution not only for individuals with neurological injuries, but also for a broader range of people, encompassing both medical and non-medical applications. Historically, the challenge that neurological injuries remain static after an initial recovery phase has driven researchers to explore innovative avenues.
Since the 1970s, BCIs have been at the forefront of these efforts. As research has progressed, BCI applications have expanded, showing potential in a wide range of settings, including those for less severely impaired individuals (e.g. in the context of hearing aids) and completely healthy individuals (e.g. in the entertainment industry). However, the future of BCI research also depends on the availability of reliable BCI hardware that can be used in the real world. The CereBridge system designed and implemented in this work represents a significant leap forward in brain-computer interface technology by integrating all EEG signal acquisition and processing hardware into a single mobile system. The processing hardware architecture is centered on an FPGA with an ARM Cortex-M3 within a heterogeneous IC, ensuring flexibility and efficiency in EEG signal processing. The modular design of the system, consisting of three individual boards, allows it to be adapted to different requirements. With a focus on full mobility, the complete system is mounted on the scalp, operates autonomously, requires no external interaction, and weighs approximately 56 g including the 16-channel EEG sensors. The proposed customizable dataflow concept facilitates the exploration and seamless integration of algorithms, increasing the system's flexibility. This is further underscored by the ability to apply different algorithms to recorded EEG data to meet different application goals. High-Level Synthesis (HLS) was used to port algorithms to the FPGA, accelerating the algorithm development process and enabling rapid implementation of algorithm variants. Evaluations have shown that the CereBridge system can integrate the complete signal processing chain required for various BCI applications. Furthermore, it can operate continuously for more than 31 hours on an 1800 mAh battery, making it a viable solution for long-term mobile EEG recording and real-world BCI studies. 
Compared to existing research platforms, the CereBridge system offers unprecedented performance and features for a mobile BCI. It not only meets the relevant requirements for a mobile BCI system, but also paves the way for the rapid transition of algorithms from the laboratory to real-world applications. In essence, this work provides a comprehensive blueprint for the development and implementation of a state-of-the-art mobile EEG-based BCI system, setting a new benchmark in BCI hardware for real-world applicability.
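    The reported runtime and battery capacity imply an average system current draw, which a quick back-of-the-envelope check makes explicit (using the stated 1800 mAh capacity and 31 h runtime; usable-capacity derating and peak currents are ignored):

    ```python
    capacity_mah = 1800.0   # stated battery capacity
    runtime_h = 31.0        # stated continuous runtime
    avg_current_ma = capacity_mah / runtime_h
    print(round(avg_current_ma, 1))  # → 58.1 mA average draw for the whole system
    ```

    An average draw of roughly 58 mA for acquisition plus FPGA-based processing is what makes the claimed all-day mobile operation plausible.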