Data Analytics in Steady-State Visual Evoked Potential-based Brain-Computer Interface: A Review
Electroencephalography (EEG) has been widely applied in brain-computer interfaces (BCIs), which enable paralyzed people to directly communicate with and control external devices, due to its portability, high temporal resolution, ease of use, and low cost. Among the various EEG paradigms, the steady-state visual evoked potential (SSVEP)-based BCI, which uses multiple visual stimuli (such as LEDs or boxes on a computer screen) flickering at different frequencies, has been widely explored over the past decades due to its fast communication rate and high signal-to-noise ratio. In this paper, we review current research in SSVEP-based BCIs, focusing on the data analytics that enables continuous, accurate detection of SSVEPs and thus a high information transfer rate. The main technical challenges, including signal pre-processing, spectrum analysis, signal decomposition, spatial filtering (in particular, canonical correlation analysis and its variations), and classification techniques, are described. Research challenges and opportunities in spontaneous brain activity, mental fatigue, transfer learning, and hybrid BCIs are also discussed.
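The canonical correlation analysis (CCA) detection scheme surveyed in the review can be sketched in a few lines: compare the multichannel recording against sine-cosine reference templates at each candidate stimulus frequency and pick the best match. This is a minimal NumPy illustration, not any specific system from the review; the sampling rate, harmonic count, and noise level in the demo are arbitrary choices.

```python
import numpy as np

def cca_corr(X, Y):
    # Largest canonical correlation between the column spaces of X and Y,
    # computed via QR orthonormalization and an SVD.
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def sine_cosine_reference(freq, fs, n_samples, n_harmonics=2):
    # Reference template: sines and cosines at the stimulus frequency
    # and its harmonics, one column each.
    t = np.arange(n_samples) / fs
    return np.column_stack(
        [f(2 * np.pi * h * freq * t)
         for h in range(1, n_harmonics + 1)
         for f in (np.sin, np.cos)])

def detect_ssvep(eeg, stim_freqs, fs):
    # eeg: (n_samples, n_channels). Pick the stimulus frequency whose
    # reference template is most correlated with the recording.
    scores = [cca_corr(eeg, sine_cosine_reference(f, fs, len(eeg)))
              for f in stim_freqs]
    return stim_freqs[int(np.argmax(scores))]

# Demo: a noisy 10 Hz oscillation across 4 channels, 2 s at 250 Hz.
rng = np.random.default_rng(0)
fs, n = 250, 500
t = np.arange(n) / fs
eeg = np.column_stack(
    [np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(n)
     for _ in range(4)])
print(detect_ssvep(eeg, [8.0, 10.0, 12.0], fs))  # picks 10.0 here
```

The same score vector is what the review's CCA variants refine, e.g. by learning spatial filters from calibration data instead of using raw templates.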
Cross-subject dual-domain fusion network with task-related and task-discriminant component analysis enhancing one-shot SSVEP classification
This study addresses the significant challenge of developing efficient
decoding algorithms for classifying steady-state visual evoked potentials
(SSVEPs) in scenarios characterized by extreme scarcity of calibration data,
where only one calibration trial is available for each stimulus target. To tackle
this problem, we introduce a novel cross-subject dual-domain fusion network
(CSDuDoFN) incorporating task-related and task-discriminant component analysis
(TRCA and TDCA) for one-shot SSVEP classification. The CSDuDoFN framework is
designed to comprehensively transfer information from source subjects, while
TRCA and TDCA are employed to exploit the single available calibration of the
target subject. Specifically, we develop multi-reference least-squares
transformation (MLST) to map data from both source subjects and the target
subject into the domain of sine-cosine templates, thereby mitigating
inter-individual variability and benefiting transfer learning. Subsequently,
the transformed data in the sine-cosine templates domain and the original
domain data are separately utilized to train a convolutional neural network
(CNN) model, with the adequate fusion of their feature maps occurring at
distinct network layers. To further capitalize on the calibration of the target
subject, source aliasing matrix estimation (SAME) data augmentation is
incorporated into the training process of the ensemble TRCA (eTRCA) and TDCA
models. Ultimately, the outputs of the CSDuDoFN, eTRCA, and TDCA are combined
for SSVEP classification. The effectiveness of our proposed approach is
comprehensively evaluated on three publicly available SSVEP datasets, achieving
the best performance on two datasets and competitive performance on one. This
underscores the potential for integrating brain-computer interface (BCI) into
daily life.
Comment: 10 pages, 6 figures, and 3 tables
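The abstract does not spell out the MLST computation, but the least-squares idea behind mapping trials into the sine-cosine template domain can be sketched as follows. This is a hedged, single-reference illustration; `lst_transform` and its shapes are our own naming for the generic technique, not the paper's API.

```python
import numpy as np

def lst_transform(X, Y):
    # X: (n_channels, n_samples) EEG trial; Y: (k, n_samples) sine-cosine
    # template. Solve min_P ||P @ X - Y||_F and return the projected trial,
    # i.e. the trial expressed in the template domain.
    W, *_ = np.linalg.lstsq(X.T, Y.T, rcond=None)  # W: (n_channels, k)
    return W.T @ X

# Demo: a trial that is an invertible mixture of the template
# is mapped back onto the template exactly.
fs, n = 250, 250
t = np.arange(n) / fs
Y = np.vstack([np.sin(2 * np.pi * 10 * t), np.cos(2 * np.pi * 10 * t)])
X = np.array([[2.0, 0.0], [1.0, 1.0]]) @ Y   # mixed "EEG" channels
print(np.allclose(lst_transform(X, Y), Y))   # → True
```

Projecting both source-subject and target-subject trials through such a mapping is what lets the paper treat the sine-cosine domain as a shared space that mitigates inter-individual variability.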
An Unsupervised Channel Selection Method for SSVEP-based Brain Computer Interfaces
Brain-computer interfaces (BCIs) provide an alternative communication channel for people with motor deficits that prevent normal communication. The underlying premise of a BCI is that a neuroimaging process such as electroencephalography (EEG) can be used to measure the user’s brain activity as signals. The obtained signals are analyzed to determine the user’s intended actions and a computer system can be used to replace voluntary muscle activity as a means of communication. The information transfer rate (ITR) of an algorithm used for determining the user’s intentions greatly affects the perceived practicality of the BCI system. Such algorithms are divided into two main categories, supervised and unsupervised. While the former achieves higher ITR, the latter is most useful when the user is unable to be involved in the calibration process of the BCI system.
In this paper, we introduce an unsupervised algorithm for steady-state visual evoked potential (SSVEP)-based BCIs. Our algorithm works in three steps: (i) it selects multiple sets of electroencephalogram channels, then (ii) applies a feature extraction method to each of these channel sets. As its final step, (iii) it combines the features extracted from these channel sets by performing a majority vote, yielding a classification. We evaluate the ITR attained by our proposed method on a dataset of 35 subjects using three different feature extraction methods, and compare the results to existing methods in the literature that use a single channel set without a majority vote. The proposed method shows an improvement for at least 7 subjects.
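The three-step procedure can be sketched as follows, with `classify_fn` standing in for whichever feature-extraction-plus-classification routine is applied per channel set; the channel subsets and the toy classifier in the demo are purely illustrative.

```python
import numpy as np
from collections import Counter

def classify_with_channel_sets(eeg, channel_sets, classify_fn):
    # Steps (i)/(ii): classify each pre-selected channel subset
    # independently (eeg is samples x channels).
    votes = [classify_fn(eeg[:, list(chans)]) for chans in channel_sets]
    # Step (iii): combine the per-set labels with a majority vote.
    return Counter(votes).most_common(1)[0][0]

# Toy demo: a stand-in classifier that labels a subset by its mean value.
eeg = np.array([[1.0, 9.0], [1.0, 9.0], [1.0, 9.0]])
sets = [(0,), (0,), (1,)]
print(classify_with_channel_sets(eeg, sets,
                                 lambda x: round(float(x.mean()))))  # → 1
```

Because each channel set votes independently, a poor subset can be outvoted, which is the robustness argument behind the ensemble over a single fixed channel set.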
A Transformer-based deep neural network model for SSVEP classification
Steady-state visual evoked potential (SSVEP) is one of the most commonly used
control signals in brain-computer interface (BCI) systems. However, the
conventional spatial filtering methods for SSVEP classification depend heavily
on subject-specific calibration data, so methods that can reduce the demand
for calibration data are urgently needed. In recent years, developing methods
that work in the inter-subject classification scenario has become a promising
new direction. The Transformer, a popular deep learning model, has shown
excellent performance and has been used in EEG signal classification tasks.
In this study, we therefore propose a Transformer-based deep learning model
for SSVEP classification in the inter-subject scenario, termed SSVEPformer,
which is the first application of the Transformer to SSVEP classification.
Inspired by previous studies, the model adopts the frequency spectrum of SSVEP
data as input and explores spectral and spatial domain information for
classification. Furthermore, to fully utilize the harmonic information, an
extended SSVEPformer based on filter bank technology (FB-SSVEPformer) is
proposed to further improve the classification performance. Experiments were
conducted on two open datasets (Dataset 1: 10 subjects, 12-class task;
Dataset 2: 35 subjects, 40-class task) in the inter-subject classification
scenario. The experimental results show that the proposed models achieve
better classification accuracy and information transfer rates than other
baseline methods. The proposed model validates the feasibility of
Transformer-based deep learning models for the SSVEP classification task and
could serve as a potential means to alleviate the calibration procedure in
practical applications of SSVEP-based BCI systems.
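The spectrum input the model adopts can be illustrated with a small helper that keeps the FFT bins of interest and concatenates their real and imaginary parts per channel. The band limits and shapes here are assumptions for illustration, not the paper's exact preprocessing.

```python
import numpy as np

def spectrum_input(eeg, fs, f_lo=8.0, f_hi=64.0):
    # eeg: (n_channels, n_samples). Keep the FFT bins in [f_lo, f_hi)
    # and concatenate real and imaginary parts per channel, a common
    # way to present SSVEP data to a spectrum-domain network.
    spec = np.fft.rfft(eeg, axis=-1)
    freqs = np.fft.rfftfreq(eeg.shape[-1], d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs < f_hi)
    return np.concatenate([spec[:, band].real, spec[:, band].imag], axis=-1)

# 1 s of 8-channel data at 250 Hz: 1 Hz bins, 56 kept in [8, 64) Hz,
# so 112 features per channel.
x = np.zeros((8, 250))
print(spectrum_input(x, 250).shape)  # → (8, 112)
```

A filter-bank variant in the spirit of FB-SSVEPformer would apply the model to several sub-band versions of the signal and fuse the resulting scores.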
High-performance cVEP-BCI under minimal calibration
The ultimate goal of brain-computer interfaces (BCIs) based on visual
modulation paradigms is to achieve high-speed performance without the burden of
extensive calibration. Code-modulated visual evoked potential-based BCIs
(cVEP-BCIs) modulated by broadband white noise (WN) offer various advantages,
including increased communication speed, expanded encoding target capabilities,
and enhanced coding flexibility. However, the complexity of the
spatial-temporal patterns under broadband stimuli necessitates extensive
calibration for effective target identification in cVEP-BCIs. Consequently, the
information transfer rate (ITR) of cVEP-BCI under limited calibration usually
stays around 100 bits per minute (bpm), significantly lagging behind
state-of-the-art steady-state visual evoked potential-based BCIs (SSVEP-BCIs),
which achieve rates above 200 bpm. To enhance the performance of cVEP-BCIs with
minimal calibration, we devised an efficient calibration stage involving a
brief single-target flickering, lasting less than a minute, to extract
generalizable spatial-temporal patterns. Leveraging the calibration data, we
developed two complementary methods to construct cVEP temporal patterns: the
linear modeling method based on the stimulus sequence and the transfer learning
techniques using cross-subject data. As a result, we achieved an ITR of up to
250 bpm with under a minute of calibration, comparable to state-of-the-art
SSVEP paradigms. In summary, our work significantly improves cVEP performance
under few-shot learning, which is expected to expand the practicality and
usability of cVEP-BCIs.
Comment: 35 pages, 5 figures
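The "linear modeling method based on the stimulus sequence" is, in the cVEP literature, commonly a superposition model: the predicted response is the stimulus code convolved with the response to a single flash. A minimal sketch under that assumption follows; the code and impulse response below are made up for illustration, not taken from the paper.

```python
import numpy as np

def cvep_template(code, impulse_response):
    # Linear superposition: the predicted cVEP is the binary stimulus
    # sequence convolved with the single-flash response, truncated to
    # the code length so templates stay aligned with the stimulus.
    return np.convolve(code, impulse_response)[: len(code)]

code = np.array([1, 0, 0, 1, 0])   # toy white-noise stimulus sequence
ir = np.array([1.0, 0.5])          # made-up single-flash response
print(cvep_template(code, ir))     # prints the template [1, 0.5, 0, 1, 0.5]
```

Under this model, estimating the impulse response from a brief single-target calibration is enough to synthesize templates for every other code, which is what keeps the calibration stage under a minute.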