Assessing the quality of steady-state visual-evoked potentials for moving humans using a mobile electroencephalogram headset.
Recent advances in mobile electroencephalogram (EEG) systems, featuring non-prep dry electrodes and wireless telemetry, have enabled and promoted applications of mobile brain-computer interfaces (BCIs) in daily life. Because the brain may behave differently while people are actively situated in ecologically valid environments versus highly controlled laboratory environments, it remains unclear how well current laboratory-oriented BCI demonstrations can be translated into operational BCIs for users with naturalistic movements. Understanding the inherent links between natural human behaviors and brain activities is the key to ensuring the applicability and stability of mobile BCIs. This study aims to assess the quality of steady-state visual-evoked potentials (SSVEPs), one of the most promising channels for functioning BCI systems, recorded using a mobile EEG system under challenging recording conditions, e.g., walking. To systematically explore the effects of walking locomotion on SSVEPs, this study instructed subjects to stand or walk on a treadmill running at speeds of 1, 2, and 3 miles per hour (MPH) while concurrently perceiving visual flickers (11 and 12 Hz). Empirical results showed that SSVEP amplitude tended to deteriorate when subjects switched from standing to walking. This SSVEP suppression could be attributed to the walking locomotion, leading to distinctly deteriorated SSVEP detectability from standing (84.87 ± 13.55%) to walking (1 MPH: 83.03 ± 13.24%, 2 MPH: 79.47 ± 13.53%, and 3 MPH: 75.26 ± 17.89%). These findings not only demonstrate the applicability and limitations of SSVEPs recorded from freely behaving humans in realistic environments, but also provide useful methods and techniques for advancing the translation of BCI technology from laboratory demonstrations to practical applications.
Frequency Recognition in SSVEP-based BCI using Multiset Canonical Correlation Analysis
Canonical correlation analysis (CCA) has been one of the most popular methods for frequency recognition in steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs). Despite its efficiency, a potential problem is that the pre-constructed sine-cosine waves used as reference signals in the CCA method often do not yield optimal recognition accuracy, because they lack features of the real EEG data. To address this problem, this study proposes a novel method based on multiset canonical correlation analysis (MsetCCA) to optimize the reference signals used in the CCA method for SSVEP frequency recognition. The MsetCCA method learns multiple linear transforms that implement joint spatial filtering to maximize the overall correlation among canonical variates, and hence extracts SSVEP common features from multiple sets of EEG data recorded at the same stimulus frequency. The optimized reference signals are formed by combining the common features and are derived entirely from training data. An experimental study with EEG data from ten healthy subjects demonstrates that the MsetCCA method improves SSVEP frequency recognition accuracy in comparison with the CCA method and two other competing methods (multiway CCA (MwayCCA) and phase-constrained CCA (PCCA)), especially for a small number of channels and a short time window. This superiority indicates that the proposed MsetCCA method is a promising new candidate for frequency recognition in SSVEP-based BCIs.
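The standard CCA baseline that MsetCCA improves on can be sketched compactly: for each candidate stimulus frequency, correlate the multichannel EEG segment with a sine-cosine reference (fundamental plus harmonics) and pick the frequency with the largest canonical correlation. A minimal NumPy sketch, assuming function names and simulation parameters of our own (not from the paper):

```python
import numpy as np

def canonical_corr(X, Y):
    """Largest canonical correlation between column spaces of X and Y.

    X: (samples, channels) EEG segment; Y: (samples, 2*harmonics) reference.
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    # canonical correlations are the singular values of Qx^T Qy
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def cca_recognize(X, freqs, fs, n_harmonics=2):
    """Pick the stimulus frequency whose sine-cosine reference fits best."""
    t = np.arange(X.shape[0]) / fs
    best_f, best_r = None, -1.0
    for f in freqs:
        Y = np.column_stack([fn(2 * np.pi * h * f * t)
                             for h in range(1, n_harmonics + 1)
                             for fn in (np.sin, np.cos)])
        r = canonical_corr(X, Y)
        if r > best_r:
            best_f, best_r = f, r
    return best_f, best_r
```

With a simulated noisy 12 Hz response across two channels, `cca_recognize(X, [11.0, 12.0], fs)` should select 12 Hz; MsetCCA replaces the fixed sine-cosine `Y` above with references learned from training EEG.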
Data Analytics in Steady-State Visual Evoked Potential-based Brain-Computer Interface: A Review
Electroencephalography (EEG) has been widely applied to brain-computer interfaces (BCIs), which enable paralyzed people to directly communicate with and control external devices, owing to its portability, high temporal resolution, ease of use, and low cost. Of the various EEG paradigms, the steady-state visual evoked potential (SSVEP)-based BCI system, which uses multiple visual stimuli (such as LEDs or boxes on a computer screen) flickering at different frequencies, has been widely explored in the past decades due to its fast communication rate and high signal-to-noise ratio. In this paper, we review current research in SSVEP-based BCI, focusing on the data analytics that enables continuous, accurate detection of SSVEPs and thus a high information transfer rate. The main technical challenges, including signal pre-processing, spectrum analysis, signal decomposition, spatial filtering (in particular, canonical correlation analysis and its variations), and classification techniques, are described in this paper. Research challenges and opportunities in spontaneous brain activities, mental fatigue, transfer learning, and hybrid BCI are also discussed.
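The information transfer rate that these reviews optimize for is usually the Wolpaw ITR, which follows directly from the number of targets, the classification accuracy, and the time per selection. A small sketch (the helper name is ours):

```python
import math

def itr_bits_per_min(n_targets, accuracy, trial_sec):
    """Wolpaw information transfer rate for an N-target SSVEP speller.

    n_targets: number of selectable targets; accuracy: P in (0, 1];
    trial_sec: seconds per selection, including any gaze-shift time.
    """
    n, p = n_targets, accuracy
    if p <= 1.0 / n:
        return 0.0  # at or below chance level: no information transferred
    bits = math.log2(n)
    if p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_sec
```

For example, a 40-target speller at 90% accuracy with 1 s per selection yields about 259 bits/min, which is why short, accurate detection windows matter so much for SSVEP spellers.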
A Robust and Self-Paced BCI System Based on a Four Class SSVEP Paradigm: Algorithms and Protocols for a High-Transfer-Rate Direct Brain Communication
In this paper, we present, with particular focus on the adopted processing and identification chain and on protocol-related solutions, a complete self-paced brain-computer interface system based on a 4-class steady-state visual evoked potential (SSVEP) paradigm. The proposed system incorporates an automated spatial filtering technique centred on the common spatial patterns (CSP) method, an autoscaled and effective signal feature extraction stage that provides unsupervised biofeedback, and a robust self-paced classifier based on discriminant analysis theory. The adopted operating protocol is structured into screening, training, and testing phases aimed at collecting user-specific information regarding the best stimulation frequencies, optimal source identification, and overall processing chain calibration in only a few minutes. The system, validated on 11 healthy and pathological subjects, has proven reliable in terms of achievable communication speed (up to 70 bit/min) and very robust against false positive identifications.
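The abstract does not spell out the CSP configuration; as an illustration, the textbook two-class CSP filter computation can be sketched as follows (a minimal NumPy version under our own assumptions, not the authors' multi-class implementation):

```python
import numpy as np

def csp_filters(X1, X2, n_pairs=1):
    """Two-class common spatial pattern (CSP) filters.

    X1, X2: EEG trials of shape (trials, channels, samples).
    Returns 2*n_pairs spatial filters (rows): the first n_pairs maximize
    class-2 variance, the last n_pairs maximize class-1 variance.
    """
    def avg_cov(X):
        return np.mean([np.cov(trial) for trial in X], axis=0)

    C1, C2 = avg_cov(X1), avg_cov(X2)
    # whitening transform for the composite covariance C1 + C2
    evals, evecs = np.linalg.eigh(C1 + C2)
    P = evecs @ np.diag(1.0 / np.sqrt(np.maximum(evals, 1e-12))) @ evecs.T
    # eigenvectors of the whitened class-1 covariance (ascending eigenvalue)
    d, V = np.linalg.eigh(P @ C1 @ P)
    W = (P @ V).T  # rows are spatial filters
    return np.vstack([W[:n_pairs], W[-n_pairs:]])
```

Projecting trials through the extreme filters maximizes the variance ratio between the two classes, which is what makes the log-variance of the filtered signals a useful feature for the downstream discriminant-analysis classifier.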
Cross-subject dual-domain fusion network with task-related and task-discriminant component analysis enhancing one-shot SSVEP classification
This study addresses the significant challenge of developing efficient
decoding algorithms for classifying steady-state visual evoked potentials
(SSVEPs) in scenarios characterized by extreme scarcity of calibration data,
where only one calibration trial is available for each stimulus target. To tackle
this problem, we introduce a novel cross-subject dual-domain fusion network
(CSDuDoFN) incorporating task-related and task-discriminant component analysis
(TRCA and TDCA) for one-shot SSVEP classification. The CSDuDoFN framework is
designed to comprehensively transfer information from source subjects, while
TRCA and TDCA are employed to exploit the single available calibration of the
target subject. Specifically, we develop multi-reference least-squares
transformation (MLST) to map data from both source subjects and the target
subject into the domain of sine-cosine templates, thereby mitigating
inter-individual variability and benefiting transfer learning. Subsequently,
the transformed data in the sine-cosine templates domain and the original
domain data are separately utilized to train a convolutional neural network
(CNN) model, with the adequate fusion of their feature maps occurring at
distinct network layers. To further capitalize on the calibration of the target
subject, source aliasing matrix estimation (SAME) data augmentation is
incorporated into the training process of the ensemble TRCA (eTRCA) and TDCA
models. Ultimately, the outputs of the CSDuDoFN, eTRCA, and TDCA are combined
for SSVEP classification. The effectiveness of our proposed approach is
comprehensively evaluated on three publicly available SSVEP datasets, achieving
the best performance on two datasets and competitive performance on one. This
underscores the potential for integrating brain-computer interfaces (BCIs) into daily life.
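Of the components above, TRCA admits a compact formulation: for one stimulus class, find the spatial filter that maximizes the covariance of the filtered signal across trials. A minimal single-class sketch in the standard TRCA form (not the paper's ensemble eTRCA or TDCA variants):

```python
import numpy as np

def trca_filter(X):
    """Task-related component analysis (TRCA) spatial filter, one class.

    X: (trials, channels, samples) EEG trials of a single stimulus.
    Returns w (channels,), the filter maximizing inter-trial covariance
    of the spatially filtered signal.
    """
    X = X - X.mean(axis=2, keepdims=True)   # center each trial per channel
    Q = sum(Xi @ Xi.T for Xi in X)          # within-trial covariance term
    X_sum = X.sum(axis=0)
    S = X_sum @ X_sum.T - Q                 # sum of cross-trial covariances
    evals, evecs = np.linalg.eig(np.linalg.solve(Q, S))
    return np.real(evecs[:, np.argmax(np.real(evals))])
```

In a full system one such filter (or filter bank) is learned per stimulus frequency, and a test trial is assigned to the class whose filtered training template it correlates with best.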
Source Free Domain Adaptation of a DNN for SSVEP-based Brain-Computer Interfaces
This paper presents a source free domain adaptation method for steady-state
visually evoked potential (SSVEP) based brain-computer interface (BCI)
spellers. SSVEP-based BCI spellers help individuals experiencing speech
difficulties, enabling them to communicate at a fast rate. However, achieving a
high information transfer rate (ITR) in the current methods requires an
extensive calibration period before using the system, leading to discomfort for
new users. We address this issue by proposing a method that adapts the deep
neural network (DNN) pre-trained on data from source domains (participants of
previous experiments conducted for labeled data collection), using only the
unlabeled data of the new user (target domain). This adaptation is achieved by
minimizing our proposed custom loss function composed of self-adaptation and
local-regularity loss terms. The self-adaptation term uses the pseudo-label
strategy, while the novel local-regularity term exploits the data structure and
forces the DNN to assign the same labels to adjacent instances. Our method
achieves striking ITRs of 201.15 bits/min and 145.02 bits/min on the benchmark and
BETA datasets, respectively, and outperforms state-of-the-art alternative
techniques. Our approach alleviates user discomfort and shows excellent
identification performance, so it would potentially contribute to the broader
application of SSVEP-based BCI systems in everyday life.
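The exact loss is defined in the paper; the local-regularity idea, that adjacent instances in feature space should receive the same labels, can be illustrated with a small NumPy sketch (the function name, distance choice, and neighbourhood size are our assumptions):

```python
import numpy as np

def local_regularity_loss(features, probs, k=3):
    """Illustrative local-regularity penalty: nearby instances in feature
    space should receive similar predicted label distributions.

    features: (n, d) embeddings; probs: (n, c) softmax outputs.
    Returns the mean cross-entropy between each instance's prediction
    and those of its k nearest neighbours.
    """
    n = features.shape[0]
    # pairwise squared Euclidean distances, self-distance masked out
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    eps, loss = 1e-12, 0.0
    for i in range(n):
        nbrs = np.argsort(d2[i])[:k]
        # cross-entropy of neighbours' predictions against instance i's
        loss += -(probs[nbrs] * np.log(probs[i] + eps)).sum(axis=1).mean()
    return loss / n
```

Minimizing such a term pushes the network toward assigning consistent labels within local clusters of the unlabeled target-domain data, complementing the pseudo-label (self-adaptation) term.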
A Transformer-based deep neural network model for SSVEP classification
Steady-state visual evoked potentials (SSVEPs) are among the most commonly used control signals in brain-computer interface (BCI) systems. However, conventional spatial filtering methods for SSVEP classification depend heavily on subject-specific calibration data, making methods that can reduce this calibration demand an urgent need. In recent years, developing methods that work in the inter-subject classification scenario has become a promising new direction. The Transformer, a popular deep learning model with excellent performance, has already been used for EEG signal classification tasks. Therefore, in this study, we propose a Transformer-based deep learning model for SSVEP classification in the inter-subject scenario, termed SSVEPformer, which is the first application of the Transformer to SSVEP classification. Inspired by previous
studies, the model adopts the frequency spectrum of SSVEP data as input, and
explores the spectral and spatial domain information for classification.
Furthermore, to fully utilize the harmonic information, an extended SSVEPformer
based on the filter bank technology (FB-SSVEPformer) is proposed to further
improve the classification performance. Experiments were conducted using two
open datasets (Dataset 1: 10 subjects, 12-class task; Dataset 2: 35 subjects,
40-class task) in the inter-subject classification scenario. The experimental
results show that the proposed models could achieve better results in terms of
classification accuracy and information transfer rate, compared with other
baseline methods. The proposed model validates the feasibility of deep learning models based on the Transformer structure for the SSVEP classification task, and could serve as a potential means of alleviating the calibration procedure in practical applications of SSVEP-based BCI systems.
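SSVEPformer takes the frequency spectrum of the EEG segment as input rather than the raw time series. A plausible sketch of such spectrum-input preparation, assuming real and imaginary FFT coefficients concatenated over a band of interest (the band limits here are illustrative, not the paper's):

```python
import numpy as np

def complex_spectrum_features(X, fs, f_lo=8.0, f_hi=64.0):
    """Frequency-domain input features for a spectrum-based SSVEP classifier.

    X: (channels, samples) EEG segment. Returns (channels, 2*n_bins):
    real and imaginary FFT coefficients within [f_lo, f_hi), concatenated,
    so both amplitude and phase information are preserved.
    """
    spec = np.fft.rfft(X, axis=1)
    freqs = np.fft.rfftfreq(X.shape[1], d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs < f_hi)
    return np.concatenate([spec[:, band].real, spec[:, band].imag], axis=1)
```

A filter-bank variant (as in FB-SSVEPformer) would band-pass the signal into several sub-bands first and stack one such feature map per sub-band, letting the model exploit harmonic information.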
Electroencephalogram Signal Processing For Hybrid Brain Computer Interface Systems
The goal of this research was to evaluate and compare three types of brain-computer interface (BCI) systems as virtual spelling paradigms: P300, steady-state visually evoked potentials (SSVEP), and a hybrid of the two. The hybrid BCI is an innovative approach that combines P300 and SSVEP; however, it is challenging to process the resulting hybrid signals to extract both kinds of information simultaneously and effectively. A major step toward a modern BCI system was moving the BCI techniques from a traditional LED system to an electronic LCD monitor. Such a transition allows not only developing the graphics of interest but also generating objects flickering at different frequencies. Pilot experiments were performed to design and tune the parameters of the spelling paradigms, including peak detection across different frequency ranges for the SSVEP BCI, placement of objects on the LCD monitor, design of the spelling keyboard, and the window time for SSVEP peak detection. All the experiments were devised to evaluate performance in terms of spelling accuracy, region error, and adjacency error across all of the paradigms: P300, SSVEP, and hybrid. Because of the different natures of P300 and SSVEP, designing a hybrid P300-SSVEP signal processing scheme demands a significant amount of research. Two critical questions in hybrid BCI are: (1) which signal processing strategy can best measure the user's intent, and (2) what paradigm can fuse these two techniques in a simple but effective way. To answer these questions, this project focused mainly on developing signal processing and classification techniques for the hybrid BCI. The hybrid BCI was implemented by extracting the specific information from brain signals, selecting optimal features that contain maximal discriminative information about the speller characters of interest, and efficiently classifying the hybrid signals.
The designed spellers were developed with the aim of improving the quality of life of patients with disabilities by utilizing visually controlled BCI paradigms. The paradigms consist of electrodes to record the electroencephalogram (EEG) during stimulation, software to analyze the collected data, and a computing device where the subject's EEG is the input used to estimate the spelled character. The signal processing phase included preliminary tasks such as preprocessing, feature extraction, and feature selection. Captured EEG data are usually a superposition of the signals of interest with unwanted signals from muscles and from non-biological artifacts. The accuracy of each trial and the average accuracy for each subject were computed. Overall, the average accuracies of the P300 and SSVEP spelling paradigms were 84% and 68.5%, while the hybrid paradigm had an average accuracy of 79%; the P300 spelling paradigm was thus more accurate than both the SSVEP and hybrid paradigms. However, the hybrid system is faster and more soothing to look at than the other paradigms. This work is significant because it has great potential for improving BCI research in the design and application of clinically suitable speller paradigms.
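The SSVEP peak detection tuned in the pilot experiments above can be sketched as a periodogram comparison across the candidate flicker frequencies (a minimal single-channel illustration; the window length and harmonic count are our assumptions, not the thesis's settings):

```python
import numpy as np

def detect_ssvep_peak(x, fs, candidates, n_harmonics=2):
    """Pick the flicker frequency with the most spectral power in x.

    x: single-channel EEG window (samples,); candidates: stimulus
    frequencies in Hz. Sums periodogram power at each frequency and
    its harmonics, returning the best-scoring candidate.
    """
    n = len(x)
    psd = np.abs(np.fft.rfft(x - x.mean())) ** 2 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    def power_at(f):
        return psd[np.argmin(np.abs(freqs - f))]  # nearest frequency bin

    scores = [sum(power_at(h * f) for h in range(1, n_harmonics + 1))
              for f in candidates]
    return candidates[int(np.argmax(scores))]
```

The window time trades off frequency resolution (longer windows separate nearby flicker frequencies better) against spelling speed, which is exactly the parameter the pilot experiments were tuning.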