2,692 research outputs found
Advancing Pattern Recognition Techniques for Brain-Computer Interfaces: Optimizing Discriminability, Compactness, and Robustness
In this dissertation, we formulate three central objective criteria for the systematic advancement of pattern recognition in modern brain-computer interfaces (BCIs). Building on these, a pattern-recognition framework for BCIs is developed that unites the three criteria through a new optimization algorithm. Furthermore, we demonstrate the successful application of our approach to two novel BCI paradigms for which no established pattern-recognition methodology exists yet.
Weighted transfer learning for improving motor imagery-based brain-computer interface
One of the major limitations of motor imagery (MI)-based brain-computer interfaces (BCIs) is their long calibration time. Due to between-session/between-subject variations in the properties of brain signals, a large amount of training data typically needs to be collected at the beginning of each session to calibrate the parameters of the BCI system for the target user. In this paper, we propose a novel transfer learning approach in the classification domain to reduce the calibration time without sacrificing the classification accuracy of MI-BCI. Thus, when only a few subject-specific trials are available for training, the estimation of the classification parameters is improved by incorporating previously recorded data from other users. For this purpose, a regularization term is added to the objective function of the classifier to keep the classification parameters as close as possible to those of previous users whose feature spaces are similar to that of the target subject. In this study, a new similarity measure based on the Kullback-Leibler (KL) divergence is used to quantify the similarity between two feature spaces obtained using subject-specific common spatial patterns (CSP). The proposed transfer learning approach is applied to a logistic regression classifier and evaluated on three datasets. The results showed that, compared to the subject-specific classifier, the proposed weighted transfer learning classifier improved the classification results, particularly when few subject-specific trials were available for training (p<0.05). Importantly, this improvement was more pronounced for users with medium and poor accuracy. Moreover, the statistical results showed that the proposed weighted transfer learning classifier performed significantly better than the considered comparable baseline algorithms.
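The abstract names the ingredients (subject-specific CSP feature spaces, a KL-based similarity measure, similarity-weighted regularization) but not the implementation. A minimal sketch of the similarity weighting might look as follows, assuming each subject's CSP features are approximated as a multivariate Gaussian so the KL divergence has a closed form; the function names `gaussian_kl` and `similarity_weights` are illustrative, not from the paper:

```python
import numpy as np

def gaussian_kl(mu0, S0, mu1, S1):
    """Closed-form KL divergence KL(N(mu0,S0) || N(mu1,S1)), used as a
    proxy for feature-space similarity between two subjects."""
    d = mu0.shape[0]
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0)
                  + diff @ S1_inv @ diff
                  - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def similarity_weights(target_feats, source_feats_list):
    """Turn KL divergences into normalized weights: source subjects
    whose feature distributions are closer to the target's receive
    larger weights in the regularization term."""
    mu_t, S_t = target_feats.mean(0), np.cov(target_feats.T)
    kls = np.array([
        gaussian_kl(mu_t, S_t, f.mean(0), np.cov(f.T))
        for f in source_feats_list
    ])
    w = np.exp(-kls)          # small divergence -> weight near 1
    return w / w.sum()
```

In the weighted objective described above, these weights would scale how strongly the classifier parameters are pulled toward each previous user's parameters.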
Cross-subject dual-domain fusion network with task-related and task-discriminant component analysis enhancing one-shot SSVEP classification
This study addresses the significant challenge of developing efficient
decoding algorithms for classifying steady-state visual evoked potentials
(SSVEPs) in scenarios characterized by extreme scarcity of calibration data,
where only one calibration trial is available for each stimulus target. To tackle
this problem, we introduce a novel cross-subject dual-domain fusion network
(CSDuDoFN) incorporating task-related and task-discriminant component analysis
(TRCA and TDCA) for one-shot SSVEP classification. The CSDuDoFN framework is
designed to comprehensively transfer information from source subjects, while
TRCA and TDCA are employed to exploit the single available calibration trial of the
target subject. Specifically, we develop multi-reference least-squares
transformation (MLST) to map data from both source subjects and the target
subject into the domain of sine-cosine templates, thereby mitigating
inter-individual variability and benefiting transfer learning. Subsequently,
the transformed data in the sine-cosine templates domain and the original
domain data are separately utilized to train a convolutional neural network
(CNN) model, with the adequate fusion of their feature maps occurring at
distinct network layers. To further capitalize on the calibration of the target
subject, source aliasing matrix estimation (SAME) data augmentation is
incorporated into the training process of the ensemble TRCA (eTRCA) and TDCA
models. Ultimately, the outputs of the CSDuDoFN, eTRCA, and TDCA are combined
for SSVEP classification. The effectiveness of our proposed approach is
comprehensively evaluated on three publicly available SSVEP datasets, achieving
the best performance on two datasets and competitive performance on one. This
underscores the potential of integrating brain-computer interfaces (BCIs) into
daily life.
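The details of the multi-reference least-squares transformation are not spelled out in the abstract. As a rough sketch of the underlying idea, a trial can be mapped onto sine-cosine reference templates by an ordinary least-squares projection, which is what related SSVEP transfer methods do with a single reference; `sincos_reference` and `lst_transform` are hypothetical names, and MLST extends this with multiple references:

```python
import numpy as np

def sincos_reference(freq, fs, n_samples, n_harmonics=3):
    """Sine-cosine reference template for one SSVEP stimulus frequency:
    2*n_harmonics rows by n_samples columns."""
    t = np.arange(n_samples) / fs
    rows = []
    for h in range(1, n_harmonics + 1):
        rows.append(np.sin(2 * np.pi * h * freq * t))
        rows.append(np.cos(2 * np.pi * h * freq * t))
    return np.stack(rows)

def lst_transform(X, Y):
    """Least-squares transformation: find P minimizing ||P X - Y||_F
    and return the trial X mapped into the template domain, which
    reduces inter-individual differences before transfer learning."""
    P = Y @ X.T @ np.linalg.pinv(X @ X.T)
    return P @ X
```

For example, an 8-channel trial at a 10 Hz target would be transformed with `lst_transform(X, sincos_reference(10.0, 250, X.shape[1]))` before being fed to the source-domain branch of the network.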
Online Covariate Shift Detection based Adaptive Brain-Computer Interface to Trigger Hand Exoskeleton Feedback for Neuro-Rehabilitation
A major issue in electroencephalogram (EEG)-based brain-computer interfaces (BCIs) is the intrinsic non-stationarity of brain waves, which may degrade the performance of the classifier when transitioning from the calibration to the feedback-generation phase. The non-stationary nature of EEG data may cause its input probability distribution to vary over time, which often appears as a covariate shift. To adapt to covariate shift, we proposed an adaptive learning method in our previous work and tested it on standard offline datasets. This paper presents an online BCI system that uses the previously developed covariate shift detection (CSD)-based adaptive classifier to discriminate between mental tasks and generate neurofeedback in the form of visual feedback and exoskeleton motion. The CSD test helps prevent unnecessary retraining of the classifier. The feasibility of the developed online BCI system was first tested on 10 healthy individuals, and then on 10 stroke patients with hand disability. A comparison of the proposed online CSD-based adaptive classifier with a conventional non-adaptive classifier showed significantly (p<0.01) higher classification accuracy for both the healthy and patient groups. The results demonstrate that the online CSD-based adaptive BCI system is superior to the non-adaptive BCI system and is feasible for actuating a hand exoskeleton in stroke-rehabilitation applications.
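The exact CSD test is not given in the abstract. As a simplified illustration of the idea of retraining only when the input distribution actually drifts, shift detection on a scalar feature stream can be done with an EWMA control chart; this detector is an assumption for illustration, not the paper's two-stage test:

```python
import numpy as np

class EWMAShiftDetector:
    """Exponentially weighted moving-average test on a 1-D feature
    stream: flags a covariate shift when the EWMA statistic leaves
    control limits estimated from the calibration data, so the
    classifier is only retrained when a shift is actually detected."""

    def __init__(self, calib, lam=0.2, L=3.0):
        self.mu = calib.mean()
        sigma = calib.std(ddof=1)
        # asymptotic standard deviation of the EWMA statistic
        self.limit = L * sigma * np.sqrt(lam / (2.0 - lam))
        self.lam = lam
        self.z = self.mu

    def update(self, x):
        """Feed one new observation; return True if a shift is flagged."""
        self.z = self.lam * x + (1.0 - self.lam) * self.z
        return abs(self.z - self.mu) > self.limit
```

In an online loop, a `True` from `update` would trigger adaptation of the classifier; otherwise the current model is kept, avoiding unnecessary retraining.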
Detecting single-trial EEG evoked potential using a wavelet domain linear mixed model: application to error potentials classification
Objective. The main goal of this work is to develop a model for multi-sensor
signals such as MEG or EEG signals, that accounts for the inter-trial
variability, suitable for corresponding binary classification problems. An
important constraint is that the model be simple enough to handle small size
and unbalanced datasets, as often encountered in BCI type experiments.
Approach. The method involves linear mixed effects statistical model, wavelet
transform and spatial filtering, and aims at the characterization of localized
discriminant features in multi-sensor signals. After discrete wavelet transform
and spatial filtering, a projection onto the relevant wavelet and spatial
channels subspaces is used for dimension reduction. The projected signals are
then decomposed as the sum of a signal of interest (i.e. discriminant) and
background noise, using a very simple Gaussian linear mixed model. Main
results. Thanks to the simplicity of the model, the corresponding parameter
estimation problem is simplified. Robust estimates of class-covariance matrices
are obtained from small sample sizes and an effective Bayes plug-in classifier
is derived. The approach is applied to the detection of error potentials in
multichannel EEG data, in a very unbalanced situation (detection of rare
events). Classification results prove the relevance of the proposed approach in
such a context. Significance. The combination of linear mixed model, wavelet
transform and spatial filtering for EEG classification is, to the best of our
knowledge, an original approach, which is proven to be effective. This paper
improves on earlier results on similar problems, and the three main ingredients
all play an important role
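The Bayes plug-in classifier with robust class-covariance estimates mentioned above can be sketched in a few lines. The shrinkage scheme and class interface here are illustrative choices for small, unbalanced sample sizes, not the paper's exact estimator:

```python
import numpy as np

def shrink_cov(X, alpha=0.1):
    """Regularized class-covariance estimate: shrink the sample
    covariance toward a scaled identity, keeping it well conditioned
    when trials are few (as in rare-event detection)."""
    S = np.cov(X.T)
    d = S.shape[0]
    return (1 - alpha) * S + alpha * (np.trace(S) / d) * np.eye(d)

class BayesPlugIn:
    """Gaussian Bayes plug-in classifier: plug class means, shrunken
    covariances, and empirical priors into the log-posterior rule."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.params = [
            (X[y == c].mean(0), shrink_cov(X[y == c]), np.mean(y == c))
            for c in self.classes
        ]
        return self

    def predict(self, X):
        scores = []
        for mu, S, prior in self.params:
            Sinv = np.linalg.inv(S)
            d = X - mu
            # quadratic-discriminant log-posterior up to a constant
            g = (-0.5 * np.einsum('ij,jk,ik->i', d, Sinv, d)
                 - 0.5 * np.linalg.slogdet(S)[1] + np.log(prior))
            scores.append(g)
        return self.classes[np.argmax(scores, axis=0)]
```

Because the priors enter the discriminant, the rule remains usable in the very unbalanced situations the paper targets, although in practice a decision threshold tuned for rare events would likely be added.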
Novel Transfer Learning Approaches for Improving Brain-Computer Interfaces
Despite several recent advances, most electroencephalogram (EEG)-based brain-computer interface (BCI) applications are still limited to the laboratory due to their long calibration time. Because of considerable inter-subject/inter-session and intra-session variations, a time-consuming and fatiguing calibration phase is typically conducted at the beginning of each new session to acquire sufficient labelled training data to train the subject-specific BCI model.

This thesis focuses on developing reliable machine learning algorithms and approaches that reduce BCI calibration time while keeping accuracy in an acceptable range. Calibration time can be reduced via transfer learning approaches, where data from other sessions or subjects are mined and used to compensate for the lack of labelled data from the current user or session. In BCI, transfer learning can be applied in the raw EEG, feature, or classification domain.

In this thesis, firstly, a novel weighted transfer learning approach is proposed in the classification domain to improve MI-based BCI performance when only a few subject-specific trials are available for training.

Transfer learning techniques should be applied in a domain earlier than the classification domain to improve classification accuracy for subjects whose subject-specific features for different classes are not separable. Thus, secondly, this thesis proposes a novel regularized common spatial patterns framework based on dynamic time warping and transfer learning (DTW-R-CSP) in the raw EEG and feature domains.

Previous transfer learning approaches hypothesise that enough labelled trials are available from previous subjects or sessions. When no labelled trials are available from other subjects or sessions, domain adaptation transfer learning could potentially mitigate the problem of a small training set by reducing variations between the testing and training trials. Thus, to deal with non-stationarity between training and testing trials, a novel ensemble adaptation framework with temporal alignment is proposed.
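Since CSP underpins the DTW-R-CSP framework mentioned above, a plain CSP computation (with only a small ridge for numerical stability) may be sketched as follows; `csp_filters` is an illustrative name, and the thesis adds DTW alignment and transfer-learning regularization on top of this baseline:

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=2, reg=1e-6):
    """Common spatial patterns via whitening: returns 2*n_pairs spatial
    filters (rows) that maximize variance for one class while
    minimizing it for the other. Each trial is (channels, samples)."""
    def avg_cov(trials):
        # trace-normalized trial covariances, averaged over trials
        return np.mean([np.cov(t) / np.trace(np.cov(t)) for t in trials],
                       axis=0)
    Ca, Cb = avg_cov(trials_a), avg_cov(trials_b)
    C = Ca + Cb + reg * np.eye(Ca.shape[0])
    # whiten the composite covariance, then diagonalize class a
    d, U = np.linalg.eigh(C)
    W = U @ np.diag(d ** -0.5) @ U.T
    vals, V = np.linalg.eigh(W @ Ca @ W)
    filters = (W @ V).T                       # rows are spatial filters
    # filters from both ends of the spectrum discriminate best
    idx = np.r_[np.arange(n_pairs), np.arange(len(vals) - n_pairs, len(vals))]
    return filters[idx]
```

The usual pipeline then takes log-variance of the filtered trials as features, e.g. `np.log(np.var(W @ trial, axis=1))`, before classification.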
Co-adaptive control strategies in assistive Brain-Machine Interfaces
A large number of people with severe motor disabilities cannot access any of the
available control inputs of current assistive products, which typically rely on residual
motor functions. These patients are therefore unable to fully benefit from existing
assistive technologies, including communication interfaces and assistive robotics. In
this context, electroencephalography-based Brain-Machine Interfaces (BMIs) offer a
potential non-invasive solution to exploit a non-muscular channel for communication
and control of assistive robotic devices, such as a wheelchair, a telepresence
robot, or a neuroprosthesis. Still, non-invasive BMIs currently suffer from limitations,
such as lack of precision, robustness and comfort, which prevent their practical
implementation in assistive technologies.
The goal of this PhD research is to produce scientific and technical developments
to advance the state of the art of assistive interfaces and service robotics based on
BMI paradigms. Two main research paths to the design of effective control strategies
were considered in this project. The first is the design of hybrid systems that combine
the BMI with gaze control, a long-lasting motor function in many paralyzed patients.
This approach increases the degrees of freedom available for control. The second is
the inclusion of adaptive techniques in the BMI design, which transforms robotic tools
and devices into active assistants able to co-evolve with the user and learn new rules
of behavior to solve tasks, rather than passively executing external commands.
Following these strategies, the contributions of this work can be categorized
based on the typology of mental signal exploited for the control. These include:
1) the use of active signals for the development and implementation of hybrid
eye-tracking and BMI control policies, for both communication and control of robotic
systems; 2) the exploitation of passive mental processes to increase the adaptability
of an autonomous controller to the user's intention and psychophysiological state,
in a reinforcement learning framework; and 3) the integration of active and passive
brain control signals, to achieve adaptation within the BMI architecture at the level
of feature extraction and classification.