Data-driven multivariate and multiscale methods for brain computer interface
This thesis focuses on the development of data-driven multivariate and multiscale methods
for brain computer interface (BCI) systems. The electroencephalogram (EEG), the
most convenient means to measure neurophysiological activity due to its noninvasive nature,
is mainly considered. The nonlinearity and nonstationarity inherent in EEG and its
multichannel recording nature require a new set of data-driven multivariate techniques to
estimate features more accurately for enhanced BCI operation. A further long-term goal
is to enable an alternative EEG recording strategy for long-term, portable
monitoring.
Empirical mode decomposition (EMD) and local mean decomposition (LMD), fully
data-driven adaptive tools, are considered to decompose the nonlinear and nonstationary
EEG signal into a set of components which are highly localised in time and frequency. It
is shown that the complex and multivariate extensions of EMD, which can exploit common
oscillatory modes within multivariate (multichannel) data, can be used to accurately
estimate and compare amplitude and phase information among multiple sources, a key
step in feature extraction for BCI systems. A complex extension of local mean decomposition
is also introduced, and its operation is illustrated on two-channel neuronal
spike streams. Common spatial pattern (CSP), a standard feature extraction technique
for BCI applications, is also extended to the complex domain using augmented complex
statistics. Depending on the circularity/noncircularity of a complex signal, one of the
complex CSP algorithms can be chosen to produce the best classification performance
between two different EEG classes.
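Since CSP recurs throughout these abstracts, a minimal real-valued baseline may help fix ideas. The sketch below uses the standard generalised-eigenvalue formulation of CSP; the function name, trial shapes, and number of retained filters are illustrative assumptions, not taken from the thesis:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=1):
    """Real-valued CSP: spatial filters that maximise variance for one
    class while minimising it for the other.

    trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns a (n_channels, 2 * n_filters) matrix of spatial filters.
    """
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalised eigenvalue problem: Ca w = lambda (Ca + Cb) w.
    eigvals, eigvecs = eigh(Ca, Ca + Cb)
    order = np.argsort(eigvals)
    # Filters at both extremes of the spectrum are the most discriminative.
    picks = np.concatenate([order[:n_filters], order[-n_filters:]])
    return eigvecs[:, picks]
```

Trial data projected through these filters is typically reduced to log-variance features before classification.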
Using these complex and multivariate algorithms, two cognitive brain studies are
investigated for more natural and intuitive design of advanced BCI systems. Firstly, a Yarbus-style auditory selective attention experiment is introduced to measure the user
attention to a sound source among a mixture of sound stimuli, with the aim of improving
the usefulness of hearing instruments such as hearing aids. Secondly, experiments on emotions
elicited by taste and taste recall are examined to determine the pleasure or displeasure
evoked by a food, for the implementation of affective computing. The separation between two
emotional responses is examined using real and complex-valued common spatial pattern
methods.
Finally, we introduce a novel approach to brain monitoring based on EEG recordings
from within the ear canal, embedded in a custom-made hearing-aid earplug. The new
platform promises the possibility of both short- and long-term continuous use for standard
brain monitoring and interfacing applications.
Towards Real-World BCI: CCSPNet, A Compact Subject-Independent Motor Imagery Framework
A conventional subject-dependent (SD) brain-computer interface (BCI) requires
a complete data-gathering, training, and calibration phase for each user before
it can be used. In recent years, a number of subject-independent (SI) BCIs have
been developed. However, several problems prevent them from being used in
real-world BCI applications; the most important are weaker performance compared
with the SD approach and relatively large models that require high
computational power. Therefore, a potential
real-world BCI would greatly benefit from a compact low-power
subject-independent BCI framework, ready to be used immediately after the user
puts it on. To move towards this goal, we propose a novel subject-independent
BCI framework named CCSPNet (Convolutional Common Spatial Pattern Network)
trained on the motor imagery (MI) paradigm of a large-scale
electroencephalography (EEG) database consisting of 21,600 trials from 54
subjects performing two-class hand-movement MI tasks. The proposed framework
applies a wavelet kernel convolutional neural network (WKCNN) and a temporal
convolutional neural network (TCNN) in order to represent and extract the
diverse spectral features of EEG signals. The outputs of the convolutional
layers go through a common spatial pattern (CSP) algorithm for spatial feature
extraction. The number of CSP features is reduced by a dense neural network,
and the final class label is determined by a linear discriminant analysis
(LDA) classifier. The CCSPNet framework evaluation results show that it is
possible to have a low-power compact BCI that achieves both SD and SI
performance comparable to that of complex and computationally expensive models.
Comment: 15 pages, 6 figures, 6 tables, 1 algorithm
EEG-based brain-computer interfaces using motor-imagery: techniques and challenges.
Electroencephalography (EEG)-based brain-computer interfaces (BCIs), particularly those using motor-imagery (MI) data, have the potential to become groundbreaking technologies in both clinical and entertainment settings. MI data is generated when a subject imagines the movement of a limb. This paper reviews state-of-the-art signal processing techniques for MI EEG-based BCIs, with a particular focus on the feature extraction, feature selection and classification techniques used. It also summarizes the main applications of EEG-based BCIs, particularly those based on MI data, and finally presents a detailed discussion of the most prevalent challenges impeding the development and commercialization of EEG-based BCIs.
Optimizing Common Spatial Pattern for a Motor Imagery-based BCI by Eigenvector Filteration
One of the fundamental criteria for the successful application of a brain-computer interface (BCI) system is to extract significant features that confine invariant characteristics specific to each brain state. Distinct features play an important role in enabling a computer to associate different electroencephalogram (EEG) signals with different brain states. To ease the workload on the feature extractor and enhance separability between different brain states, the data is often transformed or filtered to maximize separability before feature extraction. The common spatial patterns (CSP) approach can achieve this by linearly projecting the multichannel EEG data into a surrogate data space through the weighted summation of the appropriate channels. However, choosing the optimal spatial filters is very significant in the projection of the data, and this has a direct impact on classification. This paper presents an optimized pattern selection method from the CSP filter for improved classification accuracy. Based on the hypothesis that values closer to zero in the CSP filter introduce noise rather than useful information, the CSP filter is modified by analyzing it and removing the degradative or insignificant values. This hypothesis is tested by comparing the BCI results of eight subjects using the conventional CSP filters and the optimized CSP filter. In the majority of cases, the latter produces better performance in terms of overall classification accuracy.
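The filtering step described can be sketched in a few lines. The relative threshold of 0.1 below is an illustrative assumption, not the paper's selection rule, which chooses the values to remove by analysing each filter:

```python
import numpy as np

def filter_csp_pattern(w, threshold=0.1):
    """Zero out near-zero weights in a CSP spatial filter.

    Following the hypothesis that small-magnitude channel weights
    contribute noise rather than information, entries whose absolute
    value falls below `threshold` times the largest weight are
    removed. The threshold value is illustrative only.
    """
    w = np.asarray(w, dtype=float)
    cutoff = threshold * np.abs(w).max()
    return np.where(np.abs(w) >= cutoff, w, 0.0)
```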
Toward an Imagined Speech-Based Brain Computer Interface Using EEG Signals
Individuals with physical disabilities face difficulties in communication. A number of neuromuscular impairments can prevent people from using available communication aids,
because such aids require some degree of muscle movement. This makes brain–computer interfaces (BCIs) a potentially promising alternative communication technology for
these people. Electroencephalographic (EEG) signals are commonly used in BCI systems to capture non-invasively the neural representations of intended, internal and
imagined activities that are not physically or verbally evident. Examples include motor and speech imagery activities.
Since 2006, researchers have become increasingly interested in classifying different types of imagined speech from EEG signals. However, the field still has a limited
understanding of several issues, including experiment design, stimulus type, training, calibration and the examined features. The main aim of the research in this thesis is to advance automatic recognition of imagined speech using EEG signals by addressing
a variety of issues that have not been solved in previous studies. These include (1) improving the discrimination between imagined speech versus non-speech tasks, (2)
examining temporal parameters to optimise the recognition of imagined words and (3) providing a new feature extraction framework for improving EEG-based imagined
speech recognition by considering temporal information after reducing within-session temporal non-stationarities.
For the discrimination of speech versus non-speech, EEG data was collected during the imagination of randomly presented and semantically varying words. The non-speech
tasks involved attention to visual stimuli and resting. Time-domain and spatio-spectral features were examined in different time intervals. Above-chance-level classification
accuracies were achieved for each word and for groups of words compared to the non-speech tasks.
To classify imagined words, EEG data related to the imagination of five words was collected. In addition to word classification, the impacts of experimental parameters
on classification accuracy were examined. The optimization of these parameters is important to improve the rate and speed of recognizing unspoken speech in on-line
applications. These parameters included using different training sizes, classification algorithms, feature extraction in different time intervals and the use of imagination time length as a classification feature. Our extensive results showed that a Random Forest classifier, with features extracted using the Discrete Wavelet Transform from a fixed 4-second EEG time frame, yielded the highest average classification accuracy of 87.93% in the classification of five imagined words.
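A hedged sketch of the DWT feature stage described above: the excerpt does not state the mother wavelet used, so this stand-in rolls a multi-level Haar transform by hand and uses per-level detail-coefficient energies as features, which could then feed a Random Forest classifier:

```python
import numpy as np

def haar_dwt_features(x, levels=4):
    """Multi-level Haar DWT; returns per-level detail energies plus the
    residual approximation energy (a simplified stand-in for the DWT
    features described in the thesis)."""
    x = np.asarray(x, dtype=float)
    feats = []
    for _ in range(levels):
        x = x[: len(x) // 2 * 2]                     # truncate to even length
        approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        feats.append(np.sum(detail**2))              # detail energy at this level
        x = approx
    feats.append(np.sum(x**2))                       # residual approximation energy
    return np.array(feats)
```

Because the Haar transform is orthonormal, the feature vector conserves the signal energy for power-of-two lengths, a useful sanity check.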
To minimise within-class temporal variations, a novel feature extraction framework based on dynamic time warping (DTW) was developed. Using linear discriminant
analysis as the classifier, the proposed framework yielded an average 72.02% accuracy in the classification of imagined speech versus silence and 52.5% accuracy in the classification of five words. These results significantly outperformed a baseline configuration of state-of-the-art time-domain features.
An Unsupervised Deep-Transfer-Learning-Based Motor Imagery EEG Classification Scheme for Brain-Computer Interface
Brain–computer interface (BCI) research has attracted worldwide attention and has been rapidly developed. As one well-known non-invasive BCI technique, electroencephalography (EEG) records the brain’s electrical signals from the scalp surface. However, due to the non-stationary nature of the EEG signal, the distribution of the data collected at different times or from different subjects may be different. These problems affect the performance of the BCI system and limit the scope of its practical application. In this study, an unsupervised deep-transfer-learning-based method was proposed to deal with the current limitations of BCI systems by applying the idea of transfer learning to the classification of motor imagery EEG signals. The Euclidean space data alignment (EA) approach was adopted to align the covariance matrices of source- and target-domain EEG data in Euclidean space. Then, the common spatial pattern (CSP) was used to extract features from the aligned data, and a deep convolutional neural network (CNN) was applied for EEG classification. The effectiveness of the proposed method has been verified through experimental results on public EEG datasets, by comparison with four other methods.
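The EA step has a compact closed form: whiten every trial by the inverse square root of the session's mean spatial covariance, after which the aligned trials have identity mean covariance, making source and target domains directly comparable. A minimal sketch, where the function name and trial shapes are assumptions:

```python
import numpy as np

def euclidean_alignment(trials):
    """Euclidean-space alignment (EA): whiten each trial by the inverse
    square root of the mean spatial covariance across trials.

    trials: array of shape (n_trials, n_channels, n_samples).
    """
    trials = np.asarray(trials, dtype=float)
    covs = np.stack([t @ t.T / t.shape[1] for t in trials])
    R = covs.mean(axis=0)                        # reference covariance
    vals, vecs = np.linalg.eigh(R)               # R is symmetric positive definite
    R_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return np.stack([R_inv_sqrt @ t for t in trials])
```

After alignment the mean covariance is the identity by construction, so downstream CSP and CNN stages see data on a common scale across subjects.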