187 research outputs found
Shallow convolutional networks excel at classifying motor imagery EEG in BCI applications
Many studies applying Brain-Computer Interfaces (BCIs) based on Motor Imagery (MI) tasks for rehabilitation have demonstrated the important role of detecting Event-Related Desynchronization (ERD) in recognizing the user's motor intention. The development of MI-based BCI approaches that require few or no session-by-session calibration stages across days or weeks remains an open and active research problem. In this work, a new scheme is proposed that applies Convolutional Neural Networks (CNNs) to MI classification, using an end-to-end shallow architecture containing two convolutional layers for temporal and spatial feature extraction. We hypothesize that a BCI designed to capture event-related desynchronization/synchronization (ERD/ERS) at the CNN input, with an adequate network design, may enhance MI classification with fewer calibration stages. The proposed system, using the same architecture throughout, was tested on three public datasets through multiple experiments, including both subject-specific and non-subject-specific training. Results comparable and in some cases superior to the state of the art were obtained. On subjects whose EEG data were never used during training, our scheme also achieved promising results relative to existing non-subject-specific BCIs, marking progress toward facilitating clinical applications.
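The temporal-then-spatial convolution with log-variance pooling described above can be sketched in plain NumPy; the shapes, kernel lengths, and random filters below are illustrative assumptions, not the paper's actual hyperparameters or trained weights:

```python
import numpy as np

def shallow_features(x, temp_kernels, spat_weights, eps=1e-10):
    """Two-stage feature extraction: temporal then spatial convolution,
    followed by log-variance pooling, as in shallow ConvNet designs.

    x: (n_channels, n_samples) single EEG trial
    temp_kernels: (n_temp, k_len) temporal filters
    spat_weights: (n_spat, n_channels) spatial filters
    """
    n_ch, _ = x.shape
    n_temp, _ = temp_kernels.shape
    # Temporal convolution: filter every channel with every temporal kernel.
    t = np.stack([
        np.stack([np.convolve(x[c], temp_kernels[k], mode="valid")
                  for c in range(n_ch)])
        for k in range(n_temp)
    ])  # (n_temp, n_channels, n_times)
    # Spatial convolution: weighted sums across channels.
    s = np.einsum("fc,kct->kft", spat_weights, t)  # (n_temp, n_spat, n_times)
    # Log-variance pooling approximates band power, i.e. ERD/ERS features.
    return np.log(np.var(s, axis=-1) + eps).ravel()

rng = np.random.default_rng(0)
feats = shallow_features(rng.standard_normal((8, 250)),   # 8 channels, 1 s at 250 Hz
                         rng.standard_normal((4, 25)),    # 4 temporal kernels
                         rng.standard_normal((2, 8)))     # 2 spatial filters
print(feats.shape)  # (8,) = n_temp * n_spat
```

The log-variance step is what ties the network to ERD/ERS: variance of a band-filtered signal is band power, so a drop in these features tracks desynchronization.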
Reading Your Own Mind: Dynamic Visualization of Real-Time Neural Signals
Brain Computer Interface (BCI) systems, which allow humans to control external devices directly from brain activity, are becoming increasingly popular due to dramatic advances in the ability to both capture and interpret brain signals. Further advancing BCI systems is a compelling goal both because of the neurophysiology insights gained from deriving a control signal from brain activity and because of the potential for direct brain control of external devices in applications such as brain injury recovery, human prosthetics, and robotics. The dynamic and adaptive nature of the brain makes it difficult to create classifiers or control systems that remain effective over time. However, it is precisely these qualities that offer the potential to use feedback to build on simple features and create complex control features that are robust over time. This dissertation presents work that addresses these opportunities for the specific case of Electrocorticography (ECoG) recordings from clinical epilepsy patients. First, cued patient tasks were used to explore the predictive nature of both local and global features of the ECoG signal. Second, an algorithm was developed and tested for estimating the most informative features from naive observations of the ECoG signal. Third, a software system was built and tested that provides real-time visualizations of the ECoG signal to patients and allows ECoG epilepsy patients to engage in an interactive BCI control-feature screening process.
Graph Neural Networks on SPD Manifolds for Motor Imagery Classification: A Perspective from the Time-Frequency Analysis
Motor imagery (MI) classification is one of the most widely studied research topics in electroencephalography (EEG)-based brain-computer interfaces (BCIs), with extensive industrial value. MI-EEG classifiers have changed fundamentally over the past twenty years, and their performance has gradually improved. In particular, owing to the need to characterize the non-Euclidean nature of EEG signals, the first geometric deep learning (GDL) framework, Tensor-CSPNet, has recently emerged in BCI research. In essence, Tensor-CSPNet is a deep learning classifier built on the second-order statistics of EEG signals. In contrast to first-order statistics, second-order statistics are the classical treatment of EEG signals, and the discriminative information they contain is adequate for MI-EEG classification. In this study, we present another GDL classifier for MI-EEG classification, called Graph-CSPNet, which uses graph-based techniques to characterize EEG signals simultaneously in the time and frequency domains. It is realized from the perspective of time-frequency analysis, which profoundly influences signal processing and BCI studies. In contrast to Tensor-CSPNet, the architecture of Graph-CSPNet is further simplified, with more flexibility to cope with variable time-frequency resolution in signal segmentation and so capture localized fluctuations. In the experiments, Graph-CSPNet is evaluated in subject-specific scenarios on two widely used MI-EEG datasets and produces near-optimal classification accuracies.
Comment: 16 pages, 5 figures, 9 tables. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
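The second-order statistics in question are spatial covariance matrices of EEG segments, which are symmetric positive definite (SPD) and therefore live on an SPD manifold; a common way to handle them is through the matrix logarithm, which maps them into a flat space. A minimal sketch (the segmentation scheme and regularization constant are illustrative assumptions, not Graph-CSPNet's actual pipeline):

```python
import numpy as np

def segment_covariances(x, seg_len, reg=1e-6):
    """Per-segment spatial covariance matrices (second-order statistics).

    x: (n_channels, n_samples) EEG trial, split into non-overlapping
    segments of length seg_len. Returns one SPD matrix per segment.
    """
    n_ch, n_s = x.shape
    covs = []
    for start in range(0, n_s - seg_len + 1, seg_len):
        seg = x[:, start:start + seg_len]
        seg = seg - seg.mean(axis=1, keepdims=True)
        # Regularization keeps the matrix strictly positive definite.
        c = seg @ seg.T / (seg_len - 1) + reg * np.eye(n_ch)
        covs.append(c)
    return np.stack(covs)

def spd_log(c):
    """Matrix logarithm of an SPD matrix via eigendecomposition,
    mapping it to the tangent space where Euclidean tools apply."""
    w, v = np.linalg.eigh(c)
    return v @ np.diag(np.log(w)) @ v.T

rng = np.random.default_rng(1)
covs = segment_covariances(rng.standard_normal((4, 400)), seg_len=100)
logs = np.stack([spd_log(c) for c in covs])
```

Each segment's covariance captures the spatial correlation structure in one time-frequency region; varying seg_len is the "variable time-frequency resolution" the abstract refers to.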
EEG-Based Brain-Computer Interfacing via Motor-Imagery: Practical Implementation and Feature Analysis
The human brain is the most intriguing and complex signal-processing unit known to us. A unique characteristic of the brain is its plasticity, i.e., the ability of neurons to modify their behavior (structure and functionality) in response to environmental diversity. The brain's plasticity has motivated the design of brain-computer interfaces (BCIs) as an alternative communication channel between brain signals and the external world. BCI systems have several therapeutic applications of significant importance, including but not limited to rehabilitation/assistive systems, rehabilitation robotics, and neuro-prosthesis control. Despite recent advancements, BCI systems are still far from being reliably incorporated within human-machine inference networks. In this regard, the thesis focuses on Motor Imagery (MI)-based BCI systems with the objective of tackling key challenges observed in existing solutions. MI is defined as a cognitive process in which a person imagines performing a movement without peripheral (muscle) activation. On the one hand, the thesis focuses on feature extraction, one of the most crucial steps in developing an effective BCI system. In this regard, the thesis proposes a subject-specific filtering framework, referred to as regularized double-band Bayesian (R-B2B) spectral filtering. The proposed R-B2B framework couples three main feature-extraction categories, namely filter-bank solutions, regularized techniques, and optimized Bayesian mechanisms, to enhance the overall classification accuracy of the BCI. To further evaluate the effects of deploying optimized subject-specific spectro-spatial filters, it is vital to examine different aspects of data collection, in particular the effects of the stimuli provided to subjects to trigger MI tasks. The second main initiative of the thesis is to propose an element of experimental design for MI-based BCI systems. In this regard, we have implemented an EEG-based BCI system and constructed a benchmark dataset of 10 healthy subjects performing actual-movement and MI tasks. To investigate the effects of stimulus on the overall achievable performance, four different protocols were designed and implemented via the introduction of visual and voice stimuli. Finally, the work investigates the effects of adaptive trimming of EEG epochs, resulting in an adaptive and subject-specific solution.
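The filter-bank idea that the R-B2B framework builds on, decomposing each EEG channel into frequency sub-bands before spatial filtering, can be illustrated with a simple FFT-mask bandpass; this is a generic sketch, not the thesis's actual filters:

```python
import numpy as np

def filter_bank(x, fs, bands):
    """Decompose a signal into frequency sub-bands with an ideal
    (FFT-mask) bandpass per band -- a simple stand-in for the IIR
    filter banks typically used in filter-bank CSP pipelines.

    x: (n_samples,) signal; fs: sampling rate in Hz;
    bands: list of (lo, hi) band edges in Hz.
    """
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    out = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)   # zero everything outside the band
        out.append(np.fft.irfft(spec * mask, n=len(x)))
    return np.stack(out)

fs = 250
t = np.arange(fs) / fs
# Synthetic 1-second trace: a 10 Hz (mu-band) and a 22 Hz (beta-band) tone.
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 22 * t)
sub = filter_bank(x, fs, [(8, 13), (13, 30)])
```

Each row of `sub` then feeds a separate spatial filter; the subject-specific part of a framework like R-B2B lies in choosing which bands matter for a given user.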
Classifying imaginary vowels from frontal lobe EEG via deep learning
Brain-Computer Interface (BCI) technology is promising for individuals who suffer from motor or speech disabilities because it decodes brain signals directly. This thesis uses an imagined-speech dataset to classify vowels based on the neurological areas of the brain. We demonstrate that by using the frontal region of the brain, we obtain higher than 85 percent accuracy using a CNN and an LSTM. This accuracy is higher than in previous studies that classified the dataset using the entire brain region. This work shows great promise in using the physiological aspects of the brain associated with specific tasks.
Brain anatomical correlates of perceptual phonological proficiency and language learning aptitude
The present dissertation concerns how brain tissue properties reflect proficiency in two aspects of language use: the ability to use tonal cues on word stems to predict how words will end, and the aptitude for learning foreign languages. While it is known that people differ in their language abilities and that damage to brain tissue causes loss of cognitive functions, it is largely unknown whether differences in language proficiencies correlate with differences in brain structure. The first two studies examine correlations between cortical morphometry, i.e. the thickness and surface area of the cortex, and the degree of dependency on word accents for processing upcoming suffixes in Swedish native speakers. Word accents in Swedish facilitate speech processing by having predictive associations with specific suffixes (e.g. fläckaccent1+en ‘spot+singular’, fläckaccent2+ar ‘spot+plural’). This use of word accents, as phonological cues to inflectional suffixes, is relatively unique among the world’s languages. How much a speaker depends on word accents in speech processing can be measured as the difference in response time (RT) between valid and invalid word accent-suffix combinations when asked to identify the inflected form of a word. This can be thought of as a measure of perceptual phonological proficiency in native speakers. Perceptual phonological proficiency is otherwise very difficult to study, as most phonological contrasts are mandatory for properly interpreting the meaning of utterances. Study I compares the cortical morphometric correlates in the planum temporale and the inferior frontal gyrus pars opercularis in relation to RT differences in tasks involving real words and pseudowords. We found that the thickness of the left planum temporale correlates with perceptual phonological proficiency for lexical words but not pseudowords. This could indicate that word accents are part of full-form representations of familiar words.
Moreover, for pseudowords but not lexical words, the thickness of the inferior frontal gyrus pars opercularis correlates with perceptual phonological proficiency. This association could reflect a greater importance of decompositional analysis, in which word accents are part of a set of rules listeners need to rely on during processing of novel words. In study II, the investigation of the association between perceptual phonological proficiency for real words and cortical morphometry is expanded to the entire brain. Results show that the cortical thickness and surface area of anterior temporal lobe areas, known constituents of a ventral sound-to-meaning language-processing stream, are associated with greater perceptual phonological proficiency. This is consistent with a role for word accents in aiding in assembling the meaning of, or accessing a whole-word representation of, an inflected word form. Studies III and IV investigate the cortical morphometric associations with language learning aptitude. Findings in study III suggest that aptitude for grammatical inferencing, i.e. the ability to analytically discern the rules of a language, is associated with cortical thickness in the left inferior frontal gyrus pars triangularis. Furthermore, pitch discrimination proficiency, a skill related to language learning ability, correlates negatively with cortical thickness in the right homologue area. Moreover, study IV, using improved imaging techniques, reports a correlation between vocabulary learning aptitude and cortical surface area in the left inferior precuneus, as well as a negative correlation between diffusional axial kurtosis and phonetic memory in the left arcuate fasciculus and subsegment III of the superior longitudinal fasciculus.
However, the correlation between cortical thickness and grammatical inferencing skill found in study III was not replicated in study IV. Taken together, the present dissertation shows that differences in some language proficiencies are associated with regionally thicker or larger cortex and more coherent white matter tracts, the nature and spatial locus of which depend on the proficiency studied. The studies add to our understanding of how language proficiencies are represented in the brain’s anatomy.
Models and Analysis of Vocal Emissions for Biomedical Applications
The Models and Analysis of Vocal Emissions for Biomedical Applications (MAVEBA) workshop came into being in 1999 from a strongly felt need to share know-how, objectives, and results between areas that until then had seemed quite distinct, such as bioengineering, medicine, and singing. MAVEBA deals with all aspects of the study of the human voice, with applications ranging from the neonate to the adult and elderly. Over the years the initial issues have grown and spread into other areas of research, such as occupational voice disorders, neurology, rehabilitation, and image and video analysis. MAVEBA takes place every two years, always in Firenze, Italy.
Support vector machines to detect physiological patterns for EEG and EMG-based human-computer interaction: a review
Support vector machines (SVMs) are widely used classifiers for detecting physiological patterns in human-computer interaction (HCI). Their success is due to their versatility, robustness, and the wide availability of free dedicated toolboxes. Frequently in the literature, insufficient detail about the SVM implementation and/or parameter selection is reported, making it impossible to reproduce the analyses and results of a study. In order to perform an optimized classification and report a proper description of the results, it is necessary to have a comprehensive critical overview of the applications of SVMs. The aim of this paper is to provide a review of the usage of SVMs in the determination of brain and muscle patterns for HCI, focusing on electroencephalography (EEG) and electromyography (EMG) techniques. In particular, an overview of the basic principles of SVM theory is outlined, together with a description of several relevant implementations from the literature. Furthermore, details of the reviewed papers are listed in tables, and statistics of SVM use in the literature are presented. The suitability of SVMs for HCI is discussed, and critical comparisons with other classifiers are reported.
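In the spirit of the review's call for fully reported SVM parameters, here is a from-scratch linear SVM trained with the Pegasos subgradient method, with every hyperparameter explicit; it is a generic illustration, not any reviewed implementation:

```python
import numpy as np

def pegasos_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Linear SVM trained with the Pegasos subgradient method.
    All hyperparameters (lam, epochs, seed) are explicit, echoing the
    review's point that reproducibility requires reporting them.

    X: (n, d) feature matrix; y: (n,) labels in {-1, +1}.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, t = np.zeros(d), 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)          # Pegasos step-size schedule
            if y[i] * (X[i] @ w) < 1:      # hinge loss active: margin step
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                          # only the regularizer shrinks w
                w = (1 - eta * lam) * w
    return w

# Toy two-class data: well-separated Gaussian clusters.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.r_[-np.ones(50), np.ones(50)]
w = pegasos_svm(X, y)
acc = float(np.mean(np.sign(X @ w) == y))
```

Reporting `lam`, the epoch count, and the seed alongside results is exactly the kind of detail whose absence the review criticizes.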
Brain signal recognition using deep learning
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University.
Brain Computer Interface (BCI) technology has the potential to offer a new generation of applications that are independent of muscular activity and controlled by the human brain. Brain imaging technologies are used to translate cognitive tasks into control commands for a BCI system. Electroencephalography (EEG) is the best available non-invasive technology for recording signals from the brain. On the other hand, speech is the primary means of communication, but patients suffering from locked-in syndrome have no easy way to communicate. An ideal communication system for locked-in patients is therefore a thought-to-speech BCI system.
This research aims to investigate methods for recognizing imagined speech from EEG signals using deep learning techniques. To design an optimal imagined-speech recognition BCI, a variety of issues had to be addressed. These include: 1) proposing a new feature extraction and classification framework for the recognition of imagined speech from EEG signals; 2) recognizing the grammatical class of imagined words from EEG signals; and 3) discriminating different cognitive tasks associated with speech in the brain, such as overt speech, covert speech, and visual imagery. In this work, machine learning and deep learning methods were used to analyze EEG signals.
For the recognition of imagined speech from EEG signals, a new EEG database was collected while participants mentally spoke (imagined speech) the presented words. Along with imagined speech, EEG data were recorded for visual imagery (imagining a scene or an image) and overt speech (verbal speech). Spectro-temporal and spatio-temporal features were investigated for the classification of imagined words from EEG signals. Further, a deep learning framework using a convolutional network and an attention mechanism was implemented to learn features in the spatial, temporal, and spectral domains. The method achieved a recognition rate of 76.6% for three binary word pairs. These experiments show that deep learning algorithms are well suited to imagined-speech recognition from EEG signals, owing to their ability to extract features from non-linear and non-stationary signals. Grammatical classes of imagined words were also recognized from EEG signals using a multi-channel convolutional network framework. This method was extended to a multi-level recognition system for multi-class classification of imagined words, which achieved an accuracy of 52.9% for 10 words, a substantial improvement over previous work.
To investigate the differences between imagined speech, verbal speech, and visual imagery in EEG signals, we used multivariate pattern analysis (MVPA). MVPA identified the time segments in which the neural oscillations for the different cognitive tasks were linearly separable. The frequencies yielding the most discrimination between the different cognitive tasks were also explored. A framework was proposed to discriminate between two cognitive tasks based on the spatio-temporal patterns in EEG signals. The proposed method used the K-means clustering algorithm to find the best electrode combination, and a convolutional-attention network for feature extraction and classification. The proposed method achieved high recognition rates of 82.9% and 77.7%.
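The K-means step for electrode selection can be sketched from scratch; the rule of keeping the electrode closest to each centroid is an illustrative assumption, not necessarily the thesis's exact selection criterion:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm. X holds one feature vector per electrode."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each electrode to its nearest center, then recompute means.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):        # skip empty clusters
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

def pick_electrodes(X, k):
    """Keep one representative electrode per cluster: the member closest
    to its centroid (an illustrative selection rule)."""
    labels, centers = kmeans(X, k)
    picks = []
    for j in range(k):
        idx = np.flatnonzero(labels == j)
        picks.append(int(idx[np.argmin(((X[idx] - centers[j]) ** 2).sum(-1))]))
    return sorted(picks)

# Ten electrodes whose feature vectors form two tight groups.
rng = np.random.default_rng(7)
feats = np.vstack([rng.normal(0, 0.1, (5, 3)), rng.normal(10, 0.1, (5, 3))])
picks = pick_electrodes(feats, k=2)
```

Clustering electrodes by feature similarity and keeping one representative per cluster reduces the channel count before the convolutional-attention stage.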
The results of this research suggest that a communication-based BCI system can be designed using deep learning methods. Further, this work adds to existing knowledge in the field of communication-based BCI systems.