
    Analysis of Human Gait Using Hybrid EEG-fNIRS-Based BCI System: A Review

    Human gait is a complex activity that requires high coordination between the central nervous system, the limbs, and the musculoskeletal system. More research is needed to understand the complexity of this coordination in order to design better and more effective rehabilitation strategies for gait disorders. Electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) are among the most widely used technologies for monitoring brain activity because of their portability, non-invasiveness, and relatively low cost. Fusing EEG and fNIRS is a well-established methodology proven to enhance brain–computer interface (BCI) performance in terms of classification accuracy, number of control commands, and response time. Although there has been significant research on hybrid BCIs (hBCIs) combining EEG and fNIRS for different tasks and human activities, human gait remains underinvestigated. In this article, we aim to shed light on recent developments in the analysis of human gait using a hybrid EEG-fNIRS-based BCI system. The review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines during the data collection and selection phase. We place particular focus on the commonly used signal processing and machine learning algorithms and survey the potential applications of gait analysis. Some of the critical findings of this survey are as follows. First, hardware specifications and experimental paradigms should be chosen carefully because of their direct impact on the quality of gait assessment. Second, since both modalities, EEG and fNIRS, are sensitive to motion artifacts and to instrumental and physiological noise, more robust and sophisticated signal processing algorithms are needed. Third, hybrid temporal and spatial features, obtained by fusing EEG and fNIRS and associated with cortical activation, can help identify the correlation between brain activation and gait. In conclusion, hBCI (EEG + fNIRS) systems remain little explored for the lower limbs, which are more complex to study than the upper limbs, and existing BCI systems for gait monitoring tend to focus on a single modality. We foresee vast potential in adopting hBCIs for gait analysis, and imminent technical breakthroughs are expected in using hybrid EEG-fNIRS-based BCIs for gait to control assistive devices and to monitor neuroplasticity in neurorehabilitation. However, although these hybrid systems perform well in controlled experimental environments, there is still a long way to go before they can be adopted as certified medical devices in real-life clinical applications.
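    The kind of feature-level EEG-fNIRS fusion discussed above can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example (not taken from any reviewed study): it assumes per-trial EEG band power and fNIRS HbO mean/slope features, concatenates them, and scores a linear classifier; array shapes, band limits, and sampling rates are illustrative assumptions.

```python
# Minimal sketch of feature-level EEG-fNIRS fusion for trial-wise classification.
# Shapes, frequency band, sampling rates, and the LDA classifier are assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def eeg_bandpower(eeg, fs=250, band=(8, 30)):
    """Mean power in `band` per channel; eeg shape (trials, channels, samples)."""
    f, psd = welch(eeg, fs=fs, axis=-1)
    mask = (f >= band[0]) & (f <= band[1])
    return psd[..., mask].mean(axis=-1)           # (trials, channels)

def fnirs_hemodynamics(hbo, fs=10):
    """Mean level and linear slope of HbO per channel; hbo shape (trials, channels, samples)."""
    t = np.arange(hbo.shape[-1]) / fs
    mean = hbo.mean(axis=-1)
    slope = np.polyfit(t, hbo.reshape(-1, hbo.shape[-1]).T, 1)[0].reshape(hbo.shape[:2])
    return np.concatenate([mean, slope], axis=1)  # (trials, 2 * channels)

def hybrid_score(X_eeg, X_hbo, y):
    """X_eeg: (trials, eeg_ch, samples); X_hbo: (trials, fnirs_ch, samples); y: labels."""
    feats = np.hstack([eeg_bandpower(X_eeg), fnirs_hemodynamics(X_hbo)])
    return cross_val_score(LinearDiscriminantAnalysis(), feats, y, cv=5).mean()
```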

    Advancing Pattern Recognition Techniques for Brain-Computer Interfaces: Optimizing Discriminability, Compactness, and Robustness

    In this dissertation, we formulate three central objective criteria for the systematic advancement of pattern recognition in modern brain-computer interfaces (BCIs). Building on these, a pattern recognition framework for BCIs is developed that unifies the three criteria through a new optimization algorithm. Furthermore, we demonstrate the successful application of our approach to two innovative BCI paradigms for which no established pattern recognition methodology previously existed.

    Signal Processing Combined with Machine Learning for Biomedical Applications

    The Master’s thesis comprises four projects in the realm of machine learning and signal processing. The abstract of the thesis is divided into four parts, presented as follows.

    Abstract 1: A Kullback-Leibler Divergence-Based Predictor for Inter-Subject Associative BCI. Inherent inter-subject variability in sensorimotor brain dynamics hinders the transferability of brain-computer interface (BCI) model parameters across subjects. An individual training session is essential for effective BCI control to compensate for this variability. We report a Kullback-Leibler divergence (KLD)-based predictor for inter-subject associative BCI. An online dataset comprising left/right hand, both feet, and tongue motor imagery tasks was used to examine the correlation between the proposed inter-subject predictor and BCI performance. Linear regression between the KLD predictor and BCI performance showed a strong inverse correlation (r = -0.62). The KLD predictor can act as an indicator for generalized inter-subject associative BCI designs.

    Abstract 2: Multiclass Sensorimotor BCI Based on Simultaneous EEG and fNIRS. A hybrid BCI (hBCI) utilizes multiple data modalities to acquire brain signals during motor execution (ME) tasks. Studies have shown significant enhancements in the classification of binary-class ME-hBCIs; however, four-class ME-hBCI classification has yet to be performed using multiclass algorithms. We present quad-class classification of ME-hBCI tasks from simultaneous EEG-fNIRS recordings. Appropriate features were extracted from the EEG and fNIRS signals, combined into hybrid features, and classified with a support vector machine. The results showed a significant increase in accuracy for the hybrid approach over the single modalities, demonstrating the performance-enhancing capability of the hybrid method.

    Abstract 3: Deep Learning for Improved Inter-Subject EEG-fNIRS Hybrid BCI Performance. Multimodal hybrid BCIs have become popular for improving performance; however, inherent inter-subject and inter-session variation in participants’ brain dynamics poses obstacles to achieving high performance. This work presents an inter-subject hBCI to classify right/left-hand MI tasks from simultaneous EEG-fNIRS recordings of 29 healthy subjects. State-of-the-art features were extracted from the EEG and fNIRS signals, combined into hybrid features, and finally classified using a deep long short-term memory (LSTM) classifier. The results showed an increase in inter-subject performance for the hybrid system while making it more robust to changes in brain dynamics, hinting at the feasibility of EEG-fNIRS-based inter-subject hBCIs.

    Abstract 4: Microwave-Based Glucose Concentration Classification by Machine Learning. Non-invasive blood sugar measurement has attracted increasing attention in recent years, given the rise in diabetes-related complications and the inconvenience of traditional blood-based methods. This work utilized machine learning (ML) algorithms to classify glucose concentration (GC) from measured broadband microwave scattering signals (S11). An N-type microwave adapter pair was used to measure the swept-frequency scattering parameters (S-parameters) of glucose solutions with GC varying from 50-10,000 dg/dL. Dielectric parameters were retrieved from the measured wideband complex S-parameters based on the modified Debye dielectric dispersion model. The results indicate that the best algorithm achieved perfect classification accuracy, suggesting an alternative way to develop a GC detection method using ML algorithms.
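    As an illustration of the KLD-based predictor described in Abstract 1, the sketch below computes a closed-form KL divergence between Gaussian approximations of two subjects' feature distributions and correlates the resulting per-subject values with BCI accuracy. The feature extraction step and the choice of reference distribution are assumptions for illustration, not the thesis' exact pipeline.

```python
# Sketch of a Kullback-Leibler divergence (KLD) predictor between a candidate
# subject's feature distribution and a reference distribution, assuming
# multivariate Gaussian approximations of the features (trials x features).
import numpy as np
from scipy.stats import linregress

def gaussian_kld(X_subj, X_ref):
    """KL( N(subject) || N(reference) ) for feature matrices of shape (trials, features)."""
    mu0, mu1 = X_subj.mean(axis=0), X_ref.mean(axis=0)
    S0 = np.cov(X_subj, rowvar=False)
    S1 = np.cov(X_ref, rowvar=False)
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    k = mu0.size
    _, logdet0 = np.linalg.slogdet(S0)
    _, logdet1 = np.linalg.slogdet(S1)
    return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - k + logdet1 - logdet0)

def predictor_correlation(klds, accs):
    """klds: one KLD value per subject; accs: that subject's BCI accuracy."""
    slope, intercept, r, p, stderr = linregress(klds, accs)
    return r  # the thesis reports a strong inverse correlation (r = -0.62)
```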

    EEG source imaging for improved control BCI performance


    Brain Computer Interfaces and Emotional Involvement: Theory, Research, and Applications

    This reprint is dedicated to the study of brain activity related to emotional and attentional involvement as measured by brain–computer interface (BCI) systems designed for different purposes. A BCI system can translate brain signals (e.g., electrical or hemodynamic indicators of brain activity) into a command to execute an action in the BCI application (e.g., a wheelchair, a cursor on a screen, a spelling device, or a game). These tools have the advantage of real-time access to the ongoing brain activity of the individual, which can provide insight into the user’s emotional and attentional states by training a classification algorithm to recognize mental states. The success of BCI systems in contemporary neuroscientific research relies on the fact that they allow one to “think outside the lab”. The integration of technological solutions, artificial intelligence, and cognitive science has allowed, and will continue to allow, researchers to envision more and more applications for the future. Clinical and everyday uses are described with the aim of inviting readers to open their minds and imagine potential further developments.

    Adaptive Cognitive Interaction Systems

    Adaptive cognitive interaction systems observe and model the state of their user and adapt the system behavior accordingly. Such a system consists of three components: the empirical cognitive model, the computational cognitive model, and the adaptive interaction manager. This thesis contains numerous contributions to the development of these components as well as to their combination. The results are validated in numerous user studies.

    Electroencephalogram Signal Processing For Hybrid Brain Computer Interface Systems

    The goal of this research was to evaluate and compare three types of brain-computer interface (BCI) systems as virtual spelling paradigms: P300, steady-state visually evoked potentials (SSVEP), and hybrid. The hybrid BCI is an innovative approach that combines P300 and SSVEP; however, it is challenging to process the resulting hybrid signals to extract both types of information simultaneously and effectively. A major step toward a modern BCI system was moving from a traditional LED setup to an electronic LCD monitor. Such a transition makes it possible not only to develop the graphics of interest but also to generate objects flickering at different frequencies. Pilot experiments were performed to design and tune the parameters of the spelling paradigms, including peak detection for different ranges of SSVEP frequencies, placement of objects on the LCD monitor, design of the spelling keyboard, and the window length for SSVEP peak detection. All experiments were devised to evaluate performance in terms of spelling accuracy, region error, and adjacency error across all paradigms: P300, SSVEP, and hybrid. Because of the different nature of P300 and SSVEP, designing a hybrid P300-SSVEP signal processing scheme demands a significant amount of research. Two critical questions in hybrid BCI are: (1) which signal processing strategy can best measure the user's intent, and (2) what paradigm can fuse these two techniques in a simple but effective way. To answer these questions, this project focused mainly on developing signal processing and classification techniques for the hybrid BCI. The hybrid BCI was implemented by extracting the relevant information from the brain signals, selecting the optimal features containing maximal discriminative information about the speller characters of interest, and efficiently classifying the hybrid signals. The designed spellers were developed with the aim of improving the quality of life of patients with disabilities by utilizing visually controlled BCI paradigms. The paradigms consist of electrodes to record the electroencephalogram (EEG) during stimulation, software to analyze the collected data, and a computing device that takes the subject’s EEG as input to estimate the spelled character. The signal processing phase included preprocessing, feature extraction, and feature selection. Captured EEG data are usually a superposition of the signals of interest with unwanted signals from muscles and from non-biological artifacts. The accuracy of each trial and the average accuracy for each subject were computed. Overall, the average accuracies of the P300 and SSVEP spelling paradigms were 84% and 68.5%, respectively, while the hybrid paradigm achieved an average accuracy of 79%. The P300 spelling paradigm was therefore more accurate than both the SSVEP and hybrid paradigms, but the hybrid system was faster and more comfortable to look at than the other paradigms. This work is significant because it has great potential for improving BCI research in the design and application of clinically suitable speller paradigms.
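    The SSVEP peak-detection step described above can be sketched as follows: estimate the attended flicker frequency by comparing narrow-band spectral power at each candidate stimulation frequency and its harmonics. The window length, channel handling, and frequency set below are assumptions for illustration; the thesis tuned these parameters in its pilot experiments.

```python
# Sketch of frequency detection for the SSVEP side of a hybrid speller.
import numpy as np
from scipy.signal import welch

def detect_ssvep(eeg, fs, candidate_freqs, bandwidth=0.5, harmonics=2):
    """eeg: (channels, samples) from occipital electrodes.
    Returns the candidate flicker frequency with the highest summed PSD power
    over the fundamental and its harmonics."""
    f, psd = welch(eeg, fs=fs, nperseg=int(2 * fs), axis=-1)
    psd = psd.mean(axis=0)                       # average over channels
    scores = []
    for f0 in candidate_freqs:
        score = 0.0
        for h in range(1, harmonics + 1):
            mask = np.abs(f - h * f0) <= bandwidth
            score += psd[mask].sum()             # narrow-band power around each harmonic
        scores.append(score)
    return candidate_freqs[int(np.argmax(scores))]

# Example: decide between 8, 10, 12, and 15 Hz flicker targets for a 2-second epoch
# target = detect_ssvep(epoch, fs=256, candidate_freqs=[8, 10, 12, 15])
```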

    Wearable brain computer interfaces with near infrared spectroscopy

    Brain-computer interfaces (BCIs) are devices capable of relaying information directly from the brain to a digital device. BCIs have been proposed for a diverse range of clinical and commercial applications; for example, to allow paralyzed subjects to communicate, or to improve human-machine interaction. At their core, BCIs need to predict the current state of the brain from variables measuring functional physiology. Functional near-infrared spectroscopy (fNIRS) is a non-invasive optical technology able to measure hemodynamic changes in the brain. Along with electroencephalography (EEG), fNIRS is the only technique that allows non-invasive and portable sensing of brain signals. Portability and wearability are very desirable characteristics for BCIs, as they allow them to be used in contexts beyond the laboratory, extending their usability for clinical and commercial applications as well as for ecologically valid research. Unfortunately, due to limited access to the brain, non-invasive BCIs tend to suffer from low accuracy in their estimation of the brain state. It has been suggested that feedback could increase BCI accuracy, as the brain normally relies on sensory feedback to adjust its strategies. Despite this, presenting relevant and accurate feedback in a timely manner is challenging when processing fNIRS signals, as they tend to be contaminated by physiological and motion artifacts. In this dissertation, I present the hardware and software solutions we proposed and developed to deal with these challenges. First, I describe ninjaNIRS, the wearable open-source fNIRS device we developed in our laboratory, which could help make fNIRS neuroscience and BCIs more accessible. Next, I present an adaptive filter strategy to recover neural responses from fNIRS signals in real time, which could be used for feedback and classification in a BCI paradigm. We showed that our wearable fNIRS device can operate autonomously for up to three hours and can easily be carried in a backpack, while offering noise-equivalent power comparable to commercial devices. Our adaptive multimodal Kalman filter strategy provided a six-fold increase in the contrast-to-noise ratio of the brain signals compared to standard filtering, while being able to process at least 24 channels at 400 samples per second on a standard computer. This filtering strategy, along with visual feedback during a left vs. right motor imagery task, showed a relative increase in accuracy of 37.5% compared to not using feedback. With this, we show that it is possible to present relevant feedback for fNIRS BCIs in real time. The findings of this dissertation may help improve the design of future fNIRS BCIs, and thus increase the usability and reliability of this technology.
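    A simplified, single-channel stand-in for the kind of real-time Kalman filtering described above is sketched below: a random-walk Kalman filter tracks regression weights for a task-evoked regressor and a short-separation (systemic physiology) regressor, sample by sample. This is an assumed, simplified illustration rather than the dissertation's exact multimodal filter; the noise variances are placeholder values.

```python
# Random-walk Kalman filter for online regression of an fNIRS long channel
# onto task and nuisance regressors (a common real-time denoising strategy).
import numpy as np

def kalman_regression(y, regressors, q=1e-5, r=1e-2):
    """y: (samples,) long-channel signal; regressors: (samples, k), e.g. columns for
    an evoked-response model and a short-separation channel.
    Returns the weight trajectory, shape (samples, k)."""
    n, k = regressors.shape
    w = np.zeros(k)                   # state: regression weights
    P = np.eye(k)                     # state covariance
    Q = q * np.eye(k)                 # random-walk process noise
    W = np.zeros((n, k))
    for t in range(n):
        h = regressors[t]             # observation vector at this sample
        P = P + Q                     # predict: weights drift as a random walk
        s = h @ P @ h + r             # innovation variance
        K = P @ h / s                 # Kalman gain
        w = w + K * (y[t] - h @ w)    # update weights with the new sample
        P = P - np.outer(K, h) @ P
        W[t] = w
    return W

# The task-related component over time can then be read out as, e.g.:
# neural_estimate = regressors[:, 0] * W[:, 0]
```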

    Methods for brain-computer interfaces based on MEG and motor imagery

    Brain–computer interfaces (BCIs) are systems that translate the user's brain activity into commands for external devices in real time. Magnetoencephalography (MEG) measures electromagnetic brain activity noninvasively and can be used in BCIs. The aim of this thesis was to develop an MEG-based BCI for decoding hand motor imagery. The BCI could eventually serve as a therapeutic method for patients recovering from, e.g., cerebral stroke. Here, we validated machine-learning methods for decoding motor imagery (MI)-related brain activity using MEG measurements of healthy subjects. In addition, we studied the effect of different BCI feedback modalities on the subjects' MI-related brain function.

    In Study I, we compared feature extraction methods for classifying left- vs right-hand MI, and MI vs rest. We found that spatial filtering followed by extraction of band-power features yielded better classification accuracy than time–frequency features extracted from MEG channels above the parietal area. Furthermore, prior spatial filtering improved the discrimination capability of the time–frequency features.

    The training data for a BCI are typically collected at the beginning of each measurement session. However, as this can be time-consuming and exhausting for patients, data from other subjects' measurements could be used for training as well. In Study II, methods for across-subject classification of MI were compared. The results showed that a classifier based on multi-task learning with l2,1-norm regularized logistic regression was the best method for across-subject decoding for both MEG and electroencephalography (EEG). In Study II, we also compared the decoding results of simultaneously measured EEG and MEG data, and investigated whether MEG responses to passive hand movements could be used to train a classifier to detect MI. MEG yielded slightly better results than EEG. Training the classifiers with the subject's own or other subjects' passive movements did not result in high accuracy; passive movements should thus not be used for calibrating an MI-BCI.

    In Study III, we investigated how the amplitude of sensorimotor rhythms (SMR) changes while subjects practise hand MI with a BCI. We compared the effect of visual and proprioceptive feedback on brain functional changes during a single measurement session. In subjects receiving proprioceptive feedback, SMR power increased linearly over the session in motor cortical regions, whereas a similar effect was not observed in subjects receiving purely visual feedback. According to these results, proprioceptive feedback should be preferred over visual feedback, especially in BCIs aiming at recovery of hand function.

    The methods presented in this thesis are suitable for an MEG-based BCI. The decoding results can be used as a benchmark when developing classifiers specifically for MI-related MEG data.
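    A minimal sketch of the Study I-style pipeline (spatial filtering followed by band-power features and a linear classifier) is shown below, using MNE-Python's CSP implementation as a generic stand-in for the spatial filters evaluated in the thesis; epoch shapes and the cross-validation setup are illustrative assumptions.

```python
# Spatial filtering + log band-power features + linear classifier for MI decoding.
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def mi_decoding_accuracy(X, y):
    """X: band-pass filtered MEG epochs, shape (n_trials, n_channels, n_samples);
    y: labels, e.g. 0 = left-hand MI, 1 = right-hand MI."""
    clf = make_pipeline(
        CSP(n_components=6, log=True),      # spatial filters, then log band power
        LinearDiscriminantAnalysis(),       # linear classification of the features
    )
    return cross_val_score(clf, X, y, cv=5).mean()
```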

    Multimodal approach for pilot mental state detection based on EEG

    The safety of flight operations depends on the cognitive abilities of pilots. In recent years, there has been growing concern about potential accidents caused by declining pilot mental states. We have developed a novel multimodal approach for mental state detection in pilots using electroencephalography (EEG) signals. Our approach includes an advanced automated preprocessing pipeline to remove artefacts from the EEG data, a feature extraction method based on Riemannian geometry analysis of the cleaned EEG data, and a hybrid ensemble learning technique that combines the results of several machine learning classifiers. The proposed approach provides improved accuracy compared to existing methods, achieving an accuracy of 86% when tested on the cleaned EEG data. The EEG dataset was collected from 18 pilots who participated in flight experiments and is publicly available through NASA's open portal. This study presents a reliable and efficient solution for detecting mental states in pilots and highlights the potential of EEG signals and ensemble learning algorithms in developing cognitive cockpit systems. The use of an automated preprocessing pipeline, a feature extraction method based on Riemannian geometry analysis, and a hybrid ensemble learning technique sets this work apart from previous efforts in the field and demonstrates the innovative nature of the proposed approach.
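    A hedged sketch of the pipeline described above, assuming the pyriemann and scikit-learn libraries: per-epoch covariance matrices are mapped to the Riemannian tangent space and fed to a soft-voting ensemble of standard classifiers. The specific base classifiers and parameters are assumptions; the paper's exact ensemble may differ.

```python
# Riemannian tangent-space features + soft-voting ensemble for EEG mental-state detection.
from pyriemann.estimation import Covariances
from pyriemann.tangentspace import TangentSpace
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def mental_state_accuracy(X, y):
    """X: cleaned EEG epochs, shape (n_trials, n_channels, n_samples); y: mental-state labels."""
    ensemble = VotingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=1000)),
            ("svm", SVC(probability=True)),
            ("lda", LinearDiscriminantAnalysis()),
        ],
        voting="soft",                       # combine predicted probabilities
    )
    clf = make_pipeline(
        Covariances(estimator="oas"),        # SPD covariance matrix per epoch
        TangentSpace(metric="riemann"),      # project covariances to the tangent space
        ensemble,
    )
    return cross_val_score(clf, X, y, cv=5).mean()
```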