    A Hybrid Brain-Computer Interface Based on Electroencephalography and Functional Transcranial Doppler Ultrasound

    Hybrid brain-computer interfaces (BCIs) combining multiple brain imaging modalities have been proposed recently to boost the performance of single-modality BCIs. We advance the state of hybrid BCIs by introducing a novel system that measures electrical brain activity as well as cerebral blood flow velocity using electroencephalography (EEG) and functional transcranial Doppler ultrasound (fTCD), respectively. The system employs two different paradigms to induce simultaneous changes in EEG and fTCD and to infer user intent. One paradigm uses visual stimuli to induce steady-state visually evoked potentials (SSVEPs) while instructing users to perform word generation (WG) and mental rotation (MR) tasks; the other instructs users, through visual stimuli, to perform left and right arm motor imagery (MI) tasks. To improve the accuracy and information transfer rate (ITR) of the proposed system over those obtained in our preliminary analysis with classical feature extraction approaches, our main contribution is the multi-modal fusion of EEG and fTCD features. Specifically, we propose a probabilistic fusion of EEG and fTCD evidence instead of the simple concatenation of EEG and fTCD feature vectors performed in our preliminary analysis. Experimental results showed that the MI paradigm outperformed the MR/WG one in terms of both accuracy and ITR. In particular, average accuracies of 93.85%, 93.71%, and 100% and average ITRs of 19.89, 26.55, and 40.83 bits/min were achieved for right MI vs. baseline, left MI vs. baseline, and right MI vs. left MI, respectively. Moreover, for both paradigms, the EEG-fTCD BCI with the proposed analysis techniques outperformed all EEG-fNIRS BCIs in terms of accuracy and ITR. In addition, to investigate the feasibility of increasing the possible number of BCI commands, we extended our approaches to solve the 3-class problems for both paradigms. The MI paradigm again outperformed the MR/WG paradigm, achieving 96.58% average accuracy and 45 bits/min average ITR. Finally, we introduced a transfer learning approach to reduce the calibration requirements of the proposed BCI. This approach was found to be very effective, especially with the MI paradigm, as it reduced the calibration requirements by at least 60.43%.
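
    The probabilistic fusion is only described at a high level here; below is a minimal sketch of what such a fusion could look like, assuming naive-Bayes-style independence between the two modalities and a uniform class prior. The classifier choice (QDA) and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

# Hypothetical sketch: fuse per-modality class posteriors instead of
# concatenating EEG and fTCD feature vectors.
# X_eeg, X_ftcd: (n_trials, n_features) arrays; y: (n_trials,) labels.

def fit_modality_models(X_eeg, X_ftcd, y):
    eeg_clf = QuadraticDiscriminantAnalysis().fit(X_eeg, y)
    ftcd_clf = QuadraticDiscriminantAnalysis().fit(X_ftcd, y)
    return eeg_clf, ftcd_clf

def predict_fused(eeg_clf, ftcd_clf, X_eeg, X_ftcd):
    # Under modality independence and a uniform class prior, the product
    # of per-modality posteriors is proportional to the joint posterior.
    post = eeg_clf.predict_proba(X_eeg) * ftcd_clf.predict_proba(X_ftcd)
    post /= post.sum(axis=1, keepdims=True)  # renormalize over classes
    return post.argmax(axis=1), post
```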

    Transfer learning for a multimodal hybrid EEG-FTCD Brain-Computer Interface

    Transfer learning has been used to overcome the limitations of machine learning in brain-computer interface (BCI) applications. Transfer learning aims to provide higher performance than no-transfer machine learning when only a limited amount of training data is available, and can consequently reduce training and calibration requirements. BCI systems are designed to provide communication and control tools for individuals with limited speech and physical abilities (LSPA). Most noninvasive BCI systems are based on electroencephalography (EEG) because of EEG's cost effectiveness and portability. However, EEG signals present a low signal-to-noise ratio and nonstationarity due to background brain activity, which may decrease the overall performance of the system. To overcome these disadvantages of EEG, in our previous work we developed two different multi-modal BCI systems based on EEG and functional transcranial Doppler (fTCD), a cerebral blood flow velocity measure. These two multi-modal systems, which combine EEG and fTCD signals, aim to reduce the performance degradation observed when EEG is the only BCI modality. One of the systems is based on steady-state evoked potentials and the other is designed around motor imagery paradigms. Our results have shown that such a hybrid system outperforms EEG-only BCIs. However, both systems require a significant amount of training data for personalized design, which could be tiresome for the target population. In this study, we extend these systems with a new transfer learning algorithm, demonstrated on three different binary classification tasks for both BCIs, in order to reduce the calibration requirements. Performing experiments with healthy participants, we collected EEG and fTCD data using both BCI systems. To apply transfer learning and reduce the calibration requirements, for each participant we identify the most informative datasets from the remaining participants based on probabilistic similarities between the class-conditional distributions and augment the training set with this data. We demonstrate that transfer learning reduces the calibration requirements by up to 87.5% for these BCI systems. Also, through a comparison of the LDA, QDA, and SVM classifiers, we observe that QDA achieves the largest gain of transfer learning over no-transfer accuracy.
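
    The selection step lends itself to a short sketch: model each participant's class-conditional feature distributions as Gaussians, rank the other participants by a symmetric KL divergence to the target, and borrow the closest participants' trials as extra training data. This is a minimal illustration under a Gaussian assumption; the paper's exact similarity measure is not reproduced here, and all names are hypothetical.

```python
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """KL( N(mu0, cov0) || N(mu1, cov1) ) for multivariate Gaussians."""
    k = mu0.shape[0]
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def class_divergence(X_a, y_a, X_b, y_b):
    """Average symmetric KL over classes between participants a and b."""
    total = 0.0
    classes = np.unique(y_a)
    for c in classes:
        Xa, Xb = X_a[y_a == c], X_b[y_b == c]
        mu_a, cov_a = Xa.mean(0), np.cov(Xa.T)
        mu_b, cov_b = Xb.mean(0), np.cov(Xb.T)
        total += (gaussian_kl(mu_a, cov_a, mu_b, cov_b)
                  + gaussian_kl(mu_b, cov_b, mu_a, cov_a))
    return total / len(classes)

# Rank source participants for a target and keep the n_keep closest:
# scores = [class_divergence(X_tgt, y_tgt, Xs, ys) for Xs, ys in sources]
# chosen = np.argsort(scores)[:n_keep]
```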

    The status of textile-based dry EEG electrodes

    Electroencephalogram (EEG) is the biopotential recording of electrical signals generated by brain activity. It is useful for monitoring sleep quality and alertness, for clinical applications, for the diagnosis and treatment of patients with epilepsy, Parkinson's disease, and other neurological disorders, and for continuous monitoring of tiredness/alertness in the field. We provide a review of textile-based EEG electrodes. Most of the developed textile-based EEG electrodes remain on the shelf as published research results, due to limitations in flexibility, stickability, and washability, although the respective authors reported that the signals obtained were comparable to standard EEG. In addition, nearly all published works were not quantitatively compared and contrasted with conventional wet electrodes to prove feasibility for actual applications. This scenario will probably continue to yield publication credit, but it does not add to the growth of the field unless new integration approaches and new conductive polymer composites are developed to make textile-based EEG a reality for biopotential monitoring.

    Brain-computer interface of focus and motor imagery using wavelet and recurrent neural networks

    A brain-computer interface is a technology that allows a device to be operated without muscles or sound, directly from the brain through processed electrical signals. The technology works by capturing electrical or magnetic signals from the brain, which are then processed to extract the information they contain. Usually, a BCI uses information from electroencephalogram (EEG) signals based on various extracted variables. This study proposes a BCI for driving external devices, such as a drone simulator, based on EEG signal information. Motor imagery (MI) and focus variables were extracted from the EEG signal using wavelets. They were then classified by a recurrent neural network (RNN); to overcome the RNN's vanishing-memory problem, long short-term memory (LSTM) units were used. The results showed that the BCI, using wavelets and an RNN, can drive external devices on non-training data with an accuracy of 79.6%. The experiments showed that the AdaDelta model is better than the Adam model in terms of accuracy and loss values, whereas in training time the Adam model is faster than the AdaDelta model.
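
    As a rough illustration of the described pipeline, the sketch below extracts wavelet sub-band energies from an EEG epoch with PyWavelets and classifies the resulting per-channel sequence with a Keras LSTM, compiled with the AdaDelta optimizer the study favors. The wavelet family ('db4'), decomposition level, layer sizes, and epoch shapes are all illustrative assumptions.

```python
import numpy as np
import pywt
from tensorflow import keras

def wavelet_features(epoch, wavelet="db4", level=4):
    """epoch: (n_channels, n_samples) -> (n_channels, level+1) log energies."""
    feats = []
    for ch in epoch:
        coeffs = pywt.wavedec(ch, wavelet, level=level)  # [cA4, cD4..cD1]
        feats.append([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])
    return np.array(feats)

# Illustrative choice: feed the channels to the LSTM as sequence steps,
# each step carrying 5 sub-band energies.
model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(None, 5)),
    keras.layers.Dense(2, activation="softmax"),  # e.g. two MI/focus classes
])
model.compile(optimizer=keras.optimizers.Adadelta(),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```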

    Non-Invasive Brain-to-Brain Interface (BBI): Establishing Functional Links between Two Brains

    Transcranial focused ultrasound (FUS) is capable of modulating the neural activity of specific brain regions, with a potential role as a non-invasive computer-to-brain interface (CBI). In conjunction with brain-computer interface (BCI) techniques that translate brain function into computer commands, we investigated the feasibility of using the FUS-based CBI to non-invasively establish a functional link between the brains of different species (i.e., a human and a Sprague-Dawley rat), thus creating a brain-to-brain interface (BBI). The implementation aimed to non-invasively translate the human volunteer's intention into stimulation of the rat's brain motor area responsible for tail movement. The volunteer initiated the intention by looking at a strobe light flickering on a computer display, and the degree of synchronization of the electroencephalographic steady-state visually evoked potentials (SSVEPs) with the strobe frequency was analyzed by a computer. Increased SSVEP signal amplitude, indicating the volunteer's intention, triggered the delivery of burst-mode FUS (350 kHz ultrasound frequency, 0.5 ms tone burst duration, 1 kHz pulse repetition frequency, 300 ms total duration) to transcranially excite the motor area of an anesthetized rat. The successful excitation subsequently elicited the tail movement, which was detected by a motion sensor. The interface achieved 94.0 ± 3.0% accuracy, with a time delay of 1.59 ± 1.07 s from thought initiation to tail movement. Our results demonstrate the feasibility of a computer-mediated BBI that links central neural functions between two biological entities, which may open unexplored opportunities in the study of neuroscience, with potential implications for therapeutic applications.
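
    The detection side of this loop reduces to estimating EEG power at the strobe frequency and firing a trigger when it exceeds a calibrated threshold. Below is a minimal sketch of that logic; the sampling rate, strobe frequency, windowing, and threshold values are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def ssvep_power(eeg_window, fs, strobe_hz):
    """Spectral power at the strobe frequency for one 1-D analysis window."""
    spectrum = np.abs(np.fft.rfft(eeg_window * np.hanning(len(eeg_window)))) ** 2
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - strobe_hz))]

def should_trigger(eeg_window, fs, strobe_hz=15.0, baseline=1.0, ratio=5.0):
    # Fire (e.g. deliver the FUS burst) when stimulus-locked power rises
    # well above its resting baseline level.
    return ssvep_power(eeg_window, fs, strobe_hz) / baseline > ratio
```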

    Mind Control Robotic Arm: Augmentative and Alternative Communication in the Classroom Environment

    In recent years, technological advancements have greatly benefited the field of prosthetics, an important technology on which a large number of disabled people depend. In order to provide augmentative and alternative methods of communication to disabled people with various neuromuscular disorders, we must ensure they have appropriate equipment to express themselves. Different types of arms (robotic, surgical, bionic, prosthetic, and static) are evaluated in terms of resistance, usability, flexibility, cost, and potential. The main problems with these techniques are their high cost, the difficulty of installing and maintaining them, and the possibility that surgery may be required. This paper therefore describes an idea for combining an EEG-controlled smart prosthetic arm with a smart robotic hand. An electrode headset is used to capture brain signals in order to control the robotic hand. Creating a robot arm that can help disabled people lead a more independent life is the main objective of this paper.
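
    The control path implied here (headset signal in, arm command out) could look something like the following sketch, which maps a classifier's label for each EEG window to a command sent over a serial link. The port name, baud rate, command bytes, and classify_window() are all hypothetical, introduced only for illustration.

```python
import serial  # pyserial

# Hypothetical label-to-command mapping for the robotic hand.
COMMANDS = {0: b"OPEN\n", 1: b"CLOSE\n", 2: b"REST\n"}

def control_loop(classify_window, eeg_windows, port="/dev/ttyUSB0", baud=9600):
    """Classify each EEG window and forward the resulting command to the arm."""
    with serial.Serial(port, baud, timeout=1) as arm:
        for window in eeg_windows:
            label = classify_window(window)  # e.g. a trained EEG classifier
            arm.write(COMMANDS.get(label, b"REST\n"))
```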

    Signal Processing Combined with Machine Learning for Biomedical Applications

    This Master's thesis comprises four projects in the realm of machine learning and signal processing. The abstract is divided into four parts, presented as follows.

    Abstract 1: A Kullback-Leibler Divergence-Based Predictor for Inter-Subject Associative BCI. Inherent inter-subject variability in sensorimotor brain dynamics hinders the transferability of brain-computer interface (BCI) model parameters across subjects. An individual training session is essential for effective BCI control to compensate for this variability. We report a Kullback-Leibler divergence (KLD)-based predictor for inter-subject associative BCI. An online dataset comprising left/right hand, both-feet, and tongue motor imagery tasks was used to show the correlation between the proposed inter-subject predictor and BCI performance. Linear regression between the KLD predictor and BCI performance showed a strong inverse correlation (r = -0.62). The KLD predictor can act as an indicator for generalized inter-subject associative BCI designs.

    Abstract 2: Multiclass Sensorimotor BCI Based on Simultaneous EEG and fNIRS. A hybrid BCI (hBCI) utilizes multiple data modalities to acquire brain signals during motor execution (ME) tasks. Studies have shown significant enhancements in the classification of binary-class ME-hBCIs; however, four-class ME-hBCI classification had yet to be done using multiclass algorithms. We present a quad-class classification of ME-hBCI tasks from simultaneous EEG-fNIRS recordings. Appropriate features were extracted from the EEG and fNIRS signals and combined into hybrid features, then classified with a support vector machine. Results showed a significant increase in hybrid accuracy over the single modalities, demonstrating the hybrid method's capability for performance enhancement.

    Abstract 3: Deep Learning for Improved Inter-Subject EEG-fNIRS Hybrid BCI Performance. Multimodality-based hybrid BCIs have become popular for performance improvement; however, the inherent inter-subject and inter-session variation in participants' brain dynamics poses obstacles to achieving high performance. This work presents an inter-subject hBCI to classify right/left-hand MI tasks from simultaneous EEG-fNIRS recordings of 29 healthy subjects. State-of-the-art features were extracted from the EEG and fNIRS signals and combined into hybrid features, which were finally classified using a deep long short-term memory (LSTM) classifier. Results showed an increase in inter-subject performance for the hybrid system, making it more robust to changes in brain dynamics, and hint at the feasibility of an EEG-fNIRS-based inter-subject hBCI.

    Abstract 4: Microwave-Based Glucose Concentration Classification by Machine Learning. Non-invasive blood sugar measurement has attracted increased attention in recent years, given the increase in diabetes-related complications and the inconvenience of traditional blood-based methods. This work utilized machine learning (ML) algorithms to classify glucose concentration (GC) from measured broadband microwave scattering signals (S11). An N-type microwave adapter pair was utilized to measure the swept-frequency scattering parameters (S-parameters) of glucose solutions with GC varying from 50 to 10,000 mg/dL. Dielectric parameters were retrieved from the measured wideband complex S-parameters based on the modified Debye dielectric dispersion model. Results indicate that the best algorithm achieved perfect classification accuracy, suggesting an alternative way to develop a GC detection method using ML algorithms.
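
    The KLD predictor of Abstract 1 can be sketched compactly: estimate each subject's feature distribution, compute its divergence from a reference distribution (for example, a grand average across subjects), and correlate those divergences with BCI accuracies. The histogram-based density estimate below is an illustrative assumption; the thesis's exact formulation is not reproduced.

```python
import numpy as np
from scipy.stats import entropy, pearsonr

def kld_to_reference(subject_feats, reference_feats, bins=32):
    """KL divergence from a subject's 1-D feature distribution to a reference."""
    lo = min(subject_feats.min(), reference_feats.min())
    hi = max(subject_feats.max(), reference_feats.max())
    p, _ = np.histogram(subject_feats, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(reference_feats, bins=bins, range=(lo, hi), density=True)
    return entropy(p + 1e-12, q + 1e-12)  # KL(p || q), normalized internally

# klds = [kld_to_reference(f, ref) for f in per_subject_features]
# r, pval = pearsonr(klds, accuracies)  # the thesis reports r = -0.62
```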

    Sliding Window along with EEGNet based Prediction of EEG Motor Imagery

    The need for repeated calibration and for accounting for inter-subject variability is a major challenge for practical applications of brain-computer interfaces. The problem becomes more severe in the neurorehabilitation of stroke patients, whose brain-activation patterns may differ considerably from those of healthy individuals due to lesion-altered neurodynamics. Several past approaches handle this problem by creating customized features that generalize across individual subjects. Recently, several deep learning architectures have appeared, although they often fail to produce superior accuracy compared to traditional approaches and mostly do not follow an end-to-end design, as they depend on custom features. However, a few, such as the popular EEGNet architecture, can create more generalizable features in an end-to-end fashion. Although EEGNet has been applied to decoding stroke patients' motor imagery (MI) data, it had limited success and failed to achieve superior performance over traditional methods. In this study, we augment EEGNet-based decoding with a post-processing step called longest consecutive repetition (LCR) in a sliding-window approach, yielding EEGNet+LCR. The proposed approach was tested on an MI dataset of 10 hemiparetic stroke patients, yielding superior performance over EEGNet alone and over a more traditional approach, common spatial patterns (CSP) + support vector machine (SVM), for both within- and cross-subject decoding of MI signals. We also observed comparable and satisfactory performance of EEGNet+LCR in both the within- and cross-subject categories, which is rarely found in the literature, making it a promising candidate for a practically feasible BCI for stroke rehabilitation.
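
    The LCR post-processing step itself is simple enough to sketch: classify overlapping sliding windows of a trial, then output the label whose run of consecutive identical predictions is longest. The window predictions are assumed to come from an upstream classifier such as EEGNet; the function name is hypothetical.

```python
from itertools import groupby

def lcr_decision(window_predictions):
    """window_predictions: sequence of per-window labels -> one trial label."""
    best_label, best_run = None, 0
    for label, run in groupby(window_predictions):  # groups consecutive repeats
        n = sum(1 for _ in run)
        if n > best_run:
            best_label, best_run = label, n
    return best_label

# Example: predictions over 8 sliding windows of one MI trial.
# lcr_decision([0, 1, 1, 1, 0, 1, 0, 0]) -> 1 (longest run: three 1s)
```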

    A bimodal deep learning architecture for EEG-fNIRS decoding of overt and imagined speech
