278 research outputs found

    EEG-Analysis for Cognitive Failure Detection in Driving Using Type-2 Fuzzy Classifiers

    The paper aims to detect on-line cognitive failures in driving by decoding the EEG signals acquired during the visual-alertness, motor-planning and motor-execution phases of the driver. Visual alertness of the driver is detected by classifying the pre-processed EEG signals obtained from the pre-frontal and frontal lobes into two classes: alert and non-alert. Motor-planning performed by the driver is classified, using the pre-processed parietal signals, into four classes: braking, acceleration, steering control and no operation. Cognitive failures in motor-planning are determined by comparing the classified motor-planning class of the driver with the ground-truth class obtained from the co-pilot through a hand-held rotary switch. Lastly, failure in motor execution is detected when the time-delay between the onset of motor imagination and the EMG response exceeds a predefined duration. The most important aspect of the present research lies in cognitive failure classification during the planning phase. The complexity in subjective plan classification arises from the possible overlap of signal features involved in braking, acceleration and steering control. A specialized interval/general type-2 fuzzy set induced neural classifier is employed to eliminate the uncertainty in classification of motor-planning. Experiments undertaken reveal that the proposed neuro-fuzzy classifier outperforms traditional techniques in the presence of external disturbances to the driver. Decoding of visual alertness and motor-execution is performed with kernelized support vector machine classifiers. An analysis reveals that at a driving speed of 64 km/hr, the lead-time is over 600 milliseconds, which offers a safe distance of 10.66 meters.
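The closing lead-time claim is simple kinematics: the safe distance is just the distance covered during the lead time at the stated speed. A minimal sketch to check the figure (the function name is illustrative, not from the paper):

```python
def safe_distance_m(speed_kmh: float, lead_time_s: float) -> float:
    """Distance travelled during the lead time at the given speed,
    i.e. the safety margin gained by an early warning."""
    speed_ms = speed_kmh * 1000.0 / 3600.0  # convert km/h to m/s
    return speed_ms * lead_time_s

# 64 km/h with a 600 ms lead time reproduces the abstract's ~10.66 m margin.
print(safe_distance_m(64, 0.6))
```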

    Support vector machines to detect physiological patterns for EEG and EMG-based human-computer interaction: a review

    Support vector machines (SVMs) are widely used classifiers for detecting physiological patterns in human-computer interaction (HCI). Their success is due to their versatility, robustness and the wide availability of free dedicated toolboxes. Frequently in the literature, insufficient details about the SVM implementation and/or parameter selection are reported, making it impossible to reproduce a study's analysis and results. In order to perform an optimized classification and report a proper description of the results, a comprehensive critical overview of SVM applications is necessary. The aim of this paper is to provide a review of the usage of SVM in the determination of brain and muscle patterns for HCI, focusing on electroencephalography (EEG) and electromyography (EMG) techniques. In particular, an overview of the basic principles of SVM theory is outlined, together with a description of several relevant literature implementations. Furthermore, details of the reviewed papers are listed in tables and statistics on SVM use in the literature are presented. The suitability of SVM for HCI is discussed and critical comparisons with other classifiers are reported.
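Since the review stresses that kernel choice and parameters must be reported for reproducibility, a small sketch of the RBF (Gaussian) kernel, the kernel most commonly reported in this literature, may help make the point concrete; the helper name and data are illustrative:

```python
import numpy as np

def rbf_gram(X: np.ndarray, Y: np.ndarray, gamma: float) -> np.ndarray:
    """RBF kernel matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2),
    computed without explicit pairwise loops."""
    sq = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * np.clip(sq, 0.0, None))  # clip guards tiny negatives

# A feature vector always has unit kernel similarity with itself.
X = np.array([[1.0, 2.0], [3.0, 4.0]])
K = rbf_gram(X, X, gamma=0.5)
print(K[0, 0])  # 1.0
```

Reporting `gamma` (and the soft-margin constant C) alongside results is precisely the kind of detail the review finds missing in published studies.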

    EEG-based brain-computer interfaces using motor-imagery: techniques and challenges.

    Electroencephalography (EEG)-based brain-computer interfaces (BCIs), particularly those using motor-imagery (MI) data, have the potential to become groundbreaking technologies in both clinical and entertainment settings. MI data are generated when a subject imagines the movement of a limb. This paper reviews state-of-the-art signal processing techniques for MI EEG-based BCIs, with a particular focus on the feature extraction, feature selection and classification techniques used. It also summarizes the main applications of EEG-based BCIs, particularly those based on MI data, and finally presents a detailed discussion of the most prevalent challenges impeding the development and commercialization of EEG-based BCIs.
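To make the feature-extraction step concrete, here is a minimal sketch of the log-variance band-power feature that is standard in MI decoding (it assumes trials have already been band-pass filtered to the mu/beta range; names, shapes and data are illustrative):

```python
import numpy as np

def log_variance_features(trials: np.ndarray) -> np.ndarray:
    """Classic MI feature: log of per-channel signal variance on band-pass
    filtered trials (shape: n_trials x n_channels x n_samples).
    ERD/ERS appears as a change in this band power."""
    return np.log(np.var(trials, axis=2))

# Synthetic stand-in: channel 2 carries ~4x the band power of channel 1.
rng = np.random.default_rng(0)
trials = rng.normal(scale=[[1.0], [2.0]], size=(1, 2, 1000))
feats = log_variance_features(trials)
print(feats.shape)  # (1, 2)
```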

    Development of a practical and mobile brain-computer communication device for profoundly paralyzed individuals

    Thesis (Ph.D.) -- Boston University. Brain-computer interface (BCI) technology has seen tremendous growth over the past several decades, with numerous groundbreaking research studies demonstrating technical viability (Sellers et al., 2010; Silvoni et al., 2011). Despite this progress, BCIs have remained primarily in controlled laboratory settings. This dissertation proffers a blueprint for translating research-grade BCI systems into real-world applications that are noninvasive and fully portable, and that employ intelligent user interfaces for communication. The proposed architecture is designed to be used by severely motor-impaired individuals, such as those with locked-in syndrome, while reducing the effort and cognitive load needed to communicate. Such a system requires the merging of two primary research fields: 1) electroencephalography (EEG)-based BCIs and 2) intelligent user interface design. The EEG-based BCI portion of this dissertation provides a history of the field, details of our software and hardware implementation, and results from an experimental study aimed at verifying the utility of a BCI based on the steady-state visual evoked potential (SSVEP), a robust brain response to visual stimulation at controlled frequencies. The visual stimulation, feature extraction, and classification algorithms for the BCI were specially designed to achieve successful real-time performance on a laptop computer. Also, the BCI was developed in Python, an open-source programming language that combines programming ease with effective handling of hardware and software requirements. The result of this work was The Unlock Project app software for BCI development. Using it, a four-choice SSVEP BCI setup was implemented and tested with five severely motor-impaired and fourteen control participants. The system showed a wide range of usability across participants, with classification rates ranging from 25% to 95%.
The second portion of the dissertation discusses the viability of intelligent user interface design as a method for obtaining a more user-focused vocal output communication aid tailored to motor-impaired individuals. A proposed blueprint of this communication "app" was developed in this dissertation. It would make use of readily available laptop sensors to perform facial recognition, speech-to-text decoding, and geo-location. The ultimate goal is to couple sensor information with natural language processing to construct an intelligent user interface that shapes communication in a practical SSVEP-based BCI.
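The four-choice SSVEP decoding described above reduces, at its simplest, to finding which flicker frequency dominates the EEG spectrum. A toy sketch of that idea (not the dissertation's actual classifier; names, parameters and data are illustrative):

```python
import numpy as np

def detect_ssvep_hz(signal: np.ndarray, fs: float, candidates) -> float:
    """Pick the candidate stimulation frequency with the largest
    spectral magnitude at its nearest FFT bin."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    scores = [spectrum[np.argmin(np.abs(freqs - f))] for f in candidates]
    return candidates[int(np.argmax(scores))]

# Synthetic single-channel EEG: 2 s at 256 Hz with a 12 Hz SSVEP plus noise.
fs = 256.0
t = np.arange(512) / fs
eeg = np.sin(2 * np.pi * 12.0 * t) + 0.5 * np.random.default_rng(1).normal(size=512)
print(detect_ssvep_hz(eeg, fs, [8.0, 10.0, 12.0, 15.0]))  # 12.0
```

A real-time system like the one described would refine this with harmonics, spatial filtering and short sliding windows, but the frequency-tagging principle is the same.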

    Decoding Neural Signals with Computational Models: A Systematic Review of Invasive BMI

    There are significant milestones in modern human civilization at which mankind stepped into a different level of life, with a new spectrum of possibilities and comfort. From fire-lighting technology and wheeled wagons to writing, electricity and the Internet, each one changed our lives dramatically. In this paper, we take a deep look into the invasive Brain Machine Interface (BMI), an ambitious and cutting-edge technology which has the potential to be another important milestone in human civilization. Not only beneficial for patients with severe medical conditions, invasive BMI technology can significantly impact different technologies and almost every aspect of human life. We review the biological and engineering concepts that underpin the implementation of BMI applications. Various essential techniques are necessary for making invasive BMI applications a reality. We review these by providing an analysis of (i) possible applications of invasive BMI technology, (ii) the methods and devices for detecting and decoding brain signals, and (iii) possible options for delivering stimulation signals to the human brain. Finally, we discuss the challenges and opportunities of invasive BMI for further development in the area. Comment: 51 pages, 14 figures, review article.

    Emotion-Inducing Imagery versus Motor Imagery for a Brain-Computer Interface


    Data Analytics in Steady-State Visual Evoked Potential-based Brain-Computer Interface: A Review

    Electroencephalography (EEG) has been widely applied to brain-computer interfaces (BCIs), which enable paralyzed people to directly communicate with and control external devices, due to its portability, high temporal resolution, ease of use and low cost. Of the various EEG paradigms, the steady-state visual evoked potential (SSVEP)-based BCI system, which uses multiple visual stimuli (such as LEDs or boxes on a computer screen) flickering at different frequencies, has been widely explored over the past decades due to its fast communication rate and high signal-to-noise ratio. In this paper, we review current research in SSVEP-based BCI, focusing on the data analytics that enables continuous, accurate detection of SSVEPs and thus a high information transfer rate. The main technical challenges, including signal pre-processing, spectrum analysis, signal decomposition, spatial filtering (in particular canonical correlation analysis and its variations), and classification techniques, are described in this paper. Research challenges and opportunities in spontaneous brain activities, mental fatigue, transfer learning and hybrid BCI are also discussed.
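The canonical correlation analysis (CCA) detection that the review highlights scores each candidate stimulation frequency by correlating multi-channel EEG with sine/cosine reference templates at that frequency and its harmonics. A compact numpy sketch of this standard approach (illustrative, not code from the paper):

```python
import numpy as np

def max_canonical_corr(X: np.ndarray, Y: np.ndarray) -> float:
    """Largest canonical correlation between the column spaces of X and Y:
    QR-orthonormalize the centered matrices, then take the top singular
    value of the cross-product."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return float(np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0])

def ssvep_score(eeg: np.ndarray, fs: float, f: float, harmonics: int = 2) -> float:
    """CCA score of EEG (n_samples x n_channels) against sin/cos
    references at f and its harmonics."""
    t = np.arange(eeg.shape[0]) / fs
    refs = np.column_stack([g(2 * np.pi * f * (h + 1) * t)
                            for h in range(harmonics)
                            for g in (np.sin, np.cos)])
    return max_canonical_corr(eeg, refs)

# Synthetic 2-channel EEG: 2 s at 250 Hz with a 10 Hz SSVEP plus noise.
fs = 250.0
t = np.arange(500) / fs
rng = np.random.default_rng(2)
eeg = np.column_stack([np.sin(2 * np.pi * 10.0 * t), np.cos(2 * np.pi * 10.0 * t)])
eeg += 0.3 * rng.normal(size=eeg.shape)
print(ssvep_score(eeg, fs, 10.0) > ssvep_score(eeg, fs, 13.0))  # True
```

The detected frequency is simply the candidate with the highest score; the review's CCA variants (filter-bank, individual-template) refine the reference construction rather than this core computation.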

    Analysis of sensorimotor rhythms based on lower-limbs motor imagery for brain-computer interface

    Over recent years, significant advancements in the field of assistive technologies have been observed. Beyond congenital or diagnosed chronic disorders, one of the paramount needs that urged researchers to contribute to the field is the rising number of people affected worldwide by accidents, natural calamities (due to climate change), or warfare, resulting in spinal cord injuries (SCI), neural disorders, or amputation of limbs, which impede a person from living a normal life. In addition, more than ten million people in the world are living with some form of handicap due to central nervous system (CNS) disorders. Biomedical devices for rehabilitation have been a centre of research focus for many years. For people with lost motor control or amputation but intact sensory control, derivation of control signals from the source, i.e. electrophysiological signals, is vital for seamless control of assistive biomedical devices. Control signals, i.e. motion intentions, arise in the sensorimotor cortex of the brain and can be detected using an invasive or non-invasive modality. With the non-invasive modality, electroencephalography (EEG) is used to record these motion intentions, encoded in the electrical activity of the cortex, which are deciphered to recognize the user's intent for locomotion. They are further transferred to the actuator or end effector of the assistive device for control purposes. This can be executed via brain-computer interface (BCI) technology. BCI is an emerging research field that establishes a real-time bidirectional connection between the human brain and a computer/output device. Amongst its diverse applications, neurorehabilitation to deliver sensory feedback and brain-controlled biomedical devices for rehabilitation are the most popular.
While there is substantial literature on BCI control of upper-limb assistive technologies, less is known about lower-limb (LL) control of biomedical devices for navigation or gait assistance via BCI. The types of EEG signals compatible with an independent BCI are the oscillatory/sensorimotor rhythms (SMR) and event-related potentials (ERP). These signals have successfully been used in BCIs for navigation control of assistive devices. However, the ERP paradigm requires a voluminous setup for stimulus presentation to the user during operation of a BCI assistive device. In contrast, SMR does not require a large setup for activation of cortical activity; it instead depends on motor imagery (MI) produced synchronously or asynchronously by the user. MI is a covert cognitive process, also termed kinaesthetic motor imagery (KMI), which elicits clearly after rigorous training trials in the form of event-related desynchronization (ERD) or synchronization (ERS), depending on whether the user is imagining or resting. It usually comprises limb-movement tasks, but is not limited to them in a BCI paradigm. To produce detectable features that correlate with the user's intent, the selection of the cognitive task is an important aspect of improving the performance of a BCI. MI used in BCI predominantly remains associated with the upper limbs, particularly the hands, due to the somatotopic organization of the motor cortex. The hand representation area is substantially large, in contrast to the anatomical location of the LL representation areas in the human sensorimotor cortex. The LL area is located within the interhemispheric fissure, i.e. between the mesial walls of both hemispheres of the cortex. This makes it arduous to detect EEG features prompted upon imagination of the LL.
Detailed investigation of the ERD/ERS in the mu and beta oscillatory rhythms during left and right LL KMI tasks is required, as the user's intent to walk is of paramount importance for everyday activity. This is an important area of research, followed by the improvement of existing rehabilitation systems that serve LL-affected users. Though challenging, solving these issues is also imperative for the development of robust controllers that follow asynchronous BCI paradigms to operate LL assistive devices seamlessly. This thesis focuses on the investigation of cortical lateralization of ERD/ERS in the SMR, based on foot dorsiflexion KMI and knee extension KMI separately. This research infers the possibility of deploying these features in a real-time BCI by finding the maximum possible classification accuracy from machine learning (ML) models. The EEG signal is non-stationary, characterized by individual-to-individual and trial-to-trial variability and a low signal-to-noise ratio (SNR), which is challenging. EEG data are high-dimensional, with a relatively low number of samples available for fitting ML models. These factors made ML methods the tool of choice for analysing single-trial EEG data. Hence, the selection of an appropriate ML model for true detection of the class label without overfitting is crucial. The feature extraction part of the thesis consisted of testing the band-power (BP) and common spatial pattern (CSP) methods individually. The study focused on the synchronous BCI paradigm, to ensure the exhibition of SMR for the possibility of a practically viable control system in a BCI. For the left vs. right foot KMI, the objective was to distinguish the bilateral tasks in order to use them as unilateral commands in a 2-class BCI for controlling/navigating a robotic/prosthetic LL for rehabilitation. The same approach was used for the left vs. right knee KMI.
The research was based on four main experimental studies. In addition to the four studies, the research also includes a comparison of intra-cognitive tasks within the same limb, i.e. left foot vs. left knee and right foot vs. right knee tasks, respectively (Chapter 4). This constitutes another novel contribution: findings based on the comparison of different tasks within the same LL. It provides a basis for increasing the dimensionality of control signals within one BCI paradigm, such as a BCI-controlled LL assistive device with multiple degrees of freedom (DOF) for restoration of locomotor function. This study was based on analysis of the statistically significant mu ERD feature using the BP feature extraction method. The first stage of this research comprised the left vs. right foot KMI tasks, wherein the ERD/ERS elicited in the mu-beta rhythms was analysed using the BP feature extraction method (Chapter 5). Three individual features, i.e. mu ERD, beta ERD, and beta ERS, were investigated on EEG topography and time-frequency (TF) maps, and the average time course of power percentage, using the common average reference and bipolar reference methods. A comparative study of both references was drawn to infer the optimal method. This was followed by ML, i.e. classification of the three feature vectors (mu ERD, beta ERD, and beta ERS) using linear discriminant analysis (LDA), support vector machine (SVM), and k-nearest neighbour (KNN) algorithms separately. Finally, statistical tests with multiple-comparison correction were performed in order to determine the maximum possible classification accuracy among all paradigms for the most significant feature. All classifier models were supported with k-fold cross-validation and evaluation of the area under the receiver operating characteristic curve (AUC-ROC) for prediction of the true class label. The highest classification accuracy of 83.4% ± 6.72 was obtained with the KNN model for the beta ERS feature.
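The ERD/ERS band-power features analysed above are conventionally quantified relative to a resting reference interval. A minimal sketch of that computation (the Pfurtscheller-style percentage; the numbers are illustrative, not thesis data):

```python
import numpy as np

def erd_percent(power_task: np.ndarray, power_ref: np.ndarray) -> np.ndarray:
    """Relative band-power change: negative values indicate ERD
    (desynchronization), positive values ERS (synchronization)."""
    return (power_task - power_ref) / power_ref * 100.0

# Mu power dropping from 8 to 5 uV^2 during imagery is a 37.5% ERD.
print(erd_percent(np.array([5.0]), np.array([8.0])))  # [-37.5]
```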
The next study aimed to enhance the classification accuracy obtained in the previous study. It used the same cognitive tasks as the study in Chapter 5, but deployed a different methodology for feature extraction and classification. In this second study, ERD/ERS features from the mu and beta rhythms were extracted using the CSP and filter bank common spatial pattern (FBCSP) algorithms, to optimize the individual spatial patterns (Chapter 6). This was followed by the ML process, for which supervised logistic regression (Logreg) and LDA were deployed separately. The maximum classification accuracy was 77.5% ± 4.23 with the FBCSP feature vector and LDA model, with a maximum kappa coefficient of 0.55, which is in the moderate range of agreement between the two classes. The left vs. right foot discrimination results were nearly the same; however, the BP feature vector performed better than CSP. The third stage was based on the deployment of the novel cognitive task of left vs. right knee extension KMI. Analysis of the ERD/ERS in the mu-beta rhythms was done to verify cortical lateralization via the BP feature vector (Chapter 7). As in Chapter 5, the analysis of ERD/ERS features was done on EEG topography and TF maps, followed by determination of the average time course and peak latency of feature occurrence. However, for this study only the mu ERD and beta ERS features were considered, and EEG recording used only the common average reference. This was due to the established results from the earlier foot study in Chapter 5, where beta ERD features showed lower average amplitude. The LDA and KNN classification algorithms were employed. Unexpectedly, the left vs. right knee KMI yielded the highest accuracy of 81.04% ± 7.5 and an AUC-ROC of 0.84, strong enough to be used in a real-time BCI as two independent control features. This was achieved with the KNN model for the beta ERS feature.
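The CSP step described here can be sketched compactly: given the per-class average covariance matrices, whiten their sum, then eigendecompose one whitened class covariance; the resulting spatial filters maximize the variance ratio between the two classes. An illustrative numpy version (not the thesis code; the toy covariances are invented):

```python
import numpy as np

def csp_filters(C1: np.ndarray, C2: np.ndarray):
    """Common spatial patterns from two class covariance matrices.
    Returns spatial filters (rows of W) and eigenvalues, sorted so the
    first filter maximizes class-1 variance relative to class 2."""
    d, U = np.linalg.eigh(C1 + C2)
    P = U @ np.diag(1.0 / np.sqrt(d)) @ U.T   # whitening transform
    lam, B = np.linalg.eigh(P @ C1 @ P.T)     # eigenvalues ascending
    order = np.argsort(lam)[::-1]
    return B[:, order].T @ P, lam[order]

# Toy 2-channel case: channel 1 dominates class 1, channel 2 class 2.
C1, C2 = np.diag([4.0, 1.0]), np.diag([1.0, 4.0])
W, lam = csp_filters(C1, C2)
print(round(lam[0], 2))  # 0.8  (top filter captures 80% of whitened class-1 power)
```

In practice the filtered signals' log variances (a few filters from each end of the eigenvalue spectrum) form the feature vector fed to LDA or Logreg, and FBCSP repeats this per frequency sub-band.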
The final study of this research followed the same paradigm as Chapter 6, but for the left vs. right knee KMI cognitive task (Chapter 8). This study primarily aimed to enhance the accuracy obtained in Chapter 7, using the CSP and FBCSP methods with the Logreg and LDA models respectively. Results were in accordance with those of the already established foot KMI study, i.e. the BP feature vector performed better than CSP. The highest classification accuracy of 70.00% ± 2.85, with a kappa score of 0.40, was obtained with Logreg using the FBCSP feature vector. The results support the use of ERD/ERS in the mu and beta bands as independent control features for discrimination of the bilateral foot or the novel bilateral knee KMI tasks. The resulting classification accuracies imply that any 2-class BCI employing unilateral foot or knee KMI is suitable for real-time implementation. In conclusion, this thesis demonstrates, from the conducted studies, possible EEG pre-processing, feature extraction and classification methods with which to instigate a real-time BCI. Following this, the critical aspects of latency in information transfer rate, SNR, and the tradeoff between dimensionality and overfitting need to be addressed during the design of a real-time BCI controller. The thesis also highlights the need for consensus on standardized cognitive tasks for MI-based BCI. Finally, the application of wireless EEG for portable assistance is essential, as it will help lay the foundations for the development of independent asynchronous BCIs based on SMR.
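The kappa scores reported alongside the accuracies above correct raw agreement for chance. A minimal sketch of Cohen's kappa (the label vectors are illustrative):

```python
def cohens_kappa(y_true, y_pred) -> float:
    """Chance-corrected agreement: (observed - expected) / (1 - expected),
    where expected agreement comes from the two label marginals."""
    n = len(y_true)
    labels = set(y_true) | set(y_pred)
    po = sum(t == p for t, p in zip(y_true, y_pred)) / n
    pe = sum((sum(t == c for t in y_true) / n) *
             (sum(p == c for p in y_pred) / n) for c in labels)
    return (po - pe) / (1.0 - pe)

# 70% observed agreement on balanced binary labels gives kappa = 0.4,
# matching the pairing of 70.00% accuracy with a 0.40 kappa in a 2-class task.
y_true = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
y_pred = [0, 0, 0, 1, 1, 1, 1, 1, 1, 0]
print(round(cohens_kappa(y_true, y_pred), 2))  # 0.4
```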

    Compact and interpretable convolutional neural network architecture for electroencephalogram based motor imagery decoding

    Recently, owing to the popularity of deep learning, deep neural network (DNN) algorithms such as convolutional neural networks (CNNs) have been explored for decoding the electroencephalogram (EEG) in Brain-Computer Interface (BCI) applications. This allows decoding of EEG signals end-to-end, eliminating the tedious process of manually tuning each step in the decoding pipeline. However, current DNN architectures, consisting of multiple hidden layers and numerous parameters, were not developed for EEG decoding and classification tasks, making them underperform when decoding EEG signals. Apart from this, a DNN is typically treated as a black box, and interpreting what the network learns in solving the classification task is difficult, hindering neurophysiological validation of the network. This thesis proposes an improved and compact CNN architecture for motor imagery decoding based on an adaptation of SincNet, which was initially developed for speaker recognition from raw audio input. This adaptation allows for a very compact end-to-end neural network with state-of-the-art (SOTA) performance and enables network interpretability for neurophysiological validation in terms of cortical rhythms and spatial analysis. To validate the performance of the proposed algorithms, two datasets were used: the first is the publicly available BCI Competition IV dataset 2a, which is often used as a benchmark for validating motor imagery (MI) classification algorithms; the second is a primary dataset initially collected to study the difference between motor imagery and mental-rotation-associated motor imagery (MI+MR) BCI. The latter was also used in this study to test the plausibility of the proposed algorithm in highlighting the differences in cortical rhythms.
In both datasets, the proposed Sinc-adapted CNN algorithms show competitive decoding performance in comparison with SOTA CNN models: up to 87% decoding accuracy was achieved on BCI Competition IV dataset 2a and up to 91% decoding accuracy on the primary MI+MR data. This decoding performance was achieved with the lowest number of trainable parameters (a 26.5% - 34.1% reduction in the number of parameters compared to the non-Sinc counterpart). In addition, it was shown that the proposed architecture performs a cleaner band-pass, highlighting the necessary frequency bands that focus on important cortical rhythms during task execution, thus allowing for the development of the proposed Spatial Filter Visualization algorithm. This characteristic was crucial for the neurophysiological interpretation of the learned spatial features and was not previously established with the benchmarked SOTA methods.
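The "cleaner band-pass" property follows from SincNet's core idea: each first-layer kernel is a windowed-sinc band-pass filter, so the layer learns only two cutoff frequencies instead of every tap. A numpy sketch of such a kernel and its frequency response (the band, kernel width and sampling rate are illustrative, not the thesis settings):

```python
import numpy as np

def sinc_bandpass_kernel(f1_hz: float, f2_hz: float, fs: float, width: int = 129):
    """SincNet-style parametrized FIR kernel: the difference of two
    windowed sinc low-pass filters with cutoffs f1 < f2."""
    n = np.arange(width) - (width - 1) / 2.0
    f1, f2 = f1_hz / fs, f2_hz / fs  # normalized cutoffs
    h = 2 * f2 * np.sinc(2 * f2 * n) - 2 * f1 * np.sinc(2 * f1 * n)
    return h * np.hamming(width)     # window tames spectral leakage

# A mu-beta (8-30 Hz) kernel at 250 Hz sampling passes 20 Hz, rejects 45 Hz.
h = sinc_bandpass_kernel(8.0, 30.0, fs=250.0)
H = np.abs(np.fft.rfft(h, 1024))
freqs = np.fft.rfftfreq(1024, d=1.0 / 250.0)
gain_20 = H[np.argmin(np.abs(freqs - 20.0))]
gain_45 = H[np.argmin(np.abs(freqs - 45.0))]
print(gain_20 > 10 * gain_45)  # True
```

Because the kernel shape is fixed by construction, inspecting the learned cutoffs directly reveals which cortical rhythms (e.g. mu, beta) each filter attends to, which is what makes the interpretability analysis possible.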