
    Enhancing the Decoding Performance of Steady-State Visual Evoked Potentials based Brain-Computer Interface

    Non-invasive Brain-Computer Interfaces (BCIs) based on steady-state visual evoked potential (SSVEP) responses are the most widely used BCIs. SSVEPs are responses elicited in the visual cortex when a user gazes at an object flickering at a certain frequency. In this thesis, we investigate different BCI system design parameters for enhancing SSVEP detection, such as inter-stimulus distance (ISD), EEG channel selection, detection algorithms and training methodologies. Closely placed SSVEP stimuli compete for neural representations, which degrades performance and limits the flexibility of the stimulus interface. We therefore study the influence of changing ISD on the decoding performance of an SSVEP BCI and propose (i) a user-specific channel selection method and (ii) the use of complex spectrum features as input to a convolutional neural network (C-CNN) to overcome this challenge. We also evaluate the proposed C-CNN method in a user-independent (UI) training scenario, as this leads to a minimal-calibration system and allows inference to run in a plug-and-play mode. The proposed methods were evaluated on a 7-class SSVEP dataset collected from 21 healthy participants (Dataset 1). The UI method was also assessed on a publicly available 12-class dataset collected from 10 healthy participants (Dataset 2). We compared the proposed methods with canonical correlation analysis (CCA) and CNN classification using magnitude spectrum features (M-CNN). We demonstrated that the user-specific channel set (UC) is robust to changes in ISD (viewing angles of 5.24°, 8.53°, and 12.23°) compared with the classic 3-channel set (3C: O1, O2, Oz) and 6-channel set (6C: PO3, PO4, POz, O1, O2, Oz). A significant improvement in accuracy of over 5% (p=0.001) and a reduction in variation of 56% (p=0.035) were achieved across ISDs using the UC set compared with the 3C and 6C sets. Secondly, the proposed C-CNN method obtained significantly higher classification accuracy across ISDs and window lengths than the M-CNN and CCA. For the closest ISD, the average accuracy of the C-CNN was over 12.8% higher than CCA and over 6.5% higher than the M-CNN across all window lengths. Thirdly, the C-CNN method achieved the highest accuracy in both user-dependent (UD) and UI training scenarios on both the 7-class and 12-class SSVEP datasets. The overall accuracies of the different methods for a 1 s window length on Dataset 1 were: CCA: 69.1±10.8%, UI-M-CNN: 73.5±16.1%, UI-C-CNN: 81.6±12.3%, UD-M-CNN: 87.8±7.6% and UD-C-CNN: 92.5±5%. On Dataset 2 they were: CCA: 62.7±21.5%, UI-M-CNN: 70.5±22%, UI-C-CNN: 81.6±18%, UD-M-CNN: 82.8±16.7%, and UD-C-CNN: 92.3±11.1%. In summary, using complex spectrum features, the C-CNN likely learned to exploit both frequency- and phase-related information to classify SSVEP responses, so the network can be trained independently of the ISD, resulting in a model that generalizes to other ISDs. This suggests that the proposed methods are robust to changes in inter-stimulus distance for SSVEP detection and provide increased flexibility in user interface design of SSVEP BCIs for commercial applications. Finally, the UI method provides a virtually calibration-free approach to SSVEP BCIs.
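
    As an illustration of the feature representation behind the C-CNN, the following Python sketch shows one common way to build complex spectrum features from a single EEG window by stacking the real and imaginary FFT coefficients. The function name, channel count, sampling rate and FFT length are illustrative assumptions, not the exact configuration used in the thesis.

    import numpy as np

    def complex_spectrum_features(eeg_window, n_fft=512):
        """Build complex spectrum features for one EEG analysis window.

        eeg_window: array of shape (n_channels, n_samples).
        Returns shape (n_channels, 2 * n_bins): the real and imaginary FFT
        coefficients stacked along the last axis, so phase information is
        kept (unlike magnitude-only features).
        """
        spectrum = np.fft.rfft(eeg_window, n=n_fft, axis=-1)
        return np.concatenate([spectrum.real, spectrum.imag], axis=-1)

    # Hypothetical example: a 1 s window from 3 occipital channels at 256 Hz
    window = np.random.randn(3, 256)
    features = complex_spectrum_features(window)
    print(features.shape)  # (3, 514) for n_fft = 512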

    On Tackling Fundamental Constraints in Brain-Computer Interface Decoding via Deep Neural Networks

    A Brain-Computer Interface (BCI) is a system that provides a communication and control medium between human cortical signals and external devices, primarily intended to assist patients who suffer from neuromuscular disease. Despite significant recent progress in the area of BCI, there are numerous shortcomings associated with decoding electroencephalography-based BCI signals in real-world environments. These include, but are not limited to, the cumbersome nature of the equipment, the difficulty of collecting large quantities of real-world data, the rigid experimentation protocol and the challenge of accurate signal decoding, especially when a system must work in real time. Hence, the core purpose of this work is to improve the applicability and usability of BCI systems whilst preserving signal decoding accuracy. Recent advances in Deep Neural Networks (DNNs) make it possible for signal processing to automatically learn the best representation of a signal, contributing to improved performance even with a noisy input signal. This thesis therefore focuses on novel DNN-based approaches for tackling some of the key underlying constraints within the area of BCI. For example, recent technological improvements in acquisition hardware have made it possible to eliminate the pre-existing rigid experimentation procedure, albeit at the cost of noisier signal capture; through the use of a DNN-based model, however, it is possible to preserve the accuracy of the predictions from the decoded signals. Moreover, this research demonstrates that by leveraging DNN-based image and signal understanding, it is feasible to run real-time BCI applications in a natural environment. Additionally, the capability of DNNs to generate realistic synthetic data is shown to be a potential solution for reducing the requirement for costly data collection. Work is also performed in addressing the well-known issue of subject bias in BCI models by generating data with reduced subject-specific features. The overall contribution of this thesis is to address the key fundamental limitations of BCI systems: the unyielding traditional experimentation procedure, the mandatory extended calibration stage and the difficulty of sustaining accurate signal decoding in real time. These limitations lead to fragile BCI systems that are demanding to use and suited only for deployment in a controlled laboratory. The contributions of this research aim to improve the robustness of BCI systems and enable new applications for use in the real world.

    Classification of Frequency and Phase Encoded Steady State Visual Evoked Potentials for Brain Computer Interface Speller Applications using Convolutional Neural Networks

    Over the past decade there have been substantial improvements in vision-based Brain-Computer Interface (BCI) spellers for quadriplegic patient populations. This thesis contains a review of the numerous bio-signals available to BCI researchers, as well as a brief chronology of the foremost decoding methodologies used to date. Recent advances in classification accuracy and information transfer rate can be primarily attributed to time-consuming, patient-specific parameter optimization procedures. The aim of the current study was to develop analysis software with potential 'plug-and-play' functionality. To this end, convolutional neural networks, presently established as state-of-the-art analytical techniques for image processing, were utilized. The thesis defines a deep convolutional neural network architecture for the offline classification of phase- and frequency-encoded steady-state visual evoked potential (SSVEP) bio-signals. Networks were trained using an extensive 35-participant open-source electroencephalographic (EEG) benchmark dataset (Department of Biomedical Engineering, Tsinghua University, Beijing). An average classification accuracy of 82.24% and an information transfer rate of 22.22 bpm were achieved on a BCI-naïve participant dataset for a 40-target alphanumeric display, in the absence of any patient-specific parameter optimization.
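
    For context, a minimal PyTorch sketch of a convolutional classifier mapping multi-channel SSVEP spectra to 40 alphanumeric targets is given below. The layer sizes, input shape and class count are assumptions chosen for illustration and do not reproduce the architecture defined in the thesis.

    import torch
    import torch.nn as nn

    class SSVEPConvNet(nn.Module):
        """Small CNN mapping multi-channel EEG spectra to 40 speller targets."""

        def __init__(self, n_channels=9, n_bins=220, n_classes=40):
            super().__init__()
            self.features = nn.Sequential(
                # Collapse the electrode dimension first, then convolve along frequency.
                nn.Conv2d(1, 16, kernel_size=(n_channels, 1)),
                nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=(1, 10), stride=(1, 2)),
                nn.ReLU(),
                nn.Dropout(0.5),
            )
            with torch.no_grad():
                n_flat = self.features(torch.zeros(1, 1, n_channels, n_bins)).numel()
            self.classifier = nn.Linear(n_flat, n_classes)

        def forward(self, x):
            # x: (batch, 1, n_channels, n_bins) spectral input
            return self.classifier(self.features(x).flatten(start_dim=1))

    model = SSVEPConvNet()
    logits = model(torch.randn(8, 1, 9, 220))
    print(logits.shape)  # torch.Size([8, 40])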

    A supervised machine-learning method for detecting steady-state visually evoked potentials for use in brain computer interfaces: A comparative assessment

    It is hypothesised that supervised machine learning on the estimated parameters output by a model for visually evoked potentials (VEPs), created by Kremlácek et al. (2002), could be used to classify steady-state visually evoked potentials (SSVEPs) by frequency of stimulation. Classification of SSVEPs by stimulus frequency has application in SSVEP-based brain-computer interfaces (BCIs), where users are presented with flashing stimuli and user intent is decoded by identifying which stimulus the subject is attending to. We investigate the ability of the VEP model to fit the initial portions of SSVEPs, which are not yet in a steady state and contain characteristic features of VEPs superimposed with those of a steady-state response. In this process, the estimated model parameters for a given SSVEP response were found. These estimated parameters were used to train several support vector machines (SVMs) to classify the SSVEPs. Three initialisation conditions for the model are examined for their contribution to the goodness of fit and the subsequent classification accuracy of the SVMs. It was found that the model was able to fit SSVEPs with a normalised root mean square error (NRMSE) of 27%; this performance did not match the expected NRMSE of 13% reported by Kremlácek et al. (2002) for fits on VEPs. The fit data were assessed by the machine-learning scheme and generated parameters that were classifiable by SVM above the random chance level of 14% (range 9% to 28%). It was also shown that the selection of initial parameters had no distinct effect on the classification accuracy. Traditional classification approaches using spectral techniques such as power spectral density analysis (PSDA) and canonical correlation analysis (CCA) require a window of data longer than 1 s to perform accurately enough for use in BCIs, and the longer the window of SSVEP data used, the lower the information transfer rate (ITR). A successful classification on only the initial 250 ms portion of SSVEP data would therefore lead to an improved ITR and a BCI that is faster to use. Classification for each method was assessed at three SSVEP window periods (0.25, 0.5 and 1 s). Comparison of the three methods revealed that, on the whole, CCA outperformed both the PSDA and SVM methods, while PSDA performance was in line with that of the SVM method. All methods performed poorly at the window period of 0.25 s, with average accuracy converging on the 14% chance level. At the window period of 0.5 s, CCA only marginally outperformed the SVM method, and at 1 s the CCA method significantly (p<0.05) outperformed the SVM method. While the SVMs tended to improve with window period, the results were not generally significant. Certain SVMs (each representing a unique combination of subject, initial conditions and window period) achieved an accuracy as high as 30%, and in a few instances the accuracy was comparable to the CCA method at the 5% significance level. While we were unable to predict which SVM would perform well for a given subject, it was demonstrated that with further refinement this novel method may produce results similar to or better than those of CCA.
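
    The CCA baseline discussed above is typically implemented by correlating the EEG window with sinusoidal reference templates at each candidate stimulus frequency and selecting the frequency with the largest canonical correlation. The Python sketch below follows that standard recipe using scikit-learn; the sampling rate, channel count, harmonic count and frequency set are illustrative assumptions rather than the study's settings.

    import numpy as np
    from sklearn.cross_decomposition import CCA

    def max_canonical_correlation(eeg, refs):
        """Largest canonical correlation between an EEG window
        (samples x channels) and reference signals (samples x components)."""
        u, v = CCA(n_components=1).fit_transform(eeg, refs)
        return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

    def classify_window(eeg, stim_freqs, fs=256, n_harmonics=2):
        """Pick the stimulus frequency whose sine/cosine templates
        correlate best with the EEG window."""
        t = np.arange(eeg.shape[0]) / fs
        scores = []
        for f in stim_freqs:
            refs = np.column_stack(
                [fn(2 * np.pi * f * (h + 1) * t)
                 for h in range(n_harmonics) for fn in (np.sin, np.cos)]
            )
            scores.append(max_canonical_correlation(eeg, refs))
        return int(np.argmax(scores))

    # Hypothetical example: 0.5 s window, 8 channels, 7 candidate frequencies
    window = np.random.randn(128, 8)
    print(classify_window(window, stim_freqs=[8, 9, 10, 11, 12, 13, 14]))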

    Development of a practical and mobile brain-computer communication device for profoundly paralyzed individuals

    Thesis (Ph.D.)--Boston University. Brain-computer interface (BCI) technology has seen tremendous growth over the past several decades, with numerous groundbreaking research studies demonstrating technical viability (Sellers et al., 2010; Silvoni et al., 2011). Despite this progress, BCIs have remained primarily in controlled laboratory settings. This dissertation proffers a blueprint for translating research-grade BCI systems into real-world applications that are noninvasive and fully portable, and that employ intelligent user interfaces for communication. The proposed architecture is designed to be used by severely motor-impaired individuals, such as those with locked-in syndrome, while reducing the effort and cognitive load needed to communicate. Such a system requires the merging of two primary research fields: 1) electroencephalography (EEG)-based BCIs and 2) intelligent user interface design. The EEG-based BCI portion of this dissertation provides a history of the field, details of our software and hardware implementation, and results from an experimental study aimed at verifying the utility of a BCI based on the steady-state visual evoked potential (SSVEP), a robust brain response to visual stimulation at controlled frequencies. The visual stimulation, feature extraction, and classification algorithms for the BCI were specially designed to achieve successful real-time performance on a laptop computer. Also, the BCI was developed in Python, an open-source programming language that combines programming ease with effective handling of hardware and software requirements. The result of this work was The Unlock Project app software for BCI development. Using it, a four-choice SSVEP BCI setup was implemented and tested with five severely motor-impaired and fourteen control participants. The system showed a wide range of usability across participants, with classification rates ranging from 25% to 95%. The second portion of the dissertation discusses the viability of intelligent user interface design as a method for obtaining a more user-focused vocal output communication aid tailored to motor-impaired individuals. A proposed blueprint of this communication "app" was developed in this dissertation. It would make use of readily available laptop sensors to perform facial recognition, speech-to-text decoding, and geo-location. The ultimate goal is to couple sensor information with natural language processing to construct an intelligent user interface that shapes communication in a practical SSVEP-based BCI.
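
    A power-spectral-density style frequency decision of the kind used in many real-time SSVEP systems can be written in a few lines of Python. The sketch below is not the Unlock Project code; it only illustrates a four-choice decision by comparing band power at candidate flicker frequencies, with the sampling rate, window length and frequencies chosen purely for the example.

    import numpy as np
    from scipy.signal import welch

    def psd_decision(eeg, stim_freqs, fs=256):
        """Choose the flicker frequency with the most band power (PSDA-style).

        eeg: (n_channels, n_samples) window, e.g. from occipital electrodes.
        Returns the index of the winning stimulus among the candidates.
        """
        freqs, psd = welch(eeg, fs=fs, nperseg=min(eeg.shape[1], fs))
        mean_psd = psd.mean(axis=0)  # average power spectrum over channels
        scores = [mean_psd[(freqs >= f - 0.5) & (freqs <= f + 0.5)].mean()
                  for f in stim_freqs]
        return int(np.argmax(scores))

    # Hypothetical four-choice example: a 2 s window from 3 channels at 256 Hz
    window = np.random.randn(3, 512)
    print(psd_decision(window, stim_freqs=[12.0, 13.0, 14.0, 15.0]))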

    Signal Processing Using Non-invasive Physiological Sensors

    This book focuses on non-invasive biomedical sensors for monitoring physiological parameters from the human body for potential future therapies and healthcare solutions. Today, a critical factor in providing a cost-effective healthcare system is improving patients' quality of life and mobility, which can be achieved by developing non-invasive sensor systems that can be deployed at the point of care, used at home, or integrated into wearable devices for long-term data collection. Another factor that plays an integral part in a cost-effective healthcare system is the signal processing of the data recorded with non-invasive biomedical sensors. In this book, we aimed to attract researchers who are interested in the application of signal processing methods to different biomedical signals, such as the electroencephalogram (EEG), electromyogram (EMG), functional near-infrared spectroscopy (fNIRS), electrocardiogram (ECG), galvanic skin response, pulse oximetry, photoplethysmogram (PPG), etc. We encouraged new signal processing methods, or novel applications of existing methods to physiological signals, to help healthcare providers make better decisions.
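
    As a simple example of the kind of preprocessing such physiological recordings typically receive, the sketch below applies a zero-phase Butterworth band-pass filter, a common first step for EEG, EMG, ECG or PPG data. The cut-off frequencies, sampling rate and synthetic trace are illustrative assumptions only.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def bandpass(signal, low_hz, high_hz, fs, order=4):
        """Zero-phase Butterworth band-pass filter, a common first step
        when cleaning EEG, EMG, ECG or PPG recordings."""
        nyquist = fs / 2.0
        b, a = butter(order, [low_hz / nyquist, high_hz / nyquist], btype="band")
        return filtfilt(b, a, signal)

    # Hypothetical example: keep the 1-40 Hz band of a synthetic 10 s trace at 250 Hz
    fs = 250
    t = np.arange(0, 10, 1 / fs)
    raw = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
    clean = bandpass(raw, low_hz=1.0, high_hz=40.0, fs=fs)
    print(clean.shape)  # (2500,)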