
    Classification of Frequency and Phase Encoded Steady State Visual Evoked Potentials for Brain Computer Interface Speller Applications using Convolutional Neural Networks

    Over the past decade there have been substantial improvements in vision-based Brain-Computer Interface (BCI) spellers for quadriplegic patient populations. This thesis contains a review of the numerous bio-signals available to BCI researchers, as well as a brief chronology of the foremost decoding methodologies used to date. Recent advances in classification accuracy and information transfer rate can be attributed primarily to time-consuming, patient-specific parameter optimization procedures. The aim of the current study was to develop analysis software with potential ‘plug-in-and-play’ functionality. To this end, convolutional neural networks, presently established as state-of-the-art analytical techniques for image processing, were utilized. The thesis herein defines a deep convolutional neural network architecture for the offline classification of phase- and frequency-encoded SSVEP bio-signals. Networks were trained using an extensive 35-participant open-source Electroencephalographic (EEG) benchmark dataset (Department of Bio-medical Engineering, Tsinghua University, Beijing). Average classification accuracies of 82.24% and information transfer rates of 22.22 bpm were achieved on a BCI-naïve participant dataset for a 40-target alphanumeric display, in the absence of any patient-specific parameter optimization.
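    Information transfer rate in BCI spellers is commonly reported using the Wolpaw formula for an N-target selection task. The sketch below shows how bits per selection and bits per minute are computed; it is not necessarily the exact computation used in the thesis, and the 10-second selection time is a placeholder since the abstract does not report the trial duration.

        import math

        def wolpaw_itr(n_targets, accuracy, selection_time_s):
            """Wolpaw information transfer rate in bits per minute."""
            p, n = accuracy, n_targets
            bits = math.log2(n)
            if 0 < p < 1:
                # Penalty terms for imperfect accuracy, with errors assumed
                # uniform over the remaining n - 1 targets.
                bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
            return bits * 60.0 / selection_time_s

        # Reported accuracy on the 40-target display, with a hypothetical 10 s selection time.
        print(round(wolpaw_itr(40, 0.8224, 10.0), 2))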

    Decoding P300 evoked potentials for Brain Computer Interfaces (BCI) aimed at assisting potential end-users at home

    This doctoral thesis proposes the use of P300 evoked potentials as the control signal in BCI systems designed for end users, that is, for people with severe disabilities. Selective attention to specific, infrequent stimuli evokes a specific brain response known as the P300. This response appears over the central and parietal regions of the cerebral cortex roughly 300 ms after stimulus presentation. In this study, a new assistive BCI tool for environmental control in the home was investigated. The proposed application is based on P300 responses evoked by infrequent stimuli, also known as the oddball paradigm. P300-based BCI systems may be the most suitable for people with severe disabilities, since they do not require an exhaustive training stage. Moreover, the typical P300 paradigm allows the desired symbol to be selected quickly from among the multiple options presented on screen simply by focusing attention on it. The methodology proposed in this work is centred on the end user: the design, the experiments, and the evaluation all address the end user's needs. Departamento de Teoría de la Señal y Comunicaciones e Ingeniería Telemática

    A Bayesian machine learning framework for true zero-training brain-computer interfaces

    Brain-Computer Interfaces (BCI) are developed to allow the user to take control of a computer (e.g. a spelling application) or a device (e.g. a robotic arm) using just their brain signals. The concept of BCI was introduced in 1973 by Jacques Vidal. Early types of BCI relied on tedious user training to enable users to modulate their brain signals such that they could take control of the computer. Since then, training has shifted from the user to the computer. Hence, modern BCI systems rely on a calibration session, during which the user is instructed to perform specific tasks. The result of this calibration recording is a labelled data-set that can be used to train the (supervised) machine learning algorithm. Such a calibration recording is, however, of no direct use for the end user, so it is especially important for patients to limit this tedious process. For this reason, the BCI community has invested a lot of effort in reducing the dependency on calibration data. Nevertheless, despite these efforts, true zero-training BCIs remain rare.
Event-Related Potential based spellers. One of the most common types of BCI is the Event-Related Potential (ERP) based BCI, which was invented by Farwell and Donchin in 1988. In the ERP-BCI, actions, such as spelling a letter, are coupled to specific stimuli. The computer continuously presents these stimuli to the user. By attending to a specific stimulus, the user is able to select an action. More concretely, in the original ERP-BCI, these stimuli were the intensifications of rows and columns in a matrix of symbols on a computer screen. By detecting which row and which column elicit an ERP response, the computer can infer which symbol the user wants to spell. Initially, the ERP-BCI was aimed at restoring communication, but novel applications have been proposed too, such as web browsing, gaming, navigation and painting. Additionally, current BCIs are not limited to visual stimuli: variations using auditory or tactile stimuli have been developed as well. In their quest to improve decoding performance in the ERP-BCI, the BCI community has developed increasingly complex machine learning algorithms. However, nearly all of them rely on intensive subject-specific fine-tuning. The current generation of decoders goes beyond a standard ERP classifier: it incorporates language models, similar to a spelling corrector on a computer, and extensions to speed up communication, commonly referred to as dynamic stopping. Typically, all these different components are separate entities that have to be tied together by heuristics. This introduces an additional layer of complexity, and the result is that these state-of-the-art methods are difficult to optimise due to the large number of free parameters. We have proposed a single unified probabilistic model that integrates language models and a natural dynamic stopping strategy. This coherent model achieves state-of-the-art performance while minimising the complexity of subject-specific tuning on labelled data. A second and major contribution of this thesis is the development of the first unsupervised decoder for ERP spellers. Recall that typical decoders have to be tuned on labelled data for each user individually, and recording this labelled data is a tedious process with no direct use for the end user.
The unsupervised approach, which is an extension of our unified probabilistic model, is able to learn how to decode a novel user's brain signals without requiring such a labelled dataset. Instead, the user simply starts using the system, and in the meantime the decoder learns how to decode the brain signals. This method has been evaluated extensively, both in online and offline settings. Our offline validation was executed on three different datasets of visual ERP data in the standard matrix speller, containing 25 different subjects in total. Additionally, we present the results of an offline evaluation on auditory ERP data from 21 subjects. Due to a less clear signal, auditory ERP data presents an even greater challenge than visual ERP data. On top of that, we present the results from an online study on auditory ERP, which was conducted in cooperation with Michael Tangermann, Martijn Schreuder and Klaus-Robert Müller at TU Berlin. Our simulations indicate that when enough unlabelled data is available, the unsupervised method can compete with state-of-the-art supervised approaches. Furthermore, when non-stationarity is present in the EEG recordings, e.g. due to fatigue during longer experiments, the unsupervised approach can outperform supervised methods by adapting to these changes in the data. The limitation of the unsupervised method is that, while labelled data is not required, a substantial amount of unlabelled data must be processed before a reliable model can be found. Hence, during online experiments the model suffers from a warm-up period during which its output is unreliable; the mistakes made during this period are corrected automatically once enough data has been processed. To maximise the usability of the ERP-BCI, the warm-up of the unsupervised method has to be minimised. For this reason, we propose one of the first transfer learning methods for ERP-BCI. The idea behind transfer learning is to share information on how to decode brain signals between users. This concept stands in stark contrast with the strong tradition of subject-specific decoders commonly used by the BCI community. Nevertheless, by extending our unified model with inter-subject transfer learning, we are able to build a decoder that can decode the brain signals of novel users without any subject-specific training. Unfortunately, basic transfer learning models do not perform as well as subject-specific (supervised) models. For this reason, we have combined our transfer learning approach with our unsupervised learning approach, so that it adapts during usage into a highly accurate subject-specific model. Analogous to our unsupervised model, we have performed an extensive evaluation of transfer learning with unsupervised adaptation. We tested the model offline on visual ERP data from 22 subjects and on auditory ERP data from 21 subjects. Additionally, we present the results from an online study, also performed at TU Berlin, where we evaluate transfer learning online on the auditory AMUSE paradigm. From these experiments, we conclude that transfer learning in combination with unsupervised adaptation results in a true zero-training BCI that can compete with state-of-the-art supervised models without needing a single data point from a calibration recording. This method allows us to build a BCI that works out of the box.
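As a concrete illustration of the matrix-speller mechanism described above (not the unified probabilistic model of this thesis, just a minimal baseline), the sketch below accumulates hypothetical per-flash classifier scores for rows and columns over repetitions and infers the attended symbol from the best-scoring row and column.

    import numpy as np

    # Hypothetical 6x6 symbol matrix; row_scores[r, i] is the ERP-classifier output
    # for the i-th flash of row r, col_scores[c, i] likewise for column c.
    matrix = np.array(list("ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890")).reshape(6, 6)
    rng = np.random.default_rng(0)
    row_scores = rng.standard_normal((6, 10))   # stand-ins for real classifier outputs
    col_scores = rng.standard_normal((6, 10))

    # Sum the evidence over repetitions and pick the most likely row and column.
    best_row = int(np.argmax(row_scores.sum(axis=1)))
    best_col = int(np.argmax(col_scores.sum(axis=1)))
    print("inferred symbol:", matrix[best_row, best_col])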

    Development of a practical and mobile brain-computer communication device for profoundly paralyzed individuals

    Thesis (Ph.D.)--Boston University. Brain-computer interface (BCI) technology has seen tremendous growth over the past several decades, with numerous groundbreaking research studies demonstrating technical viability (Sellers et al., 2010; Silvoni et al., 2011). Despite this progress, BCIs have remained primarily in controlled laboratory settings. This dissertation proffers a blueprint for translating research-grade BCI systems into real-world applications that are noninvasive and fully portable, and that employ intelligent user interfaces for communication. The proposed architecture is designed to be used by severely motor-impaired individuals, such as those with locked-in syndrome, while reducing the effort and cognitive load needed to communicate. Such a system requires the merging of two primary research fields: 1) electroencephalography (EEG)-based BCIs and 2) intelligent user interface design. The EEG-based BCI portion of this dissertation provides a history of the field, details of our software and hardware implementation, and results from an experimental study aimed at verifying the utility of a BCI based on the steady-state visual evoked potential (SSVEP), a robust brain response to visual stimulation at controlled frequencies. The visual stimulation, feature extraction, and classification algorithms for the BCI were specially designed to achieve successful real-time performance on a laptop computer. Also, the BCI was developed in Python, an open-source programming language that combines programming ease with effective handling of hardware and software requirements. The result of this work was The Unlock Project app software for BCI development. Using it, a four-choice SSVEP BCI setup was implemented and tested with five severely motor-impaired and fourteen control participants. The system showed a wide range of usability across participants, with classification rates ranging from 25% to 95%. The second portion of the dissertation discusses the viability of intelligent user interface design as a method for obtaining a more user-focused vocal output communication aid tailored to motor-impaired individuals. A blueprint of this communication "app" was proposed in this dissertation. It would make use of readily available laptop sensors to perform facial recognition, speech-to-text decoding, and geo-location. The ultimate goal is to couple sensor information with natural language processing to construct an intelligent user interface that shapes communication in a practical SSVEP-based BCI.
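    The abstract does not spell out the SSVEP decoding algorithm, so the following is only a hypothetical baseline for a four-choice SSVEP classifier: compare spectral power at candidate flicker frequencies (and their harmonics) in a short EEG window. The sampling rate, window length and target frequencies are placeholders.

        import numpy as np

        def classify_ssvep(eeg_window, fs, stim_freqs, harmonics=2):
            """Return the index of the stimulation frequency with the most spectral power."""
            spectrum = np.abs(np.fft.rfft(eeg_window * np.hanning(len(eeg_window))))
            freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / fs)
            scores = []
            for f in stim_freqs:
                score = 0.0
                for h in range(1, harmonics + 1):
                    # Sum power in a narrow band around each harmonic of the flicker rate.
                    band = (freqs > h * f - 0.25) & (freqs < h * f + 0.25)
                    score += spectrum[band].sum()
                scores.append(score)
            return int(np.argmax(scores))

        # Hypothetical setup: a 2 s window from one occipital channel at 256 Hz,
        # four targets flickering at 12, 13, 14 and 15 Hz.
        fs = 256
        window = np.random.randn(2 * fs)  # stand-in for a real EEG segment
        print(classify_ssvep(window, fs, [12.0, 13.0, 14.0, 15.0]))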

    Improving the Generalisability of Brain Computer Interface Applications via Machine Learning and Search-Based Heuristics

    Brain Computer Interfaces (BCI) are a domain of hardware/software in which a user can interact with a machine without the need for motor activity, communicating instead via signals generated by the nervous system. These interfaces provide life-altering benefits to users, and refinement will both allow their application to a much wider variety of disabilities and increase their practicality. The primary method of acquiring these signals is Electroencephalography (EEG). This technique is susceptible to a variety of different sources of noise, which compounds the inherent problems in BCI training data: large dimensionality, low numbers of samples, and non-stationarity between users and recording sessions. Feature Selection and Transfer Learning have been used to overcome these problems, but they fail to account for several characteristics of BCI. This thesis extends both of these approaches through the use of search-based algorithms. Feature Selection techniques known as Wrappers use ‘black box’ evaluation of feature subsets, leading to higher classification accuracies than ranking methods known as Filters. However, Wrappers are more computationally expensive and are prone to over-fitting to training data. In this thesis, we applied Iterated Local Search (ILS) to the BCI field for the first time in the literature, and demonstrated results competitive with state-of-the-art methods such as the Least Absolute Shrinkage and Selection Operator and Genetic Algorithms. We then developed ILS variants with guided perturbation operators. Linkage was used to develop a multivariate metric, Intrasolution Linkage, which takes into account pair-wise dependencies of features with the label in the context of the solution. Intrasolution Linkage was then integrated into two ILS variants. The Intrasolution Linkage Score was discovered to have a stronger correlation with a solution's predictive accuracy on unseen data than Cross Validation Error (CVE) on the training set, the typical approach to feature subset evaluation. Mutual Information was used to create Minimum Redundancy Maximum Relevance Iterated Local Search (MRMR-ILS). In this algorithm, the perturbation operator was guided using an existing Mutual Information measure, and the algorithm was compared with current Filter and Wrapper methods. It was found to achieve generally lower CVE rates and higher predictive accuracy on unseen data than existing algorithms. It was also noted that solutions found by the MRMR-ILS provided CVE rates with a stronger correlation to accuracy on unseen data than solutions found by other algorithms. We suggest that this may be due to the guided perturbation leading to solutions that are richer in Mutual Information. Feature Selection reduces computational demands and can increase the accuracy of our desired models, as evidenced in this thesis. However, limited quantities of training samples restrict these models and greatly reduce their generalisability. For this reason, utilisation of data from a wide range of users is an ideal solution. Due to the differences in neural structures between users, creating adequate models is difficult. We adopted an existing state-of-the-art ensemble technique, Ensemble Learning Generic Information (ELGI), and developed an initial optimisation phase. This involved using search to transplant instances between user subsets to increase the generalisability of each subset before combination in the ELGI. We termed this Evolved Ensemble Learning Generic Information (eELGI).
The eELGI achieved higher accuracy than user-specific BCI models across all eight users. Optimisation of the training dataset allowed smaller training sets to be used, offered protection against neural drift, and created models that performed similarly across participants, regardless of neural impairment. Through the introduction and hybridisation of search-based algorithms for several problems in BCI, we have been able to show improvements in modelling accuracy and efficiency. Ultimately, this represents a step towards more practical BCI systems that will provide life-altering benefits to users.
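To make the wrapper setting concrete, the following is a minimal, generic sketch of Iterated Local Search for feature-subset selection, not the thesis's guided-perturbation variants: evaluate is assumed to be any subset-scoring function to maximise, such as cross-validation accuracy of a chosen classifier restricted to the selected features.

    import random

    def iterated_local_search(n_features, evaluate, iters=50, local_steps=30,
                              perturb_bits=3, seed=0):
        """Return the best feature mask found and its score (a generic ILS sketch)."""
        rng = random.Random(seed)
        best = [rng.random() < 0.5 for _ in range(n_features)]
        best_score = evaluate(best)
        current, current_score = best[:], best_score
        for _ in range(iters):
            # Local search: accept random single-feature flips that improve the score.
            for _ in range(local_steps):
                i = rng.randrange(n_features)
                candidate = current[:]
                candidate[i] = not candidate[i]
                score = evaluate(candidate)
                if score > current_score:
                    current, current_score = candidate, score
            if current_score > best_score:
                best, best_score = current[:], current_score
            # Perturbation: restart local search from a few random flips of the best subset.
            current = best[:]
            for i in rng.sample(range(n_features), perturb_bits):
                current[i] = not current[i]
            current_score = evaluate(current)
        return best, best_score

    # Toy usage: reward selecting the first three (hypothetical) informative features.
    informative = {0, 1, 2}
    score = lambda mask: sum(mask[i] for i in informative) - 0.1 * sum(mask)
    print(iterated_local_search(20, score))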

    Applications of non-invasive brain-computer interfaces for communication and affect recognition

    Doctor of Philosophy. Department of Electrical and Computer Engineering. David E. Thompson. Various assistive technologies are available for people with communication disorders. While these technologies are quite useful for moderate to severe movement impairments, certain progressive diseases can cause a total locked-in state (TLIS). These conditions include amyotrophic lateral sclerosis (ALS), neuromuscular disease (NMD), and several other disorders that can impair the pathways between the nervous system and the muscles. For people in a locked-in state (LIS), brain-computer interfaces (BCIs) may be the only possible solution. BCIs could help restore communication to these people with the help of external devices and neural recordings. The present dissertation investigates the role of latency jitter in BCI system performance and, at the same time, the possibility of affect recognition using BCIs. BCIs that can recognize human affect are referred to as affective brain-computer interfaces (aBCIs). These aBCIs are a relatively new area of research in affective computing. Estimation of affective states can improve human-computer interaction as well as the care of people with severe disabilities. The present work used a publicly available dataset as well as a dataset collected at the Brain and Body Sensing Lab at K-State to assess the effectiveness of EEG recordings in recognizing affective states. This work proposed an extended classifier-based latency estimation (CBLE) method using sparse autoencoders (SAE) to investigate the role of latency jitter in BCI system performance. The recent emergence of autoencoders motivated the present work to develop an SAE-based CBLE method, which is applied here to a newly-collected dataset. Results from our data showed a significant (p < 0.001) negative correlation between BCI accuracy and estimated latency jitter. Furthermore, the SAE-based CBLE method is also able to predict BCI accuracy. In the aBCI-related investigation, this work explored the effectiveness of different features extracted from EEG to identify the affect of a user who was experiencing affective stimuli. Furthermore, this dissertation reviewed articles that used the Database for Emotion Analysis Using Physiological Signals (DEAP) (i.e., a publicly available affective database) and found that a significant number of studies did not consider the presence of class imbalance in the dataset. Failing to consider class imbalance creates misleading results, and ignoring it makes comparing results between studies impossible, since different datasets will have different class imbalances. Class imbalance also shifts the chance level. Hence, it is vital to consider class bias when determining whether results are above chance. This dissertation suggests the use of balanced accuracy as a performance metric, and of its posterior distribution for computing confidence intervals, to account for the effect of class imbalance.
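    As a small, self-contained illustration of the balanced-accuracy metric recommended above (the metric itself, not the dissertation's code), the example below shows how plain accuracy rewards a majority-class classifier on an imbalanced two-class problem while balanced accuracy stays at chance level.

        import numpy as np

        def balanced_accuracy(y_true, y_pred):
            """Mean per-class recall; equals plain accuracy only when classes are balanced."""
            y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
            recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
            return float(np.mean(recalls))

        # A classifier that always predicts the majority class on a 90/10 split.
        y_true = np.array([0] * 90 + [1] * 10)
        y_pred = np.zeros(100, dtype=int)
        print("accuracy:", np.mean(y_true == y_pred))                   # 0.90, misleading
        print("balanced accuracy:", balanced_accuracy(y_true, y_pred))  # 0.50, chance level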

    Co-adaptive control strategies in assistive Brain-Machine Interfaces

    A large number of people with severe motor disabilities cannot access any of the available control inputs of current assistive products, which typically rely on residual motor functions. These patients are therefore unable to fully benefit from existing assistive technologies, including communication interfaces and assistive robotics. In this context, electroencephalography-based Brain-Machine Interfaces (BMIs) offer a potential non-invasive solution for exploiting a non-muscular channel for communication and control of assistive robotic devices, such as a wheelchair, a telepresence robot, or a neuroprosthesis. Still, non-invasive BMIs currently suffer from limitations, such as lack of precision, robustness and comfort, which prevent their practical implementation in assistive technologies. The goal of this PhD research is to produce scientific and technical developments that advance the state of the art of assistive interfaces and service robotics based on BMI paradigms. Two main research paths towards the design of effective control strategies were considered in this project. The first is the design of hybrid systems based on combining the BMI with gaze control, which is a long-lasting motor function in many paralyzed patients. This approach increases the degrees of freedom available for control. The second is the inclusion of adaptive techniques in the BMI design. This makes it possible to transform robotic tools and devices into active assistants able to co-evolve with the user and learn new rules of behavior to solve tasks, rather than passively executing external commands. Following these strategies, the contributions of this work can be categorized by the type of mental signal exploited for control: 1) the use of active signals for the development and implementation of hybrid eye-tracking and BMI control policies, for both communication and control of robotic systems; 2) the exploitation of passive mental processes to increase the adaptability of an autonomous controller to the user's intention and psychophysiological state, in a reinforcement learning framework; and 3) the integration of active and passive brain control signals to achieve adaptation within the BMI architecture at the level of feature extraction and classification.

    Unlocking Possibilities while Preserving Performance: Putting the "Interface" back in Brain-Computer Interface.

    Brain-computer interface (BCI) technology offers the hope of communication and control for people with the most severe motor impairments. Surveys of user populations indicate that users are interested in BCIs for a variety of tasks. Thus, an eventual goal for the BCI field should be flexible interfaces usable for multiple purposes. Yet these interfaces must not sacrifice performance for the sake of flexibility – BCI performance is already poor compared to existing assistive technology (AT). To succeed in the highly competitive market of AT, BCIs must do several jobs, and do them well. This dissertation presents the design and testing of the world's first plug-and-play BCI, capable of interfacing with many existing AT devices in addition to most personal computers. Both a communication task and an environmental control task were tested using the system. The communication test indicates that the plug-and-play BCI can be used to operate AT communication devices with minimal performance cost (95% confidence bounds indicate an accuracy difference smaller than 3.5 percentage points). The control test indicates that the plug-and-play BCI can be used to operate a wheelchair seating system with small performance costs (accuracy difference less than 9 percentage points). The dissertation also includes insights into the issue of performance measurement in the BCI field. A review and critique of existing BCI performance metrics was performed, including a comparison based on data from earlier experiments. Based on this comparison, Information Transfer Rate and BCI-Utility are suggested for broad use in the BCI field. Finally, a novel method of accuracy estimation, using classifier-based latency estimation (CBLE), is developed and presented. The accuracy estimates from the new method are significantly more correlated with daily accuracy than estimates based on either training accuracy (r = 0.64 vs. 0.2, p < 0.05) or accuracy on a small dataset (r = 0.74 vs. 0.16, p < 0.05). As BCI experiments often have relatively small datasets, the method has the potential to increase the power of many experiments across the field. In addition to addressing broadly applicable performance measurement issues, the dissertation thus increases the available options for BCI users while preserving performance.
    Ph.D., Biomedical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/94006/1/dthomp_1.pd
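    The abstract names classifier-based latency estimation (CBLE) without detailing it; the sketch below is only a schematic of the general idea, in which a linear classifier is applied to time-shifted copies of each ERP epoch and the best-scoring shift is taken as that trial's latency estimate. The data and classifier weights here are synthetic placeholders, not the dissertation's implementation.

        import numpy as np

        def cble_latency_estimates(epochs, weights, max_shift=25):
            """Return one latency estimate (in samples, relative to the centred window) per trial."""
            n_trials, n_samples = epochs.shape
            win = n_samples - 2 * max_shift       # classifier window length
            estimates = []
            for trial in epochs:
                # Score every allowed time shift of the window with the linear classifier.
                scores = [trial[s:s + win] @ weights for s in range(2 * max_shift + 1)]
                estimates.append(int(np.argmax(scores)) - max_shift)
            return np.array(estimates)

        # Synthetic single-channel epochs and placeholder classifier weights.
        rng = np.random.default_rng(0)
        epochs = rng.standard_normal((10, 150))
        weights = rng.standard_normal(100)        # window length = 150 - 2 * 25
        latencies = cble_latency_estimates(epochs, weights)
        print("latency jitter (std, samples):", latencies.std())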