111 research outputs found

    Development of a Non-Invasive Brain-Computer Interface for Neurorehabilitation

    Neurological disorders, in particular stroke, affect many individuals worldwide, who are often left with only residual motor control in their upper limbs. Although conventional therapy can aid recovery, it is not always accessible, and the procedures are dull for the patient. Novel methods of therapy are being developed, including Brain-Computer Interfaces (BCIs). Although BCI research has flourished in the past few years, most rehabilitation applications are not yet suitable for clinical practice. This is because BCI reliability and validation have not yet been achieved, and few clinical trials have been conducted with BCIs. Another crucial factor is that modern BCIs often consist of inconvenient hardware and software, a major source of aversion for both patients and clinicians. This Master's dissertation introduces the EmotivBCI: an easy-to-use platform for electroencephalogram (EEG) acquisition, processing, and classification of sensorimotor rhythms with respect to motor action and motor imagery. EEG is acquired through 8 channels of the Emotiv Epoc wireless headset. Signals are pre-processed, and the 2 best channel/frequency pairs, those exhibiting the greatest spectral variation between the rest and action conditions, are extracted for different time frames. These features are then used to build a feature matrix with 2 sets of attributes and 2 class labels. Finally, the resulting feature matrix is used to train 3 different classifiers, of which the best is selected. The EmotivBCI enables users to keep a record of their performance and provides additional features to further examine training sessions. To assess the performance of the EmotivBCI, two studies were conducted with healthy individuals. The first compares classification accuracies between two different training paradigms; the second evaluates the progress in performance of a group of individuals after several training sessions.
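As a sketch of the feature-selection step described above, the snippet below ranks channel/frequency-band pairs by the difference in mean log band power between rest and action epochs and keeps the best pairs. The band limits, the 128 Hz sampling rate, and all function names are illustrative assumptions, not the dissertation's actual implementation:

```python
import numpy as np
from scipy.signal import welch

def band_powers(epochs, fs=128, bands=((8, 12), (12, 16), (16, 24))):
    """Mean band power per channel/band for each epoch.

    epochs: array of shape (n_epochs, n_channels, n_samples)."""
    freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
    out = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        out.append(psd[..., mask].mean(axis=-1))
    return np.stack(out, axis=-1)  # (n_epochs, n_channels, n_bands)

def top_pairs(rest, action, fs=128, k=2):
    """Rank (channel, band) pairs by the absolute difference in mean log
    power between rest and action epochs; return the k best pairs."""
    pr = np.log(band_powers(rest, fs)).mean(axis=0)
    pa = np.log(band_powers(action, fs)).mean(axis=0)
    score = np.abs(pr - pa)
    idx = np.argsort(score, axis=None)[::-1][:k]
    return [tuple(np.unravel_index(i, score.shape)) for i in idx]
```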

    A machine learning approach to taking EEG-based brain-computer interfaces out of the lab

    Despite being a subject of study for almost three decades, non-invasive brain-computer interfaces (BCIs) are still trapped in the laboratory. In order to move into more common use, systems are needed that can be used reliably over time with a minimum of retraining. My research focuses on machine learning methods to minimize the necessary retraining, as well as a data-science approach to validate processing pipelines more robustly. Via a probabilistic transfer learning method that scales well to large amounts of high-dimensional data, it is possible to reduce the amount of calibration data needed for optimal performance. However, a good model still requires reliable features that are resistant to recording artifacts. To this end, we have also investigated a novel feature of the electroencephalogram that is predictive of multiple types of brain-related activity. As the cognitive neuroscience literature suggests, shifts in the peak frequency of a neural oscillation, hereafter referred to as frequency modulation, can be predictive of activity in standard BCI tasks, which we validate for the first time in multiple paradigms. Finally, in order to test the robustness of our techniques, we have built a codebase for reliable comparison of pipelines across more than fifteen open-access EEG datasets.
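A minimal sketch of the frequency-modulation idea, tracking how the peak frequency of a neural oscillation shifts over time. The alpha-band limits, the window length, and the function name are assumptions for illustration, not the method from the thesis:

```python
import numpy as np
from scipy.signal import welch

def peak_frequency(sig, fs, band=(8, 13), win_s=1.0):
    """Track the dominant frequency inside `band` over consecutive
    windows, a simple proxy for a frequency-modulation feature."""
    win = int(win_s * fs)
    peaks = []
    for start in range(0, len(sig) - win + 1, win):
        freqs, psd = welch(sig[start:start + win], fs=fs, nperseg=win)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        peaks.append(freqs[mask][np.argmax(psd[mask])])
    return np.array(peaks)
```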

    A Hybrid Brain-Computer Interface Based on Electroencephalography and Functional Transcranial Doppler Ultrasound

    Hybrid brain-computer interfaces (BCIs) combining multiple brain imaging modalities have recently been proposed to boost the performance of single-modality BCIs. We advance the state of hybrid BCIs by introducing a novel system that measures electrical brain activity as well as cerebral blood flow velocity using electroencephalography (EEG) and functional transcranial Doppler ultrasound (fTCD), respectively. The system employs two different paradigms to induce simultaneous changes in EEG and fTCD and to infer user intent. One paradigm uses visual stimuli to induce steady-state visually evoked potentials (SSVEPs) while instructing users to perform word generation (WG) and mental rotation (MR) tasks; the other instructs users, through visual stimuli, to perform left and right arm motor imagery (MI) tasks. To improve the accuracy and information transfer rate (ITR) of the proposed system compared to those obtained in our preliminary analysis using classical feature extraction approaches, our main contribution is the multi-modal fusion of EEG and fTCD features. Specifically, we propose a probabilistic fusion of EEG and fTCD evidence instead of the simple concatenation of EEG and fTCD feature vectors performed in our preliminary analysis. Experimental results showed that the MI paradigm outperformed the MR/WG one in terms of both accuracy and ITR. In particular, 93.85%, 93.71%, and 100% average accuracies and 19.89, 26.55, and 40.83 bits/min average ITRs were achieved for right MI vs baseline, left MI vs baseline, and right MI vs left MI, respectively. Moreover, for both paradigms, the EEG-fTCD BCI with the proposed analysis techniques outperformed all EEG-fNIRS BCIs in terms of accuracy and ITR. In addition, to investigate the feasibility of increasing the possible number of BCI commands, we extended our approaches to solve the 3-class problems for both paradigms. Again, the MI paradigm outperformed the MR/WG paradigm, achieving 96.58% average accuracy and 45 bits/min average ITR. Finally, we introduced a transfer learning approach to reduce the calibration requirements of the proposed BCI. This approach was found to be very efficient, especially with the MI paradigm, where it reduced the calibration requirements by at least 60.43%.
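The bits/min figures reported above suggest an ITR in the style of Wolpaw's classic formula; assuming that definition (the abstract does not state which ITR variant is used), it can be computed as follows:

```python
import math

def wolpaw_itr(p, n_classes, trial_s):
    """Wolpaw information transfer rate in bits/min for accuracy p,
    n_classes possible commands, and trial_s seconds per selection."""
    if p <= 1.0 / n_classes:
        return 0.0  # at or below chance level, no information transferred
    bits = math.log2(n_classes) + p * math.log2(p)
    if p < 1.0:
        bits += (1 - p) * math.log2((1 - p) / (n_classes - 1))
    return bits * 60.0 / trial_s
```

For example, a perfect 2-class decoder selecting once every 10 s transfers 1 bit per selection, i.e. 6 bits/min.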

    On Riemannian tools for classification improvement in Brain-Computer Interfaces

    A Brain-Computer Interface (BCI) or Brain-Machine Interface (BMI) is a device that allows the exchange of information between the brain of a person and a computer without the need for physical interaction. This technology promises to change the way in which we interact with machines, but it is not yet affordable, robust, or quick enough to substitute classic human-to-machine interfaces for the general public. That said, the lack of a need for physical interaction makes BCIs a very promising solution that would provide people with severe motor disabilities a new way of interacting with their surroundings, improving their quality of life. The most widespread method of extracting information about brain activity, and the one used for this project, is the electroencephalogram (EEG). This device consists of multiple electrodes mounted on a helmet-like structure placed on the user's scalp. The electrodes detect the sum of action potentials from large populations of neurons on the brain's cortex. The main advantages of this technique are the relatively low cost of the device, its portability, its high temporal resolution, and the ease of use of a non-invasive technique. It is not free of disadvantages, as the method suffers from a low signal-to-noise ratio, low robustness to interference, low spatial resolution, and the effects of inter- and intra-session drift, that is, the movement of the electrodes during and between sessions, which produces variations in the acquired signal. There are also multiple paradigms in the field of BCI, each focusing on a different brain signal. This work is centered on the Motor Imagery Brain-Computer Interface (MI-BCI), which differs from other BCIs in that it directly decodes the intention of the user without the need to induce a specific response in the brain by presenting a stimulus. This approach is considered more natural and can be more comfortable, but it also requires a higher level of mental effort and proficiency on the part of the user. The MI-BCI is based on a signal of unknown origin produced in the sensorimotor cortex (responsible, among other functions, for voluntary movement and touch): the Sensorimotor Rhythms (SMR). This signal is attenuated when the person performs, or thinks about performing, a movement, which is called an Event-Related Desynchronization (ERD), and amplified when going back to the idling state, an Event-Related Synchronization (ERS). As the brain is a distributed system, the origin of these events can be estimated and is related to the movement that the person imagined. In an implementation, these movements are limited to a discrete set of possibilities, and each one is mapped to a computer instruction, allowing the unidirectional transfer of information between brain and machine. The classical machine learning approach to this problem has been to use problem-specific signal processing techniques to extract relevant features that can then be fed to a general classification algorithm. The main technique is known as Common Spatial Patterns (CSP), followed by classification with Linear Discriminant Analysis (LDA) or a Support Vector Machine (SVM). This has some advantages, such as a relatively low requirement for training samples, but it lacks the capability to generalise: a system fine-tuned for one user cannot reliably be used for other users, or even for another session with the same user. In this work we study an alternative framework that uses the covariance matrices of the EEG signals as observations and exploits the Riemannian geometry of Symmetric Positive Definite (SPD) matrices to classify them in their natural space. This is not only a more general signal processing approach that has been used in other fields of research, but it also opens the possibility of transferring some information between users and sessions, which may result in a more robust system or in a system that requires less data for training. This is crucial for the usability of MI-BCI, because recording a training session before each use of the system is mentally exhausting and time-consuming.
    Universidad de Sevilla. Máster Universitario en Ingeniería de Telecomunicació
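A minimal sketch of the Riemannian classification idea described above: treat each trial's covariance matrix as a point on the SPD manifold and assign it to the nearest class mean. For simplicity this uses the log-Euclidean metric rather than the affine-invariant one; the class structure and names are illustrative, not the project's actual code:

```python
import numpy as np

def spd_log(M):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.log(w)) @ V.T  # V diag(log w) V^T

class LogEuclideanMDM:
    """Minimum-distance-to-mean classifier on SPD covariance matrices,
    using the log-Euclidean metric (a common simplification of the
    affine-invariant Riemannian metric)."""

    def fit(self, covs, y):
        self.classes_ = np.unique(y)
        # log-Euclidean mean: average in log-space, per class
        self.means_ = {c: np.mean([spd_log(C) for C in covs[y == c]], axis=0)
                       for c in self.classes_}
        return self

    def predict(self, covs):
        def dist(C, c):
            return np.linalg.norm(spd_log(C) - self.means_[c])
        return np.array([self.classes_[np.argmin([dist(C, c) for c in self.classes_])]
                         for C in covs])
```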

    Brain-computer interfaces with machine learning: a symbiotic approach


    EEG source imaging for improved control BCI performance


    EEG dataset and OpenBMI toolbox for three BCI paradigms: an investigation into BCI illiteracy

    https://academic.oup.com/gigascience/article/8/5/giz002/5304369
    Background: Electroencephalography (EEG)-based brain-computer interface (BCI) systems are mainly divided into three major paradigms: motor imagery (MI), event-related potential (ERP), and steady-state visually evoked potential (SSVEP). Here, we present a BCI dataset that includes the three major BCI paradigms with a large number of subjects over multiple sessions. In addition, information about the psychological and physiological conditions of BCI users was obtained using a questionnaire, and task-unrelated parameters such as resting state, artifacts, and electromyography of both arms were also recorded. We evaluated the decoding accuracies for the individual paradigms and determined performance variations across both subjects and sessions. Furthermore, we looked for more general, severe cases of BCI illiteracy than have been previously reported in the literature. Results: Average decoding accuracies across all subjects and sessions were 71.1% (± 0.15), 96.7% (± 0.05), and 95.1% (± 0.09), and rates of BCI illiteracy were 53.7%, 11.1%, and 10.2% for MI, ERP, and SSVEP, respectively. Compared to the ERP and SSVEP paradigms, the MI paradigm exhibited large performance variations between both subjects and sessions. Furthermore, we found that 27.8% (15 out of 54) of users were universally BCI literate, i.e., they were able to proficiently perform all three paradigms. Interestingly, we found no universally illiterate BCI user, i.e., all participants were able to control at least one type of BCI system. Conclusions: Our EEG dataset can be utilized for a wide range of BCI-related research questions. All methods for the data analysis in this study are supported with fully open-source scripts that can aid in every step of BCI technology. Furthermore, our results support previous but disjointed findings on the phenomenon of BCI illiteracy.
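As a small illustration of the summary statistics reported above, the helper below computes the mean accuracy, its standard deviation, and the share of subjects falling below a performance threshold. The 70% level is a commonly used cut-off for "BCI illiteracy", not necessarily the criterion used in this study, and the input data are hypothetical:

```python
import numpy as np

def bci_summary(acc, threshold=0.70):
    """Mean accuracy, standard deviation, and 'illiteracy' rate
    (the fraction of subjects whose accuracy falls below threshold)."""
    acc = np.asarray(acc, dtype=float)
    return acc.mean(), acc.std(), (acc < threshold).mean()
```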

    Using brain-computer interaction and multimodal virtual-reality for augmenting stroke neurorehabilitation

    Every year millions of people suffer a stroke, resulting in initial paralysis, slow motor recovery, and chronic conditions that require continuous rehabilitation and therapy. The increasing socio-economic and psychological impact of stroke makes it necessary to find new approaches to minimize its sequelae, as well as novel tools for effective, low-cost, and personalized rehabilitation. The integration of current ICT approaches and Virtual Reality (VR) training (based on exercise therapies) has shown significant improvements. Moreover, recent studies have shown that mental practice and neurofeedback improve task performance. To date, detailed information on which neurofeedback strategies lead to successful functional recovery is not available, and very little is known about how to optimally utilize neurofeedback paradigms in stroke rehabilitation. Given these limitations, the target of this project is to investigate and develop a novel upper-limb rehabilitation system using novel ICT technologies, including Brain-Computer Interfaces (BCIs) and VR systems. Here, through a set of studies, we illustrate the design of the RehabNet framework and its focus on integrative motor and cognitive therapy based on VR scenarios. Moreover, we broadened the inclusion criteria for low-mobility patients through the development of neurofeedback tools built on Brain-Computer Interfaces, while investigating the effects of a brain-to-VR interaction.

    A Bayesian machine learning framework for true zero-training brain-computer interfaces

    Brain-Computer Interfaces (BCIs) are developed to allow the user to take control of a computer (e.g. a spelling application) or a device (e.g. a robotic arm) using just their brain signals. The concept of a BCI was introduced in 1973 by Jacques Vidal. Early BCIs relied on tedious user training to enable users to modulate their brain signals so that they could take control of the computer. Since then, training has shifted from the user to the computer. Hence, modern BCI systems rely on a calibration session, during which the user is instructed to perform specific tasks. The result of this calibration recording is a labelled data-set that can be used to train the (supervised) machine learning algorithm. Such a calibration recording is, however, of no direct use for the end user. Hence, it is especially important for patients to limit this tedious process. For this reason, the BCI community has invested a lot of effort in reducing the dependency on calibration data. Nevertheless, despite these efforts, true zero-training BCIs are rather rare.

    Event-Related Potential based spellers
    One of the most common types of BCI is the Event-Related Potential (ERP) based BCI, which was invented by Farwell and Donchin in 1988. In the ERP-BCI, actions, such as spelling a letter, are coupled to specific stimuli. The computer continuously presents these stimuli to the user. By attending to a specific stimulus, the user is able to select an action. More concretely, in the original ERP-BCI, these stimuli were the intensifications of the rows and columns of a matrix of symbols on a computer screen. By detecting which row and which column elicit an ERP response, the computer can infer which symbol the user wants to spell. Initially, the ERP-BCI was aimed at restoring communication, but novel applications have been proposed too. Examples are web browsing, gaming, navigation and painting.
    Additionally, current BCIs are not limited to visual stimuli; variations using auditory or tactile stimuli have been developed as well. In their quest to improve decoding performance in the ERP-BCI, the BCI community has developed increasingly complex machine learning algorithms. However, nearly all of them rely on intensive subject-specific fine-tuning. The current generation of decoders has gone beyond a standard ERP classifier: they incorporate language models, which are similar to a spelling corrector on a computer, and extensions to speed up communication, commonly referred to as dynamic stopping. Typically, all these different components are separate entities that have to be tied together by heuristics. This introduces an additional layer of complexity, and the result is that these state-of-the-art methods are difficult to optimise due to the large number of free parameters. We have proposed a single unified probabilistic model that integrates language models and a natural dynamic stopping strategy. This coherent model is able to achieve state-of-the-art performance while, at the same time, minimising the complexity of subject-specific tuning on labelled data. A second and major contribution of this thesis is the development of the first unsupervised decoder for ERP spellers. Recall that typical decoders have to be tuned on labelled data for each user individually. Moreover, recording this labelled data is a tedious process, which has no direct use for the end user. The unsupervised approach, which is an extension of our unified probabilistic model, is able to learn how to decode a novel user's brain signals without requiring such a labelled dataset. Instead, the user starts using the system and, in the meantime, the decoder learns how to decode the brain signals. This method has been evaluated extensively, both in online and offline settings.
    Our offline validation was executed on three different datasets of visual ERP data in the standard matrix speller. Combined, these datasets contain 25 different subjects. Additionally, we present the results of an offline evaluation on auditory ERP data from 21 subjects. Due to a less clear signal, this auditory ERP data presents an even greater challenge than visual ERP data. On top of that, we present the results from an online study on auditory ERP, which was conducted in cooperation with Michael Tangermann, Martijn Schreuder and Klaus-Robert Müller at the TU Berlin. Our simulations indicate that when enough unlabelled data is available, the unsupervised method can compete with state-of-the-art supervised approaches. Furthermore, when non-stationarity is present in the EEG recordings, e.g. due to fatigue during longer experiments, the unsupervised approach can outperform supervised methods by adapting to these changes in the data. However, the limitation of the unsupervised method lies in the fact that, while labelled data is not required, a substantial amount of unlabelled data must be processed before a reliable model can be found. Hence, during online experiments the model suffers from a warm-up period. During this warm-up period the output is unreliable, but the mistakes made can be corrected automatically once enough data has been processed. To maximise the usability of the ERP-BCI, the warm-up of the unsupervised method has to be minimised. For this reason, we propose one of the first transfer learning methods for the ERP-BCI. The idea behind transfer learning is to share information on how to decode brain signals between users. The concept of transfer learning stands in stark contrast with the strong tradition of subject-specific decoders commonly used by the BCI community.
    Nevertheless, by extending our unified model with inter-subject transfer learning, we are able to build a decoder that can decode the brain signals of novel users without any subject-specific training. Unfortunately, basic transfer learning models do not perform as well as subject-specific (supervised) models. For this reason, we have combined our transfer learning approach with our unsupervised learning approach to adapt it during usage into a highly accurate subject-specific model. Analogous to our unsupervised model, we have performed an extensive evaluation of transfer learning with unsupervised adaptation. We tested the model offline on visual ERP data from 22 subjects and on auditory ERP data from 21 subjects. Additionally, we present the results from an online study, also performed at the TU Berlin, where we evaluated transfer learning online in the auditory AMUSE paradigm. From these experiments, we can conclude that transfer learning in combination with unsupervised adaptation results in a true zero-training BCI that can compete with state-of-the-art supervised models without needing a single data point from a calibration recording. This method allows us to build a BCI that works out of the box.
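The row/column selection logic of the matrix speller described in this abstract can be sketched as follows: classifier scores from each flash are accumulated per row and per column, and the highest-scoring pair selects the symbol. The 6x6 grid contents and the score format are illustrative assumptions:

```python
import numpy as np

# A hypothetical 6x6 symbol grid, as in the original matrix speller.
MATRIX = np.array([list("ABCDEF"), list("GHIJKL"), list("MNOPQR"),
                   list("STUVWX"), list("YZ1234"), list("56789_")])

def infer_symbol(flashes):
    """flashes: iterable of (kind, index, score), where kind is 'row' or
    'col' and score is a classifier output (higher = more ERP-like).
    Scores are accumulated over repetitions; the attended row/column
    pair should accumulate the highest totals."""
    row_scores, col_scores = np.zeros(6), np.zeros(6)
    for kind, idx, score in flashes:
        (row_scores if kind == "row" else col_scores)[idx] += score
    return MATRIX[np.argmax(row_scores), np.argmax(col_scores)]
```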