16 research outputs found

    On the Relative Contribution of Deep Convolutional Neural Networks for SSVEP-based Bio-Signal Decoding in BCI Speller Applications

    Brain-computer interfaces (BCI) harnessing steady-state visual evoked potentials (SSVEP) manipulate the frequency and phase of visual stimuli to generate predictable oscillations in neural activity. For BCI spellers, oscillations are matched with alphanumeric characters, allowing users to select target numbers and letters. Advances in BCI spellers can, in part, be credited to subject-specific optimization, including: 1) custom electrode arrangements, 2) filter sub-band assessments and 3) stimulus parameter tuning. Here we apply deep convolutional neural networks (DCNN), demonstrating cross-subject functionality for the classification of frequency- and phase-encoded SSVEP. Electroencephalogram (EEG) data are collected and classified using the same parameters across subjects. Subjects fixate forty randomly cued flickering characters (5 × 8 keyboard array) during concurrent wet-EEG acquisition. These data are provided by an open-source SSVEP dataset. Our proposed DCNN, PodNet, achieves 86% and 77% offline classification accuracy across subjects for two data-capture periods, respectively: 6 seconds (information transfer rate = 40 bpm) and 2 seconds (information transfer rate = 101 bpm). Subjects demonstrating sub-optimal (< 70%) performance are classified to similar levels after a short subject-specific training period. PodNet outperforms filter-bank canonical correlation analysis (FBCCA) for a low-volume (3-channel), clinically feasible occipital electrode configuration. The networks defined in this study achieve functional performance for the largest number of SSVEP classes decoded via DCNN to date. Our results demonstrate that PodNet achieves cross-subject, calibrationless classification and adaptability to sub-optimal subject data and low-volume EEG electrode arrangements.
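    The information transfer rates quoted above can be reproduced with the standard Wolpaw ITR formula, bits/selection = log2 N + P log2 P + (1 - P) log2((1 - P)/(N - 1)). A minimal sketch (the function name is ours; N targets, accuracy P and selection time T are taken from the abstract):

```python
import math

def itr_bits_per_min(n_targets, accuracy, selection_time_s):
    """Wolpaw information transfer rate, in bits per minute."""
    p, n = accuracy, n_targets
    bits = math.log2(n)                      # information per selection at perfect accuracy
    if 0 < p < 1:                            # penalty terms for misclassifications
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60 / selection_time_s

# 40-target speller, 86% accuracy, 6-second selections
print(round(itr_bits_per_min(40, 0.86, 6)))  # -> 40
```

    The 6-second case reproduces the reported 40 bpm; the 2-second case at 77% comes out near 100 bpm, close to the reported 101, with the small gap plausibly due to rounding of the reported accuracy.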

    Classification of Frequency and Phase Encoded Steady State Visual Evoked Potentials for Brain Computer Interface Speller Applications using Convolutional Neural Networks

    Over the past decade there have been substantial improvements in vision-based Brain-Computer Interface (BCI) spellers for quadriplegic patient populations. This thesis contains a review of the numerous bio-signals available to BCI researchers, as well as a brief chronology of the foremost decoding methodologies used to date. Recent advances in classification accuracy and information transfer rate can be primarily attributed to time-consuming patient-specific parameter optimization procedures. The aim of the current study was to develop analysis software with potential ‘plug-in-and-play’ functionality. To this end, convolutional neural networks, presently established as state-of-the-art analytical techniques for image processing, were utilized. The thesis herein defines a deep convolutional neural network architecture for the offline classification of phase- and frequency-encoded SSVEP bio-signals. Networks were trained using an extensive 35-participant open-source Electroencephalographic (EEG) benchmark dataset (Department of Bio-medical Engineering, Tsinghua University, Beijing). An average classification accuracy of 82.24% and an information transfer rate of 22.22 bpm were achieved on a BCI-naïve participant dataset for a 40-target alphanumeric display, in the absence of any patient-specific parameter optimization.

    Development of a Practical Visual-Evoked Potential-Based Brain-Computer Interface

    There are many different neuromuscular disorders that disrupt the normal communication pathways between the brain and the rest of the body. These diseases often leave patients in a 'locked-in' state, rendering them unable to communicate with their environment despite having cognitively normal brain function. Brain-computer interfaces (BCIs) are augmentative communication devices that establish a direct link between the brain and a computer. Visual evoked potential (VEP)-based BCIs, which depend upon the use of salient visual stimuli, are amongst the fastest BCIs available and provide the highest communication rates compared to other BCI modalities. However, the majority of research focuses solely on improving raw BCI performance; thus, most visual BCIs still suffer from a myriad of practical issues that make them impractical for everyday use. The focus of this dissertation is the development of novel advancements and solutions that increase the practicality of VEP-based BCIs. The presented work shows the results of several studies that relate to characterizing and optimizing visual stimuli, improving ergonomic design, reducing visual irritation, and implementing a practical VEP-based BCI using an extensible software framework and mobile device platforms.

    TOWARDS STEADY-STATE VISUALLY EVOKED POTENTIALS BRAIN-COMPUTER INTERFACES FOR VIRTUAL REALITY ENVIRONMENTS: EXPLICIT AND IMPLICIT INTERACTION

    In the last two decades, Brain-Computer Interfaces (BCIs) have been investigated mainly for the purpose of implementing assistive technologies able to provide new channels for communication and control for people with severe disabilities. More recently, thanks to technical and scientific advances in the research fields involved, BCIs have also gained attention as new interaction devices for healthy users. This thesis is dedicated to the latter goal and in particular deals with BCIs based on the Steady State Visual Evoked Potential (SSVEP), which previous work has shown to be one of the most flexible and reliable approaches. SSVEP-based BCIs could find applications in different contexts, but one which is particularly interesting for healthy users is their adoption as new interaction devices for Virtual Reality (VR) environments and computer games. Although they have been investigated for several years, BCIs still pose several limitations in terms of speed, reliability and usability with respect to ordinary interaction devices. Despite this, they may provide additional, more direct and intuitive, explicit interaction modalities, as well as implicit interaction modalities otherwise impossible with ordinary devices. This thesis, after a comprehensive review of the different research fields underlying a BCI exploiting the SSVEP modality, presents a state-of-the-art open-source implementation using a mix of pre-existing and custom software tools. The proposed implementation, mainly aimed at interaction with VR environments and computer games, has then been used to perform several experiments, which are also described here.
    The initial experiments validate the provided implementation and show its usability with a commodity bio-signal acquisition device, orders of magnitude less expensive than commonly used ones, representing a step towards practical BCIs for end-user applications. The implementation's flexibility is also exploited in novel experiments investigating the use of stereoscopic displays to overcome a known limitation of ordinary displays in the context of SSVEP-based BCIs. Finally, novel experiments are presented investigating the use of the SSVEP modality to provide implicit interaction as well. In this context, a first proof-of-concept passive BCI based on the SSVEP response is presented and demonstrated to provide information exploitable by prospective applications.

    SSVEP-based brain-computer interface for computer control application using SVM classifier

    In this research, a Brain-Computer Interface (BCI) based on the Steady State Visually Evoked Potential (SSVEP) for computer control applications using a Support Vector Machine (SVM) is presented. For many years, people have speculated that electroencephalographic activities or other electrophysiological measures of brain function might provide a new non-muscular channel that can be used for sending messages or commands to the external world. BCI is a fast-growing emergent technology in which researchers aim to build a direct channel between the human brain and the computer. BCI systems provide a new communication channel for disabled people. Among the many different types of BCI systems, the SSVEP-based type has attracted more attention due to its ease of use and signal processing. SSVEPs are usually detected from the occipital lobe of the brain when the subject is looking at a flickering light source. In this paper, an SVM is used to classify SSVEPs based on electroencephalogram data with proper features. In an experiment utilizing a 14-channel Electroencephalography (EEG) device, 80 percent accuracy was reached by our SSVEP-based BCI system using a linear SVM kernel as the classification engine.
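    The pipeline described above can be sketched as follows, assuming scikit-learn is available. The sampling rate, stimulus frequencies and synthetic single-channel epochs are illustrative stand-ins for the paper's 14-channel recordings, and the feature choice (spectral power at each candidate stimulus frequency) is one common option rather than the paper's exact feature set:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
fs, secs = 128, 1                      # sampling rate and epoch length (assumed values)
freqs = [8.0, 10.0, 12.0]              # hypothetical stimulus frequencies
t = np.arange(fs * secs) / fs

def make_epoch(f):
    """Synthetic one-channel SSVEP epoch: sinusoid at f plus noise."""
    return np.sin(2 * np.pi * f * t) + 0.8 * rng.standard_normal(t.size)

def features(epoch):
    """Spectral power at each candidate stimulus frequency."""
    spec = np.abs(np.fft.rfft(epoch))
    bins = [int(round(f * secs)) for f in freqs]   # 1 Hz bin resolution here
    return spec[bins]

X = np.array([features(make_epoch(f)) for f in freqs for _ in range(60)])
y = np.repeat(np.arange(len(freqs)), 60)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)

clf = SVC(kernel="linear").fit(Xtr, ytr)   # linear SVM kernel, as in the paper
print(clf.score(Xte, yte))
```

    With a linear kernel the synthetic classes here separate almost perfectly; real EEG features are far noisier, which is where the SVM's margin maximisation earns its keep.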

    On Tackling Fundamental Constraints in Brain-Computer Interface Decoding via Deep Neural Networks

    A Brain-Computer Interface (BCI) is a system that provides a communication and control medium between human cortical signals and external devices, with the primary aim of assisting, or being used by, patients who suffer from a neuromuscular disease. Despite significant recent progress in the area of BCI, there are numerous shortcomings associated with decoding Electroencephalography-based BCI signals in real-world environments. These include, but are not limited to, the cumbersome nature of the equipment, complications in collecting large quantities of real-world data, the rigid experimentation protocol and the challenges of accurate signal decoding, especially in making a system work in real-time. Hence, the core purpose of this work is to investigate improving the applicability and usability of BCI systems, whilst preserving signal decoding accuracy. Recent advances in Deep Neural Networks (DNN) allow signal processing to automatically learn the best representation of a signal, contributing to improved performance even with a noisy input signal. Subsequently, this thesis focuses on the use of novel DNN-based approaches for tackling some of the key underlying constraints within the area of BCI. For example, recent technological improvements in acquisition hardware have made it possible to eliminate the pre-existing rigid experimentation procedure, albeit resulting in noisier signal capture. However, through the use of a DNN-based model, it is possible to preserve the accuracy of predictions from the decoded signals. Moreover, this research demonstrates that by leveraging DNN-based image and signal understanding, it is feasible to facilitate real-time BCI applications in a natural environment. Additionally, the capability of DNNs to generate realistic synthetic data is shown to be a potential solution for reducing the requirement for costly data collection.
    Work is also performed to address the well-known issues regarding subject bias in BCI models by generating data with reduced subject-specific features. The overall contribution of this thesis is to address the key fundamental limitations of BCI systems: the unyielding traditional experimentation procedure, the mandatory extended calibration stage and the difficulty of sustaining accurate signal decoding in real-time. These limitations lead to a fragile BCI system that is demanding to use and only suited for deployment in a controlled laboratory. Overall, the contributions of this research aim to improve the robustness of BCI systems and enable new applications for use in the real world.

    Development of a practical and mobile brain-computer communication device for profoundly paralyzed individuals

    Thesis (Ph.D.)--Boston University. Brain-computer interface (BCI) technology has seen tremendous growth over the past several decades, with numerous groundbreaking research studies demonstrating technical viability (Sellers et al., 2010; Silvoni et al., 2011). Despite this progress, BCIs have remained primarily in controlled laboratory settings. This dissertation proffers a blueprint for translating research-grade BCI systems into real-world applications that are noninvasive and fully portable, and that employ intelligent user interfaces for communication. The proposed architecture is designed to be used by severely motor-impaired individuals, such as those with locked-in syndrome, while reducing the effort and cognitive load needed to communicate. Such a system requires the merging of two primary research fields: 1) electroencephalography (EEG)-based BCIs and 2) intelligent user interface design. The EEG-based BCI portion of this dissertation provides a history of the field, details of our software and hardware implementation, and results from an experimental study aimed at verifying the utility of a BCI based on the steady-state visual evoked potential (SSVEP), a robust brain response to visual stimulation at controlled frequencies. The visual stimulation, feature extraction, and classification algorithms for the BCI were specially designed to achieve successful real-time performance on a laptop computer. Also, the BCI was developed in Python, an open-source programming language that combines programming ease with effective handling of hardware and software requirements. The result of this work was The Unlock Project app software for BCI development. Using it, a four-choice SSVEP BCI setup was implemented and tested with five severely motor-impaired and fourteen control participants. The system showed a wide range of usability across participants, with classification rates ranging from 25% to 95%.
    The second portion of the dissertation discusses the viability of intelligent user interface design as a method for obtaining a more user-focused vocal output communication aid tailored to motor-impaired individuals. A proposed blueprint of this communication "app" was developed in this dissertation. It would make use of readily available laptop sensors to perform facial recognition, speech-to-text decoding, and geo-location. The ultimate goal is to couple sensor information with natural language processing to construct an intelligent user interface that shapes communication in a practical SSVEP-based BCI.
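    The abstract does not spell out the Unlock Project's decoding algorithm, but a minimal four-choice SSVEP decoder of the kind it describes can be sketched with a windowed FFT and a power comparison at the four stimulation frequencies (all parameter values below are assumptions):

```python
import numpy as np

fs = 256                               # assumed sampling rate (Hz)
stim_freqs = [6.0, 8.0, 10.0, 12.0]    # hypothetical four-choice stimulation frequencies

def classify_ssvep(epoch, fs, candidates):
    """Pick the candidate frequency with the largest spectral power."""
    spec = np.abs(np.fft.rfft(epoch * np.hanning(epoch.size)))   # windowed magnitude spectrum
    f_axis = np.fft.rfftfreq(epoch.size, d=1.0 / fs)
    powers = [spec[np.argmin(np.abs(f_axis - f))] for f in candidates]
    return int(np.argmax(powers))

# Simulated 2-second gaze at the 10 Hz target (index 2)
rng = np.random.default_rng(1)
t = np.arange(2 * fs) / fs
epoch = np.sin(2 * np.pi * 10.0 * t) + 0.5 * rng.standard_normal(t.size)
print(classify_ssvep(epoch, fs, stim_freqs))  # -> 2
```

    A real-time loop would simply apply `classify_ssvep` to a sliding window of the incoming EEG stream.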

    A novel multiple time-frequency sequential coding strategy for hybrid brain-computer interface

    Background: For brain-computer interface (BCI) communication, electroencephalography provides a preferable choice due to its high temporal resolution and portability over other neural recording techniques. However, current BCIs are unable to sufficiently use information from the time and frequency domains simultaneously. Thus, we proposed a novel hybrid time-frequency paradigm to investigate better ways of using time and frequency information. Method: We adopt multiple omitted stimulus potentials (OSP) and the steady-state motion visual evoked potential (SSMVEP) to design the hybrid paradigm. A series of pre-experiments were undertaken to study factors that would influence the feasibility of the hybrid paradigm and the interaction between multiple features. After that, a novel Multiple Time-Frequency Sequential Coding (MTFSC) strategy was introduced and explored in experiments. Results: Omissions with multiple short and long durations could effectively elicit time and frequency features, including the multi-OSP, ERP, and SSVEP, in this hybrid paradigm. MTFSC was feasible and efficient. Preliminary online analysis showed that the accuracy and ITR of the nine-target stimulator over thirteen subjects were 89.04% and 36.37 bits/min, respectively. Significance: This study is the first to combine SSMVEP and multi-OSP in a hybrid paradigm to produce robust and abundant time features for coding BCI. Meanwhile, MTFSC proved feasible and showed great potential for improving performance, such as expanding the number of BCI targets by better using time information at specific stimulation frequencies. This study holds promise for designing better BCI systems with a novel coding method.
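    The abstract does not give the MTFSC codebook itself, but the combinatorial payoff of sequential coding can be illustrated: if each slot in a code sequence pairs a stimulation frequency with an omission duration, the number of distinguishable targets grows exponentially with sequence length. The frequencies, durations and slot count below are hypothetical:

```python
from itertools import product

# Hypothetical code alphabet: stimulation frequencies and omission durations
frequencies = [8.0, 10.0, 12.0]          # Hz (assumed values)
omission_durations = [0.1, 0.3, 0.5]     # seconds (assumed values)

# One slot pairs a frequency with an omission duration;
# a target's code is a sequence of such slots.
slots_per_target = 2
alphabet = list(product(frequencies, omission_durations))
codebook = list(product(alphabet, repeat=slots_per_target))

print(len(alphabet), len(codebook))  # -> 9 81
```

    A single slot already yields nine frequency/duration pairs, matching the nine-target stimulator reported above, while a second slot would expand the codebook to 81 targets.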

    Investigation into Stand-alone Brain-computer Interfaces for Musical Applications

    Brain-computer interfaces (BCIs) aim to establish a communication medium that is independent of muscle control. This project investigates how BCIs can be harnessed for musical applications. The impact of such systems is twofold: (i) they offer a novel mechanism of control for musicians during performance and (ii) they are beneficial for patients who suffer from motor disabilities. Several challenges are encountered when attempting to move these technologies from laboratories to real-world scenarios. Additionally, BCIs are significantly different from conventional computer interfaces and realise low communication rates. This project considers these challenges and uses a dry, wireless electroencephalogram (EEG) headset to detect neural activity. It adopts a paradigm called the steady state visually evoked potential (SSVEP) to provide the user with control. It aims to encapsulate all brain-computer music interface (BCMI)-based operations into a stand-alone application, which would improve the portability of BCMIs. This project addresses various engineering problems that are faced while developing a stand-alone BCMI. Efficiently presenting the visual stimulus for SSVEP requires hardware-accelerated rendering. EEG data are received from the headset through Bluetooth, and thus a dedicated thread is designed to receive signals. As this thesis does not use medical-grade equipment to detect EEG, signal processing techniques need to be examined to improve the signal-to-noise ratio (SNR) of brain waves. This project adopts canonical correlation analysis (CCA), a multivariate statistical technique, and explores filtering algorithms to improve the communication rates of BCMIs. Furthermore, this project delves into optimising biomedical engineering-based parameters, such as placement of the EEG headset and the size of the visual stimulus.
    After implementing the optimisations, for time windows of 4 s and 2 s, the mean accuracies of the BCMI are 97.92±2.22% and 88.02±9.30% respectively. The obtained information transfer rate (ITR) is 36.56±9.17 bits/min, which surpasses the communication rates of earlier BCMIs. This thesis concludes by building a system encompassing a novel control flow, which allows the user to play a musical instrument by gazing at it. (The School of Humanities and Performing Arts, University of Plymouth)
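    The CCA step adopted above can be sketched with plain NumPy: the detected frequency is the candidate whose sine/cosine reference signals share the largest canonical correlation with the recorded window. The sampling rate, window length and candidate frequencies below are assumptions, and the recording is synthetic:

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def reference(f, fs, n_samples, harmonics=2):
    """Sine/cosine reference matrix for stimulus frequency f and its harmonics."""
    t = np.arange(n_samples) / fs
    cols = []
    for h in range(1, harmonics + 1):
        cols += [np.sin(2 * np.pi * h * f * t), np.cos(2 * np.pi * h * f * t)]
    return np.column_stack(cols)

fs, n = 256, 4 * 256                       # assumed rate, 4-second window
candidates = [7.0, 9.0, 11.0]              # hypothetical stimulus frequencies

# Simulated single-channel recording of a 9 Hz SSVEP
rng = np.random.default_rng(2)
t = np.arange(n) / fs
eeg = (np.sin(2 * np.pi * 9.0 * t + 0.4) + 0.7 * rng.standard_normal(n))[:, None]

scores = [max_canonical_corr(eeg, reference(f, fs, n)) for f in candidates]
print(candidates[int(np.argmax(scores))])  # -> 9.0
```

    Extending `eeg` to multiple columns (channels) needs no code change, which is the usual reason CCA is preferred over single-channel power measures.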