20 research outputs found

    Data Analytics in Steady-State Visual Evoked Potential-based Brain-Computer Interface: A Review

    Electroencephalography (EEG) has been widely applied to brain-computer interfaces (BCIs), which enable paralyzed people to communicate with and control external devices directly, owing to its portability, high temporal resolution, ease of use, and low cost. Of the various EEG paradigms, the steady-state visual evoked potential (SSVEP)-based BCI, which uses multiple visual stimuli (such as LEDs or boxes on a computer screen) flickering at different frequencies, has been widely explored in the past decades due to its fast communication rate and high signal-to-noise ratio. In this paper, we review current research in SSVEP-based BCIs, focusing on the data analytics that enables continuous, accurate detection of SSVEPs and thus a high information transfer rate. The main technical challenges, including signal pre-processing, spectrum analysis, signal decomposition, spatial filtering (in particular canonical correlation analysis and its variations), and classification techniques, are described in this paper. Research challenges and opportunities in spontaneous brain activities, mental fatigue, transfer learning, and hybrid BCIs are also discussed.
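The canonical correlation analysis (CCA) detection that the review centers on can be sketched briefly: correlate the multichannel EEG segment with sine/cosine reference sets at each candidate flicker frequency and pick the best match. The sampling rate, frequencies, harmonic count, and synthetic data below are illustrative assumptions, not taken from the review.

```python
import numpy as np

def max_canonical_corr(X, Y):
    """First canonical correlation between the column spaces of X and Y,
    computed via QR orthonormalization and an SVD."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return s[0]

def ssvep_cca_detect(eeg, fs, candidate_freqs, n_harmonics=2):
    """Return the candidate stimulus frequency whose sin/cos reference
    set is most correlated with the multichannel EEG segment
    (eeg: samples x channels)."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in candidate_freqs:
        refs = []
        for h in range(1, n_harmonics + 1):
            refs.append(np.sin(2 * np.pi * h * f * t))
            refs.append(np.cos(2 * np.pi * h * f * t))
        scores.append(max_canonical_corr(eeg, np.column_stack(refs)))
    return candidate_freqs[int(np.argmax(scores))]
```

Because CCA learns a spatial filter implicitly (the canonical weights over channels), it needs no subject-specific calibration in this basic form, which is why the review treats it as the workhorse of SSVEP detection.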

    Classification of Frequency and Phase Encoded Steady State Visual Evoked Potentials for Brain Computer Interface Speller Applications using Convolutional Neural Networks

    Over the past decade, there have been substantial improvements in vision-based Brain-Computer Interface (BCI) spellers for quadriplegic patient populations. This thesis contains a review of the numerous bio-signals available to BCI researchers, as well as a brief chronology of the foremost decoding methodologies used to date. Recent advances in classification accuracy and information transfer rate can be primarily attributed to time-consuming, patient-specific parameter optimization procedures. The aim of the current study was to develop analysis software with potential ‘plug-in-and-play’ functionality. To this end, convolutional neural networks, presently established as state-of-the-art analytical techniques for image processing, were utilized. The thesis herein defines a deep convolutional neural network architecture for the offline classification of phase- and frequency-encoded SSVEP bio-signals. Networks were trained using an extensive 35-participant open-source electroencephalographic (EEG) benchmark dataset (Department of Bio-medical Engineering, Tsinghua University, Beijing). Average classification accuracies of 82.24% and information transfer rates of 22.22 bits per minute were achieved on a BCI-naïve participant dataset for a 40-target alphanumeric display, in the absence of any patient-specific parameter optimization.
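The reported figures are linked by the standard Wolpaw information-transfer-rate formula: at 82.24% accuracy on a 40-target display, each selection carries about 3.71 bits, so 22.22 bits/min is consistent with roughly a 10 s selection window (the selection time is an inference from the formula, not a figure stated above). A minimal sketch:

```python
import math

def wolpaw_itr(n_targets, accuracy, selection_time_s):
    """Information transfer rate in bits/min under the Wolpaw formula:
    bits/selection = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))."""
    n, p = n_targets, accuracy
    if p >= 1.0:
        bits = math.log2(n)
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * 60.0 / selection_time_s
```

The formula assumes all targets are equally likely and errors are uniform over the remaining targets, which is why reported ITRs should always be read together with the target count and selection time.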

    Development of a practical and mobile brain-computer communication device for profoundly paralyzed individuals

    Thesis (Ph.D.), Boston University. Brain-computer interface (BCI) technology has seen tremendous growth over the past several decades, with numerous groundbreaking research studies demonstrating technical viability (Sellers et al., 2010; Silvoni et al., 2011). Despite this progress, BCIs have remained primarily in controlled laboratory settings. This dissertation proffers a blueprint for translating research-grade BCI systems into real-world applications that are noninvasive and fully portable, and that employ intelligent user interfaces for communication. The proposed architecture is designed to be used by severely motor-impaired individuals, such as those with locked-in syndrome, while reducing the effort and cognitive load needed to communicate. Such a system requires the merging of two primary research fields: 1) electroencephalography (EEG)-based BCIs and 2) intelligent user interface design. The EEG-based BCI portion of this dissertation provides a history of the field, details of our software and hardware implementation, and results from an experimental study aimed at verifying the utility of a BCI based on the steady-state visual evoked potential (SSVEP), a robust brain response to visual stimulation at controlled frequencies. The visual stimulation, feature extraction, and classification algorithms for the BCI were specially designed to achieve successful real-time performance on a laptop computer. Also, the BCI was developed in Python, an open-source programming language that combines programming ease with effective handling of hardware and software requirements. The result of this work was The Unlock Project app software for BCI development. Using it, a four-choice SSVEP BCI setup was implemented and tested with five severely motor-impaired and fourteen control participants. The system showed a wide range of usability across participants, with classification rates ranging from 25% to 95%.
The second portion of the dissertation discusses the viability of intelligent user interface design as a method for obtaining a more user-focused vocal output communication aid tailored to motor-impaired individuals. A proposed blueprint of this communication "app" was developed in this dissertation. It would make use of readily available laptop sensors to perform facial recognition, speech-to-text decoding, and geolocation. The ultimate goal is to couple sensor information with natural language processing to construct an intelligent user interface that shapes communication in a practical SSVEP-based BCI.
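A four-choice SSVEP classifier of the simplest kind can be sketched as comparing spectral power at the four flicker frequencies. This is an illustrative stand-in, not the Unlock Project's actual algorithm; the sampling rate, frequencies, and bandwidth below are assumptions.

```python
import numpy as np

def classify_ssvep_power(signal, fs, stim_freqs, bw=0.5):
    """Return the flicker frequency whose narrow band (+/- bw Hz)
    holds the most spectral power in a single-channel EEG segment."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    scores = [power[(freqs >= f - bw) & (freqs <= f + bw)].sum()
              for f in stim_freqs]
    return stim_freqs[int(np.argmax(scores))]
```

Mapping each of the four frequencies to one on-screen choice turns this into the four-choice selection loop described above; real-time use only requires running it on a sliding buffer of recent samples.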

    Improved Brain-Computer Interface Methods with Application to Gaming


    Towards a home-use BCI: fast asynchronous control and robust non-control state detection

    Brain-Computer Interfaces (BCIs) enable users to control a computer using brain activity alone. Their main purpose is to restore functionalities of motor-disabled people, for example, to restore the ability to communicate. Recent BCIs based on visual evoked potentials (VEPs), which are brain responses to visual stimuli, have achieved high-speed communication. However, BCIs have not really found their way out of the lab yet. This is mainly because all recent high-speed BCIs are based on synchronous control, which means commands can only be executed in time slots controlled by the BCI. Therefore, the user is not able to select a command at his or her own convenience, which poses a problem in real-world applications. Furthermore, all those BCIs are based on stimulation paradigms that restrict the number of possible commands. To be suitable for real-world applications, a BCI should be asynchronous, also called self-paced, and must be able to identify whether or not the user intends to control the system. Although there are some asynchronous BCI approaches, none of them has achieved suitable real-world performance. In this thesis, the first asynchronous high-speed BCI is proposed, which allows using a virtually unlimited number of commands.
Furthermore, it achieves a nearly perfect distinction between intentional control (IC) and non-control (NC), which means commands are only executed if the user intends them to be. This was achieved by a completely different approach compared to recent methods: instead of using a classifier trained on specific stimulation patterns, the presented approach is based on a general model that predicts arbitrary stimulation patterns. The approach was evaluated with a "traditional" machine learning method as well as a deep learning method. The resulting asynchronous BCI outperforms recent methods severalfold in multiple disciplines and is an essential step toward moving BCI applications out of the lab and into real life. With further optimization, discussed in this thesis, it could evolve into the very first end-user-suitable BCI, as it is effective (high accuracy), efficient (fast classifications), easy to use, and allows the user to perform as many different tasks as desired.
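The IC/NC distinction can be illustrated with a minimal thresholding sketch. The thesis's general prediction model is not reproduced here; instead, assume a predicted response template for the attended stimulation pattern is already available, and treat the correlation threshold (0.4) and the synthetic signals as hypothetical.

```python
import numpy as np

def control_state(measured, predicted, threshold=0.4):
    """Decide intentional control (IC) vs. non-control (NC):
    if the measured EEG correlates with the response predicted for the
    attended stimulation pattern, the user is assumed to be in control;
    otherwise no command should be executed."""
    r = np.corrcoef(measured, predicted)[0, 1]
    return "IC" if r >= threshold else "NC"
```

In an asynchronous speller, such a check would run continuously on the incoming signal, so that commands fire only while the user actually attends a stimulus.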

    Brain Computer Interfaces and Emotional Involvement: Theory, Research, and Applications

    This reprint is dedicated to the study of brain activity related to emotional and attentional involvement as measured by brain–computer interface (BCI) systems designed for different purposes. A BCI system can translate brain signals (e.g., electric or hemodynamic indicators of brain activity) into a command to execute an action in the BCI application (e.g., a wheelchair, the cursor on a screen, a spelling device, or a game). These tools have the advantage of real-time access to the ongoing brain activity of the individual, which can provide insight into the user’s emotional and attentional states by training a classification algorithm to recognize mental states. The success of BCI systems in contemporary neuroscientific research relies on the fact that they allow one to “think outside the lab”. The integration of technological solutions, artificial intelligence, and cognitive science has allowed, and will continue to allow, researchers to envision more and more applications for the future. The clinical and everyday uses are described with the aim of inviting readers to open their minds and imagine potential further developments.

    Advanced Biometrics with Deep Learning

    Biometrics, such as fingerprint, iris, face, hand print, hand vein, speech, and gait recognition, as a means of identity management have become commonplace nowadays for various applications. Biometric systems follow a typical pipeline composed of separate preprocessing, feature extraction, and classification stages. Deep learning, as a data-driven representation learning approach, has been shown to be a promising alternative to conventional data-agnostic, handcrafted preprocessing and feature extraction for biometric systems. Furthermore, deep learning offers an end-to-end learning paradigm to unify preprocessing, feature extraction, and recognition, based solely on biometric data. This Special Issue has collected 12 high-quality, state-of-the-art research papers that deal with challenging issues in advanced biometric systems based on deep learning. The 12 papers can be divided into 4 categories according to biometric modality: namely, face biometrics, medical electronic signals (EEG and ECG), voice print, and others.

    Hybrid wheelchair controller for handicapped and quadriplegic patients

    In this dissertation, a hybrid wheelchair controller for handicapped and quadriplegic patients is proposed. The system has two sub-controllers: a voice controller and a head-tilt controller. The system aims to help quadriplegic, handicapped, elderly, and paralyzed patients control a robotic wheelchair using voice commands and head movements instead of a traditional joystick controller. The multi-input design makes the system more flexible in adapting to the available body signals. The low-cost design is taken into consideration, as it allows more patients to use this system.
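The abstract does not specify the arbitration logic between the two sub-controllers, so the following is a hypothetical sketch of one way such a multi-input controller might work: a recognized voice command takes priority, head tilt is the fallback, and stopping is the fail-safe default. The command names and the 15° tilt threshold are assumptions.

```python
# Hypothetical mapping from recognized voice words to drive commands.
VOICE_COMMANDS = {"forward": "FWD", "back": "REV",
                  "left": "LEFT", "right": "RIGHT", "stop": "STOP"}

def fuse_commands(voice_cmd, head_tilt_deg, tilt_threshold=15.0):
    """Hybrid controller sketch: a recognized voice command takes
    priority; otherwise fall back to head tilt (positive = right,
    negative = left); stop if neither input is actionable."""
    if voice_cmd in VOICE_COMMANDS:
        return VOICE_COMMANDS[voice_cmd]
    if head_tilt_deg >= tilt_threshold:
        return "RIGHT"
    if head_tilt_deg <= -tilt_threshold:
        return "LEFT"
    return "STOP"  # fail-safe default
```

A priority-plus-fallback scheme like this is one simple way to realize the flexibility the abstract describes: whichever body signal the patient can produce still yields a usable command.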

    Effective EEG analysis for advanced AI-driven motor imagery BCI systems

    Developing effective signal processing for brain-computer interfaces (BCIs) and brain-machine interfaces (BMIs) involves factoring in three aspects of functionality: classification performance, execution time, and the number of data channels used. The contributions in this thesis are centered on these three issues and focus on the classification of motor imagery (MI) data, which is generated during imagined movements. Typically, EEG time-series data is segmented for data augmentation or to mimic the buffering that happens in an online BCI. A multi-segment decision fusion approach is presented, which takes consecutive temporal segments of EEG data and uses decision fusion to boost classification performance. It was computationally lightweight and improved the performance of four conventional classifiers. Also, an analysis of the contributions of electrodes from different scalp regions is presented, and a subset of channels is recommended. Sparse learning (SL) classifiers have exhibited strong classification performance in the literature. However, they are computationally expensive. To reduce test-set execution times, a novel EEG classification pipeline called GABSLEEG is presented, consisting of a genetic algorithm (GA) for channel selection and a dictionary-based SL module for classification. Subject-specific channel selection was carried out, in which the channels are selected based on training data from the subject.
Using the GA-recommended subset of EEG channels reduced the execution time by 60% whilst preserving classification performance. Although subject-specific channel selection is widely used in the literature, effective subject-independent channel selection, in which channels are detected using data from other subjects, is an ideal aim because it leads to lower training latency and reduces the number of electrodes needed. A novel convolutional neural network (CNN)-based subject-independent channel selection method is presented, called the integrated channel selection (ICS) layer. It performed on a par with or better than subject-specific channel selection. It was computationally efficient, operating 12-17 times faster than the GA channel selection module. The ICS layer method was versatile, performing well with two different CNN architectures and datasets.
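The multi-segment decision fusion idea can be sketched as majority voting over per-segment classifier decisions. The segment count, the toy classifier, and the tie-breaking rule below are illustrative assumptions, not the thesis's exact fusion rule.

```python
from collections import Counter

def split_segments(trial, n_segments):
    """Split one EEG trial into consecutive, equal-length temporal segments."""
    size = len(trial) // n_segments
    return [trial[i * size:(i + 1) * size] for i in range(n_segments)]

def fused_decision(classify, trial, n_segments=4):
    """Classify each temporal segment independently, then fuse the
    per-segment labels by majority vote (ties go to the label seen first)."""
    votes = [classify(seg) for seg in split_segments(trial, n_segments)]
    return Counter(votes).most_common(1)[0][0]
```

Any base classifier can be plugged in as `classify`; fusing several short segments can beat a single long-window decision because segment-level errors are partly independent, which matches the lightweight performance boost reported above.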