
    Development of a Practical Visual-Evoked Potential-Based Brain-Computer Interface

    There are many different neuromuscular disorders that disrupt the normal communication pathways between the brain and the rest of the body. These diseases often leave patients in a 'locked-in' state, rendering them unable to communicate with their environment despite having cognitively normal brain function. Brain-computer interfaces (BCIs) are augmentative communication devices that establish a direct link between the brain and a computer. Visual evoked potential (VEP)-based BCIs, which depend upon the use of salient visual stimuli, are amongst the fastest BCIs available and provide the highest communication rates compared to other BCI modalities. However, the majority of research focuses solely on improving raw BCI performance; thus, most visual BCIs still suffer from a myriad of practical issues that make them impractical for everyday use. The focus of this dissertation is the development of novel advancements and solutions that increase the practicality of VEP-based BCIs. The presented work shows the results of several studies that relate to characterizing and optimizing visual stimuli, improving ergonomic design, reducing visual irritation, and implementing a practical VEP-based BCI using an extensible software framework and mobile device platforms.

    Classification of Frequency and Phase Encoded Steady State Visual Evoked Potentials for Brain Computer Interface Speller Applications using Convolutional Neural Networks

    Over the past decade there have been substantial improvements in vision-based Brain-Computer Interface (BCI) spellers for quadriplegic patient populations. This thesis contains a review of the numerous bio-signals available to BCI researchers, as well as a brief chronology of the foremost decoding methodologies used to date. Recent advances in classification accuracy and information transfer rate can be primarily attributed to time-consuming, patient-specific parameter optimization procedures. The aim of the current study was to develop analysis software with potential ‘plug-in-and-play’ functionality. To this end, convolutional neural networks, presently established as state-of-the-art analytical techniques for image processing, were utilized. The thesis herein defines a deep convolutional neural network architecture for the offline classification of phase- and frequency-encoded SSVEP bio-signals. Networks were trained using an extensive 35-participant open-source electroencephalographic (EEG) benchmark dataset (Department of Bio-medical Engineering, Tsinghua University, Beijing). Average classification accuracies of 82.24% and information transfer rates of 22.22 bpm were achieved on a BCI-naïve participant dataset for a 40-target alphanumeric display, in the absence of any patient-specific parameter optimization.
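
    For orientation, the sketch below shows what a convolutional network for multi-class SSVEP epochs can look like in Python/PyTorch. It is a generic illustration rather than the architecture reported in the thesis; the 9-channel, 250 Hz, one-second epoch shape and all layer sizes are assumed values.

# Minimal sketch of a CNN for 40-class SSVEP classification (illustrative only; the
# epoch shape and layer sizes are assumptions, not the architecture from the thesis).
import torch
import torch.nn as nn

class SSVEPConvNet(nn.Module):
    def __init__(self, n_channels=9, n_samples=250, n_classes=40):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(n_channels, 1)),            # spatial filter across electrodes
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.Conv2d(16, 32, kernel_size=(1, 25), padding=(0, 12)),  # temporal filter
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 5)),
            nn.Dropout(0.5),
        )
        self.classifier = nn.Linear(32 * (n_samples // 5), n_classes)

    def forward(self, x):            # x: (batch, 1, channels, samples)
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# Example forward pass on a random batch of epochs.
model = SSVEPConvNet()
logits = model(torch.randn(8, 1, 9, 250))   # shape: (8, 40)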

    Data Analytics in Steady-State Visual Evoked Potential-based Brain-Computer Interface: A Review

    Electroencephalography (EEG) has been widely applied in brain-computer interfaces (BCIs), which enable paralyzed people to directly communicate with and control external devices, owing to its portability, high temporal resolution, ease of use and low cost. Of the various EEG paradigms, the steady-state visual evoked potential (SSVEP)-based BCI system, which uses multiple visual stimuli (such as LEDs or boxes on a computer screen) flickering at different frequencies, has been widely explored over the past decades due to its fast communication rate and high signal-to-noise ratio. In this paper, we review the current research in SSVEP-based BCI, focusing on the data analytics that enables continuous, accurate detection of SSVEPs and thus a high information transfer rate. The main technical challenges, including signal pre-processing, spectrum analysis, signal decomposition, spatial filtering (in particular canonical correlation analysis and its variations), and classification techniques are described in this paper. Research challenges and opportunities in spontaneous brain activities, mental fatigue, transfer learning as well as hybrid BCI are also discussed.
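
    One of the spatial-filtering techniques named in the review, canonical correlation analysis (CCA), can be illustrated with a short Python sketch: the EEG window is compared against sine/cosine reference templates for each candidate stimulus frequency, and the frequency with the largest canonical correlation is selected. The sampling rate, stimulus frequencies and harmonic count below are placeholder values.

# Sketch of standard CCA-based SSVEP frequency detection.
import numpy as np
from sklearn.cross_decomposition import CCA

def reference_signals(freq, n_samples, fs, n_harmonics=2):
    """Sine/cosine templates for a stimulus frequency and its harmonics."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)

def detect_frequency(eeg, stim_freqs, fs):
    """eeg: (n_samples, n_channels). Returns the stimulus frequency whose
    reference set yields the largest first canonical correlation."""
    scores = []
    for f in stim_freqs:
        refs = reference_signals(f, eeg.shape[0], fs)
        u, v = CCA(n_components=1).fit_transform(eeg, refs)
        scores.append(abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1]))
    return stim_freqs[int(np.argmax(scores))]

# Example on synthetic data: a 10 Hz SSVEP buried in noise on 8 channels.
fs, n_samples = 250, 500
t = np.arange(n_samples) / fs
eeg = 0.5 * np.sin(2 * np.pi * 10 * t)[:, None] + np.random.randn(n_samples, 8)
print(detect_frequency(eeg, [8.0, 10.0, 12.0, 15.0], fs))   # expected: 10.0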

    SSVEP-Based BCIs

    This chapter describes the method of flickering targets, which elicit fundamental frequency changes in the EEG signal of the subject, used to drive machine commands after interpretation of the user's intentions. The steady-state response of the changes in the EEG caused by events such as a visual stimulus applied to the subject via a computer screen is called the steady-state visually evoked potential (SSVEP). This feature of the EEG signal can be used to form a basis of input to assistive devices for locked-in patients to improve their quality of life, as well as for performance-enhancing devices for healthy subjects. The contents of this chapter describe SSVEP stimuli, feature extraction techniques, feature classification techniques, and a few applications based on SSVEP-based BCI.
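
    As a simple illustration of the feature extraction step, the Python sketch below measures narrow-band spectral power around each candidate stimulus frequency with Welch's method; the frequencies, sampling rate and bandwidth are assumed values rather than parameters from the chapter.

# Illustrative SSVEP feature extraction: band power at candidate stimulus frequencies.
import numpy as np
from scipy.signal import welch

def ssvep_band_power(eeg_channel, stim_freqs, fs, bandwidth=0.5):
    """Return the mean PSD in a narrow band around each stimulus frequency."""
    freqs, psd = welch(eeg_channel, fs=fs, nperseg=2 * fs)
    features = []
    for f in stim_freqs:
        band = (freqs >= f - bandwidth) & (freqs <= f + bandwidth)
        features.append(psd[band].mean())
    return np.array(features)

# The attended target is typically taken as the frequency with maximal band power.
fs = 250
t = np.arange(4 * fs) / fs
signal = np.sin(2 * np.pi * 12 * t) + np.random.randn(t.size)
powers = ssvep_band_power(signal, [8, 10, 12, 15], fs)
print([8, 10, 12, 15][int(np.argmax(powers))])   # expected: 12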

    Development of a practical and mobile brain-computer communication device for profoundly paralyzed individuals

    Thesis (Ph.D.)--Boston University. Brain-computer interface (BCI) technology has seen tremendous growth over the past several decades, with numerous groundbreaking research studies demonstrating technical viability (Sellers et al., 2010; Silvoni et al., 2011). Despite this progress, BCIs have remained primarily in controlled laboratory settings. This dissertation proffers a blueprint for translating research-grade BCI systems into real-world applications that are noninvasive and fully portable, and that employ intelligent user interfaces for communication. The proposed architecture is designed to be used by severely motor-impaired individuals, such as those with locked-in syndrome, while reducing the effort and cognitive load needed to communicate. Such a system requires the merging of two primary research fields: 1) electroencephalography (EEG)-based BCIs and 2) intelligent user interface design. The EEG-based BCI portion of this dissertation provides a history of the field, details of our software and hardware implementation, and results from an experimental study aimed at verifying the utility of a BCI based on the steady-state visual evoked potential (SSVEP), a robust brain response to visual stimulation at controlled frequencies. The visual stimulation, feature extraction, and classification algorithms for the BCI were specially designed to achieve successful real-time performance on a laptop computer. Also, the BCI was developed in Python, an open-source programming language that combines programming ease with effective handling of hardware and software requirements. The result of this work was The Unlock Project app software for BCI development. Using it, a four-choice SSVEP BCI setup was implemented and tested with five severely motor-impaired and fourteen control participants. The system showed a wide range of usability across participants, with classification rates ranging from 25% to 95%. The second portion of the dissertation discusses the viability of intelligent user interface design as a method for obtaining a more user-focused vocal output communication aid tailored to motor-impaired individuals. A proposed blueprint of this communication "app" was developed in this dissertation. It would make use of readily available laptop sensors to perform facial recognition, speech-to-text decoding, and geo-location. The ultimate goal is to couple sensor information with natural language processing to construct an intelligent user interface that shapes communication in a practical SSVEP-based BCI.
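
    The sketch below illustrates, in Python, what a single decision step of a four-choice SSVEP classifier can look like. It is not the Unlock Project code; the target frequencies, window length and single-channel FFT feature are placeholder choices.

# Hedged sketch of a four-choice SSVEP decision step (placeholder parameters).
import numpy as np

TARGET_FREQS = [12.0, 13.0, 14.0, 15.0]   # one flicker frequency per on-screen choice
FS = 256                                  # assumed amplifier sampling rate
WINDOW_SEC = 3

def classify_window(window):
    """window: (n_samples,) from a single occipital channel. Pick the target whose
    frequency bin carries the most spectral power."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(window.size)))
    freqs = np.fft.rfftfreq(window.size, d=1.0 / FS)
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in TARGET_FREQS]
    return int(np.argmax(powers))

# Offline usage on synthetic data (a real system would stream windows from the amplifier).
t = np.arange(WINDOW_SEC * FS) / FS
fake_eeg = np.sin(2 * np.pi * 14.0 * t) + 0.8 * np.random.randn(t.size)
print("selected choice:", classify_window(fake_eeg))   # expected: index 2 (14 Hz)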

    On Tackling Fundamental Constraints in Brain-Computer Interface Decoding via Deep Neural Networks

    A Brain-Computer Interface (BCI) is a system that provides a communication and control medium between human cortical signals and external devices, with the primary aim of assisting, or being used by, patients who suffer from a neuromuscular disease. Despite significant recent progress in the area of BCI, there are numerous shortcomings associated with decoding Electroencephalography-based BCI signals in real-world environments. These include, but are not limited to, the cumbersome nature of the equipment, complications in collecting large quantities of real-world data, the rigid experimentation protocol and the challenges of accurate signal decoding, especially in making a system work in real time. Hence, the core purpose of this work is to investigate improving the applicability and usability of BCI systems, whilst preserving signal decoding accuracy. Recent advances in Deep Neural Networks (DNN) provide the possibility for signal processing to automatically learn the best representation of a signal, contributing to improved performance even with a noisy input signal. Subsequently, this thesis focuses on the use of novel DNN-based approaches for tackling some of the key underlying constraints within the area of BCI. For example, recent technological improvements in acquisition hardware have made it possible to eliminate the pre-existing rigid experimentation procedure, albeit resulting in noisier signal capture. However, through the use of a DNN-based model, it is possible to preserve the accuracy of the predictions from the decoded signals. Moreover, this research demonstrates that by leveraging DNN-based image and signal understanding, it is feasible to facilitate real-time BCI applications in a natural environment. Additionally, the capability of DNNs to generate realistic synthetic data is shown to be a potential solution in reducing the requirement for costly data collection. Work is also performed in addressing the well-known issues regarding subject bias in BCI models by generating data with reduced subject-specific features. The overall contribution of this thesis is to address the key fundamental limitations of BCI systems. These include the unyielding traditional experimentation procedure, the mandatory extended calibration stage and the difficulty of sustaining accurate signal decoding in real time. These limitations lead to a fragile BCI system that is demanding to use and only suited for deployment in a controlled laboratory. Overall, the contributions of this research aim to improve the robustness of BCI systems and enable new applications for use in the real world.
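
    The synthetic-data idea can be pictured with a minimal generator network in Python/PyTorch that maps noise vectors to EEG-like windows; the architecture and dimensions below are purely illustrative and do not reproduce the generative model used in the thesis.

# Minimal sketch of an EEG data generator (illustrative; not the thesis's model).
import torch
import torch.nn as nn

class EEGGenerator(nn.Module):
    def __init__(self, latent_dim=64, n_channels=8, n_samples=256):
        super().__init__()
        self.n_channels, self.n_samples = n_channels, n_samples
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512),
            nn.ReLU(),
            nn.Linear(512, n_channels * n_samples),
            nn.Tanh(),                       # bounded output, like normalized EEG
        )

    def forward(self, z):
        return self.net(z).view(-1, self.n_channels, self.n_samples)

# After adversarial (or other) training, the generator is sampled to augment a small dataset.
g = EEGGenerator()
synthetic_batch = g(torch.randn(16, 64))     # (16, 8, 256) synthetic EEG windows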

    An SSVEP Brain-Computer Interface: A Machine Learning Approach

    A Brain-Computer Interface (BCI) provides a bidirectional communication path for a human to control an external device using brain signals. Among the neurophysiological features used in BCI systems, the steady-state visually evoked potential (SSVEP), a natural response to visual stimulation at specific frequencies, has increasingly drawn attention because of its high temporal resolution and minimal user training, which are two important parameters in evaluating a BCI system. The performance of a BCI can be improved by a properly selected neurophysiological signal, or by the introduction of machine learning techniques. With the help of machine learning methods, a BCI system can adapt to the user automatically. In this work, a machine learning approach is introduced to the design of an SSVEP-based BCI. The following open problems have been explored: 1. Finding a waveform with a high success rate of eliciting SSVEP. SSVEP belongs to the evoked potentials, which require stimulation. By comparing square-wave, triangle-wave and sine-wave light signals and their corresponding SSVEP, it was observed that square waves with 50% duty cycle have a significantly higher success rate of eliciting SSVEPs than either sine or triangle stimuli. 2. The resolution of dual stimuli that elicit consistent SSVEP. Previous studies show that the frequency bandwidth of an SSVEP stimulus is limited, and hence it affects the performance of the whole system. A dual stimulus, the overlay of two distinctive single-frequency stimuli, can potentially expand the number of valid SSVEP stimuli. However, the improvement depends on the resolution of the dual stimuli. Our experimental results show that 4 Hz is the minimum difference between two frequencies in a dual stimulus that elicits consistent SSVEP. 3. Stimuli and color-space decomposition. It is known in the literature that although low-frequency stimuli (<30 Hz) elicit strong SSVEP, they may cause dizziness. In this work, we explored the design of a visually friendly stimulus from the perspective of color-space decomposition. In particular, a stimulus was designed with a fixed luminance component and variations in the other two dimensions in the HSL (Hue, Saturation, Luminance) color-space. Our results show that the change of color alone evokes SSVEP, and the embedded frequencies in stimuli affect the harmonics. Also, subjects claimed that a fixed luminance eases the feeling of dizziness caused by low-frequency flashing objects. 4. Machine learning techniques have been applied to make a BCI adaptive to individuals. An SSVEP-based BCI brings new requirements to machine learning. Because of the non-stationarity of the brain signal, a classifier should adapt to the time-varying statistical characteristics of a single user's brain wave in real time. In this work, the potential function classifier is proposed to address this requirement, and achieves 38.2 bits/min on offline EEG data.
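
    The fixed-luminance color-flicker idea from point 3 can be sketched in a few lines of Python: hue is modulated at the stimulus frequency while the HSL luminance component is held constant. The concrete parameter values are illustrative and not taken from the study.

# Sketch of a fixed-luminance color flicker: only hue varies over time.
import colorsys
import numpy as np

def color_flicker_frames(freq_hz, duration_s, refresh_hz=60, luminance=0.5, saturation=1.0):
    """Return one RGB triple per display frame; luminance (lightness) stays constant."""
    n_frames = int(duration_s * refresh_hz)
    t = np.arange(n_frames) / refresh_hz
    # Hue oscillates between red (0.0) and cyan (0.5) at the stimulus frequency.
    hue = 0.25 * (1.0 + np.sin(2 * np.pi * freq_hz * t))
    return [colorsys.hls_to_rgb(h, luminance, saturation) for h in hue]

frames = color_flicker_frames(freq_hz=10, duration_s=1)
print(len(frames), frames[0])   # 60 frames; each an (R, G, B) tuple with constant lightness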

    Enhancement and optimization of a multi-command-based brain-computer interface

    Brain-computer interfaces (BCIs) assist disabled persons in controlling many appliances without any physical interaction (e.g., pressing a button). The SSVEP is brain activity elicited by evoked signals observed under a visual stimulation paradigm. This dissertation addresses the problems that limit the usability of BCI systems by optimizing and enhancing their performance through a particular design. The main contribution of this work is improving the evoked brain response by means of focal approaches.

    Objectivation of Visual Perception

    The sense of sight enables a detailed perception of the world. Virtual reality (VR), brain-computer interfaces (BCI) and deep learning are new technologies that open up new possibilities for investigating visual perception. This dissertation presents a system for ophthalmology that can simulate eye diseases in VR and, by adding BCI and AI, enables an objective diagnosis of visual field defects. To provide a better understanding of the work, human vision is compared with computer vision models and, based on this, a general four-stage model of vision is introduced. Within this model, interfaces between the biological-real and the technological-virtual world are evaluated. Today, if a patient is suspected of having a visual field defect (scotoma), ophthalmological devices such as the perimeter are used to measure the visual field. The state-of-the-art procedure relies on the patient's subjective feedback. Consequently, learning effects in the patient can influence the result considerably. To circumvent these problems, an objective perimetry system based on VR, BCI and deep learning was successfully implemented and evaluated in this dissertation. A further advantage of the new system is that it can also be used with severely disabled people, children and animals. The approach taken in this dissertation is the simulation of (pathological/impaired) visual conditions. For this purpose, the condition of glaucoma patients was reproduced virtually using VR technologies. The resulting VR application renders individual courses of glaucoma immersively in VR. The simulation environment was evaluated with medical professionals and glaucoma patients at the eye clinic of Heidelberg University Hospital (N=22). It was shown that VR is a suitable means of simulating visual conditions and can contribute to understanding the patient's condition. Starting from this simulation environment, further software and hardware modules were added. Generated steady-state visual stimuli were used to detect (simulated) visual defects via an electroencephalography (EEG)-based BCI. The system was tested and validated in an international laboratory study (N=15) in collaboration with the Massachusetts Institute of Technology. The collected data indicate that the system is suitable for classifying the central (88% accuracy per 2.5 seconds of EEG data) and peripheral visual field (63-81% accuracy), while limitations arise for peripheral positions (50-57% accuracy) owing to the sensitivity of the technology. Accordingly, the system should be used for scotomas in which the visual loss affects central vision or whole quadrants of the visual field. Owing to the need for a better ambulatory EEG measurement setup, modular, cross-platform software implementations and novel, patent-pending EEG electrodes are presented. The novel electrodes offer a better signal-to-noise ratio than conventional dry electrodes (an improvement of 1.35 dB), are quick to apply, are reusable, and leave little to no unwanted residue in the patient's hair. This dissertation lays the foundation for a VR-, BCI- and AI-based perimetry measurement system that could be used in particular in an outpatient setting or with patients who have impairments.