
    Development of a Practical Visual-Evoked Potential-Based Brain-Computer Interface

    There are many different neuromuscular disorders that disrupt the normal communication pathways between the brain and the rest of the body. These diseases often leave patients in a 'locked-in' state, rendering them unable to communicate with their environment despite having cognitively normal brain function. Brain-computer interfaces (BCIs) are augmentative communication devices that establish a direct link between the brain and a computer. Visual evoked potential (VEP)-based BCIs, which depend on salient visual stimuli, are among the fastest BCIs available and provide the highest communication rates of any BCI modality. However, the majority of research focuses solely on improving raw BCI performance; thus, most visual BCIs still suffer from a myriad of practical issues that make them impractical for everyday use. The focus of this dissertation is the development of novel advancements and solutions that increase the practicality of VEP-based BCIs. The presented work reports the results of several studies on characterizing and optimizing visual stimuli, improving ergonomic design, reducing visual irritation, and implementing a practical VEP-based BCI using an extensible software framework and mobile device platforms.

    SSVEP-Based BCIs

    This chapter describes the method of flickering targets, which elicit fundamental-frequency changes in the subject's EEG signal that are used to drive machine commands after interpretation of the user's intentions. The steady-state response of the EEG to events such as a visual stimulus presented on a computer screen is called the steady-state visually evoked potential (SSVEP). This feature of the EEG signal can form the basis of input to assistive devices for locked-in patients to improve their quality of life, as well as to performance-enhancing devices for healthy subjects. The chapter covers SSVEP stimuli, feature extraction techniques, feature classification techniques, and a few applications based on SSVEP-based BCIs.
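
    As a concrete illustration of the frequency detection described above, the sketch below (a generic assumption about a typical implementation, not code from the chapter) estimates spectral power at each candidate flicker frequency and its harmonics and picks the strongest; the sampling rate, channel, candidate frequencies, and synthetic signal are illustrative.

        import numpy as np

        def detect_ssvep(eeg, fs, candidate_freqs, n_harmonics=2):
            """Return the candidate flicker frequency whose summed power across
            its first `n_harmonics` harmonics is largest.

            eeg: 1-D array from a single occipital channel (e.g., Oz).
            fs:  sampling rate in Hz.
            """
            spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg)))) ** 2
            freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
            scores = []
            for f in candidate_freqs:
                score = 0.0
                for h in range(1, n_harmonics + 1):
                    idx = np.argmin(np.abs(freqs - h * f))  # nearest FFT bin
                    score += spectrum[idx]
                scores.append(score)
            return candidate_freqs[int(np.argmax(scores))]

        # Synthetic example: a 4-second "recording" containing a 12 Hz response.
        fs = 250
        t = np.arange(0, 4, 1.0 / fs)
        eeg = np.sin(2 * np.pi * 12 * t) + 0.5 * np.random.randn(len(t))
        print(detect_ssvep(eeg, fs, [8.0, 10.0, 12.0, 15.0]))  # -> 12.0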

    Towards a home-use BCI: fast asynchronous control and robust non-control state detection

    Brain-computer interfaces (BCIs) enable users to control a computer using brain activity alone. Their main purpose is to restore functionality to motor-disabled people, for example, to restore the ability to communicate. Recent BCIs based on visual evoked potentials (VEPs), which are brain responses to visual stimuli, have achieved high-speed communication. However, BCIs have not really found their way out of the lab yet, mainly because all recent high-speed BCIs are based on synchronous control, meaning commands can only be executed in time slots dictated by the BCI. The user is therefore unable to select a command at his or her own convenience, which poses a problem in real-world applications. Furthermore, all those BCIs are based on stimulation paradigms that restrict the number of possible commands.
    To be suitable for real-world applications, a BCI should be asynchronous, also called self-paced, and must be able to identify whether or not the user intends to control the system. Although there are some asynchronous BCI approaches, none of them has achieved suitable real-world performance. In this thesis, the first asynchronous high-speed BCI is proposed, which allows a virtually unlimited number of commands. Furthermore, it achieves a nearly perfect distinction between intentional control (IC) and non-control (NC), meaning commands are only executed if the user intends them. This was achieved by a completely different approach compared to recent methods: instead of using a classifier trained on specific stimulation patterns, the presented approach is based on a general model that predicts arbitrary stimulation patterns from the measured VEPs. The approach was evaluated with both a "traditional" and a deep machine learning method. The resulting asynchronous BCI outperforms recent methods many times over in multiple disciplines and is an essential step towards moving BCI applications out of the lab and into real life. With further optimization, discussed in this thesis, it could evolve into the very first end-user-suitable BCI, as it is effective (high accuracy), efficient (fast classifications), easy to use, and allows the user to perform as many different tasks as desired.
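
    The prediction-based decision rule described above can be outlined as follows. This is only an illustrative sketch: it assumes a separate model (not shown) has already predicted the stimulation sequence from the EEG, and the variable names and the 0.4 non-control threshold are placeholders, not values from the thesis.

        import numpy as np

        def classify_asynchronous(predicted_pattern, target_patterns, nc_threshold=0.4):
            """predicted_pattern: model output, one value per stimulation frame.
            target_patterns: dict mapping target id -> known binary stimulation sequence.
            Returns (target id, correlation), or (None, correlation) for non-control."""
            best_id, best_r = None, -1.0
            for tid, pattern in target_patterns.items():
                r = np.corrcoef(predicted_pattern, pattern)[0, 1]
                if r > best_r:
                    best_id, best_r = tid, r
            if best_r < nc_threshold:   # no pattern explains the EEG well:
                return None, best_r     # assume the user is not controlling
            return best_id, best_r

        # Toy usage with random binary codes standing in for real stimulation patterns.
        rng = np.random.default_rng(0)
        codes = {k: rng.integers(0, 2, 126).astype(float) for k in "ABCD"}
        noisy_prediction = codes["B"] * 0.7 + rng.normal(0, 0.5, 126)
        print(classify_asynchronous(noisy_prediction, codes))  # -> ('B', ...)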

    Sensing the world through predictions and errors


    Language Model Applications to Spelling with Brain-Computer Interfaces

    Within the Ambient Assisted Living (AAL) community, Brain-Computer Interfaces (BCIs) have raised great hopes as they provide alternative communication means for persons with disabilities, bypassing the need for speech and other motor activities. Although significant advancements have been realized in the last decade, applications of language models (e.g., word prediction and completion) have only recently started to appear in BCI systems. The main goal of this article is to review the language model applications that supplement non-invasive BCI-based communication systems, discussing their potential and limitations and discerning future trends. First, a brief overview of the most prominent BCI spelling systems is given, followed by an in-depth discussion of the language model applications.
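
    As an example of the word-completion functionality discussed here, the minimal sketch below (a generic illustration, not code from any of the reviewed systems) completes a partially spelled word from a unigram lexicon so the user can select a whole word instead of spelling every letter; the toy lexicon is an assumption.

        def suggest_completions(prefix, lexicon, n_suggestions=3):
            """lexicon: dict mapping word -> relative frequency (unigram model)."""
            candidates = [(w, p) for w, p in lexicon.items() if w.startswith(prefix)]
            candidates.sort(key=lambda wp: wp[1], reverse=True)
            return [w for w, _ in candidates[:n_suggestions]]

        # Toy lexicon; a real speller would use a large corpus-derived model.
        lexicon = {"hello": 0.02, "help": 0.05, "helmet": 0.001, "water": 0.03}
        print(suggest_completions("hel", lexicon))  # ['help', 'hello', 'helmet']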

    Development of a practical and mobile brain-computer communication device for profoundly paralyzed individuals

    Thesis (Ph.D.)--Boston University. Brain-computer interface (BCI) technology has seen tremendous growth over the past several decades, with numerous groundbreaking research studies demonstrating technical viability (Sellers et al., 2010; Silvoni et al., 2011). Despite this progress, BCIs have remained primarily in controlled laboratory settings. This dissertation proffers a blueprint for translating research-grade BCI systems into real-world applications that are noninvasive and fully portable, and that employ intelligent user interfaces for communication. The proposed architecture is designed to be used by severely motor-impaired individuals, such as those with locked-in syndrome, while reducing the effort and cognitive load needed to communicate. Such a system requires the merging of two primary research fields: 1) electroencephalography (EEG)-based BCIs and 2) intelligent user interface design. The EEG-based BCI portion of this dissertation provides a history of the field, details of our software and hardware implementation, and results from an experimental study aimed at verifying the utility of a BCI based on the steady-state visual evoked potential (SSVEP), a robust brain response to visual stimulation at controlled frequencies. The visual stimulation, feature extraction, and classification algorithms for the BCI were specially designed to achieve successful real-time performance on a laptop computer. Also, the BCI was developed in Python, an open-source programming language that combines programming ease with effective handling of hardware and software requirements. The result of this work was The Unlock Project app software for BCI development. Using it, a four-choice SSVEP BCI setup was implemented and tested with five severely motor-impaired and fourteen control participants. The system showed a wide range of usability across participants, with classification rates ranging from 25% to 95%. The second portion of the dissertation discusses the viability of intelligent user interface design as a method for obtaining a more user-focused vocal output communication aid tailored to motor-impaired individuals. A proposed blueprint of this communication "app" was developed in this dissertation. It would make use of readily available laptop sensors to perform facial recognition, speech-to-text decoding, and geo-location. The ultimate goal is to couple sensor information with natural language processing to construct an intelligent user interface that shapes communication in a practical SSVEP-based BCI.
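
    For illustration, a four-choice SSVEP decision of the kind described above can be made with canonical correlation analysis (CCA), a standard technique in the SSVEP literature; whether the Unlock Project software uses exactly this method is not stated in the abstract, and the stimulation frequencies, channel count, and window length below are assumptions.

        import numpy as np
        from sklearn.cross_decomposition import CCA

        def cca_classify(eeg_window, fs, stim_freqs, n_harmonics=2):
            """eeg_window: array of shape (n_samples, n_channels).
            Returns the index of the stimulation frequency whose sine/cosine
            reference set correlates best with the EEG window."""
            n = eeg_window.shape[0]
            t = np.arange(n) / fs
            scores = []
            for f in stim_freqs:
                # Sine/cosine references at the frequency and its harmonics.
                ref = np.column_stack(
                    [fn(2 * np.pi * h * f * t)
                     for h in range(1, n_harmonics + 1)
                     for fn in (np.sin, np.cos)]
                )
                cca = CCA(n_components=1)
                cca.fit(eeg_window, ref)
                x_s, y_s = cca.transform(eeg_window, ref)
                scores.append(np.corrcoef(x_s[:, 0], y_s[:, 0])[0, 1])
            return int(np.argmax(scores))

        # Toy example: a 2-channel, 3-second window dominated by a 15 Hz response.
        fs, secs = 256, 3
        t = np.arange(fs * secs) / fs
        eeg = np.column_stack([np.sin(2 * np.pi * 15 * t) + np.random.randn(len(t)),
                               np.cos(2 * np.pi * 15 * t) + np.random.randn(len(t))])
        print(cca_classify(eeg, fs, [6.0, 10.0, 12.0, 15.0]))  # expected: 3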

    Towards improved visual stimulus discrimination in an SSVEP BCI

    The dissertation investigated the influence of stimulus characteristics, electroencephalographic (EEG) electrode location, and three signal processing methods on the spectral signal-to-noise ratio (SNR) of Steady State Visual Evoked Potentials (SSVEPs), with a view to their use in Brain-Computer Interfaces (BCIs). It was hypothesised that the new spectral baseline processing method introduced here, termed the 'activity baseline', would result in an improved SNR.
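
    A common definition of the spectral SNR referred to above is the power in the FFT bin at the stimulation frequency divided by the mean power of the neighbouring bins. The sketch below implements only that conventional baseline, not the dissertation's proposed 'activity baseline'; the parameters and synthetic signal are assumptions.

        import numpy as np

        def spectral_snr(eeg, fs, stim_freq, n_neighbor_bins=10):
            """SNR = power at the stimulus-frequency bin / mean power of neighbours."""
            spectrum = np.abs(np.fft.rfft(eeg)) ** 2
            freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
            k = np.argmin(np.abs(freqs - stim_freq))   # bin at the stimulus frequency
            lo, hi = max(k - n_neighbor_bins, 0), k + n_neighbor_bins + 1
            neighbors = np.concatenate([spectrum[lo:k], spectrum[k + 1:hi]])
            return spectrum[k] / neighbors.mean()

        # Synthetic 4-second signal with a 10 Hz SSVEP component plus noise.
        fs = 250
        t = np.arange(0, 4, 1.0 / fs)
        eeg = 2 * np.sin(2 * np.pi * 10 * t) + np.random.randn(len(t))
        print(spectral_snr(eeg, fs, stim_freq=10.0))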

    Robotic Vehicle Control Using Brain Computer Interface

    A Brain-Computer Interface (BCI) is a device that allows the brain to interact directly with an external device or computer. The principle of the brain-computer interface is based on electroencephalography (EEG). Under the influence of external stimuli, the human brain generates responses in distinct areas of the brain. These responses appear in the EEG signals captured from the corresponding electrode positions on the scalp of the human subject. Depending on the periodic nature of the stimuli, the responses appear in the EEG signal as a feature in the time domain or in the frequency domain. These features are then detected and classified, and a control signal is generated for an external device. This enables the subject to directly control an external device from the brain using the signals generated in response to stimulation. Brain-computer interfaces have shown promising application in aiding patients with locked-in syndrome, spinal cord injury (SCI), acute inflammatory demyelinating polyradiculoneuropathy (AIDP), and amyotrophic lateral sclerosis (ALS). Until now, these patients have needed human assistance to communicate. Brain-computer interfaces also show promise for wheelchair control, where these patients would be able to control electric wheelchairs using a BCI. In this work, a working model of a Brain-Computer Interface has been developed using PowerLab 16/35 and an ML-138 bio-amplifier. The BCI is based on the Steady-State Visually Evoked Potential (SSVEP). The SSVEP response is generated in the visual cortex of the subject when the subject is exposed to a flickering light source. A model robotic platform has also been controlled using the detected SSVEP signal.
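
    The final step described above, turning a detected SSVEP frequency into a drive command for the robotic platform, might look like the sketch below; the frequency-to-command table and the transport used by send_command are assumptions, and the PowerLab acquisition hardware is not modeled.

        # Map detected flicker frequencies (Hz) to robot commands; values are illustrative.
        FREQ_TO_COMMAND = {7.0: "FORWARD", 9.0: "LEFT", 11.0: "RIGHT", 13.0: "STOP"}

        def send_command(command: str) -> None:
            # Placeholder transport; a real robot would receive this over
            # serial, Bluetooth, or a network socket.
            print(f"sending {command} to robot")

        def on_ssvep_detected(freq_hz: float) -> None:
            command = FREQ_TO_COMMAND.get(freq_hz)
            if command is None:
                return          # unknown frequency: ignore rather than move the robot
            send_command(command)

        on_ssvep_detected(9.0)   # -> sending LEFT to robot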

    Exploiting code-modulating, Visually-Evoked Potentials for fast and flexible control via Brain-Computer Interfaces

    Riechmann H. Exploiting code-modulating, Visually-Evoked Potentials for fast and flexible control via Brain-Computer Interfaces. Bielefeld: Universität Bielefeld; 2014.