4 research outputs found

    Chromatic and High-frequency cVEP-based BCI Paradigm

    We present results of an approach to a code-modulated visual evoked potential (cVEP) based brain-computer interface (BCI) paradigm using four high-frequency flashing stimuli. To generate higher-frequency stimulation than state-of-the-art cVEP-based BCIs, we propose driving light-emitting diodes (LEDs) from a small microcontroller-board hardware generator designed by our team. High-frequency, green-blue chromatic flashing stimuli are used in the study in order to minimize the risk of photosensitive epilepsy (PSE). We compare the accuracy of the green-blue chromatic cVEP-based BCI with that of a conventional white-black flicker-based interface. Comment: 4 pages, 4 figures, accepted for EMBC 2015, IEEE copyright.

    Towards a home-use BCI: fast asynchronous control and robust non-control state detection

    Brain-computer interfaces (BCIs) enable users to control a computer using brain activity alone. Their main purpose is to restore functionality to motor-disabled people, for example, to restore the ability to communicate. Recent BCIs based on visual evoked potentials (VEPs), which are brain responses to visual stimuli, have achieved high-speed communication. However, BCIs have not really found their way out of the lab yet. This is mainly because all recent high-speed BCIs are based on synchronous control, which means commands can only be executed in time slots controlled by the BCI. The user is therefore not able to select a command at his or her own convenience, which poses a problem in real-world applications. Furthermore, all of these BCIs are based on stimulation paradigms that restrict the number of possible commands. To be suitable for real-world applications, a BCI should be asynchronous, also called self-paced, and must be able to identify whether or not the user intends to control the system. Although some asynchronous BCI approaches exist, none of them has achieved suitable real-world performance. In this thesis, the first asynchronous high-speed BCI is proposed, which allows a virtually unlimited number of commands.
Furthermore, it achieves a nearly perfect distinction between intentional control (IC) and non-control (NC), which means commands are executed only if the user intends them. This was achieved by a completely different approach from recent methods: instead of a classifier trained on specific stimulation patterns, the presented approach is based on a general model that predicts arbitrary stimulation patterns from the measured VEPs. The approach was implemented and evaluated with both a "traditional" machine-learning method and a deep-learning method. The resulting asynchronous BCI outperforms recent methods severalfold in multiple disciplines and is an essential step toward moving BCI applications out of the lab and into real life. With further optimizations, discussed in this thesis, it could evolve into the very first end-user-suitable BCI, as it is effective (high accuracy), efficient (fast classifications), and easy to use, and it allows the user to perform as many different tasks as desired.
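The asynchronous decision described in this abstract — execute a command only when the pattern reconstructed from the EEG actually matches a candidate code, otherwise report non-control — can be sketched as follows. This is an illustrative simplification under assumed names (`detect_command`, `threshold`, bit-agreement scoring), not the thesis's actual model or decision rule:

```python
import numpy as np

def detect_command(predicted_bits, codebooks, threshold=0.7):
    """Toy asynchronous IC/NC decision (illustrative, not the authors' rule).

    predicted_bits: the stimulation pattern reconstructed from the EEG by
    some general forward model (assumed to exist upstream).
    codebooks: one binary code per on-screen command.
    If the best bit-agreement with any candidate code falls below
    `threshold`, report non-control (NC) and execute nothing.
    """
    scores = [np.mean(predicted_bits == code) for code in codebooks]
    best = int(np.argmax(scores))
    if scores[best] < threshold:
        return None          # non-control state: no command executed
    return best              # intentional control: selected command index
```

Because the decision compares a predicted pattern against codes rather than running a per-code classifier, the number of simultaneous commands is limited only by how many distinguishable codes are displayed, which is the property the abstract highlights.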

    Adaptive parameter setting in a code modulated visual evoked potentials BCI

    Code-modulated visual evoked potential (c-VEP) BCIs are designed for high-speed communication. The setting of stimulus parameters is fundamental for this type of BCI, because stimulus parameters influence the performance of the system. In this work we design a c-VEP BCI for word spelling in which the optimal stimulus presentation rate can be found for each subject thanks to an adaptive parameter-setting phase. This phase takes place at the beginning of each session and defines the stimulus parameters used during the spelling phase. The different stimuli are modulated by a binary m-sequence circularly shifted by a different time lag for each target, and a template-matching method is applied for target detection. We acquired data from 4 subjects in two sessions. The offline spelling results show the variability between subjects and therefore the importance of subject-dependent adaptation of c-VEP BCIs.
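The core mechanism of this abstract — one m-sequence, circularly shifted per target, with correlation-based template matching — can be sketched in a few lines. The register taps, sequence length, and lag below are assumptions for illustration; the paper does not specify them, and a real decoder would correlate against EEG templates rather than the raw code:

```python
import numpy as np

def m_sequence(taps=(6, 5), length=63, seed=1):
    """Generate a binary m-sequence with a linear-feedback shift register.

    taps=(6, 5) corresponds to the primitive polynomial x^6 + x^5 + 1,
    giving a maximal-length sequence of period 2^6 - 1 = 63. These
    parameters are illustrative, not taken from the paper.
    """
    n = max(taps)
    state = [(seed >> i) & 1 for i in range(n)]  # non-zero initial register
    seq = []
    for _ in range(length):
        seq.append(state[-1])                    # output the last register bit
        fb = 0
        for t in taps:
            fb ^= state[t - 1]                   # XOR the tapped positions
        state = [fb] + state[:-1]                # shift in the feedback bit
    return np.array(seq)

def classify(epoch, base_seq, n_targets, lag):
    """Template matching: pick the target whose circularly shifted code
    correlates best with the (idealized) decoded epoch."""
    scores = [np.corrcoef(epoch, np.roll(base_seq, k * lag))[0, 1]
              for k in range(n_targets)]
    return int(np.argmax(scores))
```

The low off-peak autocorrelation of m-sequences is what makes the shifted copies separable by correlation, so each target only needs a distinct lag rather than a distinct code.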

    Development of a Practical Visual-Evoked Potential-Based Brain-Computer Interface

    There are many different neuromuscular disorders that disrupt the normal communication pathways between the brain and the rest of the body. These diseases often leave patients in a "locked-in" state, rendering them unable to communicate with their environment despite having cognitively normal brain function. Brain-computer interfaces (BCIs) are augmentative communication devices that establish a direct link between the brain and a computer. Visual evoked potential (VEP)-based BCIs, which depend on the use of salient visual stimuli, are among the fastest BCIs available and provide the highest communication rates of any BCI modality. However, the majority of research focuses solely on improving raw BCI performance; thus, most visual BCIs still suffer from a myriad of practical issues that make them impractical for everyday use. The focus of this dissertation is on the development of novel advancements and solutions that increase the practicality of VEP-based BCIs. The presented work shows the results of several studies that relate to characterizing and optimizing visual stimuli, improving ergonomic design, reducing visual irritation, and implementing a practical VEP-based BCI using an extensible software framework and mobile device platforms.