Using Variable Natural Environment Brain-Computer Interface Stimuli for Real-time Humanoid Robot Navigation
This paper addresses the challenge of humanoid robot teleoperation in a
natural indoor environment via a Brain-Computer Interface (BCI). We leverage
deep Convolutional Neural Network (CNN) based image and signal understanding to
facilitate both real-time object detection and dry-Electroencephalography (EEG)
based human cortical brain bio-signals decoding. We employ recent advances in
dry-EEG technology to stream and collect the cortical waveforms from subjects
while they fixate on variable Steady State Visual Evoked Potential (SSVEP)
stimuli generated directly from the environment the robot is navigating. To
these ends, we propose the use of novel variable BCI stimuli by utilising the
real-time video streamed via the on-board robot camera as visual input for
SSVEP, where the CNN detected natural scene objects are altered and flickered
with differing frequencies (10 Hz, 12 Hz and 15 Hz). These stimuli differ from
traditional stimuli in that both the dimensions of the flicker regions and
their on-screen positions change depending on the scene objects detected. On-screen
object selection via such a dry-EEG enabled SSVEP methodology, facilitates the
on-line decoding of human cortical brain signals, via a specialised secondary
CNN, directly into teleoperation robot commands (approach object, move in a
specific direction: right, left or back). This SSVEP decoding model is trained
via a priori offline experimental data in which very similar visual input is
present for all subjects. The resulting classification demonstrates high
performance with mean accuracy of 85% for the real-time robot navigation
experiment across multiple test subjects.
Comment: Accepted as a full paper at the 2019 International Conference on Robotics and Automation (ICRA).
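The variable-stimulus idea above can be sketched in a few lines: each CNN-detected scene object is assigned one of the paper's three flicker frequencies, and each video frame decides whether a region is rendered altered ("on") or unaltered ("off"). This is a hypothetical illustration, not the authors' code; the function names and the square-wave flicker model are assumptions:

```python
# Sketch (assumed, not from the paper): map detected objects to the
# three SSVEP flicker frequencies and toggle regions per video frame.
FLICKER_HZ = [10.0, 12.0, 15.0]

def assign_frequencies(detections):
    """Map detected object ids to flicker rates, round-robin."""
    return {obj: FLICKER_HZ[i % len(FLICKER_HZ)]
            for i, obj in enumerate(detections)}

def flicker_on(freq_hz, t_seconds):
    """Square-wave flicker: region is altered during the first
    half of each cycle of its assigned frequency."""
    phase = (t_seconds * freq_hz) % 1.0
    return phase < 0.5

freqs = assign_frequencies(["door", "chair", "plant"])
```

At the start of a frame the renderer would query `flicker_on(freqs[obj], t)` for each detected region; the subject's fixated region then drives the SSVEP response at that region's frequency.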
Data Analytics in Steady-State Visual Evoked Potential-based Brain-Computer Interface: A Review
Electroencephalography (EEG) has been widely applied for brain-computer interfaces (BCIs), which enable paralyzed people to directly communicate with and control external devices, due to its portability, high temporal resolution, ease of use and low cost. Of the various EEG paradigms, the steady-state visual evoked potential (SSVEP)-based BCI system, which uses multiple visual stimuli (such as LEDs or boxes on a computer screen) flickering at different frequencies, has been widely explored over the past decades due to its fast communication rate and high signal-to-noise ratio. In this paper, we review the current research in SSVEP-based BCI, focusing on the data analytics that enables continuous, accurate detection of SSVEPs and thus a high information transfer rate. The main technical challenges, including signal pre-processing, spectrum analysis, signal decomposition, spatial filtering (in particular canonical correlation analysis and its variations) and classification techniques, are described in this paper. Research challenges and opportunities in spontaneous brain activities, mental fatigue, transfer learning as well as hybrid BCI are also discussed.
Leveraging EEG-based speech imagery brain-computer interfaces
Speech Imagery Brain-Computer Interfaces (BCIs) provide an intuitive and flexible way of interaction via brain activity recorded during imagined speech. Imagined speech can be decoded in the form of syllables or words and captured even with non-invasive measurement methods such as electroencephalography (EEG). Over the last decade, research in this field has made tremendous progress and prototypical implementations of EEG-based Speech Imagery BCIs are numerous. However, most work is still conducted in controlled laboratory environments with offline classification and does not find its way to real online scenarios. Within this thesis we identify three main reasons for these circumstances, namely, the mentally and physically exhausting training procedures, insufficient classification accuracies and cumbersome EEG setups with usually high-resolution headsets. We furthermore elaborate on possible solutions to overcome the aforementioned problems and present and evaluate new methods in each of the domains. In detail, we introduce two new training concepts for imagined speech BCIs, one based on EEG activity recorded during silent reading and the other during the overt speaking of certain words. Insufficient classification accuracies are addressed by introducing the concept of a Semantic Speech Imagery BCI, which classifies the semantic category of an imagined word prior to the word itself to increase the performance of the system. Finally, we investigate different techniques for electrode reduction in Speech Imagery BCIs and aim at finding a suitable subset of electrodes for EEG-based imagined speech detection, thereby simplifying the cumbersome setups.
All of our presented results, together with general remarks on experiences and best practices for study setups concerning imagined speech, are summarized and intended to serve as guidelines for further research in the field, thereby advancing Speech Imagery BCIs towards real-world application.
On Tackling Fundamental Constraints in Brain-Computer Interface Decoding via Deep Neural Networks
A Brain-Computer Interface (BCI) is a system that provides a communication and control medium between human cortical signals and external devices, with the primary aim to assist or to be used by patients who suffer from a neuromuscular disease. Despite significant recent progress in the area of BCI, there are numerous shortcomings associated with decoding Electroencephalography-based BCI signals in real-world environments. These include, but are not limited to, the cumbersome nature of the equipment, complications in collecting large quantities of real-world data, the rigid experimentation protocol and the challenges of accurate signal decoding, especially in making a system work in real-time. Hence, the core purpose of this work is to investigate improving the applicability and usability of BCI systems, whilst preserving signal decoding accuracy.
Recent advances in Deep Neural Networks (DNN) provide the possibility for signal processing to automatically learn the best representation of a signal, contributing to improved performance even with a noisy input signal. Subsequently, this thesis focuses on the use of novel DNN-based approaches for tackling some of the key underlying constraints within the area of BCI. For example, recent technological improvements in acquisition hardware have made it possible to eliminate the pre-existing rigid experimentation procedure, albeit resulting in noisier signal capture. However, through the use of a DNN-based model, it is possible to preserve the accuracy of the predictions from the decoded signals. Moreover, this research demonstrates that by leveraging DNN-based image and signal understanding, it is feasible to facilitate real-time BCI applications in a natural environment. Additionally, the capability of DNN to generate realistic synthetic data is shown to be a potential solution in reducing the requirement for costly data collection. Work is also performed in addressing the well-known issues regarding subject bias in BCI models by generating data with reduced subject-specific features.
The overall contribution of this thesis is to address the key fundamental limitations of BCI systems: the unyielding traditional experimentation procedure, the mandatory extended calibration stage and the difficulty of sustaining accurate signal decoding in real-time. These limitations lead to a fragile BCI system that is demanding to use and only suited for deployment in a controlled laboratory. This research therefore aims to improve the robustness of BCI systems and to enable new applications in the real world.
Improved accuracy for subject-dependent and subject-independent deep learning-based SSVEP BCI classification: a user-friendly approach
In brain-computer interfacing, the SSVEP (steady-state visual evoked potential) method serves to foster collaboration between humans and robots. SSVEP-based detection methods require complex multichannel data acquisition, making them difficult to deploy due to discomfort during extended use and the complexity of the algorithms involved. On the other hand, a single-channel setup offers simplicity and ease of use. However, with a single channel, achieving encouraging performance in the SD (subject-dependent) scenario is challenging, and accuracy drops further in the SI (subject-independent) scenario. This calls for a generalized approach that improves performance in both scenarios. This study proposes VMD-DNN to detect SSVEPs in single-channel setups for SD and SI scenarios. The novelty of the proposed method lies in utilizing VMD (Variational Mode Decomposition) as a preprocessor, leveraging harmonic information and the kurtosis of the cross-correlation function to select harmonics from the VMD-decomposed signal. Complex spectrum features of the preprocessed, reconstructed signal serve as input to the DNN for classification. The results show average accuracies of 93% and 95.3% in SD and 79% and 92.33% in SI scenarios, tested on two publicly available datasets, respectively. The ITR (information transfer rate) was 67.50 bit/min and 92.31 bit/min for SD, and 46.13 bit/min and 85.94 bit/min for SI, for the two datasets respectively. In SD, accuracy is improved by 3.34% and 5%, and ITR by 8.87% and 12.91%, over baseline methods for the two datasets respectively. The proposed VMD-DNN model is effective, with improved performance and lower computational complexity. The robust single-channel approach makes it user-friendly for human-robot collaboration.
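The "complex spectrum" input mentioned above is, in its usual formulation, simply the real and imaginary FFT coefficients of the preprocessed signal concatenated into one feature vector. A minimal sketch of that feature-extraction step, assuming the VMD stage has already produced the reconstructed single-channel signal (the band limits and function name are illustrative, not from the paper):

```python
import numpy as np

def complex_spectrum_features(x, fs, band=(3.0, 35.0)):
    """Concatenate real and imaginary FFT coefficients inside `band`.
    `x` is the reconstructed single-channel signal (VMD step assumed
    done); the output is the flat feature vector fed to the DNN."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.concatenate([spec.real[mask], spec.imag[mask]])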
Development of a practical and mobile brain-computer communication device for profoundly paralyzed individuals
Thesis (Ph.D.)--Boston University
Brain-computer interface (BCI) technology has seen tremendous growth over the past several decades, with numerous groundbreaking research studies demonstrating technical viability (Sellers et al., 2010; Silvoni et al., 2011). Despite this progress, BCIs have remained primarily in controlled laboratory settings. This dissertation proffers a blueprint for translating research-grade BCI systems into real-world applications that are noninvasive and fully portable, and that employ intelligent user interfaces for communication. The proposed architecture is designed to be used by severely motor-impaired individuals, such as those with locked-in syndrome, while reducing the effort and cognitive load needed to communicate. Such a system requires the merging of two primary research fields: 1) electroencephalography (EEG)-based BCIs and 2) intelligent user interface design.
The EEG-based BCI portion of this dissertation provides a history of the field, details of our software and hardware implementation, and results from an experimental study aimed at verifying the utility of a BCI based on the steady-state visual evoked potential (SSVEP), a robust brain response to visual stimulation at controlled frequencies. The visual stimulation, feature extraction, and classification algorithms for the BCI were specially designed to achieve successful real-time performance on a laptop computer. Also, the BCI was developed in Python, an open-source programming language that combines programming ease with effective handling of hardware and software requirements. The result of this work was The Unlock Project app software for BCI development. Using it, a four-choice SSVEP BCI setup was implemented and tested with five severely motor-impaired and fourteen control participants. The system showed a wide range of usability across participants, with classification rates ranging from 25% to 95%.
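At its simplest, the feature-extraction and classification stage of such a four-choice SSVEP setup amounts to picking the stimulus frequency with the most spectral power in the EEG. The sketch below is a stand-in for that stage, not the actual Unlock Project code, using a Welch power spectral density:

```python
import numpy as np
from scipy.signal import welch

def ssvep_choice(eeg, target_freqs, fs):
    """Return the target frequency with the largest Welch-PSD power
    in the single-channel EEG `eeg` -- a minimal stand-in for a
    four-choice SSVEP classifier (assumed, not the Unlock code)."""
    f, pxx = welch(eeg, fs=fs, nperseg=min(len(eeg), fs * 2))
    def power_at(freq):
        # power in the PSD bin closest to the target frequency
        return pxx[np.argmin(np.abs(f - freq))]
    return max(target_freqs, key=power_at)
```

Real systems refine this with harmonics, multichannel spatial filtering and a rejection threshold for "no decision", which is where the 25% to 95% spread across participants comes from.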
The second portion of the dissertation discusses the viability of intelligent user interface design as a method for obtaining a more user-focused vocal output communication aid tailored to motor-impaired individuals. A proposed blueprint of this communication "app" was developed in this dissertation. It would make use of readily available laptop sensors to perform facial recognition, speech-to-text decoding, and geo-location. The ultimate goal is to couple sensor information with natural language processing to construct an intelligent user interface that shapes communication in a practical SSVEP-based BCI.
Practical real-time MEG-based neural interfacing with optically pumped magnetometers
Background: Brain-computer interfaces decode intentions directly from the human brain with the aim to restore lost functionality, control external devices or augment daily experiences. To combine optimal performance with wide applicability, high-quality brain signals should be captured non-invasively. Magnetoencephalography (MEG) is a potent candidate but currently requires costly and confining recording hardware. The recently developed optically pumped magnetometers (OPMs) promise to overcome this limitation, but are currently untested in the context of neural interfacing.
Results: In this work, we show that OPM-MEG allows robust single-trial analysis, which we exploited in a real-time "mind-spelling" application yielding an average accuracy of 97.7%.
Conclusions: This shows that OPM-MEG can be used to exploit neuro-magnetic brain responses in a practical and flexible manner, and opens up new avenues for a wide range of new neural interface applications in the future.