16 research outputs found

    Hybrid Brain-Computer Interface Systems: Approaches, Features, and Trends

    Get PDF
    Brain-computer interface (BCI) is an emerging field, and an increasing number of BCI research projects are being carried out globally to interface computers with humans using EEG for useful operations in both healthy and locked-in persons. Although several methods have been used to enhance BCI performance in terms of signal processing, noise reduction, accuracy, information transfer rate, and user acceptability, a fully effective BCI system is still on the verge of development. So far, various modifications have been made to single BCI systems as well as hybrid ones, and hybrid BCIs have shown increased but still insufficient performance. Therefore, more efficient hybrid BCI models are still under investigation by different research groups. In this review chapter, single BCI systems are briefly discussed, followed by a more detailed discussion of hybrid BCIs, their modifications, operations, and performance, with comparisons in terms of signal processing approaches, applications, limitations, and future scope.

    Development of a practical and mobile brain-computer communication device for profoundly paralyzed individuals

    Full text link
    Thesis (Ph.D.), Boston University. Brain-computer interface (BCI) technology has seen tremendous growth over the past several decades, with numerous groundbreaking research studies demonstrating technical viability (Sellers et al., 2010; Silvoni et al., 2011). Despite this progress, BCIs have remained primarily in controlled laboratory settings. This dissertation proffers a blueprint for translating research-grade BCI systems into real-world applications that are noninvasive and fully portable, and that employ intelligent user interfaces for communication. The proposed architecture is designed to be used by severely motor-impaired individuals, such as those with locked-in syndrome, while reducing the effort and cognitive load needed to communicate. Such a system requires the merging of two primary research fields: 1) electroencephalography (EEG)-based BCIs and 2) intelligent user interface design. The EEG-based BCI portion of this dissertation provides a history of the field, details of our software and hardware implementation, and results from an experimental study aimed at verifying the utility of a BCI based on the steady-state visual evoked potential (SSVEP), a robust brain response to visual stimulation at controlled frequencies. The visual stimulation, feature extraction, and classification algorithms for the BCI were specially designed to achieve successful real-time performance on a laptop computer. Also, the BCI was developed in Python, an open-source programming language that combines programming ease with effective handling of hardware and software requirements. The result of this work was The Unlock Project app software for BCI development. Using it, a four-choice SSVEP BCI setup was implemented and tested with five severely motor-impaired and fourteen control participants. The system showed a wide range of usability across participants, with classification rates ranging from 25% to 95%.
The second portion of the dissertation discusses the viability of intelligent user interface design as a method for obtaining a more user-focused vocal output communication aid tailored to motor-impaired individuals. A proposed blueprint of this communication "app" was developed in this dissertation. It would make use of readily available laptop sensors to perform facial recognition, speech-to-text decoding, and geo-location. The ultimate goal is to couple sensor information with natural language processing to construct an intelligent user interface that shapes communication in a practical SSVEP-based BCI.
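The SSVEP decoding approach described above — classifying which of several flickering stimuli a user attends to by the frequency content of the EEG — can be sketched in a few lines. This is an illustrative toy, not the Unlock Project's actual code; the sampling rate, stimulation frequencies, and single-frequency DFT detector are assumptions for the example.

```python
import math

def band_power(signal, fs, freq):
    """Power of `signal` at `freq` Hz via a single-frequency DFT."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    return (re * re + im * im) / n

def classify_ssvep(signal, fs, stim_freqs):
    """Pick the stimulation frequency with the largest spectral power."""
    powers = {f: band_power(signal, fs, f) for f in stim_freqs}
    return max(powers, key=powers.get)

# Synthetic one-second epoch containing a 12 Hz steady-state response.
fs = 256
epoch = [math.sin(2 * math.pi * 12 * t / fs) for t in range(fs)]
print(classify_ssvep(epoch, fs, [8, 10, 12, 15]))  # → 12
```

In a real four-choice speller each on-screen target flickers at one of the candidate frequencies, so the argmax over band powers maps directly to a selection.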

    Performance assessment in brain-computer interface-based augmentative and alternative communication

    Full text link
    A large number of incommensurable metrics are currently used to report the performance of brain-computer interfaces (BCIs) used for augmentative and alternative communication (AAC). The lack of standard metrics precludes the comparison of different BCI-based AAC systems, hindering rapid growth and development of this technology. This paper presents a review of the metrics that have been used to report performance of BCIs used for AAC from January 2005 to January 2012. We distinguish between Level 1 metrics used to report performance at the output of the BCI Control Module, which translates brain signals into logical control output, and Level 2 metrics at the Selection Enhancement Module, which translates logical control to semantic control. We recommend that: (1) the commensurate metrics Mutual Information or Information Transfer Rate (ITR) be used to report Level 1 BCI performance, as these metrics represent information throughput, which is of interest in BCIs for AAC; (2) the BCI-Utility metric be used to report Level 2 BCI performance, as it is capable of handling all current methods of improving BCI performance; (3) these metrics should be supplemented by information specific to each unique BCI configuration; and (4) studies involving Selection Enhancement Modules should report performance at both Level 1 and Level 2 in the BCI system. Following these recommendations will enable efficient comparison between both BCI Control and Selection Enhancement Modules, accelerating research and development of BCI-based AAC systems.
    http://deepblue.lib.umich.edu/bitstream/2027.42/115465/1/12938_2012_Article_658.pd
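The ITR metric recommended above is commonly computed with the standard Wolpaw formula, which treats an N-choice BCI as a symmetric channel: bits per selection times selections per minute. The formula is standard; the accuracy and selection-rate values in the example are hypothetical.

```python
import math

def wolpaw_itr(n_choices, accuracy, selections_per_min):
    """Wolpaw information transfer rate in bits/min for an N-choice BCI."""
    n, p = n_choices, accuracy
    if p <= 0 or p >= 1:
        bits = math.log2(n) if p == 1 else 0.0
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * selections_per_min

# Hypothetical four-choice speller at 80% accuracy, 10 selections/min:
print(round(wolpaw_itr(4, 0.80, 10), 2))  # → 9.61
```

Note that this Level 1 figure says nothing about spelling efficiency after error correction or word prediction, which is exactly why the paper argues for a separate Level 2 metric such as BCI-Utility.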

    On Tackling Fundamental Constraints in Brain-Computer Interface Decoding via Deep Neural Networks

    Get PDF
    A Brain-Computer Interface (BCI) is a system that provides a communication and control medium between human cortical signals and external devices, with the primary aim of assisting patients who suffer from a neuromuscular disease. Despite significant recent progress in the area of BCI, there are numerous shortcomings associated with decoding Electroencephalography-based BCI signals in real-world environments. These include, but are not limited to, the cumbersome nature of the equipment, complications in collecting large quantities of real-world data, the rigid experimentation protocol, and the challenges of accurate signal decoding, especially in making a system work in real time. Hence, the core purpose of this work is to investigate how to improve the applicability and usability of BCI systems whilst preserving signal decoding accuracy. Recent advances in Deep Neural Networks (DNNs) make it possible for signal processing to automatically learn the best representation of a signal, contributing to improved performance even with a noisy input signal. Subsequently, this thesis focuses on the use of novel DNN-based approaches for tackling some of the key underlying constraints within the area of BCI. For example, recent technological improvements in acquisition hardware have made it possible to eliminate the pre-existing rigid experimentation procedure, albeit resulting in noisier signal capture. However, through the use of a DNN-based model, it is possible to preserve the accuracy of the predictions from the decoded signals. Moreover, this research demonstrates that by leveraging DNN-based image and signal understanding, it is feasible to facilitate real-time BCI applications in a natural environment. Additionally, the capability of DNNs to generate realistic synthetic data is shown to be a potential solution for reducing the requirement for costly data collection.
Work is also performed in addressing the well-known issues regarding subject bias in BCI models by generating data with reduced subject-specific features. The overall contribution of this thesis is to address the key fundamental limitations of BCI systems. These include the unyielding traditional experimentation procedure, the mandatory extended calibration stage, and sustaining accurate signal decoding in real time. These limitations lead to a fragile BCI system that is demanding to use and only suited for deployment in a controlled laboratory. Overall, the contributions of this research aim to improve the robustness of BCI systems and enable new applications for use in the real world.
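The decoding pipeline such a DNN learns end to end — temporal filtering, nonlinearity, pooling, and a probabilistic readout — can be illustrated with a deliberately tiny forward pass. This is a toy sketch with made-up, untrained weights, not the thesis's actual architecture; real models stack many such layers and learn the kernels from data.

```python
import math

def conv1d(signal, kernel):
    """Valid-mode 1-D correlation: the temporal-filtering stage of a CNN decoder."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def decode(signal, kernel, weight, bias):
    """Tiny conv -> ReLU -> mean-pool -> sigmoid binary readout."""
    feats = [max(0.0, x) for x in conv1d(signal, kernel)]
    pooled = sum(feats) / len(feats)
    return 1.0 / (1.0 + math.exp(-(weight * pooled + bias)))

# Hypothetical 100-sample EEG epoch and a 5-tap smoothing kernel.
epoch = [math.sin(0.1 * t) for t in range(100)]
prob = decode(epoch, [0.2] * 5, weight=1.0, bias=0.0)
print(round(prob, 3))  # a class probability strictly between 0 and 1
```

Training would adjust the kernel, weight, and bias by gradient descent on labelled epochs; the point here is only the shape of the computation that replaces hand-crafted feature extraction.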

    Applying cognitive electrophysiology to neural modelling of the attentional blink

    Get PDF
    This thesis proposes a connection between computational modelling of cognition and cognitive electrophysiology. We extend a previously published neural network model of working memory and temporal attention (Simultaneous Type Serial Token (ST2) model; Bowman & Wyble, 2007) that was designed to simulate human behaviour during the attentional blink, an experimental finding that seems to illustrate the temporal limits of conscious perception in humans. Due to its neural architecture, we can utilise the ST2 model's functionality to produce so-called virtual event-related potentials (virtual ERPs) by averaging over activation profiles of nodes in the network. Unlike predictions from textual models, the virtual ERPs from the ST2 model allow us to construe formal predictions concerning the EEG signal and associated cognitive processes in the human brain. The virtual ERPs are used to make predictions and propose explanations for the results of two experimental studies during which we recorded the EEG signal from the scalp of human participants. Using various analysis techniques, we investigate how target items are processed by the brain depending on whether they are presented individually or during the attentional blink. Particular emphasis is on the P3 component, which is commonly regarded as an EEG correlate of encoding items into working memory and thus seems to reflect conscious perception. Our findings are interpreted to validate the ST2 model and competing theories of the attentional blink. Virtual ERPs also allow us to make predictions for future experiments. Hence, we show how virtual ERPs from the ST2 model provide a powerful tool for both experimental design and the validation of cognitive models.
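Both real and virtual ERPs rest on the same operation: averaging many time-locked traces so that a consistent component (such as the P3) survives while trial-to-trial noise cancels. A minimal sketch with simulated trials — the bump position, noise level, and trial count are all invented for illustration:

```python
import math
import random

def virtual_erp(trials):
    """Average time-locked traces across trials; for a virtual ERP the
    traces are model-node activation profiles rather than recorded EEG."""
    n = len(trials)
    return [sum(trial[t] for trial in trials) / n for t in range(len(trials[0]))]

random.seed(0)
# 50 simulated trials: a P3-like Gaussian bump peaking at sample 30, plus noise.
trials = [[math.exp(-((t - 30) ** 2) / 20) + random.gauss(0, 0.5) for t in range(60)]
          for _ in range(50)]
erp = virtual_erp(trials)
peak = max(range(60), key=lambda t: erp[t])
print(peak)  # close to 30: averaging suppresses the per-trial noise
```

Averaging N trials shrinks the noise standard deviation by a factor of sqrt(N), which is why the bump that is invisible in any single trial dominates the average.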

    Toward an Imagined Speech-Based Brain Computer Interface Using EEG Signals

    Get PDF
    Individuals with physical disabilities face difficulties in communication. A number of neuromuscular impairments could limit people from using available communication aids, because such aids require some degree of muscle movement. This makes brain–computer interfaces (BCIs) a potentially promising alternative communication technology for these people. Electroencephalographic (EEG) signals are commonly used in BCI systems to capture non-invasively the neural representations of intended, internal and imagined activities that are not physically or verbally evident. Examples include motor and speech imagery activities. Since 2006, researchers have become increasingly interested in classifying different types of imagined speech from EEG signals. However, the field still has a limited understanding of several issues, including experiment design, stimulus type, training, calibration and the examined features. The main aim of the research in this thesis is to advance automatic recognition of imagined speech using EEG signals by addressing a variety of issues that have not been solved in previous studies. These include (1) improving the discrimination between imagined speech versus non-speech tasks, (2) examining temporal parameters to optimise the recognition of imagined words and (3) providing a new feature extraction framework for improving EEG-based imagined speech recognition by considering temporal information after reducing within-session temporal non-stationarities. For the discrimination of speech versus non-speech, EEG data was collected during the imagination of randomly presented and semantically varying words. The non-speech tasks involved attention to visual stimuli and resting. Time-domain and spatio-spectral features were examined in different time intervals. Above-chance-level classification accuracies were achieved for each word and for groups of words compared to the non-speech tasks.
To classify imagined words, EEG data related to the imagination of five words was collected. In addition to word classification, the impacts of experimental parameters on classification accuracy were examined. The optimisation of these parameters is important for improving the accuracy and speed of recognising unspoken speech in on-line applications. These parameters included different training set sizes, classification algorithms, feature extraction in different time intervals, and the use of imagination time length as a classification feature. Our extensive results showed that a Random Forest classifier, with features extracted using the Discrete Wavelet Transform from a fixed 4-second EEG time frame, yielded the highest average classification accuracy of 87.93% in the classification of five imagined words. To minimise within-class temporal variations, a novel feature extraction framework based on dynamic time warping (DTW) was developed. Using linear discriminant analysis as the classifier, the proposed framework yielded an average accuracy of 72.02% in the classification of imagined speech versus silence and 52.5% accuracy in the classification of five words. These results significantly outperformed a baseline configuration of state-of-the-art time-domain features.
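The DTW idea underlying the framework above is that two signals containing the same pattern at different speeds should compare as similar, which a sample-by-sample (Euclidean) distance cannot do. A textbook dynamic-programming implementation — not the thesis's full feature extraction framework — makes the point:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences: the minimum
    total point-wise cost over all monotone alignments of a against b."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# A sequence and a time-stretched copy of it align perfectly under DTW:
print(dtw_distance([0, 1, 2, 1, 0], [0, 1, 1, 2, 2, 1, 0]))  # → 0.0
```

In the thesis's setting the alignment reference would be a per-class template, so that epochs of the same imagined word are warped onto a common time base before feature extraction, reducing within-class temporal variation.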
