
    Classification of Overt and Covert Speech for Near-Infrared Spectroscopy-Based Brain Computer Interface

Published: 7 September 2018
    People suffering from neuromuscular disorders such as locked-in syndrome (LIS) are left in a paralyzed state with preserved awareness and cognition. In this study, it was hypothesized that changes in local hemodynamic activity, due to the activation of Broca's area during overt/covert speech, can be harnessed to create an intuitive Brain-Computer Interface based on Near-Infrared Spectroscopy (NIRS). A 12-channel square template was used to cover the inferior frontal gyrus, and changes in hemoglobin concentration corresponding to six aloud (overtly) and six silently (covertly) spoken words were collected from eight healthy participants. An unsupervised feature extraction algorithm was implemented with an optimized support vector machine for classification. For all participants, when considering overt and covert classes regardless of words, a classification accuracy of 92.88 ± 18.49% was achieved with oxy-hemoglobin (O2Hb) and 95.14 ± 5.39% with deoxy-hemoglobin (HHb) as a chromophore. For a six-active-class problem of overtly spoken words, 88.19 ± 7.12% accuracy was achieved for O2Hb and 78.82 ± 15.76% for HHb. Similarly, for a six-active-class classification of covertly spoken words, 79.17 ± 14.30% accuracy was achieved with O2Hb and 86.81 ± 9.90% with HHb as an absorber. These results indicate that a control paradigm based on covert speech can be reliably implemented into future Brain-Computer Interfaces (BCIs) based on NIRS. This research received no external funding.
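    As a rough illustration of the kind of pipeline this abstract describes, the sketch below pairs an unsupervised feature extractor (PCA, standing in for the unnamed algorithm) with a grid-searched SVM on synthetic trial data. The shapes, labels, and hyperparameter grid are assumptions for illustration, not the study's implementation.

```python
# Hypothetical sketch: unsupervised feature extraction (PCA as a stand-in)
# followed by a grid-searched SVM, as in the six-word classification task.
# Data shapes and parameters are illustrative, not the study's own.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Assumed layout: 12 channels x 10 time bins of O2Hb change per trial,
# six words x 20 trials.
X = rng.standard_normal((120, 12 * 10))
y = np.repeat(np.arange(6), 20)              # six-word class labels

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=20)),           # unsupervised feature extraction
    ("svm", SVC(kernel="rbf")),
])
grid = GridSearchCV(
    pipe,
    {"svm__C": [0.1, 1, 10], "svm__gamma": ["scale", 0.01]},
    cv=5,
)
# Nested cross-validation estimates the optimized SVM's accuracy.
print(cross_val_score(grid, X, y, cv=5).mean())
```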

    A bimodal deep learning architecture for EEG-fNIRS decoding of overt and imagined speech


    Speech Processes for Brain-Computer Interfaces

    Speech interfaces have become widely used and are integrated in many applications and devices. However, speech interfaces require the user to produce intelligible speech, which might be hindered by loud environments, concern about bothering bystanders, or a general inability to produce speech due to disabilities. Decoding a user's imagined speech instead of actual speech would solve this problem. Such a Brain-Computer Interface (BCI) based on imagined speech would enable fast and natural communication without the need to actually speak out loud. These interfaces could provide a voice to otherwise mute people.

    This dissertation investigates BCIs based on speech processes using functional Near-Infrared Spectroscopy (fNIRS) and Electrocorticography (ECoG), two brain activity imaging modalities on opposing ends of an invasiveness scale. Brain activity data have low signal-to-noise ratio and complex spatio-temporal and spectral coherence. To analyze these data, techniques from the areas of machine learning, neuroscience, and Automatic Speech Recognition are combined in this dissertation to facilitate robust classification of detailed speech processes while simultaneously illustrating the underlying neural processes.

    fNIRS is an imaging modality based on cerebral blood flow. It only requires affordable hardware and can be set up within minutes in a day-to-day environment, so it is ideally suited for convenient user interfaces. However, the hemodynamic processes measured by fNIRS are slow in nature, and the technology therefore offers poor temporal resolution. We investigate speech in fNIRS and demonstrate classification of speech processes for BCIs based on fNIRS.

    ECoG provides ideal signal properties by invasively measuring electrical potentials artifact-free directly on the brain surface. High spatial resolution and temporal resolution down to millisecond sampling provide localized information with accurate enough timing to capture the fast processes underlying speech production. This dissertation presents the Brain-to-Text system, which harnesses automatic speech recognition technology to decode a textual representation of continuous speech from ECoG. This could allow users to compose messages or issue commands through a BCI. While the decoding of a textual representation is unparalleled for device control and typing, direct communication is even more natural if the full expressive power of speech, including emphasis and prosody, could be provided. For this purpose, a second system is presented, which directly synthesizes neural signals into audible speech and could enable conversation with friends and family through a BCI.

    Up to now, both systems, Brain-to-Text and the synthesis system, operate on audibly produced speech. To bridge the gap to the final frontier of neural prostheses based on imagined speech processes, we investigate the differences between audibly produced and imagined speech and present first results towards BCIs based on imagined speech processes. This dissertation demonstrates the usage of speech processes as a paradigm for BCI for the first time. Speech processes offer a fast and natural interaction paradigm which will help patients and healthy users alike to communicate with computers and with friends and family efficiently through BCIs.
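    The abstract does not spell out its feature pipeline, but a common ECoG front end for speech decoding extracts high-gamma band power envelopes and frames them as input for an ASR-style decoder. The sketch below illustrates that general idea on synthetic data; the sampling rate, band edges, and window sizes are assumed values, not the dissertation's.

```python
# Hedged sketch of a typical ECoG feature pipeline for speech decoding:
# high-gamma band power envelopes, framed into overlapping windows.
# All rates, band edges, and shapes are assumptions for illustration.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000                                    # assumed sampling rate (Hz)
ecog = np.random.randn(16, 10 * fs)          # 16 channels, 10 s of signal

# Band-pass to the high-gamma range (70-170 Hz assumed here).
b, a = butter(4, [70, 170], btype="bandpass", fs=fs)
high_gamma = filtfilt(b, a, ecog, axis=1)
envelope = np.abs(hilbert(high_gamma, axis=1))   # instantaneous amplitude

win, hop = int(0.05 * fs), int(0.01 * fs)    # 50 ms windows, 10 ms hop
frames = np.stack([envelope[:, i:i + win].mean(axis=1)
                   for i in range(0, envelope.shape[1] - win, hop)])
print(frames.shape)  # (n_frames, n_channels): input to an ASR-style decoder
```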

    Physiologically attentive user interface for robot teleoperation: real time emotional state estimation and interface modification using physiology, facial expressions and eye movements

    We developed a framework for Physiologically Attentive User Interfaces to reduce the interaction gap between humans and machines in life-critical robot teleoperation. Our system utilizes the emotional state awareness capabilities of psychophysiology and classifies three emotional states (Resting, Stress, and Workload) by analysing physiological data along with facial expressions and eye movements. This emotional state estimate is then used to create a dynamic interface that updates in real time with respect to the user's emotional state. The results of a preliminary evaluation of the developed emotional state classifier for robot teleoperation are presented, and its future possibilities are discussed.
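    A minimal sketch of the kind of feature-level fusion such a classifier might use follows, with synthetic stand-ins for the physiological, facial-expression, and eye-movement features. The feature counts and the random-forest choice are assumptions, not the authors' design.

```python
# Illustrative sketch only: fusing physiological, facial-expression, and
# eye-movement features to classify Resting / Stress / Workload.
# Feature counts, labels, and the classifier choice are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
physio = rng.standard_normal((300, 8))       # e.g. HR, EDA, respiration stats
face = rng.standard_normal((300, 6))         # facial expression descriptors
eyes = rng.standard_normal((300, 4))         # fixation / saccade statistics
X = np.hstack([physio, face, eyes])          # simple feature-level fusion
y = rng.integers(0, 3, 300)                  # 0=Resting, 1=Stress, 2=Workload

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```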

    Speech Recognition via fNIRS Based Brain Signals

    In this paper, we present the first evidence that perceived speech can be identified from listeners' brain signals measured via functional near-infrared spectroscopy (fNIRS), a non-invasive, portable, and wearable neuroimaging technique suitable for ecologically valid settings. In this study, participants listened to audio clips containing English stories while their prefrontal and parietal cortices were monitored with fNIRS. Machine learning was applied to train predictive models using fNIRS data from a subject pool, in order to predict which part of a story a new subject, not in the pool, was listening to based on the brain's hemodynamic response as measured by fNIRS. fNIRS signals can vary considerably from subject to subject due to differences in head size, head shape, and the spatial locations of brain functional regions. To overcome this difficulty, generalized canonical correlation analysis (GCCA) was adopted to extract latent variables shared among the listeners, before applying principal component analysis (PCA) for dimension reduction and logistic regression for classification. A 74.7% average accuracy was achieved for differentiating between two 50 s long story segments, and a 43.6% average accuracy was achieved for differentiating four 25 s long story segments. These results suggest the potential of an fNIRS-based approach for building a speech decoding brain-computer interface and developing a new type of neural prosthetic system.
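    The GCCA, PCA, and logistic regression stages can be sketched compactly. The toy example below uses a simple SVD-based (MAXVAR-style) GCCA and pools the latents across subjects for illustration; it does not reproduce the paper's leave-one-subject-out protocol, and all shapes, labels, and component counts are assumptions.

```python
# Minimal sketch of a GCCA -> PCA -> logistic regression pipeline.
# SVD-based MAXVAR-style GCCA; shapes and labels are illustrative only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
n_subj, n_samples, n_ch = 8, 200, 46
views = [rng.standard_normal((n_samples, n_ch)) for _ in range(n_subj)]

# GCCA (MAXVAR flavour): orthonormalize each subject's view, then take the
# leading left singular vectors of the concatenated bases as shared latents.
bases = [np.linalg.svd(v - v.mean(0), full_matrices=False)[0] for v in views]
G = np.linalg.svd(np.hstack(bases), full_matrices=False)[0][:, :10]

# Per-subject latents: project each view's basis onto the shared space.
latents = [U @ (U.T @ G) for U in bases]
X = np.vstack(latents)                        # pooled latent features
story_labels = rng.integers(0, 2, n_samples)  # two story segments (toy)
y = np.tile(story_labels, n_subj)             # same timeline for every subject

clf = make_pipeline(PCA(n_components=5), LogisticRegression(max_iter=1000))
print(cross_val_score(clf, X, y, cv=5).mean())
```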

    Neurolinguistics Research Advancing Development of a Direct-Speech Brain-Computer Interface

    A direct-speech brain-computer interface (DS-BCI) acquires neural signals corresponding to imagined speech, then processes and decodes these signals to produce a linguistic output in the form of phonemes, words, or sentences. Recent research has shown the potential of neurolinguistics to enhance decoding approaches to imagined speech with the inclusion of semantics and phonology in experimental procedures. As neurolinguistics research findings are beginning to be incorporated within the scope of DS-BCI research, it is our view that a thorough understanding of imagined speech, and its relationship with overt speech, must be considered an integral feature of research in this field. With a focus on imagined speech, we provide a review of the most important neurolinguistics research informing the field of DS-BCI and suggest how this research may be utilized to improve current experimental protocols and decoding techniques. Our review of the literature supports a cross-disciplinary approach to DS-BCI research, in which neurolinguistics concepts and methods are utilized to aid development of a naturalistic mode of communication. Subject Areas: Cognitive Neuroscience; Computer Science; Hardware Interface

    Wearable brain computer interfaces with near infrared spectroscopy

    Brain computer interfaces (BCIs) are devices capable of relaying information directly from the brain to a digital device. BCIs have been proposed for a diverse range of clinical and commercial applications; for example, to allow paralyzed subjects to communicate, or to improve human-machine interactions. At their core, BCIs need to predict the current state of the brain from variables measuring functional physiology. Functional near-infrared spectroscopy (fNIRS) is a non-invasive optical technology able to measure hemodynamic changes in the brain. Along with electroencephalography (EEG), fNIRS is the only technique that allows non-invasive and portable sensing of brain signals. Portability and wearability are very desirable characteristics for BCIs, as they allow them to be used in contexts beyond the laboratory, extending their usability for clinical and commercial applications, as well as for ecologically valid research.

    Unfortunately, due to limited access to the brain, non-invasive BCIs tend to suffer from low accuracy in their estimation of the brain state. It has been suggested that feedback could increase BCI accuracy, as the brain normally relies on sensory feedback to adjust its strategies. Despite this, presenting relevant and accurate feedback in a timely manner can be challenging when processing fNIRS signals, as they tend to be contaminated by physiological and motion artifacts. In this dissertation, I present the hardware and software solutions we proposed and developed to deal with these challenges. First, I will talk about ninjaNIRS, the wearable open source fNIRS device we developed in our laboratory, which could help fNIRS neuroscience and BCIs become more accessible. Next, I will present an adaptive filter strategy to recover the neural responses from fNIRS signals in real time, which could be used for feedback and classification in a BCI paradigm.

    We showed that our wearable fNIRS device can operate autonomously for up to three hours and can be easily carried in a backpack, while offering noise equivalent power comparable to commercial devices. Our adaptive multimodal Kalman filter strategy provided a six-fold increase in contrast-to-noise ratio of the brain signals compared to standard filtering, while being able to process at least 24 channels at 400 samples per second using a standard computer. This filtering strategy, along with visual feedback during a left vs. right motor imagery task, showed a relative increase in accuracy of 37.5% compared to not using feedback. With this, we show that it is possible to present relevant feedback for fNIRS BCI in real time. The findings of this dissertation may help improve the design of future fNIRS BCIs, and thus increase the usability and reliability of this technology.
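    As a toy illustration of adaptive Kalman filtering for fNIRS denoising, the sketch below tracks a neural signal together with a time-varying regression weight on a short-separation (scalp) reference channel, a common stand-in for the physiological regressors such filters use. The state model, noise levels, and signals are all assumed values, not the dissertation's multimodal filter.

```python
# Hedged sketch of an adaptive Kalman filter for fNIRS denoising. The state
# holds the neural signal and an adaptive regression weight on a
# short-separation reference channel; all noise levels are assumed.
import numpy as np

rng = np.random.default_rng(3)
n = 4000                                      # samples (e.g. 10 s at 400 Hz)
ss = rng.standard_normal(n)                   # short-separation reference
neural = np.sin(np.linspace(0, 8 * np.pi, n))  # toy hemodynamic response
z = neural + 0.8 * ss + 0.1 * rng.standard_normal(n)  # long channel

x = np.zeros(2)                               # state: [neural, beta]
P = np.eye(2)
Q = np.diag([1e-3, 1e-5])                     # process noise (assumed)
R = 0.01                                      # measurement noise (assumed)
est = np.empty(n)
for t in range(n):
    P = P + Q                                 # predict (identity dynamics)
    H = np.array([1.0, ss[t]])                # measurement: neural + beta*ss
    K = P @ H / (H @ P @ H + R)               # Kalman gain
    x = x + K * (z[t] - H @ x)                # update state with innovation
    P = P - np.outer(K, H) @ P
    est[t] = x[0]                             # denoised neural estimate
print(np.corrcoef(est, neural)[0, 1])         # recovery quality (toy metric)
```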