55 research outputs found

    Speech Processes for Brain-Computer Interfaces

    Get PDF
    Speech interfaces have become widely used and are integrated into many applications and devices. However, speech interfaces require the user to produce intelligible speech, which may be hindered by loud environments, concerns about disturbing bystanders, or the general inability to produce speech due to disabilities. Decoding a user's imagined speech instead of actual speech would solve this problem. Such a Brain-Computer Interface (BCI) based on imagined speech would enable fast and natural communication without the need to actually speak out loud. These interfaces could provide a voice to otherwise mute people. This dissertation investigates BCIs based on speech processes using functional Near-Infrared Spectroscopy (fNIRS) and Electrocorticography (ECoG), two brain activity imaging modalities on opposing ends of the invasiveness scale. Brain activity data have a low signal-to-noise ratio and complex spatio-temporal and spectral coherence. To analyze these data, techniques from machine learning, neuroscience and Automatic Speech Recognition are combined in this dissertation to facilitate robust classification of detailed speech processes while simultaneously illustrating the underlying neural processes. fNIRS is an imaging modality based on cerebral blood flow. It requires only affordable hardware and can be set up within minutes in a day-to-day environment, making it ideally suited for convenient user interfaces. However, the hemodynamic processes measured by fNIRS are slow in nature, so the technology offers poor temporal resolution. We investigate speech in fNIRS and demonstrate classification of speech processes for fNIRS-based BCIs. ECoG provides ideal signal properties by invasively measuring electrical potentials artifact-free directly on the brain surface. High spatial resolution and temporal resolution down to millisecond sampling provide localized information with timing accurate enough to capture the fast processes underlying speech production. This dissertation presents the Brain-to-Text system, which harnesses automatic speech recognition technology to decode a textual representation of continuous speech from ECoG. This could allow users to compose messages or issue commands through a BCI. While the decoding of a textual representation is unparalleled for device control and typing, direct communication is even more natural if the full expressive power of speech, including emphasis and prosody, can be provided. For this purpose, a second system is presented that directly synthesizes neural signals into audible speech, which could enable conversation with friends and family through a BCI. Up to now, both the Brain-to-Text and the synthesis system operate on audibly produced speech. To bridge the gap to the final frontier of neural prostheses based on imagined speech processes, we investigate the differences between audibly produced and imagined speech and present first results towards BCIs based on imagined speech processes. This dissertation demonstrates, for the first time, the use of speech processes as a paradigm for BCI. Speech processes offer a fast and natural interaction paradigm that will help patients and healthy users alike communicate efficiently with computers and with friends and family through BCIs.
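    As a concrete illustration of the synthesis idea, the sketch below regresses log-mel spectrogram frames from windowed neural features with ridge regression and scores the reconstruction by spectral correlation. This is a minimal stand-in, not the dissertation's actual pipeline: the synthetic data, feature dimensions, context width and the choice of ridge regression are all assumptions for illustration.

    ```python
    # Minimal sketch (synthetic data): regress speech spectral frames from
    # windowed neural features and score the reconstruction per mel band.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical stand-ins for time-aligned recordings:
    #   neural: (n_frames, n_channels) band-power features from the implant
    #   mel:    (n_frames, n_mels)     log-mel spectrogram of the spoken audio
    n_frames, n_channels, n_mels, context = 2000, 64, 23, 9
    neural = rng.standard_normal((n_frames, n_channels))
    mel = rng.standard_normal((n_frames, n_mels))

    def add_context(x, width):
        """Stack +/- width neighbouring frames so each sample sees a short window."""
        shifted = [np.roll(x, s, axis=0) for s in range(-width, width + 1)]
        return np.concatenate(shifted, axis=1)

    X = add_context(neural, context)[context:-context]
    Y = mel[context:-context]

    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, shuffle=False)
    model = Ridge(alpha=10.0).fit(X_tr, Y_tr)
    Y_hat = model.predict(X_te)

    # Per-band correlation between reconstructed and reference spectrograms; a
    # waveform could then be synthesized from Y_hat with a vocoder or Griffin-Lim.
    corr = [np.corrcoef(Y_hat[:, i], Y_te[:, i])[0, 1] for i in range(n_mels)]
    print(f"mean spectral correlation: {np.mean(corr):.2f}")
    ```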

    Towards a wireless open source instrument: functional Near-Infrared Spectroscopy in mobile neuroergonomics and BCI applications

    Get PDF
    Brain-Computer Interfaces (BCIs) and neuroergonomics research have high requirements regarding robustness and mobility. Additionally, fast applicability and customization are desired. Functional Near-Infrared Spectroscopy (fNIRS) is an increasingly established technology with the potential to satisfy these conditions. EEG acquisition technology, currently one of the main modalities used for mobile brain activity assessment, is widespread and openly accessible, and thus easily customizable. fNIRS technology, on the other hand, must either be bought as a predefined commercial solution or developed from scratch using published literature. To help reduce the time and effort of future custom designs for research purposes, we present our approach toward an open source multichannel stand-alone fNIRS instrument for mobile NIRS-based neuroimaging, neuroergonomics and BCI/BMI applications. The instrument is low-cost, miniaturized, wireless and modular, and is openly documented on www.opennirs.org. It provides features such as a scalable channel number, configurable regulated light intensities, programmable gain and lock-in amplification. In this paper, the system concept, hardware, software and mechanical implementation of the lightweight stand-alone instrument are presented, and the evaluation and verification results of the instrument's hardware and physiological fNIRS functionality are described. Its capability to measure brain activity is demonstrated by qualitative signal assessments and a quantitative mental-arithmetic-based BCI study with 12 subjects.
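    The lock-in amplification mentioned above is what lets a frequency-multiplexed fNIRS front-end separate modulated light sources from ambient light and broadband noise. The sketch below demonstrates the digital lock-in principle on synthetic data; the sampling rate, modulation frequency and filter settings are illustrative assumptions, not the values used in the openNIRS hardware.

    ```python
    # Digital lock-in demodulation on a synthetic photodiode signal: multiply by
    # in-phase/quadrature references at the LED modulation frequency, low-pass
    # filter, and recover the slowly varying (hemodynamic-like) amplitude.
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 10_000.0                     # ADC sampling rate in Hz (assumed)
    f_mod = 1_000.0                   # LED modulation frequency in Hz (assumed)
    t = np.arange(0, 2.0, 1.0 / fs)   # 2 s of detector samples

    # Synthetic detector signal: modulated light with a slow amplitude change,
    # plus ambient light (DC offset) and broadband noise.
    amplitude = 1.0 + 0.05 * np.sin(2 * np.pi * 0.2 * t)
    detector = (amplitude * np.sin(2 * np.pi * f_mod * t)
                + 0.5 * np.random.randn(t.size) + 2.0)

    i_ref = np.sin(2 * np.pi * f_mod * t)
    q_ref = np.cos(2 * np.pi * f_mod * t)
    b, a = butter(4, 5.0 / (fs / 2))             # 5 Hz low-pass keeps the slow signal
    i_comp = filtfilt(b, a, detector * i_ref)
    q_comp = filtfilt(b, a, detector * q_ref)
    recovered = 2.0 * np.sqrt(i_comp**2 + q_comp**2)   # amplitude at f_mod

    print(f"recovered mean amplitude: {recovered.mean():.3f} (expected ~1.0)")
    ```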

    Synthesizing Speech from Intracranial Depth Electrodes using an Encoder-Decoder Framework

    Full text link
    Speech neuroprostheses have the potential to enable communication for people with dysarthria or anarthria. Recent advances have demonstrated high-quality text decoding and speech synthesis from electrocorticographic grids placed on the cortical surface. Here, we investigate a less invasive measurement modality in three participants, namely stereotactic EEG (sEEG), which provides sparse sampling from multiple brain regions, including subcortical regions. To evaluate whether sEEG can also be used to synthesize high-quality audio from neural recordings, we employ a recurrent encoder-decoder model based on modern deep learning methods. We find that speech can indeed be reconstructed with correlations up to 0.8 from these minimally invasive recordings, despite limited amounts of training data.
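    To make the modeling approach concrete, the sketch below shows a recurrent encoder-decoder in PyTorch that maps a window of sEEG features to mel-spectrogram frames with teacher forcing. The layer sizes, feature dimensions and training loop are illustrative assumptions, not the architecture reported in the paper.

    ```python
    # Toy recurrent encoder-decoder: encode the neural sequence with a GRU and
    # decode spectrogram frames conditioned on the encoder's final state.
    import torch
    import torch.nn as nn

    class Seq2SeqSynth(nn.Module):
        def __init__(self, n_seeg_feats, n_mels, hidden=128):
            super().__init__()
            self.encoder = nn.GRU(n_seeg_feats, hidden, batch_first=True)
            self.decoder = nn.GRU(n_mels, hidden, batch_first=True)
            self.project = nn.Linear(hidden, n_mels)

        def forward(self, seeg, mel_shifted):
            _, h = self.encoder(seeg)               # final hidden state conditions decoding
            out, _ = self.decoder(mel_shifted, h)   # teacher forcing with shifted targets
            return self.project(out)

    # Synthetic stand-in batch: 8 trials, 100 frames, 64 sEEG features, 23 mel bands.
    seeg = torch.randn(8, 100, 64)
    mel = torch.randn(8, 100, 23)
    mel_in = torch.cat([torch.zeros(8, 1, 23), mel[:, :-1]], dim=1)

    model = Seq2SeqSynth(n_seeg_feats=64, n_mels=23)
    optim = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for step in range(5):                           # a few illustrative updates
        pred = model(seeg, mel_in)
        loss = loss_fn(pred, mel)
        optim.zero_grad()
        loss.backward()
        optim.step()
        print(f"step {step}: mse = {loss.item():.3f}")
    ```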

    Decoding executed and imagined grasping movements from distributed non-motor brain areas using a Riemannian decoder

    Get PDF
    Using brain activity directly as input for assistive tool control can circumvent muscular dysfunction and increase functional independence for physically impaired people. The motor cortex is commonly targeted for recordings, while growing evidence shows that decodable movement-related neural activity exists outside of the motor cortex. Several decoding studies have demonstrated significant decoding from distributed areas separately. Here, we combine information from all recorded non-motor brain areas and decode executed and imagined movements using a Riemannian decoder. We recorded neural activity from 8 epilepsy patients implanted with stereotactic-electroencephalographic (sEEG) electrodes while they performed executed and imagined grasping tasks. Before decoding, we excluded all contacts in or adjacent to the central sulcus. The decoder extracts a low-dimensional representation with a varying number of components and classifies move versus no-move using a minimum-distance-to-geometric-mean Riemannian classifier. We show that executed and imagined movements can be decoded from distributed non-motor brain areas using a Riemannian decoder, reaching an area under the receiver operating characteristic curve of 0.83 ± 0.11. Furthermore, we highlight the distributed nature of the movement-related neural activity, as no single brain area is the main driver of performance. Our decoding results demonstrate a first application of a Riemannian decoder on sEEG data and show that it is able to decode from distributed brain-wide recordings outside of the motor cortex. This brief report highlights the potential of exploring motor-related neural activity beyond the motor cortex, as many areas contain decodable information.
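    The classifier at the core of this approach operates on trial covariance matrices rather than raw signals. The sketch below implements a bare-bones minimum-distance-to-mean decoder on synthetic data using the affine-invariant Riemannian distance and a log-Euclidean mean as a simple proxy for the geometric mean; it omits the dimensionality-reduction step and is not the authors' implementation (libraries such as pyriemann provide full-featured versions).

    ```python
    # Minimum-distance-to-mean classification of move vs. no-move trials from
    # their channel covariance matrices (synthetic data, simplified decoder).
    import numpy as np

    def spd_logm(C):
        w, V = np.linalg.eigh(C)
        return V @ np.diag(np.log(w)) @ V.T

    def spd_expm(S):
        w, V = np.linalg.eigh(S)
        return V @ np.diag(np.exp(w)) @ V.T

    def spd_invsqrtm(C):
        w, V = np.linalg.eigh(C)
        return V @ np.diag(w ** -0.5) @ V.T

    def airm_distance(A, B):
        """Affine-invariant Riemannian distance between SPD matrices."""
        iS = spd_invsqrtm(A)
        return np.linalg.norm(spd_logm(iS @ B @ iS), ord="fro")

    def log_euclidean_mean(covs):
        """Log-Euclidean mean, used here as a simple proxy for the geometric mean."""
        return spd_expm(np.mean([spd_logm(C) for C in covs], axis=0))

    rng = np.random.default_rng(1)
    n_trials, n_channels, n_samples = 40, 16, 250

    def simulate(scale):
        """Synthetic trials whose first channels gain power in the 'move' class."""
        gains = np.ones(n_channels)
        gains[:4] = scale
        X = rng.standard_normal((n_trials, n_channels, n_samples)) * gains[None, :, None]
        return np.array([x @ x.T / n_samples for x in X])     # trial covariances

    covs = np.concatenate([simulate(1.0), simulate(1.8)])     # no-move, move
    labels = np.array([0] * n_trials + [1] * n_trials)

    train = np.arange(0, len(covs), 2)                        # simple even/odd split
    test = np.arange(1, len(covs), 2)
    means = {c: log_euclidean_mean(covs[train][labels[train] == c]) for c in (0, 1)}

    # Assign each held-out trial to the class with the nearest mean covariance.
    pred = np.array([min(means, key=lambda c: airm_distance(means[c], C)) for C in covs[test]])
    print(f"move/no-move accuracy on synthetic data: {np.mean(pred == labels[test]):.2f}")
    ```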

    Brain-to-text: Decoding spoken phrases from phone representations in the brain

    Get PDF
    It has long been speculated whether communication between humans and machines based on natural speech-related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity during speaking into the corresponding textual representation. Our results demonstrate that our system can achieve word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step toward human-machine communication based on imagined speech.
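    The sketch below illustrates the phone-based decoding idea on synthetic data: a Gaussian model per phone scores each neural feature frame, and a Viterbi search with simple bigram transitions yields the most likely phone sequence. The toy phone set, features and transition model are assumptions for illustration; the actual Brain-To-Text system builds on a full ASR stack with a pronunciation dictionary and language model.

    ```python
    # Toy phone decoder: per-phone Gaussian frame models plus Viterbi search.
    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(2)
    phones = ["AH", "B", "S", "sil"]
    n_feats = 8

    # Toy "training": one Gaussian per phone over neural feature frames.
    means = {p: rng.standard_normal(n_feats) for p in phones}
    models = {p: multivariate_normal(mean=means[p], cov=np.eye(n_feats)) for p in phones}

    # Bigram transitions with a strong self-loop so decoded phones stay stable.
    n = len(phones)
    trans = np.full((n, n), np.log(0.1 / (n - 1)))
    np.fill_diagonal(trans, np.log(0.9))

    def viterbi(frames):
        """Most likely phone per frame under the Gaussian + bigram model."""
        T = len(frames)
        logp = np.array([[models[p].logpdf(f) for p in phones] for f in frames])
        delta = np.zeros((T, n))
        back = np.zeros((T, n), dtype=int)
        delta[0] = logp[0]
        for t in range(1, T):
            scores = delta[t - 1][:, None] + trans      # rows: previous phone
            back[t] = scores.argmax(axis=0)
            delta[t] = scores.max(axis=0) + logp[t]
        path = [int(delta[-1].argmax())]
        for t in range(T - 1, 0, -1):                   # trace the best path back
            path.append(int(back[t][path[-1]]))
        return [phones[i] for i in reversed(path)]

    # Decode a synthetic utterance whose frames were drawn from "B", "AH", "S".
    truth = ["B"] * 5 + ["AH"] * 5 + ["S"] * 5
    frames = [rng.multivariate_normal(means[p], 0.3 * np.eye(n_feats)) for p in truth]
    decoded = viterbi(frames)
    print("frame accuracy:", np.mean([d == t for d, t in zip(decoded, truth)]))
    ```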

    Arbitrary and informed decision making

    No full text