12 research outputs found

    Brain-to-text: Decoding spoken phrases from phone representations in the brain

    It has long been speculated whether communication between humans and machines based on natural-speech-related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones, or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity during speaking into the corresponding textual representation. Our results demonstrate that our system can achieve word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step toward human-machine communication based on imagined speech.
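    The reported figures follow the standard ASR definition of word error rate: the edit (Levenshtein) distance between the decoded and reference word sequences, normalized by the reference length. A minimal sketch of that metric (illustrative only, not the authors' implementation):

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance over reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution in a four-word reference gives a WER of 0.25,
# the best-case figure reported in the abstract.
print(wer("the brain decodes speech", "the brain encodes speech"))  # 0.25
```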

    Psychophysical magic: rendering the "invisible" visible

    Conscious and unconscious processing are two of the hottest topics in neuroscience. In research on unconscious processing, a direct measure is usually used to show that a target is not consciously perceived, while an indirect measure is used to show that the target can nevertheless influence visual processing. This unconscious processing is usually related to fast motor priming in a short period before conscious processing takes over. Here, I show that features of unconscious elements can influence visual information processing and are not related to motor priming. Interestingly, these features can become conscious even though the carriers of the features are no

    Harnessing Electrocorticographic Signals for Neuroscience and Neurosurgery

    Daily human activities, such as speaking, driving or listening to music, are produced by activations of neurons in the brain. Where, when and how these activations occur has been the subject of intense debate over the last decades. Traditional techniques to image the human brain, such as functional magnetic resonance imaging (fMRI) or electroencephalography (EEG), provide only limited information regarding where and when these activations take place. For that reason, critical information is currently missing regarding how neurons from different parts of the brain interact and coordinate their activity to implement behavior. This information is critical to understanding human behavior and to developing new medical diagnostic and treatment options for neurological disorders that compromise behavior, such as epilepsy or brain tumors. Recently, electrocorticography (ECoG) has been shown to provide an unprecedented opportunity to image the subtle dynamics of the human brain in action. ECoG is a technique traditionally used in the treatment of epileptic patients, and consists of recording brain signals from arrays of electrodes placed directly on the surface of the brain. The high quality of signals recorded with ECoG allows neuroscientists to investigate the temporally and spatially precise activation of groups of neurons in the human brain. In this dissertation, we take advantage of the possibilities offered by ECoG imaging to derive novel understanding of the precise temporal and spatial coordination of neuronal activity during behavior. We demonstrate that different parts of the brain dynamically interact and coordinate their activity to implement behavior. Furthermore, we translate our findings into two novel clinical applications that build on existing neurological procedures to treat patients suffering from epilepsy and brain tumors. The proposed applications improve the speed and safety of existing procedures and expand the number and type of patients that can benefit from them. Together, our results advance our understanding of the mechanisms implementing the coordination of the different brain regions necessary to produce behavior, and open new avenues for the development of safer clinical tools to treat those neurological disorders that compromise behavior.

    Asynchronous decoding of finger movements from ECoG signals using long-range dependencies conditional random fields

    Objective. In this work we propose the use of conditional random fields with long-range dependencies for the classification of finger movements from electrocorticographic recordings. Approach. The proposed method uses long-range dependencies that take into consideration the time lags between brain activity and the execution of the motor task. In addition, the proposed method models the dynamics of the task executed by the subject and uses information about these dynamics as prior information during the classification stage. Main results. The results show that incorporating temporal information about the executed task, as well as long-range dependencies between the brain signals and the labels, effectively increases the system's classification performance compared to state-of-the-art methods. Significance. The method proposed in this work makes use of probabilistic graphical models to incorporate temporal information into the classification of finger movements from electrocorticographic recordings. The proposed method highlights the importance of including prior information about the task that the subjects execute. As the results show, the combination of these two features produces a significant improvement in the system's classification performance.
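    The long-range dependencies described above can be pictured as feature functions that couple the label at time t to brain signals recorded at earlier times t − lag, reflecting the delay between cortical activity and movement execution. A minimal sketch of constructing such time-lagged feature vectors (hypothetical helper, not the authors' code; the actual CRF inference is omitted):

```python
import numpy as np

def lagged_features(signals, lags):
    """Stack past observations so a classifier at time t can see the
    signal at t - lag for each requested lag (zero-padded at the start)."""
    T, C = signals.shape
    cols = []
    for lag in lags:
        shifted = np.zeros((T, C))
        shifted[lag:] = signals[:T - lag] if lag else signals
        cols.append(shifted)
    return np.hstack(cols)

X = np.arange(10, dtype=float).reshape(5, 2)  # 5 time steps, 2 channels
F = lagged_features(X, lags=[0, 1, 2])
print(F.shape)  # (5, 6): original channels plus two lagged copies
```

    Each row of F then serves as the observation for one time step, letting a chain-structured model score labels against both current and delayed brain activity.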

    Word-level language modeling for P300 spellers based on discriminative graphical models

    Objective. In this work we propose a probabilistic graphical model framework that uses language priors at the level of words as a mechanism to increase the performance of P300-based spellers. Approach. This paper is concerned with brain-computer interfaces based on P300 spellers. Motivated by P300 spelling scenarios involving communication based on a limited vocabulary, we propose a probabilistic graphical model framework and an associated classification algorithm that uses learned statistical models of language at the level of words. Exploiting such high-level contextual information helps reduce the error rate of the speller. Main results. Our experimental results demonstrate that the proposed approach offers several advantages over existing methods. Most importantly, it increases the classification accuracy while reducing the number of times the letters need to be flashed, thereby increasing the communication rate of the system. Significance. The proposed approach models all the variables in the P300 speller in a unified framework and has the capability to correct errors in previous letters in a word, given the data for the current one. The structure of the model we propose allows the use of efficient inference algorithms, which in turn makes it possible to use this approach in real-time applications.
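    The core idea, scoring whole words rather than isolated letters, can be illustrated with a much simpler Bayesian re-ranking than the paper's graphical model: each candidate word is scored by its log prior plus the per-position log-likelihoods the P300 classifier assigned to its letters, which is how evidence for a later letter can overturn an earlier letter decision. A toy sketch (hypothetical names and numbers, not the authors' inference algorithm):

```python
import math

def rank_words(letter_scores, vocabulary, word_priors):
    """Score each candidate word: log word prior plus the sum of
    per-position log-likelihoods for its letters."""
    ranked = []
    for word in vocabulary:
        score = math.log(word_priors[word])
        for pos, letter in enumerate(word):
            score += math.log(letter_scores[pos][letter])
        ranked.append((score, word))
    return sorted(ranked, reverse=True)

# Per-position classifier outputs: P(this letter elicited the P300).
# The first position weakly favors "c", later positions are confident.
letter_scores = [{"c": 0.6, "b": 0.4},
                 {"a": 0.9, "o": 0.1},
                 {"t": 0.8, "x": 0.2}]
vocab = ["cat", "bat"]
priors = {"cat": 0.5, "bat": 0.5}
best = rank_words(letter_scores, vocab, priors)[0][1]
print(best)  # "cat"
```

    With a limited vocabulary, this kind of word-level scoring is what allows the speller to commit to fewer flashes per letter while keeping accuracy high.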

    A probabilistic graphical model for word-level language modeling in P300 spellers

    Motivated by P300 spelling scenarios involving communication based on a limited vocabulary, we propose a probabilistic graphical model-based framework and an associated classification algorithm that uses learned statistical prior models of language at the level of words. Exploiting such high-level contextual information helps reduce the error rate of the speller. The proposed approach models all the variables in the P300 speller in a unified framework and has the capability to correct errors in previous letters in a word, given the data for the current one. The structure of our model allows the use of efficient inference algorithms, which makes it possible to use this approach in real-time applications. Our experimental results demonstrate the advantages of the proposed method.

    Intraoperative mapping of expressive language cortex using passive real-time electrocorticography

    In this case report, we investigated the utility and practicality of passive intraoperative functional mapping of expressive language cortex using high-resolution electrocorticography (ECoG). The patient presented here experienced new-onset seizures caused by a medium-grade tumor in very close proximity to expressive language regions. In preparation for tumor resection, the patient underwent multiple functional language mapping procedures. We examined the relationship of results obtained with intraoperative high-resolution ECoG, extraoperative ECoG utilizing a conventional subdural grid, extraoperative electrical cortical stimulation (ECS) mapping, and functional magnetic resonance imaging (fMRI). Our results demonstrate that intraoperative mapping using high-resolution ECoG is feasible and, within minutes, produces results that are qualitatively concordant with those achieved by extraoperative mapping modalities. They also suggest that functional language mapping of expressive language areas with ECoG may prove useful in many intraoperative conditions given its time efficiency and safety. Finally, they demonstrate that integration of results from multiple functional mapping techniques, both intraoperative and extraoperative, may serve to improve the confidence in or precision of functional localization when pathology encroaches upon eloquent language cortex.