9 research outputs found

    Direct Classification of All American English Phonemes Using Signals From Functional Speech Motor Cortex

    Although brain-computer interfaces (BCIs) can be used in several different ways to restore communication, communicative BCIs have not approached the rate or efficiency of natural human speech. Electrocorticography (ECoG) has precise spatiotemporal resolution that enables recording of brain activity distributed over a wide area of cortex, such as during speech production. In this study, we investigated words that span the entire set of phonemes in the General American accent using ECoG with 4 subjects. We classified phonemes with up to 36% accuracy when classifying all phonemes and up to 63% accuracy for a single phoneme. Further, misclassified phonemes followed the articulatory organization described in the phonology literature, aiding classification of whole words. Precise temporal alignment to phoneme onset was crucial for classification success. We identified specific spatiotemporal features that aid classification, which could guide future applications. Word identification was equivalent to information transfer rates as high as 3.0 bits/s (33.6 words/min), supporting pursuit of speech articulation for BCI control.
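    The abstract does not state how its information transfer rate was computed, but BCI studies conventionally report the Wolpaw ITR: bits per selection derived from the number of classes and the classification accuracy, scaled by selection rate. A minimal sketch of that standard formula (the class count and selection rate below are hypothetical illustration values, not figures from the study):

    ```python
    import math

    def wolpaw_itr_bits_per_selection(n_classes: int, accuracy: float) -> float:
        """Bits conveyed per selection under the Wolpaw ITR model,
        which assumes equiprobable classes and errors spread uniformly
        over the remaining classes."""
        if accuracy >= 1.0:
            return math.log2(n_classes)
        return (math.log2(n_classes)
                + accuracy * math.log2(accuracy)
                + (1 - accuracy) * math.log2((1 - accuracy) / (n_classes - 1)))

    # Hypothetical example: a 4-class decoder at 50% accuracy,
    # making one selection per second.
    bits_per_sel = wolpaw_itr_bits_per_selection(4, 0.50)
    itr_bits_per_s = bits_per_sel * 1.0  # selections per second
    ```

    Multiplying bits per selection by selections per unit time gives bits/s, which is how a figure like 3.0 bits/s maps onto a words-per-minute equivalent.
    
    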

    Investigation of Speech for Communicative Brain-Computer Interface

    Recent successes in decoding speech from cortical signals provide hope for restoring function to those who have lost the ability to speak normally. Despite these successes, the exact cortical representation and functional dynamics of speech production remain unknown. Prominent theoretical models of speech production in the literature differ in their hypothesized functional organization of speech motor cortex. Using electrocorticography, with its fine spatial and temporal resolution, we can analyze the exact spatial and temporal cortical dynamics related to complex speech mechanisms. This dissertation addresses various unknowns in the current speech brain-computer interface literature and recommends a methodology for successful speech classification from electrocorticographic electrodes. Addressing the current limitations and barriers to widespread BCI adoption, I here seek to add to the engineering merit of the communicative BCI field with the mechanistic analysis and results of three separate studies. In the first study, I seek to determine what factors contribute to successful phonemic decoding of an ECoG signal. In the second study, I seek to determine the cortical representation of phonemic categorization in speech production. In the third study, I leverage classification results to address the structure of cortical correlates of speech production. The results of these studies outline a set of guidelines for future speech-BCI research that will work towards useful speech-BCI neuroprosthetics.

    Generating Natural, Intelligible Speech From Brain Activity in Motor, Premotor, and Inferior Frontal Cortices

    Neural interfaces that directly produce intelligible speech from brain activity would allow people with severe impairment from neurological disorders to communicate more naturally. Here, we record neural population activity in motor, premotor and inferior frontal cortices during speech production using electrocorticography (ECoG) and show that ECoG signals alone can be used to generate intelligible speech output that can preserve conversational cues. To produce speech directly from neural data, we adapted a method from the field of speech synthesis called unit selection, in which units of speech are concatenated to form audible output. In our approach, which we call Brain-To-Speech, we chose subsequent units of speech based on the measured ECoG activity to generate audio waveforms directly from the neural recordings. Brain-To-Speech employed the user's own voice to generate speech that sounded very natural and included features such as prosody and accentuation. By investigating the brain areas involved in speech production separately, we found that speech motor cortex provided more information for the reconstruction process than the other cortical areas.
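    The core idea of unit selection driven by neural data can be sketched as a nearest-neighbor lookup: each stored speech unit (cut from the user's own recordings) carries the neural feature vector observed when it was spoken, and at synthesis time the unit whose stored features best match the current ECoG frame is appended to the output. This is an illustrative simplification under assumed data structures (the dictionary keys and Euclidean cost below are this sketch's choices, not the paper's); real unit-selection systems typically also weigh a concatenation cost between adjacent units:

    ```python
    import math

    def nearest_unit(ecog_frame, codebook):
        """Return the speech unit whose stored neural feature vector is
        closest (Euclidean distance) to the current ECoG feature frame."""
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        return min(codebook, key=lambda unit: dist(unit["features"], ecog_frame))

    def brain_to_speech(ecog_frames, codebook):
        """Concatenate the audio samples of the best-matching unit
        for each incoming neural feature frame."""
        audio = []
        for frame in ecog_frames:
            audio.extend(nearest_unit(frame, codebook)["audio"])
        return audio
    ```

    Because the codebook is built from the speaker's own recordings, the concatenated output inherits the speaker's voice quality, which is how prosody and accentuation can be preserved.
    
    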

    Action prediction in younger versus older adults: Neural correlates of motor familiarity

    Generating predictions during action observation is essential for efficient navigation through our social environment. With age, sensitivity in action prediction declines. In younger adults, the action observation network (AON), consisting of premotor, parietal and occipitotemporal cortices, has been implicated in transforming executed and observed actions into a common code. Much less is known about age-related changes in the neural representation of observed actions. Using fMRI, the present study measured brain activity in younger and older adults during the prediction of temporarily occluded actions (figure skating elements and simple movement exercises). All participants were highly familiar with the movement exercises, whereas only some participants were experienced figure skaters. With respect to the AON, the results confirm that this network was preferentially engaged for the more familiar movement exercises. Compared to younger adults, older adults recruited visual regions to perform the task and, additionally, the hippocampus and caudate when the observed actions were familiar to them. Thus, instead of effectively exploiting the sensorimotor matching properties of the AON, older adults seemed to rely predominantly on the visual dynamics of the observed actions to perform the task. Our data further suggest that the caudate played an important role during the prediction of the less familiar figure skating elements in better-performing groups. Together, these findings show that action prediction engages a distributed network in the brain, which is modulated by the content of the observed actions and the age and experience of the observer.

    5th International Symposium on Focused Ultrasound
