526 research outputs found

    Towards Neural Decoding of Imagined Speech based on Spoken Speech

    Decoding imagined speech from human brain signals is a challenging and important problem that may enable communication via brain signals alone. While imagined speech can serve as a paradigm for silent communication via brain signals, it is hard to collect enough stable data to train a decoding model. Meanwhile, spoken speech data is relatively easy to obtain, suggesting the value of utilizing spoken speech brain signals to decode imagined speech. In this paper, we performed a preliminary analysis to find out whether it is possible to utilize spoken speech electroencephalography data to decode imagined speech, by simply applying a pre-trained model, trained on spoken speech brain signals, to imagined speech. While the classification performance of a model trained and validated solely on imagined speech data was 30.5 %, the spoken-speech-based classifier transferred to imagined speech data showed an average accuracy of 26.8 %, which was not statistically significantly different from the imagined-speech-based classifier (p = 0.0983, chi-square = 4.64). For a more comprehensive analysis, we compared this result with a visual imagery dataset, which would naturally be less related to spoken speech than imagined speech is. Visual imagery showed a solely-trained performance of 31.8 % and a transferred performance of 26.3 %, a statistically significant difference (p = 0.022, chi-square = 7.64). Our results imply the potential of applying spoken speech to decode imagined speech, as well as underlying features common to both. Comment: 4 pages, 2 figures.
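The transfer-then-compare protocol this abstract describes can be sketched in a few lines. This is a minimal illustration on synthetic feature vectors, not the authors' pipeline: the feature dimensions, trial counts, and logistic-regression classifier are all assumptions, and the chi-square comparison of correct/incorrect counts is one plausible reading of the test reported in the abstract.

```python
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical stand-ins for per-trial EEG feature vectors:
# 200 spoken-speech and 200 imagined-speech trials, 4 classes, 16 features.
n_trials, n_feat, n_classes = 200, 16, 4
class_means = rng.normal(0, 1, (n_classes, n_feat))
y_spoken = rng.integers(0, n_classes, n_trials)
y_imagined = rng.integers(0, n_classes, n_trials)
X_spoken = class_means[y_spoken] + rng.normal(0, 2, (n_trials, n_feat))
# Imagined-speech features share the class structure but are noisier.
X_imagined = class_means[y_imagined] + rng.normal(0, 3, (n_trials, n_feat))

# Classifier trained on spoken-speech data only, then transferred as-is.
clf = LogisticRegression(max_iter=1000).fit(X_spoken, y_spoken)
transfer_correct = int((clf.predict(X_imagined) == y_imagined).sum())

# Classifier trained and tested within imagined speech (simple half split).
clf_im = LogisticRegression(max_iter=1000).fit(X_imagined[:100], y_imagined[:100])
within_correct = int((clf_im.predict(X_imagined[100:]) == y_imagined[100:]).sum())

# Chi-square test on correct/incorrect counts of the two classifiers,
# analogous to the comparison reported in the abstract.
table = [[transfer_correct, n_trials - transfer_correct],
         [within_correct, 100 - within_correct]]
chi2, p, _, _ = chi2_contingency(table)
print(f"transfer acc: {transfer_correct / n_trials:.3f}, p = {p:.3f}")
```

A non-significant p here would mirror the abstract's finding that the transferred classifier is statistically indistinguishable from the within-paradigm one.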

    Improving Electroencephalography-Based Imagined Speech Recognition with a Simultaneous Video Data Stream

    Electroencephalography (EEG) devices offer a non-invasive means of implementing imagined speech recognition, the process of estimating words or commands that a person expresses only in thought. However, existing methods achieve only limited predictive accuracy with very small vocabularies, and are therefore not yet sufficient to enable fluid communication between humans and machines. This project proposes a new method for improving a classifier's ability to recognize imagined speech by collecting and analyzing a large dataset of simultaneous EEG and video data streams. The results suggest that complementing high-dimensional EEG data with similarly high-dimensional video data enhances a classifier's ability to extract features from an EEG stream and facilitates imagined speech recognition.
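One straightforward way to "complement" EEG with a second data stream, as this project proposes, is early (feature-level) fusion: concatenating per-trial feature vectors from both modalities before classification. The sketch below uses synthetic features and a logistic-regression classifier as placeholders; the project's actual features, model, and fusion strategy are not specified in the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n, n_eeg, n_video, n_classes = 300, 32, 64, 5
y = rng.integers(0, n_classes, n)

# Hypothetical per-trial feature vectors; each modality carries some
# independent class-related signal plus noise.
eeg_means = rng.normal(0, 1, (n_classes, n_eeg))
vid_means = rng.normal(0, 1, (n_classes, n_video))
X_eeg = eeg_means[y] + rng.normal(0, 2.5, (n, n_eeg))
X_vid = vid_means[y] + rng.normal(0, 2.5, (n, n_video))

# Early fusion: concatenate the two modalities' features per trial.
X_fused = np.hstack([X_eeg, X_vid])

acc_eeg = cross_val_score(LogisticRegression(max_iter=1000), X_eeg, y, cv=5).mean()
acc_fused = cross_val_score(LogisticRegression(max_iter=1000), X_fused, y, cv=5).mean()
print(f"EEG only: {acc_eeg:.3f}, fused: {acc_fused:.3f}")
```

When both streams carry complementary class information, the fused representation typically scores higher under cross-validation, which is the effect the abstract reports qualitatively.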

    Hierarchical Deep Feature Learning For Decoding Imagined Speech From EEG

    We propose a mixed deep neural network strategy, incorporating a parallel combination of Convolutional (CNN) and Recurrent Neural Networks (RNN) cascaded with deep autoencoders and fully connected layers, for automatic identification of imagined speech from EEG. Instead of utilizing raw EEG channel data, we compute the joint variability of the channels in the form of a covariance matrix, which provides a spatio-temporal representation of the EEG. The networks are trained hierarchically, and the extracted features are passed on to the next network in the hierarchy until the final classification. Using a publicly available EEG-based speech imagery database, we demonstrate around a 23.45% improvement in accuracy over the baseline method. Our approach demonstrates the promise of a mixed DNN approach for complex spatio-temporal classification problems. Comment: Accepted at AAAI 2019 under the Student Abstract and Poster Program.
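The covariance-based input representation mentioned in the abstract can be computed directly with NumPy: for an epoch of shape (channels, samples), the channel covariance matrix captures the joint variability of the channels, and its upper triangle gives a compact feature vector (the matrix is symmetric, so the lower triangle is redundant). This is a generic sketch of that idea, not the paper's exact preprocessing; the epoch size and sampling rate are assumptions.

```python
import numpy as np

def covariance_features(epoch: np.ndarray) -> np.ndarray:
    """Vectorised channel covariance of one EEG epoch (channels x samples)."""
    cov = np.cov(epoch)                       # (channels, channels), symmetric
    iu = np.triu_indices(cov.shape[0])        # indices of the upper triangle
    return cov[iu]                            # channels*(channels+1)/2 values

# Synthetic epoch: 8 channels, 256 samples (e.g. 1 s at an assumed 256 Hz).
rng = np.random.default_rng(42)
epoch = rng.normal(size=(8, 256))
feat = covariance_features(epoch)
print(feat.shape)  # → (36,)  i.e. 8 * 9 / 2 unique entries
```

Feature vectors like this, one per epoch, would then feed the hierarchical network stack the abstract describes.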

    Neurolinguistics Research Advancing Development of a Direct-Speech Brain-Computer Interface

    A direct-speech brain-computer interface (DS-BCI) acquires neural signals corresponding to imagined speech, then processes and decodes these signals to produce a linguistic output in the form of phonemes, words, or sentences. Recent research has shown the potential of neurolinguistics to enhance decoding approaches to imagined speech through the inclusion of semantics and phonology in experimental procedures. As neurolinguistics research findings are beginning to be incorporated within the scope of DS-BCI research, it is our view that a thorough understanding of imagined speech, and its relationship with overt speech, must be considered an integral feature of research in this field. With a focus on imagined speech, we provide a review of the most important neurolinguistics research informing the field of DS-BCI and suggest how this research may be utilized to improve current experimental protocols and decoding techniques. Our review of the literature supports a cross-disciplinary approach to DS-BCI research, in which neurolinguistics concepts and methods are utilized to aid development of a naturalistic mode of communication. Subject Areas: Cognitive Neuroscience, Computer Science, Hardware Interface.