557 research outputs found

    A Channel Selection Approach Based on Convolutional Neural Network for Multi-channel EEG Motor Imagery Decoding

    For many disabled people, a brain-computer interface (BCI) may be the only way to communicate with others and to control things around them. Using the motor imagery paradigm, one can decode an individual's intention from their brainwaves, helping them interact with their environment without making any physical movement. For decades, machine learning models trained on features extracted from acquired electroencephalogram (EEG) signals have been used to decode motor imagery activities. This approach has several limitations, especially during feature extraction. Moreover, the large number of channels on current EEG devices makes them hard to use in real life: they are bulky, uncomfortable to wear, and take a long time to set up. In this paper, we introduce a technique to perform channel selection using a convolutional neural network (CNN) and to decode multiple classes of motor imagery intentions from four participants who are amputees. A CNN model trained on 64-channel EEG data achieved a mean classification accuracy of 99.7% over five classes. Channel selection was then performed using weights extracted from the trained model; subsequent models trained on only eight selected channels still achieved a reasonable accuracy of 91.5%. Training the model in the time domain and in the frequency domain was also compared, and different window sizes were tested to assess the feasibility of real-time application. Our method of channel selection was then evaluated on a publicly available motor imagery EEG dataset.
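    The weight-based channel selection step described above can be sketched as follows. This is a minimal illustration, not the authors' exact criterion: it assumes channels are ranked by the summed absolute weight each channel receives in the trained CNN's first convolutional layer, and the function name, tensor shape, and top-k rule are assumptions for this sketch.

    ```python
    import numpy as np

    def select_channels(first_layer_weights, k=8):
        """Rank EEG channels by the total absolute weight assigned to them
        in a trained CNN's first convolutional layer, then keep the top k.
        Expected weight shape: (n_filters, n_channels, kernel_len)."""
        importance = np.abs(first_layer_weights).sum(axis=(0, 2))  # one score per channel
        ranked = np.argsort(importance)[::-1]                      # most important first
        return np.sort(ranked[:k])                                 # top-k channel indices

    # Toy example: 16 filters over 64 channels with kernel length 25.
    rng = np.random.default_rng(0)
    w = rng.normal(size=(16, 64, 25))
    w[:, 10, :] *= 5.0            # make channel 10 clearly dominant
    picked = select_channels(w, k=8)
    print(picked)                 # channel 10 is among the eight selected
    ```

    In a real pipeline the eight selected channels would then be used to retrain a smaller model, as the abstract describes for its 91.5% eight-channel result.
    
    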

    Converting Your Thoughts to Texts: Enabling Brain Typing via Deep Feature Learning of EEG Signals

    An electroencephalography (EEG)-based brain-computer interface (BCI) enables people to communicate with the outside world by interpreting the EEG signals of their brains to interact with devices such as wheelchairs and intelligent robots. More specifically, motor imagery EEG (MI-EEG), which reflects a subject's active intent, is attracting increasing attention for a variety of BCI applications. Accurate classification of MI-EEG signals, while essential for the effective operation of BCI systems, is challenging due to the significant noise inherent in the signals and the lack of informative correlation between the signals and brain activities. In this paper, we propose a novel deep-neural-network-based learning framework that affords perceptive insights into the relationship between the MI-EEG data and brain activities. We design a joint convolutional recurrent neural network that simultaneously learns robust high-level feature representations through low-dimensional dense embeddings from raw MI-EEG signals. We also employ an autoencoder layer to eliminate various artifacts such as background activities. The proposed approach has been evaluated extensively on a large-scale public MI-EEG dataset and a limited but easy-to-deploy dataset collected in our lab. The results show that our approach outperforms a series of baselines and competitive state-of-the-art methods, yielding a classification accuracy of 95.53%. The applicability of our proposed approach is further demonstrated with a practical BCI system for typing.
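    The artifact-suppression idea behind the autoencoder layer can be illustrated in isolation. The paper's autoencoder is trained jointly inside a deep network; as a stand-in, this sketch exploits the fact that an optimal linear autoencoder is equivalent to PCA, so a rank-limited SVD reconstruction of multi-channel EEG gives the flavor of the step. The synthetic signal, channel count, and rank-1 choice are all assumptions for this illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_ch, n_t = 8, 1000
    t = np.linspace(0, 10, n_t)

    # Synthetic data: one shared rhythm mixed into 8 channels, plus noise.
    source = np.sin(2 * np.pi * 1.5 * t)
    mixing = rng.normal(size=(n_ch, 1))
    clean = mixing @ source[None, :]
    noisy = clean + 0.5 * rng.normal(size=(n_ch, n_t))

    # Rank-1 reconstruction (PCA = optimal linear autoencoder): encode onto
    # the top principal component, then decode back to channel space.
    mean = noisy.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(noisy - mean, full_matrices=False)
    denoised = U[:, :1] * s[:1] @ Vt[:1] + mean

    err_noisy = np.mean((noisy - clean) ** 2)
    err_den = np.mean((denoised - clean) ** 2)
    print(err_den < err_noisy)  # low-rank reconstruction lies closer to the clean signal
    ```

    Discarding the minor components removes most of the channel-independent noise while keeping the shared low-dimensional structure, which is the same intuition the learned autoencoder layer applies inside the end-to-end network.
    
    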

    Multiattention Adaptation Network for Motor Imagery Recognition

    Get PDF
    This work was supported in part by the National Natural Science Foundation of China under Grant Nos. 61873181 and 61922062. Peer reviewed. Postprint.