314 research outputs found

    A bimodal deep learning architecture for EEG-fNIRS decoding of overt and imagined speech


    Deep-MEG: spatiotemporal CNN features and multiband ensemble classification for predicting the early signs of Alzheimer's disease with magnetoencephalography

    In this paper, we present the novel Deep-MEG approach, in which image-based representations of magnetoencephalography (MEG) data are combined with ensemble classifiers based on deep convolutional neural networks. For the purpose of predicting the early signs of Alzheimer's disease (AD), functional connectivity (FC) measures between the brain bio-magnetic signals originating from spatially separated brain regions are used as MEG data representations for the analysis. After stacking the FC indicators relative to different frequency bands into multiple images, a deep transfer learning model is used to extract different sets of deep features and to derive improved classification ensembles. The proposed Deep-MEG architectures were tested on a set of resting-state MEG recordings and their corresponding magnetic resonance imaging scans, from a longitudinal study involving 87 subjects. Accuracy values of 89% and 87% were obtained, respectively, for the early prediction of AD conversion in a sample of 54 mild cognitive impairment subjects and in a sample of 87 subjects, including 33 healthy controls. These results indicate that the proposed Deep-MEG approach is a powerful tool for detecting early alterations in the spectral–temporal connectivity profiles and in their spatial relationships.
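The band-stacking step described in the abstract can be sketched as a toy pipeline: per-band functional-connectivity matrices are stacked into a multi-channel image suitable for a CNN feature extractor. This is an illustrative sketch only — Pearson correlation is assumed as the FC measure, and the `band_connectivity_images` helper, sensor count, and band count are all hypothetical, not the authors' implementation.

```python
import numpy as np

def band_connectivity_images(signals_per_band):
    """Stack per-band functional-connectivity matrices (here: Pearson
    correlation, an assumed FC measure) into one multi-channel image,
    with one channel per frequency band."""
    channels = []
    for band in signals_per_band:          # band: (n_sensors, n_samples)
        fc = np.corrcoef(band)             # (n_sensors, n_sensors) FC matrix
        channels.append(fc)
    return np.stack(channels, axis=-1)     # (n_sensors, n_sensors, n_bands)

# Synthetic stand-in for 5 band-filtered recordings from 64 MEG sensors.
rng = np.random.default_rng(0)
bands = [rng.standard_normal((64, 1000)) for _ in range(5)]
img = band_connectivity_images(bands)      # image-like input for a CNN
```

The resulting 64×64×5 array plays the role of an RGB-like image, which is what lets an off-the-shelf pretrained CNN be reused via transfer learning.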

    Deep fusion of multi-channel neurophysiological signal for emotion recognition and monitoring

    How to fuse multi-channel neurophysiological signals for emotion recognition is an emerging research topic in the Computational Psychophysiology community. Prior feature-engineering approaches, however, require extracting various domain-knowledge-related features at a high time cost, and traditional fusion methods cannot fully utilise the correlation information between different channels and frequency components. In this paper, we design a hybrid deep learning model in which a Convolutional Neural Network (CNN) is utilised for extracting task-related features and for mining inter-channel and inter-frequency correlations, while a Recurrent Neural Network (RNN) is appended to integrate contextual information from the frame-cube sequence. Experiments are carried out on a trial-level emotion recognition task using the DEAP benchmark dataset. Experimental results demonstrate that the proposed framework outperforms classical methods on both of the emotional dimensions, Valence and Arousal.
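The CNN-then-RNN data flow can be illustrated with a minimal numpy sketch: a toy stand-in for the per-frame CNN (one 2-D convolution, ReLU, global average pooling) feeds an Elman-style recurrence that integrates context across the frame-cube sequence. Everything here — the single fixed kernel, the scalar hidden state, the frame sizes — is invented for illustration; the actual model uses learned deep networks.

```python
import numpy as np

def conv_feature(frame, kernel):
    """Toy per-frame 'CNN': one valid 2-D convolution, ReLU,
    then global average pooling down to a scalar feature."""
    k = kernel.shape[0]
    h, w = frame.shape[0] - k + 1, frame.shape[1] - k + 1
    acts = np.array([[np.sum(frame[i:i+k, j:j+k] * kernel)
                      for j in range(w)] for i in range(h)])
    return np.maximum(acts, 0.0).mean()

def rnn_over_frames(frames, kernel, w_x=0.5, w_h=0.8):
    """Elman-style recurrence over per-frame CNN features,
    accumulating temporal context across the frame-cube sequence."""
    h = 0.0
    for frame in frames:
        x = conv_feature(frame, kernel)
        h = np.tanh(w_x * x + w_h * h)
    return h

rng = np.random.default_rng(1)
frames = [rng.standard_normal((9, 9)) for _ in range(12)]  # frame-cube sequence
kernel = rng.standard_normal((3, 3))
score = rnn_over_frames(frames, kernel)  # bounded in (-1, 1) by tanh
```

The design point the sketch preserves is the division of labour: the convolution sees spatial (inter-channel/inter-frequency) structure within a frame, while the recurrence alone carries information across time.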

    A Python-based Brain-Computer Interface Package for Neural Data Analysis

    Anowar, Md Hasan, A Python-based Brain-Computer Interface Package for Neural Data Analysis. Master of Science (MS), December, 2020, 70 pp., 4 tables, 23 figures, 74 references. Although a growing amount of research has been dedicated to neural engineering, only a handful of software packages are available for brain signal processing. Popular brain-computer interface packages depend on commercial software products such as MATLAB. Moreover, almost every brain-computer interface software package is designed for a specific neuro-biological signal; there is no single Python-based package that supports motor imagery, sleep, and stimulated brain signal analysis. The need for a brain-computer interface package that can serve as a free alternative to commercial software motivated me to develop a toolbox in Python. In this thesis, the structure of MEDUSA, a brain-computer interface toolbox, is presented. The features of the toolbox are demonstrated with publicly available data sources. The MEDUSA toolbox provides a valuable tool for biomedical engineers and computational neuroscience researchers.

    Large-scale Foundation Models and Generative AI for BigData Neuroscience

    Recent advances in machine learning have made revolutionary breakthroughs in computer games, image and natural language understanding, and scientific discovery. Foundation models and large language models (LLMs) have recently achieved human-like intelligence thanks to big data. With the help of self-supervised learning (SSL) and transfer learning, these models may reshape the landscape of neuroscience research and make a significant impact on the future. Here we present a mini-review of recent advances in foundation models and generative AI models, as well as their applications in neuroscience, including natural language and speech, semantic memory, brain-machine interfaces (BMIs), and data augmentation. We argue that this paradigm-shifting framework will open new avenues for many neuroscience research directions, and we discuss the accompanying challenges and opportunities.