
    Understanding language-elicited EEG data by predicting it from a fine-tuned language model

    Electroencephalography (EEG) recordings of brain activity taken while participants read or listen to language are widely used within the cognitive neuroscience and psycholinguistics communities as a tool to study language comprehension. Several time-locked, stereotyped EEG responses to word presentations -- known collectively as event-related potentials (ERPs) -- are thought to be markers for semantic or syntactic processes that take place during comprehension. However, the characterization of each individual ERP in terms of what features of a stream of language trigger the response remains controversial. Improving this characterization would make ERPs a more useful tool for studying language comprehension. We take a step towards better understanding the ERPs by fine-tuning a language model to predict them. This new approach to analysis shows for the first time that all of the ERPs are predictable from embeddings of a stream of language; prior work had found only two of the ERPs to be predictable. In addition to this analysis, we examine which ERPs benefit from sharing parameters during joint training. We find that two pairs of ERPs previously identified in the literature as being related to each other benefit from joint training, while several other pairs of ERPs that benefit from joint training are suggestive of potential relationships. Extensions of this analysis that further examine what kinds of information in the model embeddings relate to each ERP have the potential to elucidate the processes involved in human language comprehension.
    Comment: To appear in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics
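    The core idea of predicting ERP amplitudes from language-model embeddings can be illustrated with a minimal sketch. This is not the paper's pipeline: it fits a plain ridge regression from synthetic word embeddings to synthetic per-word ERP amplitudes, and all shapes, noise levels, and the regularization strength are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 words, 16-dim contextual embeddings,
# 6 ERP components (e.g. N400, P600, ...) measured per word.
n_words, emb_dim, n_erps = 200, 16, 6
X = rng.normal(size=(n_words, emb_dim))                    # word embeddings
W_true = rng.normal(size=(emb_dim, n_erps))
Y = X @ W_true + 0.1 * rng.normal(size=(n_words, n_erps))  # ERP amplitudes

# Ridge regression: W = (X^T X + lam * I)^{-1} X^T Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(emb_dim), X.T @ Y)
pred = X @ W

# Per-ERP correlation between predicted and observed amplitudes:
# each component being "predictable" corresponds to a high correlation.
corrs = [np.corrcoef(pred[:, j], Y[:, j])[0, 1] for j in range(n_erps)]
print([round(c, 3) for c in corrs])
```

    In the paper the embeddings come from a fine-tuned language model and the targets are real ERP measurements, but the evaluation logic, one predictability score per ERP component, has this shape.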

    Open Vocabulary Electroencephalography-To-Text Decoding and Zero-shot Sentiment Classification

    State-of-the-art brain-to-text systems have achieved great success in decoding language directly from brain signals using neural networks. However, current approaches are limited to small closed vocabularies, which are far from sufficient for natural communication. In addition, most of the high-performing approaches require data from invasive devices (e.g., ECoG). In this paper, we extend the problem to open-vocabulary Electroencephalography (EEG)-to-text sequence-to-sequence decoding and zero-shot sentence sentiment classification on natural reading tasks. We hypothesize that the human brain functions as a special text encoder and propose a novel framework leveraging pre-trained language models (e.g., BART). Our model achieves a 40.1% BLEU-1 score on EEG-to-text decoding and a 55.6% F1 score on zero-shot EEG-based ternary sentiment classification, which significantly outperforms supervised baselines. Furthermore, we show that our proposed model can handle data from various subjects and sources, showing great potential for a high-performance open-vocabulary brain-to-text system once sufficient data is available.
    Comment: 9 pages, 2 figures, Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI 2022)
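    The reported BLEU-1 metric is essentially clipped unigram precision between the decoded sentence and the reference. A minimal sketch (omitting the brevity penalty used in full BLEU, and using made-up example sentences) looks like this:

```python
from collections import Counter

def bleu1(candidate, reference):
    """Clipped unigram precision (BLEU-1 without the brevity penalty)."""
    cand = candidate.split()
    ref_counts = Counter(reference.split())
    # Each candidate word counts at most as often as it appears in the reference.
    clipped = sum(min(c, ref_counts[w]) for w, c in Counter(cand).items())
    return clipped / max(len(cand), 1)

score = bleu1("the subject was reading a book",
              "the subject was reading the news")
print(round(score, 3))
```

    Here 4 of the 6 candidate unigrams appear in the reference, so the score is 4/6. Decoding metrics like this reward word overlap, not grammaticality, which is one reason BLEU-1 is reported alongside the downstream sentiment task.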

    Deep Neural Networks and Brain Alignment: Brain Encoding and Decoding (Survey)

    How does the brain represent different modes of information? Can we design a system that automatically understands what the user is thinking? Such questions can be answered by studying brain recordings like functional magnetic resonance imaging (fMRI). As a first step, the neuroscience community has contributed several large cognitive neuroscience datasets related to passive reading/listening/viewing of concept words, narratives, pictures and movies. Encoding and decoding models using these datasets have also been proposed in the past two decades. These models serve as additional tools for basic research in cognitive science and neuroscience. Encoding models aim to automatically generate fMRI brain representations given a stimulus. They have several practical applications in evaluating and diagnosing neurological conditions and thus also help design therapies for brain damage. Decoding models solve the inverse problem of reconstructing the stimuli given the fMRI. They are useful for designing brain-machine or brain-computer interfaces. Inspired by the effectiveness of deep learning models for natural language processing, computer vision, and speech, several neural encoding and decoding models have been proposed recently. In this survey, we first discuss popular representations of language, vision and speech stimuli, and present a summary of neuroscience datasets. We then review popular deep learning based encoding and decoding architectures and note their benefits and limitations. Finally, we conclude with a brief summary and discussion of future trends. Given the large amount of recently published work in the `computational cognitive neuroscience' community, we believe that this survey nicely organizes the plethora of work and presents it as a coherent story.
    Comment: 16 pages, 10 figures
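    A classic baseline for the decoding problem described here is nearest-neighbor decoding by correlation: a test scan is assigned to the concept whose fMRI signature it correlates with best. The following is a toy sketch on synthetic data, not any specific model from the survey, and the concept/voxel counts and noise level are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: each of 5 concepts has a 50-voxel fMRI "signature";
# test scans are noisy copies of those signatures.
n_concepts, n_voxels = 5, 50
signatures = rng.normal(size=(n_concepts, n_voxels))

def decode(scan, signatures):
    """Return the index of the signature most correlated with the scan."""
    corrs = [np.corrcoef(scan, s)[0, 1] for s in signatures]
    return int(np.argmax(corrs))

test_scans = signatures + 0.3 * rng.normal(size=signatures.shape)
decoded = [decode(scan, signatures) for scan in test_scans]
accuracy = np.mean([d == i for i, d in enumerate(decoded)])
print(accuracy)
```

    An encoding model runs the same pipeline in the opposite direction, predicting the voxel pattern from stimulus features; deep learning variants mainly replace the hand-built signatures and similarity function with learned ones.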

    EEG-based performance-driven adaptive automated hazard alerting system in security surveillance support

    Computer-vision technologies have emerged to assist security surveillance. However, automated alert/alarm systems often apply a low-beta threshold to avoid misses, which generates excessive false alarms. This study proposed an adaptive hazard diagnosis and alarm system with adjustable alert threshold levels based on environmental scenarios and the operator's hazard recognition performance. We recorded electroencephalogram (EEG) data during hazard recognition tasks. The linear ballistic accumulator model was used to decompose response time into several psychological subcomponents, which were estimated with a Markov chain Monte Carlo algorithm and compared across different types of hazardous scenarios. Participants were most cautious about falling hazards, followed by electricity hazards, and had the least conservative attitude toward structural hazards. Participants were classified into three performance-level subgroups using a latent profile analysis based on task accuracy. We applied the transfer learning paradigm to classify subgroups based on the time-frequency representations of their EEG data. Additionally, two continual learning strategies were investigated to ensure a robust adaptation of the model to predict participants' performance levels in different hazardous scenarios. These findings can be leveraged in real-world brain-computer interface applications, which will help build human trust in automation and promote the successful implementation of alarm technologies.
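    The linear ballistic accumulator (LBA) model mentioned above treats each response option as a racer that starts at a random point, rises linearly toward a threshold, and responds when it reaches it; response time is the winning racer's time plus a non-decision component. A minimal forward simulation, with parameter values that are illustrative rather than those estimated in the study, might look like:

```python
import random

def lba_trial(v_correct=2.5, v_error=1.0, A=0.5, b=1.0, s=0.3, t0=0.2):
    """Simulate one LBA trial with a correct and an error accumulator.

    Each accumulator starts at Uniform(0, A), rises at a drift rate drawn
    from Normal(v, s) (resampled if non-positive), and finishes when it
    reaches threshold b; non-decision time t0 is added to the winner.
    """
    times = []
    for v in (v_correct, v_error):
        k = random.uniform(0, A)       # start point
        d = random.gauss(v, s)         # drift rate
        while d <= 0:
            d = random.gauss(v, s)
        times.append((b - k) / d)      # time to reach threshold
    rt = min(times) + t0
    correct = times[0] <= times[1]
    return rt, correct

random.seed(0)
trials = [lba_trial() for _ in range(2000)]
mean_rt = sum(rt for rt, _ in trials) / len(trials)
acc = sum(c for _, c in trials) / len(trials)
print(round(mean_rt, 3), round(acc, 3))
```

    Fitting works in reverse: the MCMC procedure the study describes searches for the drift, threshold, and non-decision parameters that make simulated response-time distributions like this one match the observed ones, so that caution (threshold) can be compared across hazard types.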

    Error Signals from the Brain: 7th Mismatch Negativity Conference

    The 7th Mismatch Negativity Conference presents the state of the art in methods, theory, and application (basic and clinical research) of the MMN (and related error signals of the brain). Moreover, there will be two pre-conference workshops: one on the design of MMN studies and the analysis and interpretation of MMN data, and one on the visual MMN (with 20 presentations). There will be more than 40 presentations on hot topics of MMN grouped into thirteen symposia, and about 130 poster presentations. Keynote lectures by Kimmo Alho, Angela D. Friederici, and Israel Nelken will round off the program by covering topics related to and beyond MMN.

    Transformer-based Self-supervised Multimodal Representation Learning for Wearable Emotion Recognition

    Recently, wearable emotion recognition based on peripheral physiological signals has drawn massive attention due to its less invasive nature and its applicability in real-life scenarios. However, how to effectively fuse multimodal data remains a challenging problem. Moreover, traditional fully supervised approaches suffer from overfitting given limited labeled data. To address the above issues, we propose a novel self-supervised learning (SSL) framework for wearable emotion recognition, where efficient multimodal fusion is realized with temporal convolution-based modality-specific encoders and a transformer-based shared encoder, capturing both intra-modal and inter-modal correlations. Extensive unlabeled data is automatically assigned labels by five signal transforms, and the proposed SSL model is pre-trained with signal transformation recognition as a pretext task, allowing the extraction of generalized multimodal representations for emotion-related downstream tasks. For evaluation, the proposed SSL model was first pre-trained on a large-scale self-collected physiological dataset, and the resulting encoder was subsequently frozen or fine-tuned on three public supervised emotion recognition datasets. Ultimately, our SSL-based method achieved state-of-the-art results on various emotion classification tasks. Meanwhile, the proposed model proved to be more accurate and robust than fully supervised methods in low-data regimes.
    Comment: Accepted to IEEE Transactions on Affective Computing
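    The pretext-labeling step described above, where each unlabeled segment is tagged with the identity of a randomly applied signal transform, can be sketched as follows. The five transforms here are generic examples of the kind used in SSL for time series, not necessarily the five the paper uses:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical pretext-task setup: unlabeled 1-D physiological segments are
# transformed, and the transform's identity becomes a free training label.
TRANSFORMS = {
    0: lambda x: x,                                 # identity
    1: lambda x: x + rng.normal(0, 0.05, x.shape),  # jitter (additive noise)
    2: lambda x: 1.5 * x,                           # amplitude scaling
    3: lambda x: x[::-1],                           # time reversal
    4: lambda x: -x,                                # sign flip
}

def make_pretext_batch(segments):
    """Apply a random transform to each segment; return data and labels."""
    X, y = [], []
    for seg in segments:
        label = int(rng.integers(len(TRANSFORMS)))
        X.append(TRANSFORMS[label](seg))
        y.append(label)
    return np.stack(X), np.array(y)

segments = [rng.normal(size=128) for _ in range(8)]
X, y = make_pretext_batch(segments)
print(X.shape, y.shape)
```

    An encoder trained to recover `y` from `X` must learn features sensitive to amplitude, polarity, and temporal structure, which is why such representations transfer to emotion classification without any manual labels during pre-training.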