Brain-mediated Transfer Learning of Convolutional Neural Networks
The human brain can effectively learn a new task from a small number of
samples, which indicates that the brain can transfer its prior knowledge to
solve tasks in different domains. This function is analogous to transfer
learning (TL) in the field of machine learning. TL uses a well-trained feature
space in a specific task domain to improve performance in new tasks with
insufficient training data. TL with rich feature representations, such as
features of convolutional neural networks (CNNs), shows high generalization
ability across different task domains. However, such TL still falls short of
endowing machine learning with generalization ability comparable to that of
the human brain. To examine whether the internal representation of the brain could
be used to achieve more efficient TL, we introduce a method for TL mediated by
human brains. Our method transforms feature representations of audiovisual
inputs in CNNs into those in activation patterns of individual brains via
associations learned in advance from measured brain responses. Then, to estimate
labels reflecting human cognition and behavior induced by the audiovisual
inputs, the transformed representations are used for TL. We demonstrate that
our brain-mediated TL (BTL) shows higher performance in the label estimation
than the standard TL. In addition, we illustrate that the estimations mediated
by different brains vary from brain to brain, and the variability reflects the
individual variability in perception. Thus, our BTL provides a framework to
improve the generalization ability of machine-learning feature representations
and enable machine learning to estimate human-like cognition and behavior,
including individual variability.
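The core transformation described above (mapping CNN features of a stimulus into brain-like activation patterns via an association learned from measured responses) can be sketched with a simple linear ridge model. Everything below is illustrative: the data are synthetic, and the helper name `ridge_fit`, the weight matrix `W_feat2brain`, and all dimensions are hypothetical rather than taken from the paper.

```python
import numpy as np

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

rng = np.random.default_rng(0)

# Paired training data: CNN features of stimuli and the brain responses
# they evoked (synthetic stand-ins for real measurements).
cnn_feats = rng.normal(size=(200, 50))       # 200 stimuli, 50-dim CNN features
true_map = rng.normal(size=(50, 300))
brain_resp = cnn_feats @ true_map + 0.1 * rng.normal(size=(200, 300))  # 300 "voxels"

# Step 1: learn the CNN-feature -> brain-response association.
W_feat2brain = ridge_fit(cnn_feats, brain_resp)

# Step 2: for new stimuli, transform CNN features into brain-like
# representations; a label decoder would then be trained on these
# instead of on the raw CNN features.
new_feats = rng.normal(size=(20, 50))
brain_like = new_feats @ W_feat2brain
print(brain_like.shape)  # (20, 300)
```

A linear map is only the simplest possible association model; the point of the sketch is the two-stage structure, not the regression itself.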
Decoding EEG brain activity for multi-modal natural language processing
Until recently, human behavioral data from reading has mainly been of
interest to researchers to understand human cognition. However, these human
language processing signals can also be beneficial in machine learning-based
natural language processing tasks. Using EEG brain activity for this purpose
remains largely unexplored. In this paper, we present the first large-scale
study to systematically analyze the potential of EEG brain activity data for
improving natural language processing tasks, with a special focus on which
features of the signal are most beneficial. We present a multi-modal machine
learning architecture that learns jointly from textual input as well as from
EEG features. We find that filtering the EEG signals into frequency bands is
more beneficial than using the broadband signal. Moreover, for a range of word
embedding types, EEG data improves binary and ternary sentiment classification
and outperforms multiple baselines. For more complex tasks such as relation
detection, further research is needed. Finally, EEG data proves to be
particularly promising when limited training data is available.
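The finding that filtering EEG into frequency bands is more useful than the broadband signal can be illustrated with a minimal band-power feature extractor. This is a hedged sketch, not the paper's pipeline: the signal is synthetic, the sampling rate and band edges are assumed, and `band_power` is a hypothetical helper.

```python
import numpy as np

def band_power(signal, fs, bands):
    """Return mean spectral power per named frequency band via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    return {name: spectrum[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in bands.items()}

fs = 250.0                          # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)
# Synthetic "EEG": strong 10 Hz (alpha) plus a weak 40 Hz (gamma) component.
eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * np.sin(2 * np.pi * 40 * t)

# Canonical band edges in Hz (conventions vary across studies).
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 50)}
powers = band_power(eeg, fs, bands)
print(max(powers, key=powers.get))  # prints "alpha": the dominant band
```

In a multi-modal setup, per-band features like these would be concatenated with word embeddings before being fed to the classifier.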
Decoding information in the human hippocampus: a user's guide
Multi-voxel pattern analysis (MVPA), or 'decoding', of fMRI activity has gained popularity in the neuroimaging community in recent years. MVPA differs from standard fMRI analyses by focusing on whether information relating to specific stimuli is encoded in patterns of activity across multiple voxels. If a stimulus can be predicted, or decoded, solely from the pattern of fMRI activity, it must mean there is information about that stimulus represented in the brain region where the pattern across voxels was identified. This ability to examine the representation of information relating to specific stimuli (e.g., memories) in particular brain areas makes MVPA an especially suitable method for investigating memory representations in brain structures such as the hippocampus. This approach could open up new opportunities to examine hippocampal representations in terms of their content, and how they might change over time, with aging, and with pathology. Here we consider published MVPA studies that specifically focused on the hippocampus, and use them to illustrate the kinds of novel questions that can be addressed using MVPA. We then discuss some of the conceptual and methodological challenges that can arise when implementing MVPA in this context. Overall, we hope to highlight the potential utility of MVPA, when appropriately deployed, and provide some initial guidance to those considering MVPA as a means to investigate the hippocampus.
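A minimal sketch of the MVPA logic described above: if a decoder trained on multi-voxel patterns classifies held-out trials above chance, the region's activity pattern carries information about the stimulus. The nearest-centroid decoder and the synthetic "voxel" data below are illustrative assumptions, not taken from any cited study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 100

# Two conditions (e.g., two remembered scenes), each with a distinct
# mean activity pattern plus trial-by-trial noise.
mean_a = rng.normal(size=n_voxels)
mean_b = rng.normal(size=n_voxels)
train_a = mean_a + 0.5 * rng.normal(size=(40, n_voxels))
train_b = mean_b + 0.5 * rng.normal(size=(40, n_voxels))

centroid_a, centroid_b = train_a.mean(axis=0), train_b.mean(axis=0)

def decode(pattern):
    """Label a held-out pattern by its nearest condition centroid."""
    return "A" if (np.linalg.norm(pattern - centroid_a)
                   < np.linalg.norm(pattern - centroid_b)) else "B"

# Held-out condition-A trials: above-chance accuracy implies the
# voxel pattern encodes the stimulus condition.
test_trials = mean_a + 0.5 * rng.normal(size=(20, n_voxels))
accuracy = np.mean([decode(p) == "A" for p in test_trials])
print(accuracy > 0.5)  # prints True
```

Real MVPA studies typically use cross-validated linear classifiers (e.g., SVMs) rather than nearest-centroid, but the inferential logic is the same.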
Seeing through the Brain: Image Reconstruction of Visual Perception from Human Brain Signals
Seeing is believing; however, the underlying mechanism of how human visual
perception is intertwined with our cognition is still a mystery. Thanks to
the recent advances in both neuroscience and artificial intelligence, we have
been able to record the visually evoked brain activities and mimic the visual
perception ability through computational approaches. In this paper, we pay
attention to visual stimuli reconstruction by reconstructing the observed
images based on portably accessible brain signals, i.e., electroencephalography
(EEG) data. Since EEG signals are dynamic time series and are notoriously
noisy, processing them and extracting useful information requires dedicated
effort. In this paper, we propose a comprehensive pipeline,
named NeuroImagen, for reconstructing visual stimuli images from EEG signals.
Specifically, we incorporate a novel multi-level perceptual information
decoding to draw multi-grained outputs from the given EEG data. A latent
diffusion model will then leverage the extracted information to reconstruct the
high-resolution visual stimuli images. The experimental results illustrate the
effectiveness of the image reconstruction and the superior quantitative
performance of our proposed method.
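The decode-then-generate structure of such a pipeline can be sketched at toy scale: a learned decoder maps EEG features to a low-dimensional image latent, which a pretrained latent diffusion model would then turn into pixels. The generative stage is stubbed out here; all shapes, the `reconstruct` helper, and the least-squares decoder are assumptions for illustration, not NeuroImagen's actual components.

```python
import numpy as np

rng = np.random.default_rng(2)

# Paired training data: EEG features per trial and the image latents of
# the stimuli shown on those trials (both synthetic here).
eeg_feats = rng.normal(size=(100, 64))    # 100 trials, 64 EEG features
img_latents = rng.normal(size=(100, 16))  # paired 16-dim image latents

# Least-squares decoder from EEG features to image latents.
W, *_ = np.linalg.lstsq(eeg_feats, img_latents, rcond=None)

def reconstruct(eeg_trial):
    """Predict the image latent; a latent diffusion model would then
    decode/condition on this latent to synthesize the image."""
    return eeg_trial @ W

latent = reconstruct(rng.normal(size=64))
print(latent.shape)  # (16,)
```

The paper's multi-level decoding would produce several such outputs at different granularities (e.g., coarse semantics plus fine perceptual detail) rather than a single latent.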
Functional magnetic resonance imaging-based brain decoding with visual semantic model
Brain activity patterns can be used to identify what a person has in mind, and functional magnetic resonance imaging (fMRI) is the most widely accepted method for such brain decoding. However, the accuracy of fMRI-based brain decoders is still restricted by limited training samples. Existing fMRI-based decoders work around this limitation by designing features for many labels and training models to predict those features for a particular label. Moreover, it remains unclear what kinds of semantic features are suited to deciphering neural activity patterns. In the current work, we propose a new computational model for learning decoding labels that are consistent with fMRI activity responses. Experiments demonstrate the accuracy of the proposed labels decoded from brain activity patterns, compared with the conventional text-derived feature technique. In addition, we present a multi-task training model to reduce the problems caused by limited training data sets. The multi-task learning model is therefore more efficient than current computational methods, and decoding features can be obtained easily.
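The multi-task idea (pooling scarce per-task fMRI data through a shared encoder while keeping task-specific heads) can be sketched as a few joint gradient steps. This is a hypothetical toy, not the paper's model; the linear architecture, learning rate, and synthetic data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_voxels, n_shared = 80, 10

# One shared linear encoder, plus a small head per decoding task.
shared = rng.normal(scale=0.1, size=(n_voxels, n_shared))
heads = [rng.normal(scale=0.1, size=(n_shared, 1)) for _ in range(3)]

# Tiny per-task datasets: the data-scarce setting multi-task learning targets.
tasks = []
for _ in range(3):
    X = rng.normal(size=(15, n_voxels))   # 15 fMRI samples, 80 voxels
    y = rng.normal(size=(15, 1))          # task-specific targets
    tasks.append((X, y))

def total_loss():
    return sum(float(np.mean((X @ shared @ h - y) ** 2))
               for (X, y), h in zip(tasks, heads))

lr = 1e-3
before = total_loss()
for _ in range(100):                       # joint gradient steps
    for (X, y), h in zip(tasks, heads):
        hid = X @ shared
        err = hid @ h - y                  # (15, 1) residual
        h -= lr * (hid.T @ err)            # task-specific head update
        shared -= lr * (X.T @ (err @ h.T)) # shared encoder pools all tasks
after = total_loss()
print(after < before)  # prints True: joint training reduces total loss
```

Each task alone has too few samples to constrain the 80-dimensional input, but every task's gradients update the same shared encoder, which is the mechanism the abstract credits for easing data scarcity.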