Improving whole-brain neural decoding of fMRI with domain adaptation
In neural decoding, there has been growing interest in applying machine learning to whole-brain functional magnetic resonance imaging (fMRI). However, the size discrepancy between the feature space and the training set poses serious challenges, and simply increasing the number of training examples is infeasible and costly. In this paper, we propose a domain adaptation framework for whole-brain fMRI (DawfMRI) that improves whole-brain neural decoding on target data by leveraging pre-existing source data. DawfMRI consists of three steps: 1) feature extraction from whole-brain fMRI, 2) source and target feature adaptation, and 3) source and target classifier adaptation. We evaluated its eight possible variations, comprising two non-adaptation and six adaptation algorithms, on a collection of seven task-based fMRI datasets (129 unique subjects and 11 cognitive tasks in total) from the OpenNeuro project. The results demonstrate that an appropriate source domain can improve neural decoding accuracy on challenging classification tasks; the best-case improvement is 8.94% (from 78.64% to 87.58%). Moreover, we discovered a plausible relationship between psychological similarity and adaptation effectiveness. Finally, visualizing and interpreting voxel weights showed that the adaptation can provide additional insights into neural decoding.
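The middle step of this recipe, aligning source features to the target distribution, could be sketched with a CORAL-style recoloring. This is only a stand-in for whatever adaptation algorithms DawfMRI actually employs, and the data, dimensions, and function names here are synthetic and hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for step 1's extracted features (not real fMRI).
Xs = rng.normal(size=(100, 20)) * rng.uniform(0.5, 2.0, size=20)        # source domain
Xt = rng.normal(size=(80, 20)) * rng.uniform(0.5, 2.0, size=20) + 1.0   # target domain

def sym_sqrt(C, inverse=False):
    """(Inverse) square root of a symmetric positive semi-definite matrix."""
    w, V = np.linalg.eigh(C)
    w = np.clip(w, 1e-12, None)
    d = 1.0 / np.sqrt(w) if inverse else np.sqrt(w)
    return V @ np.diag(d) @ V.T

def coral_align(Xs, Xt, eps=1e-3):
    """Step 2 (sketch): recolor source features so their mean and
    covariance match the target domain's second-order statistics."""
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])
    Xs_centered = Xs - Xs.mean(axis=0)
    return Xs_centered @ sym_sqrt(Cs, inverse=True) @ sym_sqrt(Ct) + Xt.mean(axis=0)

Xs_adapted = coral_align(Xs, Xt)
# Step 3 would then adapt a classifier trained on (Xs_adapted, source labels)
# to the target task, e.g. by fine-tuning on the few target examples.
```

After the recoloring, the source features share the target's mean and (approximately) its covariance, so a classifier trained on them transfers more gracefully.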
A neural marker for social bias toward in-group accents
Accents provide information about the speaker's geographical, socio-economic, and ethnic background. Research in applied psychology and sociolinguistics suggests that we generally prefer our own accent to other varieties of our native language and attribute more positive traits to it. Despite the widespread influence of accents on social interactions and on educational and work settings, the neural underpinnings of this social bias toward our own accent, and what may drive it, are unexplored. We measured brain activity while participants from two different geographical backgrounds listened passively to three English accent types embedded in an adaptation design. Cerebral activity in several regions, including the bilateral amygdalae, revealed a significant interaction between the participants' own accent and the accent they listened to: while repetition of their own accent elicited an enhanced neural response, repetition of the other group's accent resulted in the reduced responses classically associated with adaptation. Our findings suggest that increased social relevance of, or greater emotional sensitivity to, in-group accents may underlie the own-accent bias. Our results provide a neural marker for the bias associated with accents and show, for the first time, that the neural response to speech is partly shaped by the geographical background of the listener.
From voxels to pixels and back: Self-supervision in natural-image reconstruction from fMRI
Reconstructing observed images from fMRI brain recordings is challenging. Unfortunately, acquiring sufficient "labeled" pairs of {Image, fMRI} (i.e., images with their corresponding fMRI responses) to span the huge space of natural images is prohibitive for many reasons. We present a novel approach which, in addition to the scarce labeled data (training pairs), allows fMRI-to-image reconstruction networks to be trained also on "unlabeled" data (i.e., images without fMRI recordings, and fMRI recordings without images). The proposed model utilizes both an Encoder network (image-to-fMRI) and a Decoder network (fMRI-to-image). Concatenating these two networks back-to-back (Encoder-Decoder & Decoder-Encoder) allows augmenting the training with both types of unlabeled data. Importantly, it allows training on the unlabeled test-fMRI data. This self-supervision adapts the reconstruction network to the new input test data, despite its deviation from the statistics of the scarce training data.
Comment: *First two authors contributed equally. NeurIPS 2019
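A toy linear version of this objective makes the three training signals concrete: a supervised term on labeled pairs, an Encoder-Decoder cycle on unlabeled images, and a Decoder-Encoder cycle on unlabeled (test) fMRI. The linear maps below merely stand in for the paper's deep networks, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
d_img, d_fmri = 8, 5

# Hypothetical ground-truth image -> fMRI response map (for generating toy data).
W = rng.normal(size=(d_img, d_fmri)) / np.sqrt(d_img)

imgs_l = rng.normal(size=(30, d_img)); fmri_l = imgs_l @ W   # labeled {Image, fMRI} pairs
imgs_u = rng.normal(size=(50, d_img))                        # unlabeled images (no fMRI)
fmri_u = rng.normal(size=(40, d_img)) @ W                    # unlabeled test fMRI (no images)

E = rng.normal(size=(d_img, d_fmri)) * 0.1   # Encoder: image -> fMRI
D = rng.normal(size=(d_fmri, d_img)) * 0.1   # Decoder: fMRI -> image

def mse(A, B):
    return ((A - B) ** 2).mean()

def total_loss(E, D):
    return (mse(fmri_l @ D, imgs_l)          # supervised decoding on labeled pairs
            + mse(imgs_u @ E @ D, imgs_u)    # Encoder->Decoder cycle on unlabeled images
            + mse(fmri_u @ D @ E, fmri_u))   # Decoder->Encoder cycle on unlabeled fMRI

losses = [total_loss(E, D)]
lr = 0.05
for _ in range(400):
    # Residuals of the three terms, then their gradients w.r.t. E and D.
    R1 = fmri_l @ D - imgs_l
    R2 = imgs_u @ E @ D - imgs_u
    R3 = fmri_u @ D @ E - fmri_u
    gD = (2 * fmri_l.T @ R1 / R1.size
          + 2 * (imgs_u @ E).T @ R2 / R2.size
          + 2 * fmri_u.T @ R3 @ E.T / R3.size)
    gE = (2 * imgs_u.T @ R2 @ D.T / R2.size
          + 2 * (fmri_u @ D).T @ R3 / R3.size)
    E -= lr * gE
    D -= lr * gD
    losses.append(total_loss(E, D))
```

The key point the third term illustrates is that the Decoder can be adapted to the statistics of the test fMRI itself, since the cycle loss needs no paired images.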
Expertise with non-speech 'auditory Greebles' recruits speech-sensitive cortical regions
Regions of the human temporal lobe show greater activation for speech than for other sounds. These differences may reflect intrinsically specialized domain-specific adaptations for processing speech, or they may be driven by the significant expertise we have in listening to the speech signal. To test the expertise hypothesis, we used a video-game-based paradigm that tacitly trained listeners to categorize acoustically complex, artificial nonlinguistic sounds. Before and after training, we used functional MRI to
measure how expertise with these sounds modulated temporal lobe activation. Participants’ ability to explicitly categorize the nonspeech sounds predicted the change from pretraining to posttraining activation in speech-sensitive regions of the left posterior superior temporal sulcus, suggesting that emergent auditory expertise may help drive this functional regionalization. Thus, seemingly domain-specific patterns of neural activation in higher cortical regions may be driven in part by experience-based restructuring of high-dimensional perceptual space.
Review: Object vision in a structured world
In natural vision, objects appear at typical locations, both with respect to visual space (e.g., an airplane in the upper part of a scene) and to other objects (e.g., a lamp above a table). Recent studies have shown that object vision is strongly adapted to such positional regularities. In this review, we synthesize these developments, highlighting that adaptations to positional regularities facilitate object detection and recognition, and sharpen the representations of objects in visual cortex. These effects are pervasive across various types of high-level content. We posit that adaptations to real-world structure collectively support optimal usage of limited cortical processing resources. Taking positional regularities into account will thus be essential for understanding efficient object vision in the real world.
"Task-relevant autoencoding" enhances machine learning for human neuroscience
In human neuroscience, machine learning can help reveal lower-dimensional neural representations relevant to subjects' behavior. However, state-of-the-art models typically require large datasets to train and are thus prone to overfitting on human neuroimaging data, which often possess few samples but many input dimensions. Here, we capitalized on the fact that the features we seek in human neuroscience are precisely those relevant to subjects' behavior. We thus developed a Task-Relevant Autoencoder via Classifier Enhancement (TRACE), and tested its ability to extract behaviorally relevant, separable representations against a standard autoencoder, a variational autoencoder, and principal component analysis on two severely truncated machine learning datasets. We then evaluated all models on fMRI data from 59 subjects who observed animals and objects. TRACE outperformed all models nearly across the board, showing up to 12% higher classification accuracy and up to 56% improvement in discovering "cleaner", task-relevant representations. These results showcase TRACE's potential for a wide variety of data related to human behavior.
Comment: 41 pages, 11 figures, 5 tables including supplemental material
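The core idea, an autoencoder whose bottleneck is additionally shaped by a classifier head, can be sketched with linear maps in numpy. This is only the skeleton of such a composite objective on synthetic two-class data, not TRACE's actual architecture; all names and hyperparameters here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d_in, d_code = 60, 10, 3

# Two synthetic classes separated along the first input dimension.
y = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, d_in))
X[:, 0] += np.where(y == 0, -1.5, 1.5)
Y = np.eye(2)[y]                                      # one-hot labels

E = rng.normal(size=(d_in, d_code)) / np.sqrt(d_in)   # encoder
D = rng.normal(size=(d_code, d_in)) / np.sqrt(d_code) # decoder
C = np.zeros((d_code, 2))                             # classifier head on the code
lam = 1.0                                             # weight of the classifier term

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    P = np.exp(Z)
    return P / P.sum(axis=1, keepdims=True)

losses = []
for _ in range(500):
    Zc = X @ E                          # bottleneck code
    R = Zc @ D - X                      # reconstruction residual
    P = softmax(Zc @ C)                 # class probabilities from the code
    # Composite loss: reconstruction MSE + weighted cross-entropy.
    loss = (R ** 2).mean() + lam * (-(Y * np.log(P + 1e-12)).sum() / n)
    losses.append(loss)
    G = (P - Y) / n                     # cross-entropy gradient w.r.t. logits
    gD = 2 * Zc.T @ R / R.size
    gC = Zc.T @ G
    # The encoder receives gradients from BOTH terms - that is the point:
    # the code must support reconstruction AND predict behaviorally
    # relevant labels.
    gE = 2 * X.T @ R @ D.T / R.size + lam * X.T @ (G @ C.T)
    E -= 0.1 * gE; D -= 0.1 * gD; C -= 0.1 * gC

acc = (softmax(X @ E @ C).argmax(axis=1) == y).mean()
```

Because the classifier term back-propagates into the encoder, the learned code is pulled toward directions that separate the classes, rather than merely the directions of highest variance a plain autoencoder would keep.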