Whole MILC: generalizing learned dynamics across tasks, datasets, and populations
Behavioral changes are the earliest signs of a mental disorder, but arguably
the dynamics of brain function are affected even earlier. Consequently, the
spatio-temporal structure of disorder-specific dynamics is crucial for early
diagnosis and for understanding disorder mechanisms. A common way of learning
discriminative features relies on training a classifier and evaluating feature
importance. Classical classifiers based on handcrafted features are quite
powerful, but suffer from the curse of dimensionality when applied to the large
input dimensions of spatio-temporal data. Deep learning algorithms can handle
this problem, and model introspection can highlight discriminatory
spatio-temporal regions, but they need far more samples to train. In this paper
we present a novel self-supervised training scheme that reinforces whole-sequence
mutual information local to context (whole MILC). We pre-train the whole MILC
model on unlabeled and unrelated healthy-control data. We test our model on
three different disorders, (i) schizophrenia, (ii) autism, and (iii) Alzheimer's
disease, across four different studies. Our algorithm outperforms existing
self-supervised pre-training methods and provides classification results
competitive with classical machine learning algorithms. Importantly, whole MILC
enables attribution of a subject's diagnosis to specific spatio-temporal regions
in the fMRI signal.
Comment: Accepted at MICCAI 2020. arXiv admin note: substantial text overlap
with arXiv:1912.0313
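The abstract does not spell out the whole MILC objective itself. As a hedged illustration only, "mutual information local to context" objectives of this family are commonly realized as an InfoNCE-style lower bound, where each local-window embedding must identify its own sequence's context embedding among the other sequences in the batch. The function name `infonce_loss` and the toy one-hot embeddings below are illustrative assumptions, not the paper's code:

```python
import numpy as np

def infonce_loss(local_embs, context_embs):
    """InfoNCE lower bound on mutual information between local window
    embeddings and whole-sequence context embeddings.

    local_embs:   (N, d) one local-window embedding per sequence
    context_embs: (N, d) matching whole-sequence context embeddings
    Positive pairs sit on the diagonal of the score matrix; every
    other row in the batch serves as a negative.
    """
    scores = local_embs @ context_embs.T           # (N, N) similarity matrix
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    log_softmax = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))          # -log p(positive | candidates)

# Toy check with perfectly separable one-hot embeddings (an assumption,
# not real fMRI features): matched pairs give a near-zero loss, while
# deliberately mismatched pairs give a high loss.
ids = np.eye(8, 16)
aligned = infonce_loss(10 * ids, ids)         # positives on the diagonal
shuffled = infonce_loss(10 * ids[::-1], ids)  # positives moved off-diagonal
```

Minimizing this loss pushes local and whole-sequence representations to share information, which is what lets the pre-trained encoder transfer to small labeled disorder datasets.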
Self-supervised pretraining improves the performance of classification of task functional magnetic resonance imaging
Introduction
Decoding brain activity is one of the most popular topics in neuroscience in recent years. Deep learning has shown high performance in fMRI classification and regression, but its requirement for large amounts of data conflicts with the high cost of acquiring fMRI data.

Methods
In this study, we propose an end-to-end temporal contrastive self-supervised learning algorithm, which learns internal spatiotemporal patterns within fMRI and allows the model to transfer to datasets of small size. For a given fMRI signal, we segment it into three sections: the beginning, middle, and end. We then apply contrastive learning, taking the end-middle (i.e., neighboring) pair as the positive pair and the beginning-end (i.e., distant) pair as the negative pair.

Results
We pretrained the model on 5 of the 7 tasks from the Human Connectome Project (HCP) and applied it to downstream classification of the remaining two tasks. The pretrained model converged on data from 12 subjects, while a randomly initialized model required 100 subjects. We then transferred the pretrained model to a dataset containing unpreprocessed whole-brain fMRI from 30 participants, achieving an accuracy of 80.2 ± 4.7%, while the randomly initialized model failed to converge. We further validated the model's performance on the Multi-Domain Task Battery (MDTB) dataset, which contains fMRI data for 26 tasks from 24 participants. Thirteen tasks were selected as inputs, and the pretrained model succeeded in classifying 11 of the 13. When using the 7 brain networks as input, performance varied: the visual network performed as well as whole-brain input, while the limbic network failed on almost all 13 tasks.

Discussion
Our results demonstrate the potential of self-supervised learning for fMRI analysis with small datasets and unpreprocessed data, and for analyzing the correlation between regional fMRI activity and cognitive tasks.
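The three-segment pairing described in the Methods can be sketched as follows. This is a minimal illustration under stated assumptions: `segment_thirds` and `make_pairs` are hypothetical helper names, and the toy array stands in for a real (time, voxels) fMRI matrix; the paper's actual encoder and loss are not reproduced here.

```python
import numpy as np

def segment_thirds(ts):
    """Split a (time, voxels) time series into beginning, middle,
    and end segments of equal length (trailing remainder dropped)."""
    t = ts.shape[0] // 3
    return ts[:t], ts[t:2 * t], ts[2 * t:3 * t]

def make_pairs(ts):
    """Build the contrastive pairs described in the abstract:
    the neighboring (end, middle) segments form the positive pair,
    the distant (beginning, end) segments the negative pair."""
    beginning, middle, end = segment_thirds(ts)
    return (end, middle), (beginning, end)

# Hypothetical toy input: 90 time points, 5 voxels.
ts = np.arange(90 * 5, dtype=float).reshape(90, 5)
positive, negative = make_pairs(ts)
```

In training, an encoder would embed each segment and a contrastive loss would then pull the positive (temporally neighboring) embeddings together while pushing the negative (temporally distant) embeddings apart.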