MEG Decoding Across Subjects
Brain decoding is a data analysis paradigm for neuroimaging experiments that
is based on predicting the stimulus presented to the subject from the
concurrent brain activity. In order to make inferences at the group level, a
straightforward but sometimes unsuccessful approach is to train a classifier
on the trials of a group of subjects and then to test it on unseen trials
from new subjects; we call this approach "decoding across subjects". Its
chief difficulty lies in the structural and functional variability across
subjects. In this work, we address the problem of decoding across subjects for
magnetoencephalographic (MEG) experiments and we provide the following
contributions: first, we formally describe the problem and show that it belongs
to a machine learning sub-field called transductive transfer learning (TTL).
Second, we propose a simple TTL technique that accounts for the differences
between training and test data. Third, we propose the use of ensemble
learning, and specifically of stacked generalization, to address the
variability across subjects within the training data, with the aim of producing more
stable classifiers. On a face vs. scramble MEG dataset of 16 subjects, we
compare the standard approach, which does not model the differences across
subjects, to the proposed combination of TTL and ensemble learning. We show
that the proposed approach is consistently more accurate than the standard
one.
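The combination described above can be sketched as follows: one base classifier per training subject, a meta-classifier stacked on their predictions, and per-subject standardization as one simple way to account for train/test differences. All data, sizes, and estimator choices here are illustrative assumptions, not the paper's actual pipeline.

```python
# Sketch of stacked generalization across subjects (synthetic data;
# the estimators and the TTL step are assumptions, not the paper's method).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def zscore(X):
    # Per-subject standardization: one simple way to reduce distribution
    # differences between training and test subjects (an assumption here).
    return (X - X.mean(axis=0)) / X.std(axis=0)

# Simulated MEG-like data: trials x features, one array per subject.
n_subjects, n_trials, n_features = 4, 60, 30
subjects_X = [zscore(rng.normal(size=(n_trials, n_features)))
              for _ in range(n_subjects)]
subjects_y = [rng.integers(0, 2, size=n_trials) for _ in range(n_subjects)]

# Level 0: one base classifier per training subject.
base_clfs = [LogisticRegression(max_iter=1000).fit(X, y)
             for X, y in zip(subjects_X, subjects_y)]

# Level 1: a meta-classifier stacked on the base classifiers' predictions,
# trained on the pooled per-subject prediction features.
meta_X = np.vstack([
    np.column_stack([clf.predict_proba(X)[:, 1] for clf in base_clfs])
    for X in subjects_X
])
meta_y = np.concatenate(subjects_y)
meta_clf = LogisticRegression(max_iter=1000).fit(meta_X, meta_y)

# Trials from a new, unseen subject are classified via the stacked ensemble.
X_new = zscore(rng.normal(size=(10, n_features)))
stacked = np.column_stack([clf.predict_proba(X_new)[:, 1] for clf in base_clfs])
pred = meta_clf.predict(stacked)
```

In a rigorous version, the meta-classifier would be trained on held-out predictions (e.g. leave-one-subject-out) rather than in-sample ones, to avoid overfitting at the stacking level.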
A supervised clustering approach for fMRI-based inference of brain states
We propose a method that combines signals from many brain regions observed in
functional Magnetic Resonance Imaging (fMRI) to predict the subject's behavior
during a scanning session. Such predictions suffer from the huge number of
brain regions sampled on the voxel grid of standard fMRI data sets: the curse
of dimensionality. Dimensionality reduction is thus needed, but it is often
performed using a univariate feature selection procedure that handles neither
the spatial structure of the images, nor the multivariate nature of the signal.
By introducing a hierarchical clustering of the brain volume that incorporates
connectivity constraints, we reduce the span of the possible spatial
configurations to a single tree of nested regions tailored to the signal. We
then prune the tree in a supervised setting, hence the name supervised
clustering, in order to extract a parcellation (division of the volume) such
that parcel-based signal averages best predict the target information.
Dimensionality reduction is thus achieved by feature agglomeration, and the
constructed features now provide a multi-scale representation of the signal.
Comparisons with reference methods on both simulated and real data show that
our approach yields higher prediction accuracy than standard voxel-based
approaches. Moreover, the method infers an explicit weighting of the regions
involved in the regression or classification task.
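A related, simpler construction is available in scikit-learn: feature agglomeration under a spatial connectivity constraint, which merges only adjacent voxels so that each parcel is a connected region. Note this variant is unsupervised, whereas the paper prunes the tree in a supervised way; the grid size, parcel count, and estimator below are illustrative.

```python
# Connectivity-constrained feature agglomeration, in the spirit of the
# supervised-clustering idea (unsupervised variant; sizes are illustrative).
import numpy as np
from sklearn.cluster import FeatureAgglomeration
from sklearn.feature_extraction.image import grid_to_graph
from sklearn.linear_model import RidgeClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Toy "volume": an 8x8x4 voxel grid, flattened to 256 features per scan.
shape = (8, 8, 4)
n_samples = 50
X = rng.normal(size=(n_samples, int(np.prod(shape))))
y = rng.integers(0, 2, size=n_samples)

# The connectivity graph restricts merges to spatially adjacent voxels,
# so each of the resulting parcels is a connected region of the volume.
connectivity = grid_to_graph(*shape)

pipe = make_pipeline(
    FeatureAgglomeration(n_clusters=20, connectivity=connectivity),
    RidgeClassifier(),
)
pipe.fit(X, y)

# The transform step replaces 256 voxel features by 20 parcel averages.
X_parcels = pipe.named_steps["featureagglomeration"].transform(X)
```

The parcel averages form the reduced, multi-scale-style feature set on which the final predictor operates.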
Machine Learning for Neuroimaging with Scikit-Learn
Statistical machine learning methods are increasingly used for neuroimaging
data analysis. Their main virtue is their ability to model high-dimensional
datasets, e.g. multivariate analysis of activation images or resting-state time
series. Supervised learning is typically used in decoding or encoding settings
to relate brain images to behavioral or clinical observations, while
unsupervised learning can uncover hidden structures in sets of images (e.g.
resting state functional MRI) or find sub-populations in large cohorts. By
considering different functional neuroimaging applications, we illustrate how
scikit-learn, a Python machine learning library, can be used to perform some
key analysis steps. Scikit-learn contains a very large set of statistical
learning algorithms, both supervised and unsupervised, and its application to
neuroimaging data provides a versatile tool to study the brain.
Comment: Frontiers in Neuroscience, Frontiers Research Foundation, 2013, pp. 1
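A typical decoding step of the kind described above fits in a few lines of scikit-learn: a supervised estimator relating image patterns to labels, evaluated by cross-validation. The data below are synthetic stand-ins for brain images, and the particular estimator is an illustrative choice.

```python
# Minimal scikit-learn decoding illustration (synthetic data in place of
# real activation images; the estimator choice is an assumption).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 500))   # samples (images) x features (voxels)
y = np.repeat([0, 1], 40)        # behavioral / condition labels
X[y == 1] += 0.3                 # inject a weak, distributed class effect

# Decoding: predict the condition from the image pattern,
# with 5-fold cross-validated accuracy as the figure of merit.
decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(decoder, X, y, cv=5)
print(scores.mean())
```

The same pipeline object can be swapped for any scikit-learn estimator, which is what makes the library convenient for comparing analysis choices.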
Incorporating structured assumptions with probabilistic graphical models in fMRI data analysis
With the wide adoption of functional magnetic resonance imaging (fMRI) by
cognitive neuroscience researchers, large volumes of brain imaging data have
been accumulated in recent years. Aggregating these data to derive scientific
insights often faces the challenge that fMRI data are high-dimensional,
heterogeneous across people, and noisy. These challenges demand the development
of computational tools that are tailored both for the neuroscience questions
and for the properties of the data. We review a few recently developed
algorithms in various domains of fMRI research: fMRI in naturalistic tasks,
analyzing full-brain functional connectivity, pattern classification, inferring
representational similarity and modeling structured residuals. These algorithms
all tackle the challenges of fMRI in a similar way: they state their
assumptions about the neural data and existing domain knowledge explicitly,
incorporate those assumptions into probabilistic graphical models, and use
those models to estimate properties of interest or latent structures in the
data. Such approaches can avoid erroneous findings,
reduce the impact of noise, better utilize known properties of the data, and
better aggregate data across groups of subjects. With these successful cases,
we advocate wider adoption of explicit model construction in cognitive
neuroscience. Although we focus on fMRI, the principle illustrated here is
generally applicable to brain data of other modalities.
Comment: update with the version accepted by Neuropsychologi
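The modeling pattern the abstract describes, stating assumptions as a probabilistic model and then estimating latent quantities, can be illustrated with a toy hierarchical Gaussian model that shrinks noisy per-subject effect estimates toward a group mean. All numbers and the specific model are synthetic assumptions for illustration, not any method from the reviewed papers.

```python
# Toy example of explicit model construction: a hierarchical Gaussian
# model with empirical-Bayes-style shrinkage (all values are synthetic).
import numpy as np

rng = np.random.default_rng(0)

# Assumed generative model:
#   subject effect  theta_s ~ Normal(mu, tau^2)   (shared group mean mu)
#   observed data       y_s ~ Normal(theta_s, sigma_s^2)
tau2 = 1.0
n_subjects = 8
theta = rng.normal(0.5, np.sqrt(tau2), size=n_subjects)
sigma2 = rng.uniform(0.5, 2.0, size=n_subjects)   # per-subject noise levels
y = rng.normal(theta, np.sqrt(sigma2))

# Estimate the group mean, weighting subjects by their total variance.
mu_hat = np.average(y, weights=1.0 / (sigma2 + tau2))

# Posterior mean of each subject effect: noisy subjects shrink more
# toward the group mean, reducing the impact of noise on the estimates.
w = tau2 / (tau2 + sigma2)            # shrinkage weights in (0, 1)
theta_hat = w * y + (1 - w) * mu_hat
```

The structured assumption (subjects share a common group-level distribution) is what lets the model pool data across people instead of treating each subject in isolation.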