The shuffle estimator for explainable variance in fMRI experiments
In computational neuroscience, it is important to accurately estimate the
proportion of signal variance in the total variance of neural activity
measurements. This explainable variance measure helps neuroscientists assess
the adequacy of predictive models that describe how images are encoded in the
brain. The estimation problem is complicated by strong noise correlations,
which may confound the neural responses corresponding to the stimuli. If not
properly taken into account, these correlations can inflate the explainable
variance estimates and suggest misleadingly high prediction accuracies. We propose a novel
method to estimate the explainable variance in functional MRI (fMRI) brain
activity measurements when there are strong correlations in the noise. Our
shuffle estimator is nonparametric, unbiased, and built on a random-effects
model that reflects the randomization in the fMRI data collection process.
Leveraging symmetries in the measurements, our estimator is obtained by
appropriately permuting the measurement vector in such a way that the noise
covariance structure is intact but the explainable variance is changed after
the permutation. This difference is then used to estimate the explainable
variance. We validate the properties of the proposed method in simulation
experiments. For the image-fMRI data, we show that the shuffle estimates can
explain the variation in prediction accuracy for voxels within the primary
visual cortex (V1) better than alternative parametric methods.
Comment: Published in the Annals of Applied Statistics
(http://www.imstat.org/aoas/) at http://dx.doi.org/10.1214/13-AOAS681 by the
Institute of Mathematical Statistics (http://www.imstat.org).
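The permutation idea behind the abstract can be illustrated with a deliberately simplified toy. This is not the authors' estimator: here the noise is i.i.d., whereas the paper's shuffle estimator targets correlated noise and permutes the measurements so that the noise covariance is preserved. All names, sizes, and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_stimuli = 20, 200
signal = rng.normal(size=n_stimuli)                       # stimulus-driven component
data = signal + rng.normal(size=(n_trials, n_stimuli))    # trials x stimuli

def permutation_signal_variance(data, n_perm=200, rng=rng):
    # Variance across stimuli of the trial-averaged response:
    # contains the signal variance plus residual noise variance / n_trials.
    observed = data.mean(axis=0).var()
    # Shuffle stimulus labels independently within each trial: this destroys
    # the alignment between trials (the signal) while keeping each trial's
    # marginal noise level, giving a null level to subtract off.
    null = np.mean([
        np.array([rng.permutation(row) for row in data]).mean(axis=0).var()
        for _ in range(n_perm)
    ])
    return observed - null                                # noise-corrected estimate

est = permutation_signal_variance(data)
```

With enough trials the subtraction removes most of the noise contribution, so `est` tracks the variance of `signal`; the paper's contribution is choosing permutations that make this subtraction valid even when the noise is strongly correlated.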
Machine Learning for Neuroimaging with Scikit-Learn
Statistical machine learning methods are increasingly used for neuroimaging
data analysis. Their main virtue is their ability to model high-dimensional
datasets, e.g. multivariate analysis of activation images or resting-state time
series. Supervised learning is typically used in decoding or encoding settings
to relate brain images to behavioral or clinical observations, while
unsupervised learning can uncover hidden structures in sets of images (e.g.
resting state functional MRI) or find sub-populations in large cohorts. By
considering different functional neuroimaging applications, we illustrate how
scikit-learn, a Python machine learning library, can be used to perform some
key analysis steps. Scikit-learn contains a very large set of statistical
learning algorithms, both supervised and unsupervised, and its application to
neuroimaging data provides a versatile tool to study the brain.
Comment: Frontiers in Neuroscience, Frontiers Research Foundation, 2013, pp.1
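A minimal decoding setting of the kind the abstract describes can be sketched with scikit-learn directly. The data here are synthetic stand-ins for brain maps (the voxel counts, effect size, and classifier choice are assumptions, not taken from the paper):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_samples, n_voxels = 100, 500                  # synthetic "activation maps"
y = rng.integers(0, 2, size=n_samples)          # two experimental conditions
X = rng.normal(size=(n_samples, n_voxels))
X[:, :10] += 1.5 * y[:, None]                   # 10 condition-sensitive voxels

# Decoding: predict the condition label from the (high-dimensional) image.
decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(decoder, X, y, cv=5)   # 5-fold decoding accuracy
```

Cross-validated accuracy well above chance (0.5) indicates that the images carry information about the conditions, which is the basic logic of the decoding analyses the paper walks through.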
Review of Unbiased FIR Filters, Smoothers, and Predictors for Polynomial Signals
Extracting an estimate of a slowly varying signal corrupted by noise is a common task; examples can be found in industrial, scientific, and biomedical instrumentation. Depending on the application, the signal estimate may be allowed to lag the original signal or, at the other extreme, no delay is tolerated. These cases are commonly referred to as filtering, prediction, and smoothing, depending on the amount of advance or lag between the input data set and the output data set. In this review paper we provide a comprehensive set of design and analysis tools for unbiased FIR filters, predictors, and smoothers for slowly varying signals, i.e. signals that can be modeled by low-order polynomials. Explicit expressions for the parameters needed in practical implementations are given. Real-life examples are provided, including cases where the method is extended to signals that are piecewise slowly varying. A critical view of recursive implementations of the algorithms is also provided.
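One standard way to obtain such unbiased FIR gains, sketched here as an illustration rather than as the paper's specific derivation, is a least-squares polynomial fit over the window followed by evaluation at the desired lag; by construction the resulting taps reproduce any polynomial of degree at most p exactly, which is precisely the unbiasedness condition. The function name and interface are hypothetical.

```python
import numpy as np

def unbiased_fir(N, p, d=0):
    """Gains of an N-tap FIR estimator unbiased for degree-<=p polynomials.

    d is the estimation lag relative to the newest sample:
    d = 0 gives a filter, d > 0 a smoother, d < 0 a predictor.
    """
    t = np.arange(N)                          # 0 = oldest, N-1 = newest sample
    V = np.vander(t, p + 1, increasing=True)  # polynomial basis on the window
    fit = np.linalg.pinv(V)                   # least-squares fit operator
    # Evaluate the fitted polynomial at the requested time index.
    return float(N - 1 - d) ** np.arange(p + 1) @ fit

# Unbiasedness check: a degree-2 signal is reproduced exactly.
t = np.arange(9)
x = 0.5 * t**2 - 2.0 * t + 1.0
h = unbiased_fir(9, 2)          # zero-lag filter taps
```

The same routine yields a smoother (`d = (N-1)/2` for a centered estimate) or a one-step predictor (`d = -1`) just by moving the evaluation point, which mirrors the filter/smoother/predictor trichotomy discussed in the paper.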
A comparison of methods for gravitational wave burst searches from LIGO and Virgo
The search procedure for burst gravitational waves has been studied using 24
hours of simulated data in a network of three interferometers (the Hanford
4-km, Livingston 4-km, and Virgo 3-km interferometers). Several
methods to detect burst events developed in the LIGO Scientific Collaboration
(LSC) and Virgo collaboration have been studied and compared. We have performed
coincidence analysis of the triggers obtained in the different interferometers
with and without simulated signals added to the data. The benefits of having
multiple interferometers of similar sensitivity are demonstrated by comparing
the detection performance of the joint coincidence analysis with LSC and Virgo
only burst searches. Adding Virgo to the LIGO detector network can increase
the detection efficiency for this search by 50%. Another advantage of a joint
LIGO-Virgo network is the ability to reconstruct the source sky position. The
reconstruction accuracy depends on the timing measurement accuracy of the
events in each interferometer, and is displayed in this paper with a fixed
source position example.
Comment: LIGO-Virgo working group, submitted to PR
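The coincidence step the abstract mentions, pairing triggers from different interferometers that fall within a common time window, can be sketched as a simple two-pointer sweep over sorted trigger times. This is an illustrative simplification: the actual pipelines also compare trigger frequency and amplitude, and the function name and window handling here are assumptions.

```python
import numpy as np

def coincident_triggers(times_a, times_b, window):
    """Return (t_a, t_b) pairs with |t_a - t_b| <= window (seconds)."""
    pairs = []
    times_b = np.sort(times_b)
    j = 0
    for ta in np.sort(times_a):
        # Advance past triggers in b that are too early to ever match again.
        while j < len(times_b) and times_b[j] < ta - window:
            j += 1
        # Collect every b-trigger inside the coincidence window around ta.
        k = j
        while k < len(times_b) and times_b[k] <= ta + window:
            pairs.append((ta, times_b[k]))
            k += 1
    return pairs

pairs = coincident_triggers([0.0, 10.0], [9.2, 0.005], window=0.01)
```

Running the same pairing on time-shifted data (sliding one detector's triggers by an offset larger than the light-travel time) is the usual way to estimate the accidental-coincidence background.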
Two Procedures for Robust Monitoring of Probability Distributions of Economic Data Streams induced by Depth Functions
Data streams (streaming data) consist of transiently observed, time-evolving,
multidimensional data sequences that challenge our computational and/or
inferential capabilities. In this paper we propose user-friendly approaches for
robust monitoring of selected properties of the unconditional and conditional
distribution of the stream, based on depth functions. Our proposals are robust
to a small fraction of outliers and/or inliers while remaining sensitive to a
regime change in the stream. Their implementations are available in our free
R package DepthProc.
Comment: Operations Research and Decisions, vol. 25, No. 1, 201
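The depth-based monitoring idea can be illustrated with a small Python sketch (the paper's implementations are in the R package DepthProc, and use more robust depth notions than the simple Mahalanobis depth assumed here): incoming batches are scored by their depth with respect to an in-control reference sample, and a sharp drop in typical depth flags a regime change.

```python
import numpy as np

def mahalanobis_depth(points, ref):
    """Depth of each point w.r.t. a reference sample: 1 / (1 + d_M^2)."""
    mu = ref.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(ref, rowvar=False))
    diff = np.atleast_2d(points) - mu
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)  # squared Mahalanobis distance
    return 1.0 / (1.0 + d2)

rng = np.random.default_rng(0)
ref = rng.normal(size=(500, 2))                  # in-control reference sample
stream_ok = rng.normal(size=(50, 2))             # batch from the same regime
stream_shift = rng.normal(size=(50, 2)) + 4.0    # batch after a mean shift

# Monitor the median depth of each incoming batch; a sharp drop flags a change.
ok_depth = np.median(mahalanobis_depth(stream_ok, ref))
shift_depth = np.median(mahalanobis_depth(stream_shift, ref))
```

Using the median of the batch depths rather than the mean is what gives the monitor its tolerance to a small fraction of outliers while keeping it sensitive to a shift of the whole batch.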