
    A Comparative Study of Algorithms for Intra- and Inter-subjects fMRI Decoding

    Functional Magnetic Resonance Imaging (fMRI) provides a unique opportunity to study brain functional architecture while being minimally invasive. Reverse inference, a.k.a. decoding, is a recent statistical analysis approach that has been used successfully to decipher activity patterns thought to fit the neuroscientific concept of population coding. Decoding relies on the selection of brain regions in which the observed activity is predictive of certain cognitive tasks. The accuracy of such a procedure is quantified by the prediction of the behavioral variable of interest - the target. In this paper, we discuss the optimality of decoding methods in two different settings, namely intra- and inter-subject decoding. While inter-subject prediction aims at finding predictive regions that are stable across subjects, it is plagued by additional inter-subject variability (lack of voxel-to-voxel correspondence), so that the best-suited prediction algorithms used in reverse inference may not be the same in both cases. We benchmark different prediction algorithms in both intra- and inter-subject analyses, and we show that using spatial regularization improves reverse inference in the challenging context of inter-subject prediction. Moreover, we also study the maps of weights, and show that methods with similar accuracy may yield maps with very different spatial layouts of the predictive regions.
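The inter-subject setting described above can be sketched with a leave-one-subject-out benchmark: train a decoder on all but one subject and test on the held-out one. The following is a minimal illustration on synthetic data (subject counts, voxel counts, and the injected signal are all hypothetical, not taken from the paper).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Hypothetical data: 6 subjects x 20 trials each, 50 voxels per trial.
rng = np.random.default_rng(0)
n_subjects, n_trials, n_voxels = 6, 20, 50
X = rng.normal(size=(n_subjects * n_trials, n_voxels))
y = rng.integers(0, 2, size=n_subjects * n_trials)   # binary cognitive target
groups = np.repeat(np.arange(n_subjects), n_trials)  # subject labels

# Inject a weak signal on a few voxels so decoding is above chance.
X[y == 1, :5] += 0.8

# Inter-subject decoding: train on n-1 subjects, test on the held-out one.
inter_scores = cross_val_score(
    LogisticRegression(max_iter=1000), X, y,
    groups=groups, cv=LeaveOneGroupOut())
print(f"inter-subject accuracy: {inter_scores.mean():.2f}")
```

The same scaffold, with `cv` restricted to one subject's trials, gives the intra-subject counterpart, so the two settings can be compared on identical data.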

    Movies and meaning: from low-level features to mind reading

    When dealing with movies, closing the tremendous discontinuity between low-level features and the richness of semantics in the viewers' cognitive processes requires a variety of approaches and different perspectives. For instance, when attempting to relate movie content to users' affective responses, previous work suggests that a direct mapping of audio-visual properties onto elicited emotions is difficult, due to the high variability of individual reactions. To reduce the gap between the objective level of features and the subjective sphere of emotions, we exploit the intermediate representation of the connotative properties of movies: the set of shooting and editing conventions that help in transmitting meaning to the audience. One of these stylistic features, the shot scale, i.e. the distance of the camera from the subject, effectively regulates theory of mind, indicating that increasing spatial proximity to the character triggers a higher occurrence of mental state references in viewers' story descriptions. Movies are also becoming an important stimulus employed in neural decoding, an ambitious line of research within contemporary neuroscience aiming at "mindreading". In this field we address the challenge of producing decoding models for the reconstruction of perceptual contents by combining fMRI data and deep features in a hybrid model able to predict specific video object classes.
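A common way to combine fMRI data with deep features, in the spirit of the hybrid model mentioned above, is to regress the deep-feature representation from brain activity and then assign the class whose average feature vector is nearest. This is a generic sketch on synthetic data, not the paper's actual model; all sizes, the encoding matrix, and the noise levels are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical setup: each video class has a deep-feature signature, and
# fMRI patterns are a noisy linear mixture of those features.
rng = np.random.default_rng(1)
n_train, n_test, n_voxels, n_feat, n_classes = 80, 20, 100, 16, 4

classes_train = rng.integers(0, n_classes, n_train)
classes_test = rng.integers(0, n_classes, n_test)
class_feats = rng.normal(size=(n_classes, n_feat))   # per-class deep features
F_train = class_feats[classes_train] + 0.1 * rng.normal(size=(n_train, n_feat))
W = rng.normal(size=(n_feat, n_voxels))              # hypothetical encoding
X_train = F_train @ W + 0.5 * rng.normal(size=(n_train, n_voxels))
X_test = class_feats[classes_test] @ W + 0.5 * rng.normal(size=(n_test, n_voxels))

# Decode: regress deep features from fMRI, then pick the nearest class centroid.
model = Ridge(alpha=1.0).fit(X_train, F_train)
F_pred = model.predict(X_test)
pred = np.argmin(
    ((F_pred[:, None, :] - class_feats[None, :, :]) ** 2).sum(-1), axis=1)
acc = (pred == classes_test).mean()
```

Predicting in feature space rather than label space is what lets such models generalize to object classes never shown during training, since any class with a known feature signature can serve as a candidate centroid.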

    Does higher sampling rate (multiband + SENSE) improve group statistics - An example from social neuroscience block design at 3T

    Multiband (MB) or simultaneous multi-slice (SMS) acquisition schemes allow the acquisition of MRI signals from more than one spatial coordinate at a time. Commercial availability has brought this technique within the reach of many neuroscientists and psychologists. Most early evaluations of the performance of MB acquisition employed resting state fMRI or the most basic tasks. In this study, we tested whether the advantages of using MB acquisition schemes generalize to group analyses using a cognitive task more representative of typical cognitive neuroscience applications. Twenty-three subjects were scanned on a Philips 3T scanner using five sequences, up to eight-fold acceleration with MB-factors 1 to 4, SENSE factors up to 2, and corresponding TRs of 2.45s down to 0.63s, while they viewed (i) movie blocks showing complex actions with hand-object interactions and (ii) control movie blocks without hand-object interaction. Data were processed using a widely used analysis pipeline implemented in SPM12, including unified segmentation and canonical HRF modelling. Using random-effects group-level, voxel-wise analysis, we found that all sequences were able to detect the basic action observation network known to be recruited by our task. The highest t-values were found for sequences with MB4 acceleration. For the MB1 sequence, a 50% bigger voxel volume was needed to reach comparable t-statistics. The group-level t-values for resting state networks (RSNs) were also highest for MB4 sequences. Here the MB1 sequence with larger voxel size did not perform comparably to the MB4 sequence. Altogether, we can thus recommend the use of MB4 (and SENSE 1.5 or 2) on a Philips scanner when aiming to perform group-level analyses using cognitive block-design fMRI tasks and voxel sizes in the range of cortical thickness (e.g. 2.7 mm isotropic). While results will not be dramatically changed by the use of multiband, our results suggest that MB will bring a moderate but significant benefit.
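The random-effects, voxel-wise group analysis at the heart of this comparison reduces, per voxel, to a one-sample t-test of subject-level contrast values against zero. A minimal numpy sketch, with an invented number of voxels and an invented effect size (only the subject count matches the study):

```python
import numpy as np

# Sketch of a random-effects group analysis: one contrast value per
# subject and voxel; sizes and effect magnitude are illustrative.
rng = np.random.default_rng(2)
n_subjects, n_voxels = 23, 1000
contrasts = rng.normal(size=(n_subjects, n_voxels))
contrasts[:, :100] += 0.8          # voxels with a true group effect

# One-sample t-test against zero at every voxel.
mean = contrasts.mean(axis=0)
sem = contrasts.std(axis=0, ddof=1) / np.sqrt(n_subjects)
t_values = mean / sem
```

Comparing sequences then amounts to comparing these t-maps: a sequence with lower noise per unit time yields larger t-values for the same underlying effect.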

    Scalable Machine Learning Methods for Massive Biomedical Data Analysis.

    Modern data acquisition techniques have enabled biomedical researchers to collect and analyze datasets of substantial size and complexity. The massive size of these datasets allows us to comprehensively study the biological system of interest at an unprecedented level of detail, which may lead to the discovery of clinically relevant biomarkers. Nonetheless, the dimensionality of these datasets presents critical computational and statistical challenges, as traditional statistical methods break down when the number of predictors dominates the number of observations, a setting frequently encountered in biomedical data analysis. This difficulty is compounded by the fact that biological data tend to be noisy and often possess complex correlation patterns among the predictors. The central goal of this dissertation is to develop a computationally tractable machine learning framework that allows us to extract scientifically meaningful information from these massive and highly complex biomedical datasets. We motivate the scope of our study by considering two important problems with clinical relevance: (1) uncertainty analysis for biomedical image registration, and (2) psychiatric disease prediction based on functional connectomes, which are high-dimensional correlation maps generated from resting state functional MRI. (PhD dissertation, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies; http://deepblue.lib.umich.edu/bitstream/2027.42/111354/1/takanori_1.pd)
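The predictors-dominate-observations regime described above is typically handled with sparsity-inducing regularization, which fits a model while zeroing out most coefficients. A generic L1-penalized sketch on synthetic connectome-like data (edge counts, subject counts, and the regularization strength are illustrative, not from the dissertation):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical p >> n setting: 60 subjects, 5000 connectome edges,
# of which only 20 carry disease-related signal.
rng = np.random.default_rng(3)
n_subjects, n_edges = 60, 5000
X = rng.normal(size=(n_subjects, n_edges))
y = rng.integers(0, 2, n_subjects)
X[y == 1, :20] += 1.0              # the truly predictive edges

# L1 penalty shrinks most edge weights exactly to zero.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
n_selected = np.count_nonzero(clf.coef_)
```

The sparsity pattern doubles as an interpretable output: the surviving nonzero edges are candidate biomarkers, which ordinary unregularized regression (which cannot even be fit here) would not provide.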

    A Rapid Segmentation-Insensitive "Digital Biopsy" Method for Radiomic Feature Extraction: Method and Pilot Study Using CT Images of Non-Small Cell Lung Cancer.

    Quantitative imaging approaches compute features within images' regions of interest. Segmentation is rarely completely automatic, requiring time-consuming editing by experts. We propose a new paradigm, called "digital biopsy," that allows for the collection of intensity- and texture-based features from these regions at least 1 order of magnitude faster than the current manual or semiautomated methods. A radiologist reviewed automated segmentations of lung nodules from 100 preoperative volume computed tomography scans of patients with non-small cell lung cancer, and manually adjusted the nodule boundaries in each section, to be used as a reference standard, requiring up to 45 minutes per nodule. We also asked a different expert to generate a digital biopsy for each patient using a paintbrush tool to paint a contiguous region of each tumor over multiple cross-sections, a procedure that required an average of <3 minutes per nodule. We simulated additional digital biopsies using morphological procedures. Finally, we compared the features extracted from these digital biopsies with our reference standard using the intraclass correlation coefficient (ICC) to characterize robustness. Comparing the reference standard segmentations to our digital biopsies, we found that 84/94 features had an ICC >0.7; comparing erosions and dilations, using a sphere of 1.5-mm radius, of our digital biopsies to the reference standard segmentations resulted in 41/94 and 53/94 features, respectively, with ICCs >0.7. We conclude that many intensity- and texture-based features remain consistent between the reference standard and our method while substantially reducing the amount of operator time required.
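The robustness criterion above, ICC > 0.7 between the reference segmentation and the digital biopsy, can be computed per feature with a standard two-way random-effects ICC(2,1) (Shrout-Fleiss). The formula below is the standard one; the data are synthetic (the abstract does not specify which ICC variant was used, so this is an assumption).

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random-effects ICC(2,1); ratings has shape (n_targets, k_raters)."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-nodule means
    col_means = ratings.mean(axis=0)   # per-segmentation means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Synthetic example: one feature measured on 100 nodules by two methods.
rng = np.random.default_rng(4)
true_vals = rng.normal(size=100)
reference = true_vals + 0.1 * rng.normal(size=100)
biopsy = true_vals + 0.1 * rng.normal(size=100)
icc = icc_2_1(np.column_stack([reference, biopsy]))
```

Running this per radiomic feature and counting how many clear the 0.7 threshold reproduces the style of comparison reported in the abstract (e.g. the 84/94 figure).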

    A Comprehensive Analysis of Multilayer Community Detection Algorithms for Application to EEG-Based Brain Networks

    Modular organization is an emergent property of brain networks, responsible for shaping communication processes and underpinning brain functioning. Moreover, brain networks are intrinsically multilayer, since their attributes can vary across time, subjects, frequency, or other domains. Identifying the modular structure in multilayer brain networks represents a gateway toward a deeper understanding of the neural processes underlying cognition. Electroencephalographic (EEG) signals, thanks to their high temporal resolution, can give rise to multilayer networks able to follow the dynamics of brain activity. Despite this potential, community organization has not yet been thoroughly investigated in brain networks estimated from EEG. Furthermore, there is still no agreement about which algorithm is most suitable for detecting communities in multilayer brain networks, and a way to test and compare them all under a variety of conditions is lacking. In this work, we perform a comprehensive analysis of three state-of-the-art algorithms for multilayer community detection (namely, genLouvain, DynMoga, and FacetNet) as compared with an approach based on the application of a single-layer clustering algorithm to each slice of the multilayer network. We test their ability to identify both steady and dynamic modular structures. We statistically evaluate their performances by means of ad hoc benchmark graphs characterized by properties covering a broad range of conditions in terms of graph density, number of clusters, noise level, and number of layers. The results of this simulation study aim to provide guidelines about the choice of the most appropriate algorithm according to the different properties of the brain network under examination. Finally, as a proof of concept, we show an application of the algorithms to real functional brain networks derived from EEG signals collected at rest with closed and open eyes. The test on real data provided results in agreement with the conclusions of the simulation study and confirmed the feasibility of multilayer analysis of EEG-based brain networks in both steady and dynamic conditions.
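The single-layer baseline the three multilayer algorithms are compared against, running an ordinary community detector independently on each slice, can be sketched with networkx. The planted-partition benchmark below stands in for the paper's ad hoc benchmark graphs; the layer count, block sizes, and edge probabilities are illustrative, and greedy modularity is used as a generic stand-in for the single-layer clustering step.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Illustrative multilayer benchmark: 3 layers, each a 2-block
# planted-partition graph with 15 nodes per block.
layers = [
    nx.planted_partition_graph(2, 15, p_in=0.8, p_out=0.05, seed=s)
    for s in range(3)
]

# Baseline approach: detect communities independently on every slice.
partitions = [greedy_modularity_communities(G) for G in layers]
```

Because each slice is clustered in isolation, community labels are not matched across layers; genuinely multilayer methods such as genLouvain avoid this by optimizing a single modularity function with inter-layer coupling, which is exactly the gap this comparison probes.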

    Neuroimaging Research: From Null-Hypothesis Falsification to Out-of-sample Generalization

    Brain imaging technology has boosted the quantification of neurobiological phenomena underlying human mental operations and their disturbances. Since its inception, drawing inference on neurophysiological effects has hinged on classical statistical methods, especially the general linear model. The tens of thousands of variables per brain scan were routinely tackled by independent statistical tests on each voxel. This circumvented the curse of dimensionality in exchange for neurobiologically imperfect observation units, a challenging multiple comparisons problem, and limited scaling to currently growing data repositories. Yet the ever-growing granularity of neuroimaging data repositories has launched a rapidly increasing adoption of statistical learning algorithms. These scale naturally to high-dimensional data, extract models from data rather than prespecifying them, and are empirically evaluated for extrapolation to unseen data. The present paper portrays commonalities and differences between long-standing classical inference and upcoming generalization inference relevant for conducting neuroimaging research.
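The two inference styles contrasted above can be placed side by side on the same data: a mass-univariate null-hypothesis test at every voxel versus a cross-validated check of out-of-sample generalization. All sizes and the injected effect in this sketch are hypothetical.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic data: 40 scans, 200 voxels, 10 truly modulated voxels.
rng = np.random.default_rng(5)
n, p = 40, 200
X = rng.normal(size=(n, p))
y = np.repeat([0, 1], n // 2)
X[y == 1, :10] += 0.9

# Classical inference: independent two-sample t-test at each voxel,
# with a Bonferroni correction for the multiple comparisons problem.
t, pvals = stats.ttest_ind(X[y == 1], X[y == 0], axis=0)
n_sig = int((pvals < 0.05 / p).sum())

# Generalization inference: does a model fit on some scans
# extrapolate to held-out scans?
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
```

The first approach asks whether each voxel's effect is nonzero in the sample at hand; the second asks whether the multivariate pattern predicts unseen data, which is the extrapolation criterion the paper highlights.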