    Construction of embedded fMRI resting state functional connectivity networks using manifold learning

    We construct embedded functional connectivity networks (FCN) from benchmark resting-state functional magnetic resonance imaging (rsfMRI) data acquired from patients with schizophrenia and healthy controls, using linear and nonlinear manifold learning algorithms, namely Multidimensional Scaling (MDS), Isometric Feature Mapping (ISOMAP), and Diffusion Maps. Furthermore, based on key global graph-theoretical properties of the embedded FCN, we compare their classification potential using machine learning techniques. We also assess the performance of two metrics that are widely used for the construction of FCN from fMRI, namely the Euclidean distance and the lagged cross-correlation metric. We show that the FCN constructed with Diffusion Maps and the lagged cross-correlation metric outperform the other combinations.
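
    A minimal sketch of this kind of pipeline (the simulated time series, lag window, and embedding dimension below are illustrative assumptions, not the authors' settings) could compute a lagged cross-correlation matrix from regional time series, convert it to a distance matrix, and embed it with scikit-learn's MDS; recent scikit-learn versions let Isomap take the same matrix via metric="precomputed":

        # Sketch: embed a functional connectivity matrix with manifold learning.
        import numpy as np
        from sklearn.manifold import MDS

        rng = np.random.default_rng(0)
        ts = rng.standard_normal((40, 200))   # stand-in for 40 regional BOLD series

        def lagged_crosscorr(ts, max_lag=3):
            """Max absolute cross-correlation over lags in [-max_lag, max_lag]."""
            z = (ts - ts.mean(1, keepdims=True)) / ts.std(1, keepdims=True)
            C = np.zeros((ts.shape[0],) * 2)
            for lag in range(-max_lag, max_lag + 1):
                a = z[:, lag:] if lag >= 0 else z[:, :lag]
                b = z[:, :z.shape[1] - lag] if lag >= 0 else z[:, -lag:]
                C = np.maximum(C, np.abs(a @ b.T) / a.shape[1])
            return C

        C = lagged_crosscorr(ts)
        D = np.sqrt(2.0 * (1.0 - np.clip(C, -1.0, 1.0)))   # correlation -> distance
        coords = MDS(n_components=3, dissimilarity="precomputed").fit_transform(D)
        # Thresholding pairwise distances between the embedded nodes yields the
        # embedded FCN whose global graph-theoretic properties feed the classifiers.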

    Mapping hybrid functional-structural connectivity traits in the human connectome

    One of the crucial questions in neuroscience is how the rich functional repertoire of brain states relates to the underlying structural organization. How to study the associations between these structural and functional layers is an open problem that requires novel conceptual ways of tackling the question. We propose an extension of the Connectivity Independent Component Analysis (connICA) framework to identify joint structural-functional connectivity traits. We extend connICA to integrate structural and functional connectomes by merging them into common hybrid connectivity patterns that represent the connectivity fingerprint of a subject. We test this extended approach on the 100 unrelated subjects from the Human Connectome Project. The method is able to extract the main independent structural-functional connectivity patterns from the entire cohort that are sensitive to the realization of different tasks. The hybrid connICA extracted two main task-sensitive hybrid traits: the first encompasses the within- and between-network connections of dorsal attentional and visual areas, as well as fronto-parietal circuits; the second mainly encompasses the connectivity between visual, attentional, DMN and subcortical networks. Overall, these findings confirm the potential of the hybrid connICA for the compression of structural/functional connectomes into integrated patterns from a set of individual brain networks. Comment: article: 34 pages, 4 figures; supplementary material: 5 pages, 5 figures.
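
    The hybrid-trait idea can be illustrated with a short sketch (the simulated connectomes, matrix sizes, and component count are arbitrary stand-ins, not the study's data or settings): vectorize each subject's functional and structural connectivity matrices, concatenate them edge-wise into one fingerprint per subject, and run ICA across subjects:

        # Sketch of a hybrid connICA-style decomposition on simulated connectomes.
        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        n_subj, n_rois = 100, 90                              # illustrative sizes
        fc = rng.standard_normal((n_subj, n_rois, n_rois))    # functional connectomes
        sc = rng.standard_normal((n_subj, n_rois, n_rois))    # structural connectomes
        iu = np.triu_indices(n_rois, k=1)

        def vec(conn):
            """Upper-triangle edges per subject, z-scored across subjects."""
            X = np.stack([m[iu] for m in conn])
            return (X - X.mean(0)) / X.std(0)

        X = np.hstack([vec(fc), vec(sc)])        # hybrid fingerprints: (subjects, 2*edges)
        ica = FastICA(n_components=10, random_state=0, max_iter=1000)
        weights = ica.fit_transform(X)           # per-subject weight of each trait
        traits = ica.components_                 # rows: hybrid traits; the first half of
        # each row is a functional edge pattern, the second half a structural one.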

    A group model for stable multi-subject ICA on fMRI datasets

    Spatial Independent Component Analysis (ICA) is an increasingly used data-driven method to analyze functional Magnetic Resonance Imaging (fMRI) data. To date, it has been used to extract sets of mutually correlated brain regions without prior information on the time course of these regions. Some of these sets of regions, interpreted as functional networks, have recently been used to provide markers of brain diseases and open the way to paradigm-free population comparisons. Such group studies raise the question of modeling subject variability within ICA: how can the patterns representative of a group be modeled and estimated via ICA for reliable inter-group comparisons? In this paper, we propose a hierarchical model for patterns in multi-subject fMRI datasets, akin to the mixed-effects group models used in linear-model-based analysis. We introduce an estimation procedure, CanICA (Canonical ICA), based on i) probabilistic dimension reduction of the individual data, ii) canonical correlation analysis to identify a data subspace common to the group, and iii) ICA-based pattern extraction. In addition, we introduce a procedure based on cross-validation to quantify the stability of ICA patterns at the level of the group. We compare our method with state-of-the-art multi-subject fMRI ICA methods and show that the features extracted using our procedure are more reproducible at the group level on two datasets of 12 healthy controls: a resting-state and a functional localizer study.
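
    CanICA is available as an estimator in the nilearn library; a minimal usage sketch (the public example dataset, component count, and smoothing value are illustrative choices rather than the paper's exact protocol) might look like:

        # Sketch: group ICA with nilearn's CanICA estimator.
        from nilearn.datasets import fetch_adhd
        from nilearn.decomposition import CanICA

        data = fetch_adhd(n_subjects=4)            # small public rsfMRI sample
        canica = CanICA(n_components=20,           # group-level patterns to extract
                        smoothing_fwhm=6.0,
                        random_state=0)
        canica.fit(data.func)                      # i) subject-level dimension reduction,
                                                   # ii) CCA for the common group subspace,
                                                   # iii) ICA-based pattern extraction
        components_img = canica.components_img_    # one spatial map per component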

    DTI denoising for data with low signal to noise ratios

    Low signal-to-noise ratio (SNR) experiments in diffusion tensor imaging (DTI) give key information about tracking and anisotropy, e.g., by measurements with small voxel sizes or with high b values. However, due to the complicated and dominating impact of thermal noise, such data are still seldom analysed. In this paper, Monte Carlo simulations are presented which investigate the distributions of noise for different DTI variables in low-SNR situations. Based on this study, a strategy for the application of spatial smoothing is derived. Optimal prerequisites for spatial filters are unbiased, bell-shaped distributions with uniform variance, but only a few variables have statistics close to that. To construct a convenient filter, a chain of nonlinear Gaussian filters is adapted to the peculiarities of DTI and a bias correction is introduced. This edge-preserving three-dimensional filter is then validated via a quasi-realistic model. Further, it is shown that for small sample sizes the filter is as effective as a maximum likelihood estimator and produces reliable results down to a local SNR of approximately 1. The filter is finally applied to very recent data with isotropic voxels of size 1×1×1 mm^3, which corresponds to a spatial mean SNR of 2.5. This application demonstrates the statistical robustness of the filter method. Though the Rician noise model is only approximately realized in the data, the gain of information by spatial smoothing is considerable.
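
    The edge-preserving idea behind such a filter chain can be sketched as an iterated nonlinear (bilateral-type) Gaussian pass over a 3D volume; the 26-neighbour stencil, sigma values, and random stand-in volume below are illustrative assumptions, not the calibrated, bias-corrected chain of the paper:

        # Sketch: one pass of an edge-preserving nonlinear Gaussian filter in 3D.
        import numpy as np

        def nonlinear_gaussian_pass(vol, sigma_space=1.0, sigma_int=0.1):
            """Weight each 26-neighbour by spatial distance and by intensity
            difference, so strong edges are smoothed less. Boundaries wrap."""
            num = np.zeros_like(vol)
            den = np.zeros_like(vol)
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        nb = np.roll(vol, (dx, dy, dz), axis=(0, 1, 2))
                        w = (np.exp(-(dx*dx + dy*dy + dz*dz) / (2 * sigma_space**2))
                             * np.exp(-((nb - vol) ** 2) / (2 * sigma_int**2)))
                        num += w * nb
                        den += w
            return num / den

        vol = np.random.default_rng(0).random((32, 32, 32))  # stand-in for a DTI map
        smoothed = vol
        for _ in range(3):                                   # a short chain of passes
            smoothed = nonlinear_gaussian_pass(smoothed)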

    Challenges of Big Data Analysis

    Big Data bring new opportunities to modern society and new challenges to data scientists. On the one hand, Big Data hold great promise for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinctive and require new computational and statistical paradigms. This article gives an overview of the salient features of Big Data and of how these features drive a paradigm change in statistical and computational methods as well as computing architectures. We also provide various new perspectives on Big Data analysis and computation. In particular, we emphasize the viability of the sparsest solution in high-confidence sets and point out that the exogeneity assumptions in most statistical methods for Big Data cannot be validated due to incidental endogeneity; they can lead to wrong statistical inferences and, consequently, wrong scientific conclusions.
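
    The spurious-correlation point lends itself to a quick numerical illustration (the sample size and dimensionalities below are arbitrary choices): with many independent features and few observations, the largest sample correlation with a pure-noise response can look deceptively large, and it grows with dimensionality:

        # Sketch: spurious correlation grows with dimensionality.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 60                                     # few observations
        y = rng.standard_normal(n)                 # response: pure noise
        yc = (y - y.mean()) / y.std()
        for p in (100, 1_000, 10_000):             # growing number of features
            X = rng.standard_normal((n, p))        # features independent of y
            Xc = (X - X.mean(0)) / X.std(0)
            corr = np.abs(Xc.T @ yc) / n           # sample correlations with y
            print(p, round(corr.max(), 3))         # max spurious correlation increases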