
    Generative models of brain connectivity for population studies

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 131-139).

    Connectivity analysis focuses on the interaction between brain regions. Such relationships inform us about patterns of neural communication and may enhance our understanding of neurological disorders. This thesis proposes a generative framework that uses anatomical and functional connectivity information to find impairments within a clinical population. Anatomical connectivity is measured via Diffusion Weighted Imaging (DWI), and functional connectivity is assessed using resting-state functional Magnetic Resonance Imaging (fMRI). We first develop a probabilistic model to merge information from DWI tractography and resting-state fMRI correlations. Our formulation captures the interaction between hidden templates of anatomical and functional connectivity within the brain. We also present an intuitive extension to population studies and demonstrate that our model learns predictive differences between a control and a schizophrenia population. Furthermore, combining the two modalities yields better results than considering each one in isolation. Although our joint model identifies widespread connectivity patterns influenced by a neurological disorder, the results are difficult to interpret and integrate with our region-centric knowledge of the brain. To alleviate this problem, we present a novel approach to identify regions associated with the disorder based on connectivity information. Specifically, we assume that impairments of the disorder localize to a small subset of brain regions, which we call disease foci, and affect neural communication to/from these regions. This allows us to aggregate pairwise connectivity changes into a region-based representation of the disease. Once again, we use a probabilistic formulation: latent variables specify a template organization of the brain, which we indirectly observe through resting-state fMRI correlations and DWI tractography. Our inference algorithm simultaneously identifies both the afflicted regions and the network of aberrant functional connectivity. Finally, we extend the region-based model to include multiple collections of foci, which we call disease clusters. Preliminary results suggest that as the number of clusters increases, the refined model explains progressively more of the functional differences between the populations.

    by Archana Venkataraman. Ph.D.
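    A minimal sketch of the region-aggregation idea in this abstract, assuming hypothetical NumPy arrays of per-subject correlation matrices; the thesis itself uses a latent-variable probabilistic model with joint inference over foci and aberrant connections, not this simple score.

```python
import numpy as np

def region_scores(ctrl_corr, clin_corr):
    """Aggregate pairwise functional-connectivity differences into a single
    score per region (a crude stand-in for the 'disease focus' idea above).

    ctrl_corr, clin_corr : arrays of shape (n_subjects, n_regions, n_regions)
        Resting-state fMRI correlation matrices per group (hypothetical input).
    """
    # Fisher z-transform so correlations can be averaged across subjects.
    z_ctrl = np.arctanh(np.clip(ctrl_corr, -0.999, 0.999)).mean(axis=0)
    z_clin = np.arctanh(np.clip(clin_corr, -0.999, 0.999)).mean(axis=0)

    diff = np.abs(z_clin - z_ctrl)      # pairwise group differences
    np.fill_diagonal(diff, 0.0)
    return diff.mean(axis=1)            # aggregate over each region's connections

# Toy example: 20 subjects per group, 90 regions, synthetic correlations.
rng = np.random.default_rng(0)
def fake_group(shift=0.0):
    m = rng.normal(shift, 0.3, size=(20, 90, 90))
    return np.tanh((m + m.transpose(0, 2, 1)) / 2)   # symmetric, in (-1, 1)

scores = region_scores(fake_group(), fake_group(shift=0.1))
print("candidate foci:", np.argsort(scores)[::-1][:5])
```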

    A group model for stable multi-subject ICA on fMRI datasets

    Spatial Independent Component Analysis (ICA) is an increasingly used data-driven method to analyze functional Magnetic Resonance Imaging (fMRI) data. To date, it has been used to extract sets of mutually correlated brain regions without prior information on the time course of these regions. Some of these sets of regions, interpreted as functional networks, have recently been used to provide markers of brain diseases and open the road to paradigm-free population comparisons. Such group studies raise the question of modeling subject variability within ICA: how can the patterns representative of a group be modeled and estimated via ICA for reliable inter-group comparisons? In this paper, we propose a hierarchical model for patterns in multi-subject fMRI datasets, akin to mixed-effect group models used in linear-model-based analysis. We introduce an estimation procedure, CanICA (Canonical ICA), based on i) probabilistic dimension reduction of the individual data, ii) canonical correlation analysis to identify a data subspace common to the group, and iii) ICA-based pattern extraction. In addition, we introduce a procedure based on cross-validation to quantify the stability of ICA patterns at the level of the group. We compare our method with state-of-the-art multi-subject fMRI ICA methods and show that the features extracted using our procedure are more reproducible at the group level on two datasets of 12 healthy controls: a resting-state and a functional localizer study.
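    A rough sketch of the three-step procedure using scikit-learn, under simplifying assumptions: plain PCA stands in for the probabilistic dimension reduction and a second-level SVD stands in for the canonical correlation step, so this is not the reference CanICA implementation (which is distributed with the nilearn package).

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

def group_ica(subject_data, n_components=20):
    """Three-step sketch in the spirit of CanICA (not the reference code).

    subject_data : list of arrays, each of shape (n_timepoints, n_voxels)
    """
    # i) per-subject dimension reduction (plain PCA as a stand-in for
    #    the probabilistic dimension reduction used in the paper).
    reduced = [PCA(n_components=n_components).fit(X).components_
               for X in subject_data]                     # each (k, n_voxels)

    # ii) identify a subspace shared across subjects; a second-level SVD of
    #     the stacked subject subspaces approximates the CCA step.
    stacked = np.vstack(reduced)                          # (n_subj * k, n_voxels)
    _, _, vt = np.linalg.svd(stacked, full_matrices=False)
    group_subspace = vt[:n_components]                    # (k, n_voxels)

    # iii) spatial ICA on the group subspace to extract the patterns.
    ica = FastICA(n_components=n_components, max_iter=1000, random_state=0)
    return ica.fit_transform(group_subspace.T).T          # (k, n_voxels) maps

# Toy usage: 4 subjects, 100 "timepoints", 500 "voxels".
rng = np.random.default_rng(0)
maps = group_ica([rng.normal(size=(100, 500)) for _ in range(4)], n_components=10)
print(maps.shape)   # (10, 500)
```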

    Network Plasticity as Bayesian Inference

    General results from statistical learning theory suggest understanding not only brain computations, but also brain plasticity, as probabilistic inference. But a model for that has been missing. We propose that inherently stochastic features of synaptic plasticity and spine motility enable cortical networks of neurons to carry out probabilistic inference by sampling from a posterior distribution of network configurations. This model provides a viable alternative to existing models that propose convergence of parameters to maximum likelihood values. It explains how priors on weight distributions and connection probabilities can be merged optimally with learned experience, how cortical networks can generalize learned information so well to novel experiences, and how they can compensate continuously for unforeseen disturbances of the network. The resulting new theory of network plasticity explains, from a functional perspective, a number of experimental data on stochastic aspects of synaptic plasticity that previously appeared to be quite puzzling.

    Comment: 33 pages, 5 figures; the supplement is available on the author's web page http://www.igi.tugraz.at/kappe
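    A minimal sketch of the core idea, assuming a Langevin-style update as the stochastic plasticity rule and a hypothetical log-likelihood gradient standing in for activity-dependent learning; the paper's spiking-network formulation is considerably richer than this toy.

```python
import numpy as np

def synaptic_sampling(grad_log_lik, n_syn=100, steps=5000, eta=1e-3, temp=1.0):
    """Langevin-style stochastic update of synaptic parameters theta.

    Each step follows the gradient of log prior + log likelihood plus
    Gaussian noise, so theta samples from a (tempered) posterior over
    network configurations instead of converging to a single
    maximum-likelihood point.

    grad_log_lik : callable mapping theta -> gradient of the data
                   log-likelihood (hypothetical stand-in for plasticity).
    """
    rng = np.random.default_rng(0)
    theta = rng.normal(0.0, 1.0, size=n_syn)
    for _ in range(steps):
        grad_log_prior = -theta          # Gaussian prior N(0, 1) on parameters
        drift = grad_log_prior + grad_log_lik(theta)
        noise = rng.normal(size=n_syn)
        theta = theta + eta * drift + np.sqrt(2.0 * eta * temp) * noise
    return theta

# Toy likelihood whose gradient pulls parameters toward 0.5.
samples = synaptic_sampling(lambda th: -(th - 0.5))
print(samples.mean())   # settles roughly between the prior and likelihood optima
```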

    Metrics for Graph Comparison: A Practitioner's Guide

    Comparison of graph structure is a ubiquitous task in data analysis and machine learning, with diverse applications in fields such as neuroscience, cyber security, social network analysis, and bioinformatics, among others. Discovery and comparison of structures such as modular communities, rich clubs, hubs, and trees in data in these fields yields insight into the generative mechanisms and functional properties of the graph. Often, two graphs are compared via a pairwise distance measure, with a small distance indicating structural similarity and vice versa. Common choices include spectral distances (also known as λ distances) and distances based on node affinities. However, there has as of yet been no comparative study of the efficacy of these distance measures in discerning between common graph topologies and different structural scales. In this work, we compare commonly used graph metrics and distance measures, and demonstrate their ability to discern between common topological features found in both random graph models and empirical datasets. We put forward a multi-scale picture of graph structure, in which the effect of global and local structure upon the distance measures is considered. We make recommendations on the applicability of different distance measures to empirical graph data problems based on this multi-scale view. Finally, we introduce the Python library NetComp, which implements the graph distances used in this work.
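    A small illustration of a spectral (λ) distance computed directly with NumPy, comparing the largest Laplacian eigenvalues of two adjacency matrices; the NetComp library mentioned above packages distances of this kind, but the sketch below avoids assuming its exact API.

```python
import numpy as np

def lambda_distance(A1, A2, k=10):
    """Spectral ('lambda') distance between two undirected graphs.

    Compares the k largest eigenvalues of the combinatorial Laplacians
    of adjacency matrices A1 and A2 with a Euclidean norm.
    """
    def laplacian_spectrum(A):
        L = np.diag(A.sum(axis=1)) - A          # L = D - A
        return np.sort(np.linalg.eigvalsh(L))[::-1]

    s1, s2 = laplacian_spectrum(A1), laplacian_spectrum(A2)
    k = min(k, len(s1), len(s2))
    return float(np.linalg.norm(s1[:k] - s2[:k]))

# Example: ring graph vs. star graph on 10 nodes.
n = 10
ring = np.zeros((n, n))
for i in range(n):
    ring[i, (i + 1) % n] = ring[(i + 1) % n, i] = 1
star = np.zeros((n, n))
star[0, 1:] = star[1:, 0] = 1
print(lambda_distance(ring, star))
```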

    Nonparametric Bayes Modeling of Populations of Networks

    Replicated network data are increasingly available in many research fields. In connectomic applications, inter-connections among brain regions are collected for each patient under study, motivating statistical models which can flexibly characterize the probabilistic generative mechanism underlying these network-valued data. Available models for a single network are not designed specifically for inference on the entire probability mass function of a network-valued random variable and therefore lack flexibility in characterizing the distribution of relevant topological structures. We propose a flexible Bayesian nonparametric approach for modeling the population distribution of network-valued data. The joint distribution of the edges is defined via a mixture model which reduces dimensionality and efficiently incorporates network information within each mixture component by leveraging latent space representations. The formulation leads to an efficient Gibbs sampler and provides simple and coherent strategies for inference and goodness-of-fit assessments. We provide theoretical results on the flexibility of our model and illustrate improved performance, compared to state-of-the-art models, in simulations and in an application to human brain networks.
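    A toy forward simulation of the generative mechanism sketched in the abstract, under assumed forms for the mixture and the latent space (logistic of an inner product of latent node positions plus a baseline); the paper's Gibbs sampler for posterior inference is not reproduced here.

```python
import numpy as np

def simulate_population(n_networks=50, n_nodes=20, n_components=3,
                        latent_dim=2, seed=0):
    """Forward-simulate network-valued data from a mixture of latent-space
    edge-probability models (a toy version of the generative mechanism)."""
    rng = np.random.default_rng(seed)
    weights = rng.dirichlet(np.ones(n_components))        # mixture weights
    # One set of latent node positions and one baseline per component.
    positions = rng.normal(0.0, 1.0, size=(n_components, n_nodes, latent_dim))
    baselines = rng.normal(-1.0, 0.5, size=n_components)

    networks = []
    for _ in range(n_networks):
        h = rng.choice(n_components, p=weights)           # component label
        logits = baselines[h] + positions[h] @ positions[h].T
        probs = 1.0 / (1.0 + np.exp(-logits))             # edge probabilities
        A = (rng.random((n_nodes, n_nodes)) < probs).astype(int)
        A = np.triu(A, 1)
        networks.append(A + A.T)                          # symmetric, no self-loops
    return networks

nets = simulate_population()
print(len(nets), nets[0].shape)
```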