
    Markov models for fMRI correlation structure: is brain functional connectivity small world, or decomposable into networks?

    Correlations in the signal observed via functional Magnetic Resonance Imaging (fMRI) are expected to reveal the interactions in the underlying neural populations through the hemodynamic response. In particular, they highlight distributed sets of mutually correlated regions that correspond to brain networks related to different cognitive functions. Yet graph-theoretical studies of neural connections give a different picture: that of a highly integrated system with small-world properties, i.e. local clustering but short pathways across the complete structure. We examine the conditional independence properties of the fMRI signal, i.e. its Markov structure, to find realistic assumptions on the connectivity structure that are required to explain the observed functional connectivity. In particular, we seek a decomposition of the Markov structure into segregated functional networks using decomposable graphs: a set of strongly connected and partially overlapping cliques. We introduce a new method to efficiently extract such cliques from a large, strongly connected graph. We compare methods that learn different graph structures from functional connectivity by testing the goodness of fit of the learned models on new data. We find that summarizing the structure as strongly connected networks gives a good description only for very large and overlapping networks. These results highlight that Markov models are good tools for identifying the structure of brain connectivity from fMRI signals, but for this purpose they must reflect the small-world properties of the underlying neural systems.
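    The Markov structure the abstract refers to is the pattern of conditional independences, which for Gaussian signals is encoded in the zeros of the precision (inverse covariance) matrix. A minimal numpy sketch (a toy illustration, not the authors' clique-extraction method): in a chain X0 -> X1 -> X2, X0 and X2 are marginally correlated but conditionally independent given X1, and the partial correlation recovers this.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy chain X0 -> X1 -> X2: X0 and X2 are correlated, but
# conditionally independent given X1 (the Markov structure).
n = 100_000
x0 = rng.normal(size=n)
x1 = 0.8 * x0 + rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(size=n)
data = np.column_stack([x0, x1, x2])

prec = np.linalg.inv(np.cov(data, rowvar=False))
# Off-diagonal partial correlations: rho_ij = -prec_ij / sqrt(prec_ii * prec_jj)
d = np.sqrt(np.diag(prec))
partial = -prec / np.outer(d, d)

# Marginal correlation of X0 and X2 is sizable, but their partial
# correlation given X1 is near zero: the graph is the chain 0-1-2.
```

    The same zero-pattern reading of the precision matrix underlies the model comparisons described above, at the much larger scale of fMRI region networks.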

    Generalized Network Psychometrics: Combining Network and Latent Variable Models

    We introduce the network model as a formal psychometric model, conceptualizing the covariance between psychometric indicators as resulting from pairwise interactions between observable variables in a network structure. This contrasts with standard psychometric models, in which the covariance between test items arises from the influence of one or more common latent variables. Here, we present two generalizations of the network model that encompass latent variable structures, establishing network modeling as part of the more general framework of Structural Equation Modeling (SEM). In the first generalization, we model the covariance structure of latent variables as a network. We term this framework Latent Network Modeling (LNM) and show that, with LNM, a unique structure of conditional independence relationships between latent variables can be obtained in an exploratory manner. In the second generalization, the residual variance-covariance structure of indicators is modeled as a network. We term this generalization Residual Network Modeling (RNM) and show that, within this framework, identifiable models can be obtained in which local independence is structurally violated. These generalizations allow for a general modeling framework that can be used to fit, and compare, SEM models, network models, and the RNM and LNM generalizations. This methodology has been implemented in the free-to-use software package lvnet, which contains confirmatory model testing as well as two exploratory search algorithms: stepwise search algorithms for low-dimensional datasets and penalized maximum likelihood estimation for larger datasets. We show in simulation studies that these search algorithms perform adequately in identifying the structure of the relevant residual or latent networks. We further demonstrate the utility of these generalizations in an empirical example on a personality inventory dataset.
    Comment: Published in Psychometrika
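    The core idea, that item covariance can arise from pairwise interactions rather than a common latent cause, can be sketched with the Gaussian graphical model parameterization commonly used in network psychometrics, where the model-implied covariance is Sigma = Delta (I - Omega)^{-1} Delta, with Omega a matrix of partial correlations (zero diagonal) and Delta a diagonal scaling matrix. The numbers below are hypothetical, and this is a sketch of that parameterization, not of the lvnet package itself.

```python
import numpy as np

# Hypothetical 4-item chain network: Omega holds the partial
# correlations (zero diagonal), Delta scales the variances.
Omega = np.array([[0.0, 0.3, 0.0, 0.0],
                  [0.3, 0.0, 0.4, 0.0],
                  [0.0, 0.4, 0.0, 0.2],
                  [0.0, 0.0, 0.2, 0.0]])
Delta = np.diag([1.0, 1.2, 0.9, 1.1])

I = np.eye(4)
# Model-implied covariance of the network model.
Sigma = Delta @ np.linalg.inv(I - Omega) @ Delta

# Items 0 and 3 share no edge, yet covary through the 0-1-2-3 path,
# while their entry in the precision matrix is (numerically) zero:
prec = np.linalg.inv(Sigma)
```

    This shows how a network with no latent variable still produces covariance between every pair of items, which is exactly the phenomenon the latent variable account would attribute to a common cause.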

    Learning Large-Scale Bayesian Networks with the sparsebn Package

    Learning graphical models from data is an important problem with wide applications, ranging from genomics to the social sciences. Nowadays, datasets often have upwards of thousands---sometimes tens or hundreds of thousands---of variables and far fewer samples. To meet this challenge, we have developed a new R package called sparsebn for learning the structure of large, sparse graphical models with a focus on Bayesian networks. While there are many existing software packages for this task, this package focuses on the unique setting of learning large networks from high-dimensional data, possibly with interventions. As such, the methods provided place a premium on scalability and consistency in a high-dimensional setting. Furthermore, in the presence of interventions, the methods implemented here achieve the goal of learning a causal network from data. Additionally, the sparsebn package is fully compatible with existing software packages for network analysis.
    Comment: To appear in the Journal of Statistical Software, 39 pages, 7 figures
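    To make the structure-learning task concrete, here is a deliberately crude numpy stand-in for what sparse Bayesian network estimation does: given a known topological order, each node's parent set can be recovered by regressing it on its predecessors and discarding small coefficients. The `parents` helper and the threshold are hypothetical simplifications; sparsebn itself uses penalized estimators and does not assume a known ordering.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulate a sparse DAG over 5 variables: 0 -> 2, 1 -> 2, 2 -> 4.
n = 5_000
X = rng.normal(size=(n, 5))
X[:, 2] += 0.9 * X[:, 0] - 0.7 * X[:, 1]
X[:, 4] += 0.8 * X[:, 2]

def parents(X, j, threshold=0.1):
    """Regress node j on its predecessors in the (assumed known)
    topological order; keep coefficients above the threshold."""
    if j == 0:
        return []
    Z = X[:, :j]
    beta, *_ = np.linalg.lstsq(Z, X[:, j], rcond=None)
    return [i for i, b in enumerate(beta) if abs(b) > threshold]
```

    In the high-dimensional regime the abstract targets (many more variables than samples), plain least squares breaks down, which is why the package relies on sparsity-inducing penalties instead.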

    Latent space models for multidimensional network data

    Network data are any relational data recorded among a group of individuals, the nodes. When multiple relations are recorded among the same set of nodes, a more complex object arises, which we refer to as a “multidimensional network”, or “multiplex”, where different relations correspond to different networks. In the past, statistical analysis of networks has mainly focused on single-relation network data, referring to a single relation of interest. Only in recent years have statistical models specifically tailored for multiplex data begun to be developed. In this context, only a few works in the literature have aimed at extending the latent space modeling framework to multiplex data. This framework postulates that nodes may be characterized by latent positions in a p-dimensional Euclidean space and that the presence/absence of an edge between any two nodes depends on those positions. When considering multidimensional network data, latent space models can help capture the associations between the nodes and summarize the observed structure in the different networks composing a multiplex. This dissertation discusses latent space models for multidimensional network data that account for different features observed multiplex data may present. A first proposal jointly represents the different networks in a single latent space, so that average similarities between the nodes may be captured as proximities in that space. A second work introduces a class of latent space models with node-specific effects, in order to deal with different degrees of heterogeneity within and between networks in multiplex data, corresponding to different types of node-specific behaviour. A third work addresses clustering of the nodes in the latent space, a frequently observed feature in many real-world network and multidimensional network data. Here, clusters of nodes in the latent space correspond to communities of nodes in the multiplex. The proposed models are illustrated via both simulation studies and real-world applications, to study their performance and abilities.
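    A minimal sketch of the first idea, a single latent space shared across layers: a common formulation makes the log-odds of an edge between nodes i and j in layer k equal to a layer-specific intercept minus the Euclidean distance between their latent positions. The intercepts and dimensions below are hypothetical, and this simulates from such a model rather than fitting one.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, layers = 30, 2, 3

# Shared latent positions in R^p; each layer k has its own
# intercept alpha_k controlling that layer's overall density.
z = rng.normal(size=(n, p))
alpha = np.array([2.0, 1.0, 0.0])

# Pairwise Euclidean distances between latent positions.
dist = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
logits = alpha[:, None, None] - dist            # shape (layers, n, n)
probs = 1.0 / (1.0 + np.exp(-logits))

# Sample symmetric adjacency matrices with no self-loops.
A = (rng.random((layers, n, n)) < probs).astype(int)
A = np.triu(A, k=1)
A = A + A.transpose(0, 2, 1)
```

    Because the positions z are shared, nodes that are close in the latent space tend to be connected in every layer, while the intercepts let layers differ in density, which is the kind of between-network heterogeneity the node-specific extension generalizes further.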

    Estimating mutual information and multi-information in large networks

    We address the practical problems of estimating the information relations that characterize large networks. Building on methods developed for analysis of the neural code, we show that reliable estimates of mutual information can be obtained with manageable computational effort. The same methods allow estimation of higher-order, multi-information terms. These ideas are illustrated by analyses of gene expression, financial markets, and consumer preferences. In each case, information theoretic measures correlate with independent, intuitive measures of the underlying structures in the system.
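    As a concrete baseline for what is being estimated, here is a plug-in estimate of mutual information from a two-dimensional histogram (a standard textbook estimator, not the bias-corrected machinery from the neural-coding literature that the abstract builds on; the bin count is an illustrative choice).

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Plug-in estimate of I(X;Y) in nats from a joint histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)    # marginal of X
    py = pxy.sum(axis=0, keepdims=True)    # marginal of Y
    nz = pxy > 0                           # avoid log(0) terms
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
x = rng.normal(size=50_000)
y = x + 0.5 * rng.normal(size=50_000)      # strongly dependent on x
z = rng.normal(size=50_000)                # independent of x
```

    With dependent pairs `mutual_information(x, y)` is large, while for independent pairs it is near zero up to a small positive bias of order (bins-1)^2 / (2N); controlling that bias at realistic sample sizes is precisely the practical problem the abstract addresses.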