Partial covariance based functional connectivity computation using Ledoit-Wolf covariance regularization
Highlights
•We use the well-characterized matrix regularization technique described by Ledoit and Wolf to calculate high-dimensional partial correlations in fMRI data.
•Using this approach, we demonstrate that partial correlations reveal RSN structure, suggesting that RSNs are defined by widely and uniquely shared variance.
•Partial correlation functional connectivity is sensitive to changes in brain state, indicating that it contains functional information.

Functional connectivity refers to shared signals among brain regions and is typically assessed in a task-free state. It is commonly quantified between signal pairs using Pearson correlation. However, resting-state fMRI is a multivariate process exhibiting a complicated covariance structure. Partial covariance assesses the unique variance shared between two brain regions, excluding any widely shared variance, and hence is appropriate for the analysis of multivariate fMRI datasets. However, calculating partial covariance requires inverting the covariance matrix, which, in most functional connectivity studies, is not invertible owing to rank deficiency. Here we apply Ledoit–Wolf shrinkage (L2 regularization) to invert the high-dimensional BOLD covariance matrix. We investigate the network organization and brain-state dependence of partial covariance-based functional connectivity. Although RSNs are conventionally defined in terms of shared variance, removal of widely shared variance, surprisingly, improved the separation of RSNs in a spring-embedded graphical model. This result suggests that pair-wise unique shared variance plays a heretofore unrecognized role in RSN covariance organization. In addition, application of partial correlation to fMRI data acquired in the eyes-open vs. eyes-closed states revealed focal changes in uniquely shared variance between the thalamus and the visual cortices. This result suggests that partial correlations of resting-state BOLD time series reflect functional processes in addition to structural connectivity.
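The pipeline this abstract describes — regularize the covariance with Ledoit–Wolf shrinkage, invert it, and read partial correlations off the precision matrix — can be sketched in a few lines. This is an illustrative sketch on random surrogate data (the dimensions and the scikit-learn usage are my own, not the paper's code):

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)

# Rank-deficient setting: more regions (p) than time points (n),
# so the sample covariance matrix cannot be inverted directly.
n, p = 50, 80
bold = rng.standard_normal((n, p))     # surrogate BOLD time series

lw = LedoitWolf().fit(bold)            # shrinkage intensity chosen analytically
precision = lw.get_precision()         # inverse of the regularized covariance

# Partial correlation between regions i and j from the precision matrix P:
#   pcorr_ij = -P_ij / sqrt(P_ii * P_jj)
d = np.sqrt(np.diag(precision))
pcorr = -precision / np.outer(d, d)
np.fill_diagonal(pcorr, 1.0)
```

Because the shrunk covariance is a convex combination of the sample covariance and a scaled identity, it is positive definite and invertible even when p > n.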
Learning to Discover Sparse Graphical Models
We consider structure discovery of undirected graphical models from observational data. Inferring likely structures from few examples is a complex task, often requiring the formulation of priors and sophisticated inference procedures. Popular methods rely on estimating a penalized maximum likelihood of the precision matrix. However, in these approaches structure recovery is an indirect consequence of the data-fit term, the penalty can be difficult to adapt for domain-specific knowledge, and the inference is computationally demanding. By contrast, it may be easier to generate training samples of data that arise from graphs with the desired structure properties. We propose here to leverage this latter source of information as training data to learn a function, parametrized by a neural network, that maps empirical covariance matrices to estimated graph structures. Learning this function brings two benefits: it implicitly models the desired structure or sparsity properties to form suitable priors, and it can be tailored to the specific problem of edge structure discovery rather than maximizing data likelihood. Applying this framework, we find that our learnable graph-discovery method trained on synthetic data generalizes well, identifying relevant edges in both synthetic and real data completely unknown at training time. On genetics, brain imaging, and simulation data we obtain performance generally superior to analytical methods.
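A drastically simplified toy version of this idea can be sketched as follows: sample sparse precision matrices, compute empirical correlations, and train a small network to classify each candidate edge on graphs never seen in training. The per-edge feature encoding and the tiny MLP below are my own stand-ins for the paper's architecture, not its actual method:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def random_sparse_precision(p, edge_prob, rng):
    """Random sparse symmetric precision matrix, kept positive definite."""
    P = np.eye(p)
    for i in range(p):
        for j in range(i + 1, p):
            if rng.random() < edge_prob:
                P[i, j] = P[j, i] = rng.choice([-1.0, 1.0]) * rng.uniform(0.2, 0.4)
    P += np.eye(p) * max(0.0, 0.1 - np.linalg.eigvalsh(P).min())
    return P

def make_dataset(n_graphs, p, n_samples, rng):
    """One example per candidate edge: features from the empirical
    correlation matrix, label = edge present in the true graph."""
    feats, labels = [], []
    iu = np.triu_indices(p, k=1)
    for _ in range(n_graphs):
        P = random_sparse_precision(p, 0.2, rng)
        data = rng.multivariate_normal(np.zeros(p), np.linalg.inv(P), size=n_samples)
        C = np.corrcoef(data, rowvar=False)
        feats.append(np.column_stack([C[iu], np.abs(C[iu])]))
        labels.append((P[iu] != 0).astype(int))
    return np.vstack(feats), np.concatenate(labels)

X_train, y_train = make_dataset(40, 10, 200, rng)
X_test, y_test = make_dataset(10, 10, 200, rng)   # graphs unseen in training

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
```

The point of the sketch is the supervision signal: because the synthetic generator controls sparsity, the learned classifier absorbs that structural prior instead of it being encoded in a penalty term.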
On Nonregularized Estimation of Psychological Networks.
An important goal for psychological science is developing methods to characterize relationships between variables. Customary approaches use structural equation models to connect latent factors to a number of observed measurements, or to test causal hypotheses between observed variables. More recently, regularized partial correlation networks have been proposed as an alternative approach for characterizing relationships among variables through the off-diagonal elements of the precision matrix. While the graphical Lasso (glasso) has emerged as the default network estimation method, it was optimized in fields outside of psychology with very different needs, such as high-dimensional data where the number of variables (p) exceeds the number of observations (n). In this article, we describe the glasso method in the context of the fields where it was developed, and then demonstrate that the advantages of regularization diminish in the settings where psychological networks are often fitted (p ≪ n). We first show that improved properties of the precision matrix, such as eigenvalue estimation, and predictive accuracy with cross-validation are not always appreciable. We then introduce nonregularized methods based on multiple regression and a nonparametric bootstrap strategy, after which we characterize performance with extensive simulations. Our results demonstrate that the nonregularized methods can reduce the false-positive rate compared to glasso, and they appear to provide consistent performance across sparsity levels, sample composition (p/n), and partial correlation size. We end by reviewing recent findings in the statistics literature suggesting that alternative methods often outperform glasso, and by suggesting areas for future research in psychology. The nonregularized methods have been implemented in the R package GGMnonreg.
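The node-wise multiple-regression route to a partial correlation network can be sketched as follows: regress each variable on all the others, then combine each pair of coefficients via the identity pcorr_ij = sign(beta_ij)·sqrt(beta_ij·beta_ji). This is a minimal illustration on simulated data, not the GGMnonreg implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Typical psychological-network setting: many more observations than variables.
n, p = 500, 8
X = rng.standard_normal((n, p))
X[:, 1] += 0.5 * X[:, 0]        # plant one conditional dependence
X -= X.mean(axis=0)             # center (plays the role of an intercept)

# Node-wise regression: regress each variable on all the others.
B = np.zeros((p, p))
for i in range(p):
    others = [j for j in range(p) if j != i]
    beta, *_ = np.linalg.lstsq(X[:, others], X[:, i], rcond=None)
    B[i, others] = beta

# Combine the two coefficients of each pair into a partial correlation:
#   pcorr_ij = sign(beta_ij) * sqrt(beta_ij * beta_ji)
pcorr = np.sign(B) * np.sqrt(np.clip(B * B.T, 0.0, None))
np.fill_diagonal(pcorr, 1.0)
```

With n > p no regularization is needed: the in-sample result is identical to the partial correlations obtained from the inverse of the (invertible) sample covariance matrix.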
Estimation of high-dimensional brain connectivity networks using functional magnetic resonance imaging data
Recent studies in neuroimaging show increasing interest in mapping brain connectivity, which is potentially useful as a biomarker for identifying neuropsychiatric diseases as well as a tool for psychological studies. This study considers the problem of modeling high-dimensional brain connectivity using a statistical approach, estimating the connectivity between functional magnetic resonance imaging (fMRI) time series measured from brain regions. The dimension of fMRI data (N), corresponding to the number of brain regions, is typically much larger than the sample size, i.e., the number of time points (T). In this setting, conventional connectivity estimators such as the sample covariance and the least-squares (LS) estimator are no longer consistent and reliable. In addition, traditional analyses assume the brain network to be time-invariant, but recent neuroimaging studies show that brain connectivity changes over the experimental time course. This study developed a novel shrinkage approach to characterize directed brain connectivity in high dimensions. The shrinkage method incorporates shrinkage-based estimators (Ledoit-Wolf (LW) and Rao-Blackwell LW (RBLW)) into the covariance matrix and the LS-based linear regression fitting of a vector autoregressive (VAR) model, to reduce the mean squared error of the estimates for both high-dimensional functional and effective connectivity. This yields a better-conditioned, invertible estimated matrix, which is important for obtaining a reliable estimator. The shrinkage-based VAR estimator was then extended to estimate time-evolving effective brain connectivity. The shrinkage-based methods are evaluated via simulations and applied to resting-state fMRI data. Simulation results show a reduced mean squared error of the estimated connectivity matrix for the LW- and RBLW-based estimators compared to the conventional sample covariance and LS estimators in both static and dynamic connectivity analysis. These estimators are robust to increasing dimension. Results on real resting-state fMRI data show that the proposed methods identify functionally related resting-state brain connectivity networks and the evolution of connectivity states across time, providing additional insights into human whole-brain connectivity at rest compared to previous findings, particularly regarding the directionality of connectivity in high-dimensional brain networks.
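The core shrink-then-regress step for a VAR(1) model can be sketched as follows. This is a toy illustration with a fixed shrinkage intensity; the LW and RBLW intensities used in the study are computed analytically from the data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a stable VAR(1) process x_t = A x_{t-1} + noise,
# with the dimension p not far below the number of time points T.
p, T = 20, 60
A = 0.5 * np.eye(p)                    # diagonal dynamics
A[0, 1] = 0.3                          # one directed connection
X = np.zeros((T, p))
for t in range(1, T):
    X[t] = A @ X[t - 1] + 0.5 * rng.standard_normal(p)

X_past, X_now = X[:-1], X[1:]
C0 = X_past.T @ X_past / (T - 1)       # lag-0 covariance
C1 = X_now.T @ X_past / (T - 1)        # lag-1 cross-covariance

# The plain LS estimate A_hat = C1 C0^{-1} is unreliable when p is close
# to T, so shrink C0 toward a scaled identity before inverting.
alpha = 0.2                            # fixed intensity, for illustration only
mu = np.trace(C0) / p
C0_shrunk = (1 - alpha) * C0 + alpha * mu * np.eye(p)
A_hat = C1 @ np.linalg.inv(C0_shrunk)
```

The off-diagonal entries of A_hat are the directed (effective) connectivity estimates; the same recipe applied in sliding windows gives the time-evolving variant described above.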
Small-Sample Analysis and Inference of Networked Dependency Structures from Complex Genomic Data
This thesis is concerned with the statistical modeling and inference of genetic networks. Association structures and mutual influences are an important topic in systems biology. Gene expression data are high-dimensional while sample sizes are small ("small n, large p"). The analysis of interaction structures with graphical models is therefore an ill-posed (inverse) problem whose solution requires regularization methods. I propose novel estimators for covariance structures and (partial) correlations. These are based either on resampling procedures or on shrinkage for variance reduction. In the latter method, the optimal shrinkage intensity is computed analytically. Compared with the classical sample covariance matrix, this estimator in particular has desirable properties in terms of increased efficiency and smaller mean squared error. Moreover, the resulting parameter estimates are always positive definite and well conditioned. To determine the network topology, the concept of graphical Gaussian models is used, which can represent both marginal and conditional independencies. A model selection method is presented that is based on a multiple testing procedure with control of the false discovery rate, in which the underlying null distribution is estimated adaptively. The proposed framework is computationally efficient and performs very well compared with competing approaches, both in simulations and in applications to molecular data.
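The analytically computed shrinkage intensity mentioned above can be illustrated for a target that shrinks off-diagonal covariance entries toward zero. This is a sketch in the spirit of such shrinkage estimators; the thesis's exact target and formula may differ:

```python
import numpy as np

rng = np.random.default_rng(3)

# "small n, large p": few arrays, many genes
n, p = 20, 50
X = rng.standard_normal((n, p))
Xc = X - X.mean(axis=0)

S = Xc.T @ Xc / (n - 1)              # sample covariance (singular: rank < p)
target = np.diag(np.diag(S))         # target: off-diagonals shrunk to zero

# Analytic shrinkage intensity for this target:
#   lambda* = sum_{i!=j} Var_hat(s_ij) / sum_{i!=j} s_ij^2
# with Var_hat(s_ij) estimated from the per-sample outer products w_kij.
W = Xc[:, :, None] * Xc[:, None, :]              # w_kij, shape (n, p, p)
var_S = W.var(axis=0, ddof=1) * n / (n - 1) ** 2
off_diag = ~np.eye(p, dtype=bool)
lam = var_S[off_diag].sum() / (S[off_diag] ** 2).sum()
lam = float(np.clip(lam, 0.0, 1.0))

# Convex combination: positive definite and well conditioned by construction.
S_shrunk = (1 - lam) * S + lam * target
```

The intensity needs no cross-validation: noisy off-diagonal entries (high estimated variance relative to their magnitude) automatically drive lambda toward the target.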