Markov models for fMRI correlation structure: is brain functional connectivity small world, or decomposable into networks?
Correlations in the signal observed via functional Magnetic Resonance Imaging
(fMRI) are expected to reveal the interactions in the underlying neural
populations through the hemodynamic response. In particular, they highlight
distributed sets of mutually correlated regions that correspond to brain
networks related to different cognitive functions. Yet graph-theoretical
studies of neural connections give a different picture: that of a highly
integrated system with small-world properties: local clustering but with short
pathways across the complete structure. We examine the conditional independence
properties of the fMRI signal, i.e. its Markov structure, to find realistic
assumptions on the connectivity structure that are required to explain the
observed functional connectivity. In particular we seek a decomposition of the
Markov structure into segregated functional networks using decomposable graphs:
a set of strongly-connected and partially overlapping cliques. We introduce a
new method to efficiently extract such cliques on a large, strongly-connected
graph. We compare methods learning different graph structures from functional
connectivity by testing the goodness of fit of the model they learn on new
data. We find that summarizing the structure as strongly-connected networks can
give a good description only for very large and overlapping networks. These
results highlight that Markov models are good tools to identify the structure
of brain connectivity from fMRI signals, but for this purpose they must reflect
the small-world properties of the underlying neural systems.
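As an illustration of the kind of Markov-structure analysis described above, the following sketch (a toy with assumed data and off-the-shelf tools, not the authors' own clique-extraction method) estimates a sparse precision matrix with scikit-learn's `GraphicalLasso`, builds the conditional-independence graph from its nonzero entries, and checks whether that graph is decomposable (chordal) before listing its cliques:

```python
import numpy as np
import networkx as nx
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Hypothetical stand-in for fMRI region signals: draws from a Gaussian
# whose precision matrix encodes the Markov (conditional independence)
# structure. Regions 0-2 form a clique; region 3 attaches only via 2.
P = 2.0 * np.eye(4)
for i, j in [(0, 1), (0, 2), (1, 2), (2, 3)]:
    P[i, j] = P[j, i] = 0.6
X = rng.multivariate_normal(np.zeros(4), np.linalg.inv(P), size=2000)

# Zeros of the estimated precision matrix are conditional independences
prec = GraphicalLasso(alpha=0.05).fit(X).precision_
G = nx.Graph((i, j) for j in range(4) for i in range(j)
             if abs(prec[i, j]) > 0.1)

print(nx.is_chordal(G))                       # decomposable graph?
print(sorted(map(sorted, nx.find_cliques(G))))
```

In this toy, the recovered graph should decompose into two partially overlapping cliques, {0, 1, 2} and {2, 3}; on real fMRI graphs with small-world structure, the cliques become very large and strongly overlapping, which is the regime the abstract points to.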
From Correlation to Causation: Estimation of Effective Connectivity from Continuous Brain Signals based on Zero-Lag Covariance
Knowing brain connectivity is of great importance both in basic research and
for clinical applications. We propose a method to infer directed
connectivity from zero-lag covariances of neuronal activity recorded at
multiple sites. This allows us to identify causal relations that are reflected
in neuronal population activity. To derive our strategy, we assume a generic
linear model of interacting continuous variables, the components of which
represent the activity of local neuronal populations. The suggested method for
inferring connectivity from recorded signals exploits the fact that the
covariance matrix derived from the observed activity contains information about
the existence, the direction and the sign of connections. Assuming a sparsely
coupled network, we disambiguate the underlying causal structure via
L1-minimization. In general, this method is suited to infer effective
connectivity from resting state data of various types. We show that our method
is applicable over a broad range of structural parameters regarding network
size and connection probability of the network. We also explore parameters
affecting its activity dynamics, such as the eigenvalue spectrum. In addition,
based on the simulation of suitable Ornstein-Uhlenbeck processes to model BOLD dynamics,
we show that with our method it is possible to estimate directed connectivity
from zero-lag covariances derived from such signals. In this study, we consider
measurement noise and unobserved nodes as additional confounding factors.
Furthermore, we investigate the amount of data required for a reliable
estimate. Additionally, we apply the proposed method to an fMRI dataset. The
resulting network exhibits a tendency for nearby areas to be connected, as
well as inter-hemispheric connections between corresponding areas. We also
found that a large fraction of the identified connections were inhibitory.
Comment: 18 pages, 10 figures
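The role of the zero-lag covariance can be made concrete for the generic linear model above: for a stable Ornstein-Uhlenbeck process dx = A x dt + dW, the stationary covariance C solves the Lyapunov equation A C + C Aᵀ + D = 0, so C carries information about the existence and sign of the connections in A. A minimal sketch with a hypothetical toy connectivity matrix (not the paper's networks):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical sparse, stable connectivity matrix A for the linear
# model dx = A x dt + dW (an Ornstein-Uhlenbeck process).
n = 4
A = -np.eye(n)          # leak terms keep the dynamics stable
A[0, 1] = 0.5           # an excitatory connection 1 -> 0
A[2, 3] = -0.4          # an inhibitory connection 3 -> 2

# The stationary zero-lag covariance C solves A C + C A^T + D = 0,
# with D the noise covariance (identity here).
C = solve_continuous_lyapunov(A, -np.eye(n))

# Sanity check: the Lyapunov residual vanishes, i.e. C is fully
# determined by (and therefore informative about) A and D.
residual = A @ C + C @ A.T + np.eye(n)
print(np.abs(residual).max())   # numerically zero
```

Note that C is symmetric while A is not, so the direction of connections cannot be read off C alone; this is the ambiguity that the sparsity assumption and the minimization step described above are there to resolve.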
Learning to Discover Sparse Graphical Models
We consider structure discovery of undirected graphical models from
observational data. Inferring likely structures from few examples is a complex
task often requiring the formulation of priors and sophisticated inference
procedures. Popular methods rely on estimating a penalized maximum likelihood
of the precision matrix. However, in these approaches structure recovery is an
indirect consequence of the data-fit term, the penalty can be difficult to
adapt for domain-specific knowledge, and the inference is computationally
demanding. By contrast, it may be easier to generate training samples of data
that arise from graphs with the desired structure properties. We propose here
to leverage this latter source of information as training data to learn a
function, parametrized by a neural network, that maps empirical covariance
matrices to estimated graph structures. Learning this function brings two
benefits: it implicitly models the desired structure or sparsity properties to
form suitable priors, and it can be tailored to the specific problem of edge
structure discovery, rather than maximizing data likelihood. Applying this
framework, we find our learnable graph-discovery method trained on synthetic
data generalizes well: identifying relevant edges in both synthetic and real
data, completely unknown at training time. We find that on genetics, brain
imaging, and simulation data we obtain performance generally superior to
analytical methods.
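A minimal sketch of this idea (a toy using scikit-learn's `MLPClassifier` rather than the authors' architecture, with an assumed sparsity prior): sample training graphs with the desired structure, simulate data from them, and fit a network that maps empirical covariances to edge indicators, then evaluate on covariances from graphs never seen in training:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
d = 5
iu = np.triu_indices(d, k=1)          # the 10 candidate edges

def make_example(n_samples=300):
    """Sample a sparse precision matrix; return (emp. covariance, edges)."""
    P = 2.0 * np.eye(d)
    present = rng.random(len(iu[0])) < 0.3   # the structure prior
    for (i, j), on in zip(zip(*iu), present):
        if on:
            P[i, j] = P[j, i] = 0.5
    X = rng.multivariate_normal(np.zeros(d), np.linalg.inv(P), size=n_samples)
    return np.cov(X.T)[iu], present

# Training data: covariances generated from graphs with the desired sparsity
Xtr, Ytr = map(np.array, zip(*[make_example() for _ in range(400)]))
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000,
                    random_state=0).fit(Xtr, Ytr)

# The learned function is applied to covariances it has never seen
Xte, Yte = map(np.array, zip(*[make_example() for _ in range(100)]))
print(round((clf.predict(Xte) == Yte).mean(), 2))   # per-edge accuracy
```

Even this small model should beat the all-empty baseline on held-out graphs, illustrating how the structure prior is learned implicitly from the training distribution rather than encoded in a penalty term.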
Functional Regression
Functional data analysis (FDA) involves the analysis of data whose ideal
units of observation are functions defined on some continuous domain, and the
observed data consist of a sample of functions taken from some population,
sampled on a discrete grid. Ramsay and Silverman's 1997 textbook sparked the
development of this field, which has accelerated in the past 10 years to become
one of the fastest growing areas of statistics, fueled by the growing number of
applications yielding this type of data. One unique characteristic of FDA is
the need to combine information both across and within functions, which Ramsay
and Silverman called replication and regularization, respectively. This article
will focus on functional regression, the area of FDA that has received the most
attention in applications and methodological development. First will be an
introduction to basis functions, key building blocks for regularization in
functional regression methods, followed by an overview of functional regression
methods, split into three types: [1] functional predictor regression
(scalar-on-function), [2] functional response regression (function-on-scalar)
and [3] function-on-function regression. For each, the role of replication and
regularization will be discussed and the methodological development described
in a roughly chronological manner, at times deviating from the historical
timeline to group together similar methods. The primary focus is on modeling
and methodology, highlighting the modeling structures that have been developed
and the various regularization approaches employed. At the end is a brief
discussion describing potential areas of future development in this field.
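To make the role of basis functions concrete, below is a small numpy-only sketch of functional predictor (scalar-on-function) regression on assumed toy data: each curve is projected onto a truncated sine basis (a hypothetical choice; B-spline or Fourier bases are common), and ordinary least squares is run on the basis scores, with the truncation level K acting as the regularization:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)                  # discrete sampling grid
K = 5                                      # truncation level = regularization
Phi = np.array([np.sin((k + 1) * np.pi * t) for k in range(K)]).T  # (50, K)

# Hypothetical model: y_i = integral of beta(t) x_i(t) dt + noise, with
# the true coefficient function beta(t) = sin(2*pi*t) in the basis span.
beta = np.sin(2 * np.pi * t)
coefs = rng.standard_normal((200, K))
Xfun = coefs @ Phi.T                       # 200 smooth sampled curves x_i(t)
y = Xfun @ beta / len(t) + 0.01 * rng.standard_normal(200)

# Project each curve onto the basis, then ordinary least squares
# on the resulting basis scores.
scores = Xfun @ Phi / len(t)               # approx. integral of x_i * phi_k
bhat, *_ = np.linalg.lstsq(scores, y, rcond=None)
beta_hat = Phi @ bhat                      # estimated coefficient function

print(round(np.abs(beta_hat - beta).max(), 3))  # close to zero
```

Penalized variants replace the hard truncation with a roughness penalty on the coefficient function; either way, information is pooled across the replicate curves while regularization enforces smoothness within each function, matching the replication/regularization distinction above.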