High-Dimensional Gaussian Graphical Model Selection: Walk Summability and Local Separation Criterion
We consider the problem of high-dimensional Gaussian graphical model
selection. We identify a set of graphs for which an efficient estimation
algorithm exists, and this algorithm is based on thresholding of empirical
conditional covariances. Under a set of transparent conditions, we establish
structural consistency (or sparsistency) for the proposed algorithm, when the
number of samples n=omega(J_{min}^{-2} log p), where p is the number of
variables and J_{min} is the minimum (absolute) edge potential of the graphical
model. The sufficient conditions for sparsistency are based on the notion of
walk-summability of the model and the presence of sparse local vertex
separators in the underlying graph. We also derive novel non-asymptotic
necessary conditions on the number of samples required for sparsistency.
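The conditional-covariance thresholding idea can be illustrated with a short sketch. This is a hypothetical toy implementation, not the authors' code: `eta` (the maximum conditioning-set size, playing the role of the local separator bound) and `xi` (the threshold) are illustrative parameters, and the Schur-complement formula gives the Gaussian conditional covariance.

```python
import itertools
import numpy as np

def conditional_covariance(S, i, j, cond):
    # Gaussian Cov(X_i, X_j | X_cond) from covariance matrix S,
    # computed via the Schur complement
    cond = list(cond)
    if not cond:
        return S[i, j]
    A = S[np.ix_([i, j], [i, j])]
    B = S[np.ix_([i, j], cond)]
    C = S[np.ix_(cond, cond)]
    return (A - B @ np.linalg.solve(C, B.T))[0, 1]

def threshold_graph(X, eta, xi):
    """Keep edge (i, j) iff the empirical conditional covariance stays
    above the threshold xi for every conditioning set of size <= eta.
    eta and xi are illustrative tuning parameters."""
    n, p = X.shape
    S = np.cov(X, rowvar=False)
    edges = set()
    for i in range(p):
        for j in range(i + 1, p):
            others = [k for k in range(p) if k not in (i, j)]
            min_abs = min(
                abs(conditional_covariance(S, i, j, c))
                for r in range(eta + 1)
                for c in itertools.combinations(others, r))
            if min_abs > xi:
                edges.add((i, j))
    return edges
```

Declaring an edge only when the conditional covariance survives every small conditioning set mirrors the role of sparse local vertex separators: for non-neighbours, some small separator drives the conditional covariance to zero.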
Active Learning for Undirected Graphical Model Selection
This paper studies graphical model selection, i.e., the problem of estimating
a graph of statistical relationships among a collection of random variables.
Conventional graphical model selection algorithms are passive, i.e., they
require all the measurements to have been collected before processing begins.
We propose an active learning algorithm that uses junction tree representations
to adapt future measurements based on the information gathered from prior
measurements. We prove that, under certain conditions, our active learning
algorithm requires fewer scalar measurements than any passive algorithm to
reliably estimate a graph. A range of numerical results validate our theory and
demonstrate the benefits of active learning. Comment: AISTATS 201
Selection and Estimation for Mixed Graphical Models
We consider the problem of estimating the parameters in a pairwise graphical
model in which the distribution of each node, conditioned on the others, may
have a different parametric form. In particular, we assume that each node's
conditional distribution is in the exponential family. We identify restrictions
on the parameter space required for the existence of a well-defined joint
density, and establish the consistency of the neighbourhood selection approach
for graph reconstruction in high dimensions when the true underlying graph is
sparse. Motivated by our theoretical results, we investigate the selection of
edges between nodes whose conditional distributions take different parametric
forms, and show that efficiency can be gained if edge estimates obtained from
the regressions of particular nodes are used to reconstruct the graph. These
results are illustrated with examples of Gaussian, Bernoulli, Poisson and
exponential distributions. Our theoretical findings are corroborated by
evidence from simulation studies.
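The neighbourhood selection approach can be sketched for the all-Gaussian special case with nodewise lasso regressions. This is a minimal illustration under assumptions (a plain coordinate-descent lasso, an AND rule for combining neighbourhood estimates, and a hypothetical regularisation level `lam`), not the authors' estimator:

```python
import numpy as np

def lasso_cd(X, y, lam, n_sweeps=200):
    # plain coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1
    n, d = X.shape
    b = np.zeros(d)
    col_sq = (X ** 2).mean(axis=0)
    for _ in range(n_sweeps):
        for j in range(d):
            r = y - X @ b + X[:, j] * b[j]   # residual excluding feature j
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

def neighbourhood_select(X, lam):
    """Regress each node on all the others and keep edge (i, j) only if
    j is selected for i AND i is selected for j (the AND rule)."""
    n, p = X.shape
    nbrs = []
    for i in range(p):
        rest = [k for k in range(p) if k != i]
        b = lasso_cd(X[:, rest], X[:, i], lam)
        nbrs.append({rest[k] for k in range(p - 1) if abs(b[k]) > 1e-8})
    return {(i, j) for i in range(p) for j in nbrs[i]
            if i < j and i in nbrs[j]}
```

For nodes with other exponential-family conditionals (Bernoulli, Poisson), the squared-error regression would be replaced by the corresponding penalised generalised linear model.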
Graphical Models for Inference Under Outcome-Dependent Sampling
We consider situations where data have been collected such that the sampling
depends on the outcome of interest and possibly further covariates, as for
instance in case-control studies. Graphical models represent assumptions about
the conditional independencies among the variables. By including a node for the
sampling indicator, assumptions about sampling processes can be made explicit.
We demonstrate how to read off such graphs whether consistent estimation of the
association between exposure and outcome is possible. Moreover, we give
sufficient graphical conditions for testing and estimating the causal effect of
exposure on outcome. The practical use is illustrated with a number of
examples. Comment: Published at http://dx.doi.org/10.1214/10-STS340 in the Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
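Reading off such a graph amounts to a d-separation check in a DAG that includes the sampling indicator. The sketch below is a generic moralization-based d-separation test, with a hypothetical toy example in which exposure E and outcome D both influence the sampling indicator S; it illustrates the graphical criterion, not the paper's specific conditions:

```python
def d_separated(dag, xs, ys, zs):
    """d-separation test in a DAG given as {node: set_of_parents},
    via the ancestral moral graph: restrict to ancestors, undirect
    edges, marry co-parents, delete zs, and test reachability."""
    xs, ys, zs = set(xs), set(ys), set(zs)
    # 1. ancestral closure of xs | ys | zs
    keep, stack = set(), list(xs | ys | zs)
    while stack:
        v = stack.pop()
        if v not in keep:
            keep.add(v)
            stack.extend(dag.get(v, ()))
    # 2. moral graph on the ancestral set
    adj = {v: set() for v in keep}
    for v in keep:
        parents = [p for p in dag.get(v, ()) if p in keep]
        for p in parents:
            adj[v].add(p)
            adj[p].add(v)
        for a in parents:          # marry parents of a common child
            for b in parents:
                if a != b:
                    adj[a].add(b)
    # 3. delete zs and test whether any node in ys is reachable from xs
    seen, stack = set(), list(xs)
    while stack:
        v = stack.pop()
        if v in seen or v in zs:
            continue
        seen.add(v)
        stack.extend(adj[v] - zs)
    return not (seen & ys)

# hypothetical toy DAG: exposure E and outcome D both affect the
# sampling indicator S
dag = {"E": set(), "D": set(), "S": {"E", "D"}}
```

In this toy graph, E and D are marginally d-separated, but conditioning on S (i.e., analysing only the sampled units) opens the collider path and induces a spurious exposure-outcome association.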
Learning loopy graphical models with latent variables: Efficient methods and guarantees
The problem of structure estimation in graphical models with latent variables
is considered. We characterize conditions for tractable graph estimation and
develop efficient methods with provable guarantees. We consider models where
the underlying Markov graph is locally tree-like, and the model is in the
regime of correlation decay. For the special case of the Ising model, the
number of samples required for structural consistency of our method scales
as n = Omega(theta_{min}^{-delta*eta*(eta+1)-2} log p), where p is the
number of variables, theta_{min} is the minimum edge potential, delta is
the depth (i.e., distance from a hidden node to the nearest observed nodes),
and eta is a parameter which depends on the bounds on node and edge
potentials in the Ising model. Necessary conditions for structural consistency
under any algorithm are derived and our method nearly matches the lower bound
on sample requirements. Further, the proposed method is practical to implement
and provides flexibility to control the number of latent variables and the
cycle lengths in the output graph. Comment: Published at http://dx.doi.org/10.1214/12-AOS1070 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).