High-Dimensional Gaussian Graphical Model Selection: Walk Summability and Local Separation Criterion
We consider the problem of high-dimensional Gaussian graphical model
selection. We identify a set of graphs for which an efficient estimation
algorithm exists, based on thresholding of empirical conditional covariances.
Under a set of transparent conditions, we establish
structural consistency (or sparsistency) for the proposed algorithm when the
number of samples satisfies n = omega(J_{min}^{-2} log p), where p is the number
of variables and J_{min} is the minimum (absolute) edge potential of the graphical
model. The sufficient conditions for sparsistency are based on the notion of
walk-summability of the model and the presence of sparse local vertex
separators in the underlying graph. We also derive novel non-asymptotic
necessary conditions on the number of samples required for sparsistency.
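The thresholding idea in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: for each candidate edge (i, j), the conditional covariance is minimised over all separator sets S up to a small size, and the edge is kept when the minimum exceeds a threshold. The names `select_edges`, `xi` (threshold), and `eta` (maximum separator size) are my own.

```python
import itertools
import numpy as np

def conditional_covariance(Sigma, i, j, S):
    """Cov(X_i, X_j | X_S) for a zero-mean Gaussian with covariance Sigma."""
    S = list(S)
    if not S:
        return Sigma[i, j]
    A = [i, j]
    Sigma_AS = Sigma[np.ix_(A, S)]
    Sigma_SS = Sigma[np.ix_(S, S)]
    # Schur complement: Sigma_AA - Sigma_AS Sigma_SS^{-1} Sigma_SA
    cond = Sigma[np.ix_(A, A)] - Sigma_AS @ np.linalg.solve(Sigma_SS, Sigma_AS.T)
    return cond[0, 1]

def select_edges(Sigma, xi, eta):
    """Keep edge (i, j) iff |Cov(X_i, X_j | X_S)|, minimised over all
    candidate separators S with |S| <= eta, exceeds the threshold xi."""
    p = Sigma.shape[0]
    edges = set()
    for i, j in itertools.combinations(range(p), 2):
        others = [k for k in range(p) if k not in (i, j)]
        stat = min(
            abs(conditional_covariance(Sigma, i, j, S))
            for r in range(eta + 1)
            for S in itertools.combinations(others, r)
        )
        if stat > xi:
            edges.add((i, j))
    return edges
```

In practice `Sigma` would be the empirical covariance, e.g. `np.cov(X, rowvar=False)` from n samples; with the exact covariance of a Markov chain, non-adjacent pairs have conditional covariance exactly zero given the separating node, so only the true edges survive the threshold.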
Model selection and local geometry
We consider problems in model selection caused by the geometry of models
close to their points of intersection. In some cases---including common classes
of causal or graphical models, as well as time series models---distinct models
may nevertheless have identical tangent spaces. This has two immediate
consequences: first, in order to obtain constant power to reject one model in
favour of another we need local alternative hypotheses that decrease to the
null at a slower rate than the usual parametric rate n^{-1/2} (typically we
will require n^{-1/4} or slower); in other words, to distinguish between the
models we need large effect sizes or very large sample sizes. Second, we show
that under even weaker conditions on their tangent cones, models in these
classes cannot be made simultaneously convex by a reparameterization.
This shows that Bayesian network models, amongst others, cannot be learned
directly with a convex method similar to the graphical lasso. However, we are
able to use our results to suggest methods for model selection that learn the
tangent space directly, rather than the model itself. In particular, we give a
generic algorithm for learning Bayesian network models
…