Robust estimation of tree structured Gaussian Graphical Model
Consider jointly Gaussian random variables whose conditional independence
structure is specified by a graphical model. If we observe realizations of the
variables, we can compute the covariance matrix, and it is well known that the
support of the inverse covariance matrix corresponds to the edges of the
graphical model. Instead, suppose we only have noisy observations. If the noise
at each node is independent, we can compute the sum of the covariance matrix
and an unknown diagonal. The inverse of this sum is (in general) dense. We ask:
can the original independence structure be recovered? We address this question
for tree structured graphical models. We prove that this problem is
unidentifiable, but show that this unidentifiability is limited to a small
class of candidate trees. We further present additional constraints under which
the problem is identifiable. Finally, we provide an O(n^3) algorithm to find
this equivalence class of trees.
Comment: 12 pages, 6 figures
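The key facts in this abstract can be illustrated numerically. The sketch below (a hypothetical example with made-up numbers, not the paper's algorithm) builds a precision matrix for a small path tree, verifies that the support of the inverse covariance matrix matches the tree edges, and then shows that adding an unknown diagonal (independent noise at each node) to the covariance makes its inverse dense:

```python
import numpy as np

# Hypothetical illustration: a path tree on 4 nodes, 0-1-2-3.
# Theta is the precision matrix: diagonally dominant (hence positive
# definite), with off-diagonal nonzeros only on tree edges.
n = 4
Theta = 2.0 * np.eye(n)
for i in range(n - 1):  # edges of the path
    Theta[i, i + 1] = Theta[i + 1, i] = -0.5

Sigma = np.linalg.inv(Theta)  # covariance of the clean variables

# The support of inv(Sigma) recovers exactly the tree edges.
recovered = np.abs(np.linalg.inv(Sigma)) > 1e-8
assert np.array_equal(recovered, np.abs(Theta) > 1e-8)

# Independent noise at each node adds an unknown diagonal D to Sigma.
# The inverse of Sigma + D is (generically) dense: entries that were
# zero in Theta, such as (0, 3), become nonzero.
D = np.diag([0.3, 0.1, 0.4, 0.2])  # arbitrary noise variances
noisy_inv = np.linalg.inv(Sigma + D)
print(noisy_inv[0, 3])  # nonzero, although Theta[0, 3] == 0
```

The sparsity pattern of the clean precision matrix is exactly the graph structure; once the diagonal noise term is mixed in, that structure is no longer visible in the inverse, which is what makes the recovery question nontrivial.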
Adaptive Sparsity in Gaussian Graphical Models
An effective approach to structure learning and parameter estimation for Gaussian graphical models is to impose a sparsity prior, such as a Laplace prior, on the entries of the precision matrix. Such an approach involves a hyperparameter that must be tuned to control the amount of sparsity. In this paper, we introduce a parameter-free method for estimating a precision matrix with sparsity that adapts to the data automatically. We achieve this by formulating a hierarchical Bayesian model of the precision matrix with a noninformative Jeffreys hyperprior. We also naturally enforce the symmetry and positive-definiteness constraints on the precision matrix by parameterizing it with the Cholesky decomposition. Experiments on simulated and real (cell signaling) data demonstrate that the proposed approach not only automatically adapts the sparsity of the model, but also results in improved estimates of the precision matrix compared to the Laplace prior model with its sparsity parameter chosen by cross-validation.
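The Cholesky parameterization mentioned in the abstract can be shown in a few lines. The following sketch (an illustration of the general technique, not the paper's estimator; all parameter values are arbitrary) maps unconstrained parameters to a valid precision matrix that is symmetric and positive definite by construction:

```python
import numpy as np

# Unconstrained parameters: strictly lower-triangular entries, plus
# log-diagonal entries that are exponentiated so the Cholesky factor
# has a strictly positive diagonal (hence is nonsingular).
rng = np.random.default_rng(0)
n = 3
lower = np.tril(rng.normal(size=(n, n)), k=-1)
L = lower + np.diag(np.exp(rng.normal(size=n)))

# Theta = L L^T is automatically symmetric and positive definite,
# so no explicit constraints are needed during optimization.
Theta = L @ L.T

assert np.allclose(Theta, Theta.T)           # symmetry for free
assert np.linalg.eigvalsh(Theta).min() > 0   # positive definiteness for free
```

Because every choice of the unconstrained parameters yields a valid precision matrix, a hierarchical Bayesian model over those parameters can be fit with standard unconstrained optimization or sampling, which is the practical appeal of this parameterization.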