Topological and Algebraic Properties of Chernoff Information between Gaussian Graphs
In this paper, we identify the factors that determine the Chernoff information
in distinguishing a set of Gaussian graphs. We show that the Chernoff
information between two Gaussian graphs is determined by the generalized
eigenvalues of their covariance matrices, and that a unit generalized
eigenvalue does not affect the Chernoff information: its corresponding
dimension carries no information for classification purposes. In addition, we
establish a partial ordering, based on Chernoff information, over a series of
Gaussian trees connected by independent grafting operations. Using the
relationship between generalized eigenvalues and Chernoff information, we can
perform optimal linear dimension reduction with the least loss of information
for classification.

Comment: Submitted to Allerton 2018; this version contains proofs of the
propositions in the paper.
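As a rough illustration of the eigenvalue claim above (not the paper's own code), the following sketch computes the Chernoff information between two zero-mean Gaussians from the generalized eigenvalues of their covariance matrices, using the standard closed form C = max over a in (0,1) of (1/2) * sum_i [log(a + (1-a)*lam_i) - (1-a)*log(lam_i)], where lam_i solve det(S0 - lam*S1) = 0. The function names are hypothetical; note a unit eigenvalue (lam_i = 1) contributes zero to the sum, matching the abstract's observation.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import minimize_scalar


def chernoff_information(S0, S1):
    """Chernoff information between N(0, S0) and N(0, S1).

    Uses the generalized eigenvalues lam_i of the pencil (S0, S1),
    i.e. the roots of det(S0 - lam * S1) = 0. A unit eigenvalue
    contributes nothing to the objective.
    """
    lam = eigh(S0, S1, eigvals_only=True)  # generalized eigenvalues

    def neg_objective(a):
        # Negative Chernoff exponent at mixing weight a, to minimize.
        return -0.5 * np.sum(np.log(a + (1 - a) * lam) - (1 - a) * np.log(lam))

    res = minimize_scalar(neg_objective, bounds=(1e-9, 1 - 1e-9), method="bounded")
    return -res.fun
```

Dimensions with lam_i = 1 can be dropped without changing the result, which is the basis for the lossless dimension reduction mentioned in the abstract.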
Learning Latent Tree Graphical Models
We study the problem of learning a latent tree graphical model where samples
are available only from a subset of variables. We propose two consistent and
computationally efficient algorithms for learning minimal latent trees, that
is, trees without any redundant hidden nodes. Unlike many existing methods, the
observed nodes (or variables) are not constrained to be leaf nodes. Our first
algorithm, recursive grouping, builds the latent tree recursively by
identifying sibling groups using so-called information distances. One of the
main contributions of this work is our second algorithm, which we refer to as
CLGrouping. CLGrouping starts with a pre-processing procedure in which a tree
over the observed variables is constructed. This global step groups the
observed nodes that are likely to be close to each other in the true latent
tree, thereby guiding subsequent recursive grouping (or equivalent procedures)
on much smaller subsets of variables. This results in more accurate and
efficient learning of latent trees. We also present regularized versions of our
algorithms that learn latent tree approximations of arbitrary distributions. We
compare the proposed algorithms to other methods by performing extensive
numerical experiments on various latent tree graphical models such as hidden
Markov models and star graphs. In addition, we demonstrate the applicability of
our methods on real-world datasets by modeling the dependency structure of
monthly stock returns in the S&P index and of the words in the 20 newsgroups
dataset.
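The pre-processing step described above (building a tree over the observed variables from pairwise information distances) can be sketched as follows for jointly Gaussian variables, where the information distance is d_ij = -log|rho_ij| and a minimum spanning tree under d recovers the Chow-Liu tree. This is a hedged illustration of the general idea, not the paper's implementation; the function names are placeholders.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree


def information_distances(samples):
    """Pairwise Gaussian information distances d_ij = -log|rho_ij|.

    samples: (n, p) array, one column per observed variable.
    """
    corr = np.corrcoef(samples, rowvar=False)
    with np.errstate(divide="ignore"):
        d = -np.log(np.abs(corr))
    np.fill_diagonal(d, 0.0)
    return d


def observed_variable_tree(samples):
    """Tree over observed variables: MST under the information distances.

    For Gaussians this coincides with the Chow-Liu (max mutual
    information) tree. Returns sorted edges as (i, j) index pairs.
    """
    mst = minimum_spanning_tree(information_distances(samples))
    rows, cols = mst.nonzero()
    return sorted((min(i, j), max(i, j)) for i, j in zip(rows, cols))
```

In the full CLGrouping procedure this tree only guides where recursive grouping is applied; the latent nodes themselves are introduced in the subsequent local steps.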