Learning Latent Tree Graphical Models
We study the problem of learning a latent tree graphical model where samples
are available only from a subset of variables. We propose two consistent and
computationally efficient algorithms for learning minimal latent trees, that
is, trees without any redundant hidden nodes. Unlike many existing methods, the
observed nodes (or variables) are not constrained to be leaf nodes. Our first
algorithm, recursive grouping, builds the latent tree recursively by
identifying sibling groups using so-called information distances. One of the
main contributions of this work is our second algorithm, which we refer to as
CLGrouping. CLGrouping starts with a pre-processing procedure in which a tree
over the observed variables is constructed. This global step groups the
observed nodes that are likely to be close to each other in the true latent
tree, thereby guiding subsequent recursive grouping (or equivalent procedures)
on much smaller subsets of variables. This results in more accurate and
efficient learning of latent trees. We also present regularized versions of our
algorithms that learn latent tree approximations of arbitrary distributions. We
compare the proposed algorithms to other methods by performing extensive
numerical experiments on various latent tree graphical models such as hidden
Markov models and star graphs. In addition, we demonstrate the applicability of
our methods to real-world datasets by modeling the dependency structure of
monthly stock returns in the S&P index and of the words in the 20 newsgroups
dataset.
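As a concrete illustration of the pre-processing idea behind CLGrouping, the sketch below builds a minimum spanning tree over observed Gaussian variables using the information distance d_ij = -log|rho_ij|. The toy chain data, function names, and the pure-Python Prim's step are illustrative assumptions, not the paper's implementation.

```python
import math
import numpy as np

def information_distances(samples: np.ndarray) -> np.ndarray:
    """samples: (n, p) matrix; returns the p x p matrix of -log|correlation|."""
    corr = np.corrcoef(samples, rowvar=False)
    return -np.log(np.abs(corr) + 1e-12)  # small offset guards against log(0)

def minimum_spanning_tree(dist: np.ndarray) -> list[tuple[int, int]]:
    """Prim's algorithm on a dense distance matrix (fine for small p)."""
    p = dist.shape[0]
    in_tree = {0}
    edges = []
    while len(in_tree) < p:
        best = None
        for u in in_tree:
            for v in range(p):
                if v not in in_tree and (
                    best is None or dist[u, v] < dist[best[0], best[1]]
                ):
                    best = (u, v)
        edges.append(best)
        in_tree.add(best[1])
    return edges

# Toy Gaussian chain x0 -> x1 -> x2: the MST should recover the chain 0-1-2.
rng = np.random.default_rng(0)
x0 = rng.normal(size=5000)
x1 = 0.9 * x0 + 0.3 * rng.normal(size=5000)
x2 = 0.9 * x1 + 0.3 * rng.normal(size=5000)
dist = information_distances(np.column_stack([x0, x1, x2]))
tree = minimum_spanning_tree(dist)
print(sorted(tuple(sorted(e)) for e in tree))  # edges of the recovered chain
```

For Gaussian trees this distance is additive along paths, which is what lets a tree over observed variables guide the subsequent recursive grouping on small neighborhoods.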
Beyond Physical Connections: Tree Models in Human Pose Estimation
Simple tree models for articulated objects have prevailed over the last
decade. However, it is also believed that these simple tree models cannot
capture the large variations that arise in many scenarios, such as human pose
estimation.
This paper attempts to address three questions: 1) Are simple tree models
sufficient? More specifically, 2) how can tree models be used effectively in
human pose estimation? And 3) how should combined parts be used together with
single parts efficiently?
Assume we have a set of single parts and combined parts, and that the goal is
to estimate the joint distribution of their locations. Surprisingly, we find
that no latent variables are introduced on the Leeds Sport Dataset (LSP) when
learning a latent tree for the deformable model, which aims at approximating
the joint distribution of body part locations with a minimal tree structure.
This suggests that one can straightforwardly use a mixed representation of
single and combined parts to approximate their joint distribution in a simple
tree model.
As such, one only needs to build Visual Categories of the combined parts, and
then perform inference on the learned latent tree. Our method outperformed the
state of the art on the LSP, both when the training images come from the same
dataset and when they come from the PARSE dataset. Experiments on animal images
from the VOC challenge further support our findings.
Comment: CVPR 201
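To make the tree-model inference step concrete, here is a hedged toy sketch of max-sum (MAP) dynamic programming on a star-shaped tree of parts: each child part sends the root a max-message over its own states, and backtracking recovers the joint MAP configuration. The part scores, two-state space, and function names are made up for illustration and are not the paper's model.

```python
import numpy as np

def map_star(root_unary, child_unaries, pairwise):
    """MAP states on a star tree by max-sum: each child sends the root a
    message maximized over its own states; backtracking picks the argmax."""
    total = np.array(root_unary, dtype=float)
    back = []
    for u, P in zip(child_unaries, pairwise):
        m = np.asarray(u, dtype=float)[:, None] + np.asarray(P, dtype=float)
        total += m.max(axis=0)          # child -> root message
        back.append(m.argmax(axis=0))   # best child state for each root state
    r = int(total.argmax())
    return [r] + [int(b[r]) for b in back], float(total[r])

# Toy model: 2 states per part; pairwise terms reward agreeing with the root.
agree = [[1.0, 0.0], [0.0, 1.0]]
states, score = map_star(root_unary=[0.0, 1.0],
                         child_unaries=[[2.0, 0.0], [0.0, 3.0]],
                         pairwise=[agree, agree])
print(states, score)  # prints [1, 0, 1] 7.0
```

Exact MAP inference of this kind is linear in the number of parts on any tree, which is the computational appeal of tree models for pose.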
Simultaneous Hand Pose and Skeleton Bone-Lengths Estimation from a Single Depth Image
Articulated hand pose estimation is a challenging task for human-computer
interaction. The state-of-the-art hand pose estimation algorithms work only
with one or a few subjects for which they have been calibrated or trained.
Particularly, hybrid methods based on learning followed by model fitting, or
on model-based deep learning, do not explicitly consider varying hand shapes and
sizes. In this work, we introduce a novel hybrid algorithm for estimating the
3D hand pose as well as bone-lengths of the hand skeleton at the same time,
from a single depth image. The proposed CNN architecture learns hand pose
parameters and scale parameters associated with the bone-lengths
simultaneously. Subsequently, a new hybrid forward kinematics layer employs
both parameters to estimate 3D joint positions of the hand. For end-to-end
training, we combine three public datasets, NYU, ICVL, and MSRA-2015, into one
unified format to achieve large variation in hand shapes and sizes. Among
hybrid methods, our method shows improved accuracy over the state-of-the-art on
the combined dataset and the ICVL dataset, which contain multiple subjects.
Also, our algorithm is demonstrated to work well on unseen images.
Comment: This paper has been accepted and presented at the 3DV-2017 conference
held at Qingdao, China. http://irc.cs.sdu.edu.cn/3dv
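The forward kinematics idea can be sketched as follows: given predicted bone lengths and joint angles, compose rotations along a chain to obtain 3D joint positions. This is a minimal illustrative chain (a single finger-like linkage rotating about z), not the paper's hybrid layer; all names and the planar parameterization are assumptions.

```python
import numpy as np

def rot_z(theta: float) -> np.ndarray:
    """3D rotation about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def forward_kinematics(bone_lengths, joint_angles, root=np.zeros(3)):
    """Chain FK: each joint rotates about z relative to its parent frame;
    each bone extends along the local x axis by its length."""
    positions = [root.copy()]
    R = np.eye(3)
    for L, theta in zip(bone_lengths, joint_angles):
        R = R @ rot_z(theta)                              # accumulate rotation
        positions.append(positions[-1] + R @ np.array([L, 0.0, 0.0]))
    return np.stack(positions)

# Two unit bones, each bent 90 degrees: joints at (0,0,0), (0,1,0), (-1,1,0).
pts = forward_kinematics([1.0, 1.0], [np.pi / 2, np.pi / 2])
print(np.round(pts, 6))
```

Because joint positions are a smooth function of both angles and bone lengths, a layer like this lets pose and scale parameters be trained jointly end to end.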
Multi-Object Classification and Unsupervised Scene Understanding Using Deep Learning Features and Latent Tree Probabilistic Models
Deep learning has shown state-of-the-art classification performance on datasets
such as ImageNet, which contain a single object in each image. However,
multi-object classification is far more challenging. We present a unified
framework which leverages the strengths of multiple machine learning methods,
viz. deep learning, probabilistic models, and kernel methods, to obtain
state-of-the-art performance on Microsoft COCO, which consists of non-iconic
images. We
incorporate contextual information in natural images through a conditional
latent tree probabilistic model (CLTM), where the object co-occurrences are
conditioned on the extracted fc7 features from pre-trained Imagenet CNN as
input. We learn the CLTM tree structure using conditional pairwise
probabilities for object co-occurrences, estimated through kernel methods, and
we learn its node and edge potentials by training a new 3-layer neural network,
which takes fc7 features as input. Object classification is carried out via
inference on the learnt conditional tree model, and we obtain significant gain
in precision-recall and F-measures on MS-COCO, especially for difficult object
categories. Moreover, the latent variables in the CLTM capture scene
information: the images with top activations for a latent node have common
themes, such as grassland or food scenes, and so on. In addition, we
show that a simple k-means clustering of the inferred latent nodes alone
significantly improves scene classification performance on the MIT-Indoor
dataset, without the need for any retraining, and without using scene labels
during training. Thus, we present a unified framework for multi-object
classification and unsupervised scene understanding.
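The scene-clustering step can be illustrated with a minimal k-means over synthetic stand-ins for inferred latent-node activations. The deterministic initialization (first point plus the point farthest from it) and the toy data are assumptions made for reproducibility, not the paper's setup.

```python
import numpy as np

def two_means(X, iters=50):
    """Minimal 2-means with deterministic init: the first point and the
    point farthest from it become the initial centers."""
    far = int(np.argmax(((X - X[0]) ** 2).sum(axis=1)))
    centers = np.stack([X[0], X[far]]).astype(float)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)               # assign to nearest center
        for j in range(2):
            centers[j] = X[labels == j].mean(axis=0)  # recenter
    return labels

# Two synthetic "scenes": activations near (0, 0) vs near (5, 5).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.2, size=(50, 2)),
               rng.normal(5.0, 0.2, size=(50, 2))])
labels = two_means(X)
print(labels[:3], labels[-3:])  # first scene -> cluster 0, second -> cluster 1
```

The point of the experiment in the abstract is that such clustering works on inferred latent activations directly, with no retraining and no scene labels.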
Towards Building Deep Networks with Bayesian Factor Graphs
We propose a Multi-Layer Network based on the Bayesian framework of the
Factor Graphs in Reduced Normal Form (FGrn) applied to a two-dimensional
lattice. The Latent Variable Model (LVM) is the basic building block of a
quadtree hierarchy built on top of a bottom layer of random variables that
represent pixels of an image, a feature map, or more generally a collection of
spatially distributed discrete variables. The multi-layer architecture
implements a hierarchical data representation that, via belief propagation, can
be used for learning and inference. Typical uses are pattern completion,
correction and classification. The FGrn paradigm provides great flexibility and
modularity and appears as a promising candidate for building deep networks: the
system can be easily extended by introducing new and different (in cardinality
and in type) variables. Prior knowledge, or supervised information, can be
introduced at different scales. The FGrn paradigm provides a handy way for
building all kinds of architectures by interconnecting only three types of
units: Single Input Single Output (SISO) blocks, Sources and Replicators. The
network is designed like a circuit diagram and the belief messages flow
bidirectionally in the whole system. The learning algorithms operate only
locally within each block. The framework is demonstrated in this paper in a
three-layer structure applied to images extracted from a standard data set.
Comment: Submitted for journal publication
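As a rough sketch of how belief messages traverse a single SISO block in such a network, the snippet below propagates a forward and a backward discrete message through one conditional-probability matrix. The matrix values and the normalize-to-sum-one convention are illustrative assumptions, not the FGrn library's API.

```python
import numpy as np

# The SISO block's matrix: rows index input states, columns output states.
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])

def forward(msg_in):
    """Forward belief message through the block (toward the output)."""
    out = msg_in @ A
    return out / out.sum()

def backward(msg_out):
    """Backward belief message through the block (toward the input)."""
    back = A @ msg_out
    return back / back.sum()

f = forward(np.array([0.5, 0.5]))   # uniform input belief
b = backward(np.array([1.0, 0.0]))  # evidence that the output is state 0
print(np.round(f, 3))  # prints [0.55 0.45]
```

Because every unit exchanges only such local messages, learning and inference can run block by block, which is the modularity the FGrn paradigm emphasizes.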
Causal Dependence Tree Approximations of Joint Distributions for Multiple Random Processes
We investigate approximating joint distributions of random processes with
causal dependence tree distributions. Such distributions are particularly
useful in providing parsimonious representation when there exists causal
dynamics among processes. By extending the results by Chow and Liu on
dependence tree approximations, we show that the best causal dependence tree
approximation is the one which maximizes the sum of directed informations on
its edges, where best is defined in terms of minimizing the KL-divergence
between the original and the approximate distribution. Moreover, we describe a
low-complexity algorithm to efficiently pick this approximate distribution.
Comment: 9 pages, 15 figures
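The selection rule can be illustrated by brute force on a tiny example: among all directed trees (arborescences) over the processes, choose the one whose edges maximize the summed directed informations. The paper describes an efficient algorithm; this exhaustive search and the weight matrix below are illustrative assumptions only.

```python
import itertools
import numpy as np

def _reaches_root(v, assign, root, limit=100):
    for _ in range(limit):
        if v == root:
            return True
        v = assign[v]
    return False

def best_causal_tree(DI: np.ndarray):
    """DI[i, j] = directed information I(X_i -> X_j).
    Returns the parent map of the best arborescence and its score."""
    n = DI.shape[0]
    best_score, best_parents = -np.inf, None
    for root in range(n):
        nodes = [v for v in range(n) if v != root]
        for parents in itertools.product(range(n), repeat=len(nodes)):
            assign = dict(zip(nodes, parents))
            if any(v == p for v, p in assign.items()):
                continue  # no self-parents
            # reject assignments that are not trees (cycles, unreachable nodes)
            if not all(_reaches_root(v, assign, root) for v in nodes):
                continue
            score = sum(DI[p, v] for v, p in assign.items())
            if score > best_score:
                best_score, best_parents = score, dict(assign)
    return best_parents, best_score

DI = np.array([[0.0, 0.8, 0.1],
               [0.0, 0.0, 0.7],
               [0.2, 0.1, 0.0]])
parents, score = best_causal_tree(DI)
print(parents, round(score, 3))  # recovers the causal chain 0 -> 1 -> 2
```

Replacing mutual information (Chow-Liu) with directed information is what lets the tree capture the direction of influence between processes rather than mere association.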
Information Theoretic Structure Learning with Confidence
Information theoretic measures (e.g. the Kullback-Leibler divergence and
Shannon mutual information) have been used for exploring possibly nonlinear
multivariate dependencies in high dimension. If these dependencies are assumed
to follow a Markov factor graph model, this exploration process is called
structure discovery. For discrete-valued samples, estimates of the information
divergence over the parametric class of multinomial models lead to structure
discovery methods whose mean squared error achieves parametric convergence
rates as the sample size grows. However, a naive application of this method to
continuous nonparametric multivariate models converges much more slowly. In
this paper we introduce a new method for nonparametric structure discovery that
uses weighted ensemble divergence estimators that achieve parametric
convergence rates and obey an asymptotic central limit theorem that facilitates
hypothesis testing and other types of statistical validation.
Comment: 10 pages, 3 figures
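A much-simplified illustration of the weighted-ensemble idea: if each base estimator's leading bias scales with a known function of its bandwidth, weights that sum to one while annihilating those bias terms yield a combined estimate with the leading bias cancelled. The linear bias model and the numbers below are assumptions for illustration, not the paper's estimator.

```python
import numpy as np

h = np.array([0.1, 0.2, 0.4])     # bandwidths of the base estimators
true_val = 1.0
base = true_val + 5.0 * h         # each base estimate has bias c * h_l (no noise here)

# Constraints: sum(w) = 1 and sum(w * h) = 0, so the bias terms cancel.
A = np.vstack([np.ones_like(h), h])
w = np.linalg.lstsq(A, np.array([1.0, 0.0]), rcond=None)[0]  # min-norm solution
ensemble = w @ base
print(round(float(ensemble), 6))  # recovers 1.0 despite every base being biased
```

Minimizing the weight norm subject to these constraints keeps the ensemble's variance comparable to a single estimator while the bias shrinks at the parametric rate.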
Synthesis of Gaussian Trees with Correlation Sign Ambiguity: An Information Theoretic Approach
In latent Gaussian trees the pairwise correlation signs between the variables
are intrinsically unrecoverable. Such information is vital since it completely
determines the direction in which two variables are associated. In this work,
we resort to information theoretical approaches to achieve two fundamental
goals: First, we quantify the amount of information loss due to unrecoverable
sign information. Second, we show the importance of such information in
determining the maximum achievable rate region, in which the observed output
vector can be synthesized, given its probability density function. In
particular, we view the graphical model as a communication channel and propose
a new layered encoding framework to synthesize observed data using upper layer
Gaussian inputs and independent Bernoulli correlation sign inputs from each
layer. We find the achievable rate region for the rate tuples of multi-layer
latent Gaussian messages to synthesize the desired observables.
Comment: 14 pages, 9 figures; part of this work was submitted to the Allerton
2016 conference, UIUC, IL, US
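The sign ambiguity can be checked numerically on a star tree: flipping the sign of every edge correlation incident to a hidden node leaves the observed covariance matrix unchanged, so the signs cannot be recovered from observed data. The unit-variance parameterization and the values are illustrative.

```python
import numpy as np

def leaf_covariance(edge_corrs):
    """Unit-variance star tree (hidden h, observed leaves x_i):
    cov(x_i, x_j) = rho_i * rho_j for i != j."""
    rho = np.asarray(edge_corrs, dtype=float)
    cov = np.outer(rho, rho)
    np.fill_diagonal(cov, 1.0)
    return cov

rho = [0.9, 0.8, -0.6]
flipped = [-r for r in rho]   # equivalent to negating the hidden node h
print(np.allclose(leaf_covariance(rho), leaf_covariance(flipped)))  # True
```

Each hidden node contributes one such unrecoverable sign bit, which is exactly the side information the layered synthesis scheme in the abstract must supply.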
The correlation space of Gaussian latent tree models and model selection without fitting
We provide a complete description of possible covariance matrices consistent
with a Gaussian latent tree model for any tree. We then present techniques for
utilising these constraints to assess whether observed data is compatible with
that Gaussian latent tree model. Our method does not require us first to fit
such a tree. We demonstrate the usefulness of the inverse-Wishart distribution
for performing preliminary assessments of tree-compatibility using
semialgebraic constraints. Using results from Drton et al. (2008) we then
provide the appropriate moments required for test statistics for assessing
adherence to these equality constraints. These are shown to be effective even
for small sample sizes and can be easily adjusted to test either the entire
model or only certain macrostructures hypothesized within the tree. We
illustrate our exploratory tetrad analysis using a linguistic application and
our confirmatory tetrad analysis using a biological application.
Comment: 15 pages
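The tetrad constraints behind such tests can be verified on a toy single-factor example: for four leaves joined through one hidden node, the tetrad differences s12*s34 - s13*s24 and s12*s34 - s14*s23 vanish. The edge correlations below are made up for illustration.

```python
import numpy as np

rho = np.array([0.9, 0.8, 0.7, 0.6])  # edge correlations from each leaf to h
S = np.outer(rho, rho)                # implied leaf covariance (unit variances)
np.fill_diagonal(S, 1.0)

t1 = S[0, 1] * S[2, 3] - S[0, 2] * S[1, 3]
t2 = S[0, 1] * S[2, 3] - S[0, 3] * S[1, 2]
print(abs(t1) < 1e-12, abs(t2) < 1e-12)  # True True: both tetrads vanish
```

Testing such polynomial constraints directly on the sample covariance is what allows model assessment without ever fitting a latent tree.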