Parameter identifiability of discrete Bayesian networks with hidden variables
Identifiability of parameters is an essential property for a statistical
model to be useful in most settings. However, establishing parameter
identifiability for Bayesian networks with hidden variables remains
challenging. In the context of finite state spaces, we give algebraic arguments
establishing identifiability of some special models on small DAGs. We also
establish that, for fixed state spaces, generic identifiability of parameters
depends only on the Markov equivalence class of the DAG. To illustrate the use
of these results, we investigate identifiability for all binary Bayesian
networks with up to five variables, one of which is hidden and parental to all
observable ones. Surprisingly, some of these models have parameterizations that
are generically four-to-one, not two-to-one as label swapping of the hidden
states would suggest. This leads to interesting difficulties in interpreting
causal effects.
Comment: 23 pages
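The label-swapping symmetry described above can be made concrete with a small numerical sketch (not the paper's construction; all parameter values are hypothetical). In a naive Bayes model with one binary hidden parent H and two binary observables, permuting the hidden-state labels together with the rows of each conditional probability table yields a different parameterization with an identical observable distribution:

```python
import numpy as np

# Hypothetical parameters: P(H), P(X1 | H), P(X2 | H), all binary.
p_h = np.array([0.3, 0.7])                 # P(H=0), P(H=1)
p_x1 = np.array([[0.9, 0.1], [0.2, 0.8]])  # rows: H; cols: X1
p_x2 = np.array([[0.6, 0.4], [0.1, 0.9]])  # rows: H; cols: X2

def observable_dist(p_h, p_x1, p_x2):
    """Marginal P(X1, X2) = sum_h P(h) P(x1|h) P(x2|h)."""
    return np.einsum("h,hi,hj->ij", p_h, p_x1, p_x2)

original = observable_dist(p_h, p_x1, p_x2)
# Swap the hidden-state labels: permute P(H) and the rows of each CPT.
swapped = observable_dist(p_h[::-1], p_x1[::-1], p_x2[::-1])

# The two distinct parameterizations induce the same observable distribution.
assert np.allclose(original, swapped)
```

This shows why the parameterization is at least two-to-one; the abstract's point is that for some models the fiber is generically even larger.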
Identifiability of parameters in latent structure models with many observed variables
While hidden class models of various types arise in many statistical
applications, it is often difficult to establish the identifiability of their
parameters. Focusing on models in which there is some structure of independence
of some of the observed variables conditioned on hidden ones, we demonstrate a
general approach for establishing identifiability utilizing algebraic
arguments. A theorem of J. Kruskal for a simple latent-class model with finite
state space lies at the core of our results, though we apply it to a diverse
set of models. These include mixtures of both finite and nonparametric product
distributions, hidden Markov models and random graph mixture models, and lead
to a number of new results and improvements to old ones. In the parametric
setting, this approach indicates that for such models, the classical definition
of identifiability is typically too strong. Instead, generic identifiability
holds, which implies that the set of nonidentifiable parameters has measure
zero, so that parameter inference is still meaningful. In particular, this
sheds light on the properties of finite mixtures of Bernoulli products, which
have been used for decades despite being known to have nonidentifiable
parameters. In the nonparametric setting, we again obtain identifiability only
when certain restrictions are placed on the distributions that are mixed, but
we explicitly describe the conditions.
Comment: Published at http://dx.doi.org/10.1214/09-AOS689 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org)
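The classical nonidentifiability of Bernoulli mixtures mentioned above is easy to see when there are too few observables. A minimal sketch (hypothetical parameter values): with a single binary observable, two visibly different two-component mixtures induce the same observed distribution, so the classical definition fails even though, by Kruskal-type arguments, generic identifiability can be recovered once enough conditionally independent observables are available.

```python
def mixture_prob(lam, q1, q2):
    """P(X = 1) under a two-component Bernoulli mixture:
    lam * q1 + (1 - lam) * q2."""
    return lam * q1 + (1 - lam) * q2

# Two distinct parameter triples (weight, component probabilities) ...
p_a = mixture_prob(0.5, 0.2, 0.6)
p_b = mixture_prob(0.25, 0.1, 0.5)

# ... produce the same observable distribution P(X = 1) = 0.4.
assert abs(p_a - p_b) < 1e-12
```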
Exploring dependence between categorical variables: benefits and limitations of using variable selection within Bayesian clustering in relation to log-linear modelling with interaction terms
This manuscript is concerned with relating two approaches that can be used to
explore complex dependence structures between categorical variables, namely
Bayesian partitioning of the covariate space incorporating a variable selection
procedure that highlights the covariates that drive the clustering, and
log-linear modelling with interaction terms. We derive theoretical results on
this relation and discuss whether they can be employed to assist log-linear model
determination, demonstrating advantages and limitations with simulated and real
data sets. The main advantage concerns sparse contingency tables. Inferences
from clustering can potentially reduce the number of covariates considered and,
subsequently, the number of competing log-linear models, making the exploration
of the model space feasible. Variable selection within clustering can inform on
marginal independence in general, thus allowing for a more efficient
exploration of the log-linear model space. However, we show that the clustering
structure is not informative on the existence of interactions in a consistent
manner. This work is of interest to those who utilize log-linear models, as
well as practitioners such as epidemiologists who use clustering models to
reduce the dimensionality of the data and to reveal interesting patterns in how
covariates combine.
Comment: Preprint
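The claimed benefit for sparse contingency tables can be sketched with a simple count (a back-of-the-envelope illustration, not the manuscript's analysis): among p categorical covariates there are 2**p - p - 1 candidate interaction terms of order two or higher, so pruning covariates flagged by the variable selection step shrinks the log-linear model space dramatically.

```python
def n_interaction_terms(p):
    """Number of subsets of size >= 2 among p covariates, i.e. the
    candidate interaction terms in a log-linear model."""
    return 2 ** p - p - 1

# Hypothetical example: pruning from 10 covariates down to 6 cuts the
# candidate interaction terms from 1013 to 57.
full = n_interaction_terms(10)
pruned = n_interaction_terms(6)
print(full, pruned)
```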