Linear Causal Disentanglement via Interventions
Causal disentanglement seeks a representation of data involving latent
variables that relate to one another via a causal model. A representation is
identifiable if both the latent model and the transformation from latent to
observed variables are unique. In this paper, we study observed variables that
are a linear transformation of a linear latent causal model. Data from
interventions are necessary for identifiability: if one latent variable is
missing an intervention, we show that there exist distinct models that cannot
be distinguished. Conversely, we show that a single intervention on each latent
variable is sufficient for identifiability. Our proof uses a generalization of
the RQ decomposition of a matrix that replaces the usual orthogonal and upper
triangular conditions with analogues that depend on a partial order on the rows
of the matrix, where the partial order is determined by the latent causal model. We
corroborate our theoretical results with a method for causal disentanglement
that accurately recovers a latent causal model.
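For reference, here is a minimal sketch of the standard RQ decomposition that the proof generalizes, using scipy; the matrix A and its dimensions are hypothetical. The paper's variant replaces upper triangularity, which encodes a total order on the rows, with a condition encoding a partial order determined by the latent causal model; that generalization is not shown here.

```python
import numpy as np
from scipy.linalg import rq

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))  # hypothetical square matrix

# Standard RQ decomposition: A = R @ Q with R upper triangular and
# Q orthogonal. Upper triangularity encodes a *total* order on the
# rows; the paper's generalization relaxes this to a partial order.
R, Q = rq(A)

assert np.allclose(A, R @ Q)             # exact factorization
assert np.allclose(Q @ Q.T, np.eye(4))   # Q is orthogonal
assert np.allclose(np.tril(R, -1), 0)    # R is upper triangular
```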
Causal Discovery in Linear Latent Variable Models Subject to Measurement Error
We focus on causal discovery in the presence of measurement error in linear
systems where the mixing matrix, i.e., the matrix relating the independent
exogenous noise terms to the observed variables, is identified up to
permutation and scaling of its columns. We demonstrate a somewhat surprising
connection between this problem and causal discovery in the presence of
unobserved parentless causes, in the sense that there is a mapping, given by
the mixing matrix, between the underlying models to be inferred in these
problems. Consequently, any identifiability result based on the mixing matrix
for one model translates to an identifiability result for the other model. We
characterize to what extent the causal models can be identified under a
two-part faithfulness assumption. Under only the first part of the assumption
(corresponding to the conventional definition of faithfulness), the structure
can be learned up to the causal ordering among an ordered grouping of the
variables, but not all of the edges across the groups can be identified. We further
show that if both parts of the faithfulness assumption are imposed, the
structure can be learned up to a more refined ordered grouping. As a result of
this refinement, for the latent variable model with unobserved parentless
causes, the structure can be identified. Based on our theoretical results, we
propose causal structure learning methods for both models, and evaluate their
performance on synthetic data.
Comment: Accepted at the 36th Conference on Neural Information Processing Systems (NeurIPS 2022).
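To make the role of the mixing matrix concrete, below is a minimal sketch (hypothetical coefficients, not the paper's method) of a linear SEM whose mixing matrix maps exogenous noise to observed variables, together with the column permutation-and-scaling indeterminacy that noise-based identification leaves behind.

```python
import numpy as np

# Hypothetical linear SEM over three observed variables: x = B @ x + e,
# with B strictly lower triangular (acyclic). The mixing matrix
# M = (I - B)^{-1} writes each observed variable as a combination of
# the independent exogenous noise terms: x = M @ e.
B = np.array([[0.0, 0.0, 0.0],
              [0.8, 0.0, 0.0],
              [0.3, 0.5, 0.0]])
M = np.linalg.inv(np.eye(3) - B)

# Identification from the noise recovers M only up to permutation and
# scaling of its columns: M_hat below induces the same observed
# distribution once the noise terms are correspondingly relabeled and
# rescaled.
P = np.eye(3)[:, [2, 0, 1]]     # an arbitrary column permutation
D = np.diag([2.0, -1.0, 0.5])   # an arbitrary column rescaling
M_hat = M @ P @ D
```

The paper's contribution is then to characterize which causal structures remain identifiable despite this indeterminacy, under the two-part faithfulness assumption.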
Learning nonparametric latent causal graphs with unknown interventions
We establish conditions under which latent causal graphs are
nonparametrically identifiable and can be reconstructed from unknown
interventions in the latent space. Our primary focus is the identification of
the latent structure in measurement models without parametric assumptions such
as linearity or Gaussianity. Moreover, we do not assume the number of hidden
variables is known, and we show that at most one unknown intervention per
hidden variable is needed. This extends a recent line of work on learning
causal representations from observations and interventions. The proofs are
constructive and introduce two new graphical concepts -- imaginary subsets and
isolated edges -- that may be useful in their own right. As a matter of
independent interest, the proofs also involve a novel characterization of the
limits of edge orientations within the equivalence class of DAGs induced by
unknown interventions. These are the first results to characterize the
conditions under which causal representations are identifiable without making
any parametric assumptions in a general setting with unknown interventions and
without faithfulness.
Comment: To appear at NeurIPS 2023.
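As a concrete (entirely hypothetical) instance of this setting, the sketch below generates data from a nonparametric measurement model with latent graph h1 -> h2, plus one interventional regime whose latent target is unknown to the learner. It illustrates the kind of data such methods consume, not the paper's identification procedure; all mechanisms and noise choices are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n, intervene_h2=False):
    """Hypothetical measurement model: latent DAG h1 -> h2, each latent
    observed only through its own noisy children; no linearity or
    Gaussianity assumed."""
    h1 = rng.exponential(size=n)                     # exogenous latent
    if intervene_h2:                                 # unknown-target intervention
        h2 = rng.uniform(size=n)                     # mechanism for h2 replaced
    else:
        h2 = np.tanh(h1) + rng.laplace(scale=0.5, size=n)
    x1 = h1 + 0.1 * rng.standard_normal(n)           # children of h1
    x2 = np.log1p(h1) + 0.1 * rng.standard_normal(n)
    x3 = np.sin(h2) + 0.1 * rng.standard_normal(n)   # children of h2
    x4 = h2 ** 3 + 0.1 * rng.standard_normal(n)
    return np.column_stack([x1, x2, x3, x4])

obs = sample(2000)          # observational regime
intv = sample(2000, True)   # one unknown intervention per hidden variable
```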