Learning Independent Causal Mechanisms
Statistical learning relies upon data sampled from a distribution, and we
usually do not care what actually generated it in the first place. From the
point of view of causal modeling, the structure of each distribution is induced
by physical mechanisms that give rise to dependences between observables.
Mechanisms, however, can be meaningful autonomous modules of generative models
that make sense beyond a particular entailed data distribution, lending
themselves to transfer between problems. We develop an algorithm to recover a
set of independent (inverse) mechanisms from a set of transformed data points.
The approach is unsupervised and based on a set of experts that compete for
data generated by the mechanisms, driving specialization. We analyze the
proposed method in a series of experiments on image data. Each expert learns to
map a subset of the transformed data back to a reference distribution. The
learned mechanisms generalize to novel domains. We discuss implications for
transfer learning and links to recent trends in generative modeling. Comment: ICML 201
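The competitive-experts idea in this abstract can be illustrated with a toy sketch. Note this is a deliberately simplified assumption, not the paper's implementation: the paper scores experts with adversarially trained discriminators, whereas here the log-likelihood under a known reference Gaussian stands in for that score, and the unknown mechanisms are plain translations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reference distribution: a standard 2-D Gaussian.
# Unknown mechanisms: fixed translations applied to reference samples.
mechanisms = [np.array([5.0, 0.0]), np.array([-5.0, 0.0])]

# Experts: learnable inverse translations, randomly initialised near zero.
experts = [rng.normal(scale=0.1, size=2) for _ in mechanisms]
lr = 0.1

for step in range(3000):
    # Draw a transformed sample from a randomly chosen mechanism.
    m = mechanisms[rng.integers(len(mechanisms))]
    x = rng.normal(size=2) + m

    # Each expert proposes an inverse-mapped point; score it by its
    # log-likelihood under the reference N(0, I), up to a constant.
    outputs = [x + e for e in experts]
    scores = [-0.5 * np.sum(o ** 2) for o in outputs]

    # Winner-take-all: only the highest-scoring expert is updated,
    # nudging its output toward the reference mode. Competition for
    # samples is what drives specialisation.
    w = int(np.argmax(scores))
    experts[w] -= lr * outputs[w]  # gradient step on the winner's offset

# After training, each expert approximately inverts one mechanism,
# i.e. expert ≈ -mechanism under some one-to-one assignment.
```

With well-separated mechanisms the winner assignment stabilises early, so each expert sees data from only one mechanism, mirroring the specialisation effect the abstract describes.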
Invariant Models for Causal Transfer Learning
Methods of transfer learning try to combine knowledge from several related
tasks (or domains) to improve performance on a test task. Inspired by causal
methodology, we relax the usual covariate shift assumption and assume that it
holds true for a subset of predictor variables: the conditional distribution of
the target variable given this subset of predictors is invariant over all
tasks. We show how this assumption can be motivated from ideas in the field of
causality. We focus on the problem of Domain Generalization, in which no
examples from the test task are observed. We prove that in an adversarial
setting using this subset for prediction is optimal in Domain Generalization;
we further provide examples in which the tasks are sufficiently diverse and
the estimator therefore outperforms pooling the data, even on average. If
examples from the test task are available, we also provide a method to transfer
knowledge from the training tasks and exploit all available features for
prediction. However, we provide no guarantees for this method. We introduce a
practical method which allows for automatic inference of the above subset and
provide corresponding code. We present results on synthetic data sets and a
gene deletion data set.
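The invariant-subset assumption can be sketched in a few lines. The toy example below is an illustration, not the paper's inference procedure: it replaces a formal statistical test with a crude spread statistic over per-task regression coefficients and residual variances, and selects the predictor subset for which the conditional of the target looks most stable across tasks.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

def make_task(alpha, n=500):
    """One task: Y <- 2*X1 + N(0,1) is invariant across tasks,
    while the X2-Y relation (strength alpha) varies per task."""
    x1 = rng.normal(size=n)
    y = 2.0 * x1 + rng.normal(size=n)
    x2 = alpha * y + rng.normal(size=n)
    return np.column_stack([x1, x2]), y

tasks = [make_task(a) for a in (0.5, 1.5, -1.0)]  # three training tasks

def instability(subset):
    """Spread of per-task regression coefficients and residual variances.
    A small value suggests Y | X_subset is invariant over tasks."""
    coefs, resvars = [], []
    for X, y in tasks:
        Z = X[:, subset]
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        coefs.append(beta)
        resvars.append((y - Z @ beta).var())
    return float(np.ptp(np.array(coefs), axis=0).sum() + np.ptp(resvars))

# Search all non-empty subsets of the two predictors.
subsets = [s for k in (1, 2) for s in combinations(range(2), k)]
best = min(subsets, key=instability)  # expected: (0,) i.e. {X1}
```

Here the subset {X1} is selected because regressing Y on X1 gives the same coefficient and noise level in every task, while any subset containing X2 yields task-dependent fits; this is the sense in which predicting from the invariant subset transfers to unseen tasks.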
On the unity between observational and experimental causal discovery
In “Flagpoles anyone? Causal and explanatory asymmetries”, James Woodward supplements his celebrated interventionist account of causation and explanation with a set of new ideas about causal and explanatory asymmetries, which he extracts from some cutting-edge methods for causal discovery from observational data. Among other things, Woodward draws interesting connections between observational causal discovery and interventionist themes that are inspired in the first place by experimental causal discovery, alluding to a sort of unity between observational and experimental causal discovery. In this paper, I make explicit what I take to be the implicated unity. Like experimental causal discovery, observational causal discovery also relies on interventions (or exogenous variations, to be more accurate), albeit interventions that are not carried out by investigators and hence need to be detected as part of the inference. The observational patterns appealed to in observational causal discovery are not only surrogates for would-be interventions, as Woodward sometimes puts it; they also serve to mark relevant interventions that actually happen in the data-generating process.
Flagpoles Anyone? Causal and Explanatory Asymmetries
This paper discusses some procedures developed in recent work in machine learning for inferring causal direction from observational data. The role of independence and invariance assumptions is emphasized. Several familiar examples including Hempel’s flagpole problem are explored in the light of these ideas. The framework is then applied to problems having to do with explanatory direction in non-causal explanation.
Annual report on research activities 2014/15
Distinguishing Cause from Effect Based on Exogeneity
Recent developments in structural equation modeling have produced several
methods that can usually distinguish cause from effect in the two-variable
case. For that purpose, however, one has to impose substantial structural
constraints or smoothness assumptions on the functional causal models. In this
paper, we consider the problem of determining the causal direction from a
related but different point of view, and propose a new framework for causal
direction determination. We show that it is possible to perform causal
inference based on the condition that the cause is "exogenous" for the
parameters involved in the generating process from the cause to the effect. In
this way, we avoid the structural constraints required by the SEM-based
approaches. In particular, we exploit nonparametric methods to estimate
marginal and conditional distributions, and propose a bootstrap-based approach
to test for the exogeneity condition; the testing results indicate the causal
direction between two variables. The proposed method is validated on both
synthetic and real data. Comment: 11 pages, 4 figures, published in Proceedings of the 15th Conference on Theoretical Aspects of Rationality and Knowledge (TARK'15).
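The exogeneity idea, that the mechanism from cause to effect should carry no information about the cause's own distribution, can be illustrated with a simplified stand-in for the paper's bootstrap test. The sketch below uses a residual-dependence heuristic instead: fit a polynomial regression in both candidate directions and check in which direction the squared residuals remain (approximately) uninformative about the regressor. The cubic mechanism and all names are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000

# Ground truth: x causes y through a nonlinear mechanism with additive noise.
x = rng.uniform(-1.0, 1.0, size=n)
y = x ** 3 + 0.2 * rng.normal(size=n)

def dependence_score(cause, effect, deg=3):
    """Fit a degree-`deg` polynomial regression of effect on cause and
    measure how strongly the squared residuals still depend on the
    squared regressor. In the true causal direction this residual
    dependence should be near zero."""
    coeffs = np.polyfit(cause, effect, deg)
    resid = effect - np.polyval(coeffs, cause)
    return abs(np.corrcoef(cause ** 2, resid ** 2)[0, 1])

score_xy = dependence_score(x, y)  # hypothesis: x -> y
score_yx = dependence_score(y, x)  # hypothesis: y -> x
direction = "x -> y" if score_xy < score_yx else "y -> x"
```

In the anticausal fit the residual spread varies systematically with the regressor (wide near y = 0, narrow at the extremes), so the asymmetry singles out the causal direction; the paper's method replaces this heuristic with nonparametric distribution estimates and a bootstrap test of the exogeneity condition itself.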