Deep Causal Learning: Representation, Discovery and Inference
Causal learning has attracted much attention in recent years because
causality reveals the essential relationships between things and indicates how
the world evolves. However, traditional causal learning methods face many
problems and bottlenecks, such as high-dimensional unstructured variables,
combinatorial optimization problems, unknown interventions, unobserved
confounders, selection bias, and estimation bias. Deep causal
learning, that is, causal learning based on deep neural networks, brings new
insights for addressing these problems. While many deep learning-based causal
discovery and causal inference methods have been proposed, few reviews explore
the internal mechanisms by which deep learning improves causal learning. In
this article, we comprehensively review how deep learning can
contribute to causal learning by addressing conventional challenges from three
aspects: representation, discovery, and inference. We argue that deep causal
learning is important for both extending the theory and broadening the
applications of causal science, and that it is an indispensable part of general
artificial intelligence. We conclude the article with a summary of open issues
and potential directions for future work.
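
To make the survey's "discovery" challenge concrete: one way deep causal learning sidesteps the combinatorial search over DAGs is to recast structure learning as continuous optimization. The sketch below illustrates that general idea with a NOTEARS-style differentiable acyclicity penalty; it is not a method from this survey, and the synthetic data, fixed penalty weight, and 0.3 threshold are illustrative assumptions (the full method uses an augmented-Lagrangian scheme rather than a fixed penalty).

    import torch

    d, n = 5, 1000
    torch.manual_seed(0)

    # Synthetic linear SEM: W_true is strictly upper-triangular, hence a DAG.
    W_true = torch.triu(torch.randn(d, d), diagonal=1) * (torch.rand(d, d) > 0.5)
    X = torch.randn(n, d)                  # start from independent noise
    for j in range(d):                     # fill columns in causal (topological) order
        X[:, j] = X[:, j] + X @ W_true[:, j]

    W = torch.zeros(d, d, requires_grad=True)
    opt = torch.optim.Adam([W], lr=0.02)

    for step in range(2000):
        opt.zero_grad()
        recon = ((X - X @ W) ** 2).mean()               # least-squares fit to the data
        h = torch.trace(torch.matrix_exp(W * W)) - d    # differentiable; 0 iff W is acyclic
        loss = recon + 10.0 * h + 0.01 * W.abs().sum()  # sparsity + acyclicity terms
        loss.backward()
        opt.step()

    print((W.detach().abs() > 0.3).int())  # thresholded estimate of the graph

Because the acyclicity constraint is expressed as a smooth function of the weight matrix, the whole search runs by gradient descent instead of enumerating graph structures.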
CausaLM: Causal Model Explanation Through Counterfactual Language Models
Understanding predictions made by deep neural networks is notoriously
difficult, but also crucial to their dissemination. Like all ML-based methods,
they are only as good as their training data and can also capture unwanted biases.
While there are tools that can help understand whether such biases exist, they
do not distinguish between correlation and causation, and might be ill-suited
for text-based models and for reasoning about high-level language concepts. A
key problem in estimating the causal effect of a concept of interest on a given
model is that this estimation requires the generation of counterfactual
examples, which is challenging with existing generation technology. To bridge
that gap, we propose CausaLM, a framework for producing causal model
explanations using counterfactual language representation models. Our approach
is based on fine-tuning deep contextualized embedding models with auxiliary
adversarial tasks derived from the causal graph of the problem. Concretely, we
show that by carefully choosing auxiliary adversarial pre-training tasks,
language representation models such as BERT can effectively learn a
counterfactual representation for a given concept of interest, and be used to
estimate its true causal effect on model performance. A byproduct of our method
is a language representation model that is unaffected by the tested concept,
which can be useful in mitigating unwanted bias ingrained in the data.
Comment: Our code and data are available at: https://amirfeder.github.io/CausaLM/
Under review for the Computational Linguistics journal.
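
The adversarial fine-tuning idea in this abstract can be illustrated compactly. The sketch below is not the CausaLM code: the toy encoder, heads, and random data are placeholders. It trains a representation on a main task while a gradient-reversal head tries to recover the treated concept, so gradients push concept information out of the representation:

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        # Identity on the forward pass; negates gradients on the backward pass.
        @staticmethod
        def forward(ctx, x):
            return x.view_as(x)
        @staticmethod
        def backward(ctx, g):
            return -g

    encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU())  # toy stand-in for BERT
    task_head = nn.Linear(64, 2)       # main prediction task
    concept_head = nn.Linear(64, 2)    # adversary tries to recover the concept

    params = (list(encoder.parameters()) + list(task_head.parameters())
              + list(concept_head.parameters()))
    opt = torch.optim.Adam(params, lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randn(128, 32)                 # toy inputs (placeholder data)
    y_task = torch.randint(0, 2, (128,))     # main-task labels
    y_concept = torch.randint(0, 2, (128,))  # concept labels to be "unlearned"

    for _ in range(200):
        opt.zero_grad()
        z = encoder(x)
        # The main-task loss pulls useful information into z; the reversed
        # concept loss pushes concept information out of z.
        loss = (loss_fn(task_head(z), y_task)
                + loss_fn(concept_head(GradReverse.apply(z)), y_concept))
        loss.backward()
        opt.step()

After training, the encoder's representation supports the main task but carries little signal about the concept, which is the property the paper exploits to estimate the concept's causal effect on model performance.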
Causal Discovery in Physical Systems from Videos
Causal discovery is at the core of human cognition. It enables us to reason
about the environment and make counterfactual predictions about unseen
scenarios that can vastly differ from our previous experiences. We consider the
task of causal discovery from videos in an end-to-end fashion without
supervision on the ground-truth graph structure. In particular, our goal is to
discover the structural dependencies among environmental and object variables:
inferring the type and strength of interactions that have a causal effect on
the behavior of the dynamical system. Our model consists of (a) a perception
module that extracts a semantically meaningful and temporally consistent
keypoint representation from images, (b) an inference module for determining
the graph distribution induced by the detected keypoints, and (c) a dynamics
module that can predict the future by conditioning on the inferred graph. We
assume access to different configurations and environmental conditions, i.e.,
data from unknown interventions on the underlying system; thus, we can hope to
discover the correct underlying causal graph without explicit interventions. We
evaluate our method in a planar multi-body interaction environment and
scenarios involving fabrics of different shapes like shirts and pants.
Experiments demonstrate that our model can correctly identify the interactions
from a short sequence of images and make long-term future predictions. The
causal structure assumed by the model also allows it to make counterfactual
predictions and extrapolate to systems of unseen interaction graphs or graphs
of various sizes.
Comment: NeurIPS 2020. Project page: https://yunzhuli.github.io/V-CDN
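
The three-module design described in this abstract can be sketched structurally. The skeleton below is illustrative only: the layer sizes, the dense pairwise edge scorer, and the single message-passing step are placeholder assumptions, not the authors' V-CDN architecture. It shows how frames flow through perception, inference, and dynamics:

    import torch
    import torch.nn as nn

    K, D = 8, 2  # number of keypoints, spatial dimension (placeholder sizes)

    class Perception(nn.Module):
        # Maps image frames to K 2-D keypoints (stand-in for a keypoint detector CNN).
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, K * D))
        def forward(self, frames):                  # (B, 3, 32, 32)
            return self.net(frames).view(-1, K, D)  # (B, K, D)

    class Inference(nn.Module):
        # Scores every directed keypoint pair to parameterize an edge distribution.
        def __init__(self):
            super().__init__()
            self.edge_mlp = nn.Sequential(
                nn.Linear(2 * D, 32), nn.ReLU(), nn.Linear(32, 1))
        def forward(self, kps):                     # (B, K, D)
            src = kps.unsqueeze(2).expand(-1, K, K, D)
            dst = kps.unsqueeze(1).expand(-1, K, K, D)
            logits = self.edge_mlp(torch.cat([src, dst], dim=-1)).squeeze(-1)
            return torch.sigmoid(logits)            # (B, K, K) edge probabilities

    class Dynamics(nn.Module):
        # Predicts the next keypoint positions with one round of message
        # passing weighted by the inferred graph.
        def __init__(self):
            super().__init__()
            self.msg = nn.Linear(D, D)
            self.update = nn.Linear(2 * D, D)
        def forward(self, kps, edges):
            agg = torch.einsum('bij,bjd->bid', edges, self.msg(kps))  # weighted messages
            return kps + self.update(torch.cat([kps, agg], dim=-1))   # residual step

    frames = torch.randn(4, 3, 32, 32)   # a batch of toy frames
    kps = Perception()(frames)
    graph = Inference()(kps)
    next_kps = Dynamics()(kps, graph)
    print(next_kps.shape)                # torch.Size([4, 8, 2])

Conditioning the dynamics module on the inferred graph is what lets such a model answer counterfactual queries: swapping in a different edge set changes the predicted rollout.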