Contagion Effect Estimation Using Proximal Embeddings
Contagion effect refers to the causal effect of peers' behavior on an
individual's outcome in social networks. Contagion can be confounded by
latent homophily, which makes contagion effect estimation difficult: nodes in
a homophilic network tend to have ties to peers with similar attributes and
can behave similarly without influencing one another. One way to account for latent
homophily is by considering proxies for the unobserved confounders. However, as
we demonstrate in this paper, existing proxy-based methods for contagion effect
estimation have a very high variance when the proxies are high-dimensional. To
address this issue, we introduce a novel framework, Proximal Embeddings
(ProEmb), which integrates variational autoencoders (VAEs) with adversarial networks to
create low-dimensional representations of high-dimensional proxies and help
with identifying contagion effects. While VAEs have been used previously for
representation learning in causal inference, a novel aspect of our approach is
the additional component of adversarial networks to balance the representations
of different treatment groups, which is essential in causal inference from
observational data where these groups typically come from different
distributions. We empirically show that our method significantly increases the
accuracy and reduces the variance of contagion effect estimation in
observational network data compared to state-of-the-art methods.
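The core idea of the ProEmb objective can be illustrated numerically. The following is a minimal, illustrative sketch only, with synthetic data and toy linear stand-ins for the encoder and adversary (the names `W_enc`, `w_disc`, and `lam` are ours, not the paper's): a VAE-style encoder compresses high-dimensional proxies into a low-dimensional embedding, an adversary tries to predict the treatment group from the embedding, and the encoder is trained to reconstruct well while maximizing the adversary's loss so that treatment-group representations become balanced.

```python
import numpy as np

# Synthetic stand-ins: high-dimensional proxies X and a binary
# treatment-group indicator T.
rng = np.random.default_rng(1)
n, d, k = 500, 50, 4               # samples, proxy dim, embedding dim

X = rng.normal(size=(n, d))        # high-dimensional proxies
T = rng.integers(0, 2, size=n)     # binary treatment group

W_enc = rng.normal(size=(d, k)) / np.sqrt(d)   # toy linear "encoder"
Z = X @ W_enc                                  # low-dim embedding

w_disc = rng.normal(size=k) * 0.1              # toy adversary weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Adversary's cross-entropy loss for predicting T from the embedding.
p = sigmoid(Z @ w_disc)
disc_loss = -np.mean(T * np.log(p + 1e-9) + (1 - T) * np.log(1 - p + 1e-9))

# Reconstruction term (decoder folded into W_enc.T in this toy version;
# the KL term of a real VAE is omitted here).
recon_loss = np.mean((X - Z @ W_enc.T) ** 2)

# Encoder objective: reconstruct well while *maximizing* the adversary's
# loss (a minimax game), which pushes the embeddings of the two
# treatment groups toward the same distribution.
lam = 1.0
encoder_objective = recon_loss - lam * disc_loss
```

In an actual implementation both the encoder and the adversary would be deep networks updated alternately (or via gradient reversal); this sketch only exhibits the shape of the combined objective.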
Deep Causal Learning for Robotic Intelligence
This invited review discusses causal learning in the context of robotic
intelligence. It first introduces psychological findings on causal learning
in human cognition, then surveys traditional statistical approaches to causal
discovery and causal inference. The paper reviews recent deep causal learning
algorithms with a focus on their architectures and the benefits of using deep
networks, and discusses the gap between deep causal learning and the needs of
robotic intelligence.
Treatment Learning Causal Transformer for Noisy Image Classification
Current top-notch deep learning (DL) based vision models are primarily based
on exploring and exploiting the inherent correlations between training data
samples and their associated labels. However, a known practical challenge is
their degraded performance against "noisy" data, induced by different
circumstances such as spurious correlations, irrelevant contexts, domain shift,
and adversarial attacks. In this work, we incorporate this binary information
of "existence of noise" as treatment into image classification tasks to improve
prediction accuracy by jointly estimating their treatment effects. Motivated
by causal variational inference, we propose a transformer-based architecture,
Treatment Learning Causal Transformer (TLT), that uses a latent generative
model to estimate robust feature representations from current observational
input for noisy image classification. Depending on the estimated noise level
(modeled as a binary treatment factor), TLT assigns the corresponding inference
network trained by the designed causal loss for prediction. We also create new
noisy image datasets incorporating a wide range of noise factors (e.g., object
masking, style transfer, and adversarial perturbation) for performance
benchmarking. The superior performance of TLT in noisy image classification is
further validated by several refutation evaluation metrics. As a by-product,
TLT also improves visual salience methods for perceiving noisy images.Comment: Accepted to IEEE WACV 2023. The first version was finished in May
201
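The treatment-dependent routing step described in the abstract can be sketched as follows. This is a hedged illustration, not the paper's code: the two "inference networks" are hypothetical linear heads, and `predict`, `W_clean`, and `W_noisy` are names we introduce for illustration. The estimated noise level, modeled as a binary treatment, selects which head produces the prediction.

```python
import numpy as np

rng = np.random.default_rng(0)
d, c = 16, 3                        # feature dim, number of classes

W_clean = rng.normal(size=(d, c))   # head for the "no noise" treatment arm
W_noisy = rng.normal(size=(d, c))   # head for the "noise present" arm
                                    # (trained with the causal loss in TLT)

def predict(features, noise_prob, threshold=0.5):
    """Route features to the head matching the estimated binary treatment."""
    W = W_noisy if noise_prob >= threshold else W_clean
    logits = features @ W
    return int(np.argmax(logits))

x = rng.normal(size=d)              # robust feature representation
label_clean = predict(x, noise_prob=0.1)   # routed to the clean head
label_noisy = predict(x, noise_prob=0.9)   # routed to the noisy head
```

In TLT itself the features come from a latent generative model and the heads are trained jointly; only the routing-by-treatment structure is shown here.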
CausaLM: Causal Model Explanation Through Counterfactual Language Models
Understanding predictions made by deep neural networks is notoriously
difficult, but also crucial to their dissemination. Like all ML-based methods,
they are only as good as their training data, and can also capture unwanted biases.
While there are tools that can help understand whether such biases exist, they
do not distinguish between correlation and causation, and might be ill-suited
for text-based models and for reasoning about high level language concepts. A
key problem of estimating the causal effect of a concept of interest on a given
model is that this estimation requires the generation of counterfactual
examples, which is challenging with existing generation technology. To bridge
that gap, we propose CausaLM, a framework for producing causal model
explanations using counterfactual language representation models. Our approach
is based on fine-tuning of deep contextualized embedding models with auxiliary
adversarial tasks derived from the causal graph of the problem. Concretely, we
show that by carefully choosing auxiliary adversarial pre-training tasks,
language representation models such as BERT can effectively learn a
counterfactual representation for a given concept of interest, and be used to
estimate its true causal effect on model performance. A byproduct of our method
is a language representation model that is unaffected by the tested concept,
which can be useful in mitigating unwanted bias ingrained in the data.
Comment: Our code and data are available at:
https://amirfeder.github.io/CausaLM/. Under review for the Computational
Linguistics journal
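The final estimation step CausaLM performs can be illustrated with a toy computation. This is an illustrative sketch only: `h_orig`, `h_cf`, and the classifier head `w` are synthetic stand-ins, and ablating a single coordinate is a crude proxy for what adversarial fine-tuning actually does to the representation. The estimated causal effect of a concept on the model is the average shift in predictions between the original and counterfactual (concept-removed) representations.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 1000, 8

h_orig = rng.normal(size=(n, d))    # original text representations
h_cf = h_orig.copy()
h_cf[:, 0] = 0.0                    # stand-in for the concept being
                                    # removed by adversarial fine-tuning

w = rng.normal(size=d)              # downstream classifier head

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Estimated causal effect of the concept on the model's output:
# mean prediction difference, original vs. counterfactual representation.
effect = float(np.mean(sigmoid(h_orig @ w) - sigmoid(h_cf @ w)))
```

In CausaLM the counterfactual representation comes from a BERT model fine-tuned with the adversarial auxiliary task, not from zeroing a coordinate; the comparison structure is the same.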
Deep Proxy Causal Learning and its Application to Confounded Bandit Policy Evaluation
Proxy causal learning (PCL) is a method for estimating the causal effect of
treatments on outcomes in the presence of unobserved confounding, using
proxies (structured side information) for the confounder. This is achieved
via two-stage regression: in the first stage, we model relations among the
treatment and proxies; in the second stage, we use this model to learn the
effect of treatment on the outcome, given the context provided by the
proxies. PCL guarantees recovery of the true causal effect, subject to
identifiability conditions. We propose a novel method for PCL, the deep
feature proxy variable method (DFPV), to address the case where the proxies,
treatments, and outcomes are high-dimensional and have nonlinear complex
relationships, as represented by deep neural network features. We show that
DFPV outperforms recent state-of-the-art PCL methods on challenging synthetic
benchmarks, including settings involving high-dimensional image data.
Furthermore, we show that PCL can be applied to off-policy evaluation for the
confounded bandit problem, in which DFPV also exhibits competitive
performance.
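The two-stage regression structure described above can be demonstrated in the linear case, where it reduces to proximal two-stage least squares (DFPV replaces these linear regressions with deep feature maps). The simulation below is our own illustrative setup, not the paper's benchmark: an unobserved confounder U drives a treatment-side proxy Z, an outcome-side proxy W, the treatment A, and the outcome Y, whose true causal effect is 2.0.

```python
import numpy as np

# Synthetic linear data with unobserved confounding.
rng = np.random.default_rng(0)
n = 200_000
U = rng.normal(size=n)                      # unobserved confounder
Z = U + rng.normal(size=n)                  # treatment-inducing proxy
W = U + rng.normal(size=n)                  # outcome-inducing proxy
A = U + rng.normal(size=n)                  # confounded treatment
Y = 2.0 * A + 3.0 * U + rng.normal(size=n)  # true causal effect = 2.0

def ols(X, y):
    """Least-squares coefficients of y regressed on the columns of X."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: model relations among treatment and proxies -- predict the
# outcome-side proxy W from the treatment A and the proxy Z.
X1 = np.column_stack([A, Z])
W_hat = X1 @ ols(X1, W)

# Stage 2: regress Y on A and the stage-1 prediction; the coefficient
# on A is the causal-effect estimate.
X2 = np.column_stack([A, W_hat])
beta_pcl = ols(X2, Y)[0]

# Naive regression of Y on A alone is biased by the confounder U
# (it converges to 3.5 here, not 2.0).
beta_naive = ols(A[:, None], Y)[0]

print(round(beta_pcl, 2), round(beta_naive, 2))
```

With this data-generating process the two-stage estimate recovers the true effect of 2.0, while the naive regression absorbs the confounder's contribution; DFPV's contribution is making stage 1 and stage 2 work when these relationships are nonlinear and high-dimensional.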