CausaLM: Causal Model Explanation Through Counterfactual Language Models
Understanding predictions made by deep neural networks is notoriously
difficult, but also crucial to their dissemination. Like all ML-based methods,
they are only as good as their training data, and can also capture unwanted biases.
While there are tools that can help understand whether such biases exist, they
do not distinguish between correlation and causation, and might be ill-suited
for text-based models and for reasoning about high level language concepts. A
key problem of estimating the causal effect of a concept of interest on a given
model is that this estimation requires the generation of counterfactual
examples, which is challenging with existing generation technology. To bridge
that gap, we propose CausaLM, a framework for producing causal model
explanations using counterfactual language representation models. Our approach
is based on fine-tuning of deep contextualized embedding models with auxiliary
adversarial tasks derived from the causal graph of the problem. Concretely, we
show that by carefully choosing auxiliary adversarial pre-training tasks,
language representation models such as BERT can effectively learn a
counterfactual representation for a given concept of interest, and be used to
estimate its true causal effect on model performance. A byproduct of our method
is a language representation model that is unaffected by the tested concept,
which can be useful in mitigating unwanted bias ingrained in the data.Comment: Our code and data are available at:
https://amirfeder.github.io/CausaLM/ Under review for the Computational
Linguistics journa
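As an illustration of the adversarial fine-tuning idea this abstract describes, here is a minimal PyTorch sketch of concept removal via a gradient-reversal head, a standard device for adversarial representation learning. It is a sketch under assumptions, not the paper's implementation: CausaLM derives its auxiliary adversarial tasks from the problem's causal graph, and all names and dimensions below are illustrative.

```python
# Minimal sketch (illustrative, not the CausaLM implementation) of adversarial
# concept removal: a gradient-reversal layer lets an auxiliary concept
# classifier train normally while pushing the shared encoder to discard
# information about the concept.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negated, scaled gradient on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None  # no gradient for lambd

class AdversarialConceptHead(nn.Module):
    """Predicts the tested concept from encoder features through gradient
    reversal, so minimizing its loss removes the concept from the encoder."""
    def __init__(self, hidden_dim=768, n_concept_labels=2, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.classifier = nn.Linear(hidden_dim, n_concept_labels)

    def forward(self, encoder_output):
        reversed_feats = GradReverse.apply(encoder_output, self.lambd)
        return self.classifier(reversed_feats)
```

In training, this head's cross-entropy loss would be added to the main fine-tuning objective, so the encoder (e.g., BERT's pooled output) learns a counterfactual representation from which the tested concept cannot be recovered.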
An Empirical Evaluation of Visual Question Answering for Novel Objects
We study the problem of answering questions about images in the harder
setting, where the test questions and corresponding images contain novel
objects, which were not queried about in the training data. Such a setting is
inevitable in the real world: owing to the heavy-tailed distribution of visual
categories, some objects will not be annotated in the training set. We show
that the performance of two popular existing methods drops significantly (by up
to 28%) when evaluated on novel objects compared to known objects. We
propose methods which use large existing external corpora of (i) unlabeled
text, i.e. books, and (ii) images tagged with classes, to achieve novel object
based visual question answering. We conduct systematic empirical studies for
both an oracle case, where the novel objects are known textually, and a fully
automatic case with no explicit knowledge of the novel objects, under the
minimal assumption that the novel objects are semantically related to the
existing objects in training. The proposed methods for novel object based
visual question answering are modular and can potentially be used with many
visual question answering architectures. We show consistent improvements with
the two popular architectures and give qualitative analysis of the cases where
the model does well and of those where it fails to bring improvements.
Comment: 11 pages, 4 figures, accepted to CVPR 2017 (poster).
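The "semantically related" assumption in the automatic case can be made concrete with a small sketch: bridge a novel object to its nearest known training object in a word-embedding space learned from unlabeled text. This is a toy illustration under assumptions, not the paper's method; the vectors and object names are made up.

```python
# Toy sketch: map a novel object to the closest known training object by
# cosine similarity of word embeddings (e.g., vectors trained on books).
import numpy as np

def nearest_known_object(novel_word, known_words, embeddings):
    """Return the known object whose embedding is most cosine-similar to the
    novel object's embedding, along with the similarity score."""
    v = embeddings[novel_word]
    v = v / np.linalg.norm(v)
    best, best_sim = None, -1.0
    for w in known_words:
        u = embeddings[w]
        sim = float(np.dot(v, u / np.linalg.norm(u)))
        if sim > best_sim:
            best, best_sim = w, sim
    return best, best_sim

# Made-up 3-d vectors for illustration only.
emb = {
    "zebra": np.array([0.9, 0.1, 0.0]),
    "horse": np.array([0.8, 0.2, 0.1]),
    "table": np.array([0.0, 0.1, 0.9]),
}
print(nearest_known_object("zebra", ["horse", "table"], emb))  # -> ('horse', ...)
```

A VQA model could then fall back on what it learned for the related known object when asked about the novel one; the actual transfer mechanisms in the paper are richer than this lookup.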
Discovery and recognition of motion primitives in human activities
We present a novel framework for the automatic discovery and recognition of
motion primitives in videos of human activities. Given the 3D pose of a human
in a video, human motion primitives are discovered by optimizing the `motion
flux', a quantity which captures the motion variation of a group of skeletal
joints. A normalization of the primitives is proposed to make them invariant
to a subject's anatomical variations and to the data sampling rate. The
discovered primitives are unknown and unlabeled, and are grouped into classes
without supervision via a hierarchical non-parametric Bayesian mixture model.
Once classes are determined and labeled, they are further analyzed to
establish models for recognizing the discovered primitives. Each
primitive model is defined by a set of learned parameters.
Given new video data and the estimated pose of the subject in the video, the
motion is segmented into primitives, which are recognized with a probability
computed from the parameters of the learned models.
Using our framework, we build a publicly available dataset of human motion
primitives from sequences taken from well-known motion capture datasets. We
expect that our framework, by providing an objective way of discovering and
categorizing human motion, will be a useful tool in numerous research fields,
including video analysis, human-inspired motion generation, learning by
demonstration, intuitive human-robot interaction, and human behavior analysis.
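To make the "motion flux" notion concrete, here is a minimal NumPy sketch of a toy proxy: the aggregate frame-to-frame displacement of a selected joint group in a pose sequence. The paper's actual definition of motion flux, and the normalization it applies, differ; every name below is illustrative.

```python
# Toy proxy (not the paper's definition) for the motion variation of a group
# of skeletal joints across a clip.
import numpy as np

def motion_flux(poses, joint_ids):
    """poses: (T, J, 3) array of 3D joint positions over T frames.
    joint_ids: indices of the joint group of interest.
    Returns a scalar summarizing how much that group moves over the clip."""
    group = poses[:, joint_ids, :]        # (T, k, 3) positions of the group
    velocities = np.diff(group, axis=0)   # (T-1, k, 3) frame-to-frame motion
    return float(np.linalg.norm(velocities, axis=-1).sum())

# Example: 10 frames, 15 joints; flux of a hypothetical right-arm group.
poses = np.random.rand(10, 15, 3)
print(motion_flux(poses, [5, 6, 7]))
```

Segmenting a clip at local minima of such a quantity is one plausible way to carve motion into primitives, in the spirit of the optimization the abstract describes.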