CausaLM: Causal Model Explanation Through Counterfactual Language Models
Understanding predictions made by deep neural networks is notoriously
difficult, but also crucial to their dissemination. Like all ML-based methods, they are only as good as their training data, and can also capture unwanted biases.
While there are tools that can help understand whether such biases exist, they
do not distinguish between correlation and causation, and might be ill-suited
for text-based models and for reasoning about high-level language concepts. A
key problem of estimating the causal effect of a concept of interest on a given
model is that this estimation requires the generation of counterfactual
examples, which is challenging with existing generation technology. To bridge
that gap, we propose CausaLM, a framework for producing causal model
explanations using counterfactual language representation models. Our approach
is based on fine-tuning of deep contextualized embedding models with auxiliary
adversarial tasks derived from the causal graph of the problem. Concretely, we
show that by carefully choosing auxiliary adversarial pre-training tasks,
language representation models such as BERT can effectively learn a
counterfactual representation for a given concept of interest, and be used to
estimate its true causal effect on model performance. A byproduct of our method
is a language representation model that is unaffected by the tested concept,
which can be useful in mitigating unwanted bias ingrained in the data.

Comment: Our code and data are available at: https://amirfeder.github.io/CausaLM/. Under review for the Computational Linguistics journal.
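The causal-effect estimate the abstract describes can be made concrete with a minimal sketch: given a model and pairs of original and counterfactual (concept-removed) representations, the concept's effect on the model is estimated as the mean change in the model's output. The `treate` function name, the toy linear "model", and the feature vectors below are illustrative assumptions, not code from the paper.

```python
def treate(model, reprs_original, reprs_counterfactual):
    """Estimate the effect of a concept on a model's output as the mean
    absolute change when the concept is removed from the representation
    (a toy analogue of a representation-based treatment-effect estimate)."""
    diffs = [
        abs(model(orig) - model(cf))
        for orig, cf in zip(reprs_original, reprs_counterfactual)
    ]
    return sum(diffs) / len(diffs)

# Hypothetical example: a linear "model" scoring 2-d feature vectors, where
# the second feature encodes the tested concept; the counterfactual
# representation zeroes that feature out.
toy_model = lambda vec: sum(vec)
originals = [[1.0, 2.0], [3.0, 4.0]]
counterfactuals = [[1.0, 0.0], [3.0, 0.0]]
effect = treate(toy_model, originals, counterfactuals)  # -> 3.0
```

A large estimated effect would indicate that the model's predictions lean heavily on the tested concept.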
Vec2Gloss: definition modeling leveraging contextualized vectors with Wordnet gloss
Contextualized embeddings are proven to be powerful tools in multiple NLP
tasks. Nonetheless, challenges regarding their interpretability and capability
to represent lexical semantics still remain. In this paper, we propose that the
task of definition modeling, which aims to generate a human-readable definition of a word, provides a route to evaluate and understand high-dimensional semantic vectors. We propose a "Vec2Gloss" model, which produces
the gloss from the target word's contextualized embeddings. The generated
glosses of this study are made possible by the systematic gloss patterns
provided by Chinese Wordnet. We devise two dependency indices to measure the
semantic and contextual dependency, which are used to analyze the generated
texts at the gloss and token levels. Our results indicate that the proposed "Vec2Gloss" model opens a new perspective on lexical-semantic applications of contextualized embeddings.
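Vec2Gloss itself generates glosses, but a simpler retrieval baseline makes the underlying idea concrete: given a word's contextualized vector, pick the gloss whose embedding is most similar to it. Everything below (the cosine helper, the `nearest_gloss` function, and the toy 2-d embeddings) is an illustrative assumption, not the paper's architecture.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def nearest_gloss(word_vector, gloss_bank):
    """Return the gloss whose embedding best matches the word vector.
    gloss_bank is a list of (gloss_text, gloss_vector) pairs."""
    return max(gloss_bank, key=lambda item: cosine(word_vector, item[1]))[0]

# Hypothetical 2-d embeddings for two candidate glosses.
bank = [
    ("a domesticated feline", [0.9, 0.1]),
    ("a large body of water", [0.1, 0.9]),
]
print(nearest_gloss([0.8, 0.2], bank))  # -> a domesticated feline
```

A generation model like Vec2Gloss replaces the fixed gloss bank with a decoder, so it can describe senses that have no pre-written definition.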
Linguistically inspired roadmap for building biologically reliable protein language models
Deep neural-network-based language models (LMs) are increasingly applied to
large-scale protein sequence data to predict protein function. However, being
largely black-box models and thus challenging to interpret, current protein LM
approaches do not contribute to a fundamental understanding of
sequence-function mappings, hindering rule-based biotherapeutic drug
development. We argue that guidance drawn from linguistics, a field specialized
in analytical rule extraction from natural language data, can aid with building
more interpretable protein LMs that are more likely to learn relevant
domain-specific rules. Differences between protein sequence data and linguistic
sequence data require the integration of more domain-specific knowledge in
protein LMs compared to natural language LMs. Here, we provide a
linguistics-based roadmap for protein LM pipeline choices with regard to
training data, tokenization, token embedding, sequence embedding, and model
interpretation. Incorporating linguistic ideas into protein LMs enables the
development of next-generation interpretable machine-learning models with the
potential of uncovering the biological mechanisms underlying sequence-function
relationships.

Comment: 27 pages, 4 figures.
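Tokenization is one of the pipeline choices the roadmap covers, and a common option for protein sequences is overlapping k-mers of amino-acid codes rather than single residues. The function below is a generic sketch of that choice, with a hypothetical sequence fragment; it is not a scheme prescribed by the paper.

```python
def kmer_tokenize(sequence, k=3, stride=1):
    """Split a protein sequence into k-mer tokens.
    With stride=1 consecutive tokens overlap by k-1 residues."""
    return [sequence[i:i + k] for i in range(0, len(sequence) - k + 1, stride)]

# One-letter amino-acid codes for a short, made-up fragment.
tokens = kmer_tokenize("MKTAYIA", k=3)
print(tokens)  # -> ['MKT', 'KTA', 'TAY', 'AYI', 'YIA']
```

Setting stride=k instead yields non-overlapping tokens, a smaller input length at the cost of positional resolution, exactly the kind of trade-off the roadmap asks modelers to make explicit.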