Question-Answering with Grammatically-Interpretable Representations
We introduce an architecture, the Tensor Product Recurrent Network (TPRN). In
our application of TPRN, internal representations learned by end-to-end
optimization in a deep neural network performing a textual question-answering
(QA) task can be interpreted using basic concepts from linguistic theory. No
performance penalty need be paid for this increased interpretability: the
proposed model performs comparably to a state-of-the-art system on the SQuAD QA
task. The internal representation which is interpreted is a Tensor Product
Representation: for each input word, the model selects a symbol to encode the
word, and a role in which to place the symbol, and binds the two together. The
selection is via soft attention. The overall interpretation is built from
interpretations of the symbols, as recruited by the trained model, and
interpretations of the roles as used by the model. We find support for our
initial hypothesis that symbols can be interpreted as lexical-semantic word
meanings, while roles can be interpreted as approximations of grammatical roles
(or categories) such as subject, wh-word, determiner, etc. Fine-grained
analysis reveals specific correspondences between the learned roles and parts
of speech as assigned by a standard tagger (Toutanova et al. 2003), and finds
several discrepancies in the model's favor. In this sense, the model learns
significant aspects of grammar, after having been exposed solely to
linguistically unannotated text, questions, and answers: no prior linguistic
knowledge is given to the model. What is given is the means to build
representations using symbols and roles, with an inductive bias favoring use of
these in an approximately discrete manner.
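
As a rough illustration of the binding mechanism described above, the following Python/NumPy sketch builds a Tensor Product Representation by soft-selecting a symbol and a role for each word and summing their outer products. The inventory sizes, dimensions, and attention parameterization are illustrative assumptions, not the trained TPRN's actual components.

    # Minimal sketch of symbol/role binding via soft attention (assumed sizes).
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    rng = np.random.default_rng(0)
    n_symbols, n_roles = 100, 20         # assumed inventory sizes
    d_symbol, d_role, d_in = 64, 16, 32  # assumed embedding / feature dimensions

    symbol_embeddings = rng.normal(size=(n_symbols, d_symbol))
    role_embeddings = rng.normal(size=(n_roles, d_role))
    W_symbol = rng.normal(size=(n_symbols, d_in))
    W_role = rng.normal(size=(n_roles, d_in))

    def encode_word(word_features):
        """Soft-select a symbol and a role for one word and bind them."""
        a_symbol = softmax(W_symbol @ word_features)   # attention over symbols
        a_role = softmax(W_role @ word_features)       # attention over roles
        symbol = a_symbol @ symbol_embeddings
        role = a_role @ role_embeddings
        return np.outer(symbol, role)                  # tensor-product binding

    # The sentence representation is the sum of the per-word bindings.
    sentence = [rng.normal(size=d_in) for _ in range(5)]
    tpr = sum(encode_word(w) for w in sentence)
    print(tpr.shape)  # (64, 16)

When the attention distributions become peaked during training, the selection is approximately discrete, which is what makes the learned symbols and roles individually interpretable.
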
NaturalAdversaries: Can Naturalistic Adversaries Be as Effective as Artificial Adversaries?
While a substantial body of prior work has explored adversarial example
generation for natural language understanding tasks, these examples are often
unrealistic and diverge from the real-world data distributions. In this work,
we introduce NaturalAdversaries, a two-stage adversarial example generation
framework for designing adversaries that are effective at fooling a given
classifier and that exhibit natural-looking failure cases that could plausibly
occur during in-the-wild deployment of the models.
In the first stage, a token attribution method is used to summarize a given
classifier's behaviour as a function of the key tokens in the input. In the
second stage, a generative model is conditioned on the key tokens from the
first stage. NaturalAdversaries is adaptable to both black-box and white-box
adversarial attacks, depending on the level of access to the model parameters. Our
results indicate these adversaries generalize across domains, and offer
insights for future research on improving robustness of neural text
classification models.

Comment: Findings of EMNLP 202
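
A minimal sketch of the two-stage recipe, assuming a toy linear bag-of-words classifier: stage one scores each input token's influence on the prediction, and stage two packages the most influential tokens into a conditioning prompt for a generative model, which would rewrite them into a fluent, natural-looking adversarial example. The attribution rule, top-k cutoff, and prompt format are illustrative assumptions, not the framework's exact design.

    import numpy as np

    vocab = ["movie", "boring", "delightful", "plot", "acting", "terrible"]
    weights = np.array([0.0, -2.1, 1.8, 0.1, 0.3, -2.4])  # toy sentiment classifier

    def attribute(tokens):
        """Stage 1: score how much each input token drives the classifier."""
        counts = np.array([tokens.count(w) for w in vocab], dtype=float)
        return {w: weights[i] * counts[i] for i, w in enumerate(vocab) if counts[i] > 0}

    def conditioning_prompt(tokens, k=2):
        """Stage 2 (input side): keep the k most influential tokens for the generator."""
        scores = attribute(tokens)
        key_tokens = sorted(scores, key=lambda w: abs(scores[w]), reverse=True)[:k]
        return "Write a fluent sentence containing: " + ", ".join(key_tokens)

    example = "the movie plot was boring and the acting terrible".split()
    print(attribute(example))
    print(conditioning_prompt(example))

The adaptability to black-box and white-box attacks mentioned above corresponds to how much access the attribution step has to the classifier: with full parameter access it can use gradients, otherwise it must rely on queries alone.
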
An Empirical Study of Metrics to Measure Representational Harms in Pre-Trained Language Models
Large-scale Pre-Trained Language Models (PTLMs) capture knowledge from
massive human-written data, which contains latent societal biases and toxic
content. In this paper, we leverage the primary task of PTLMs, i.e., language
modeling, and propose a new metric to quantify manifested implicit
representational harms in PTLMs towards 13 marginalized demographics. Using
this metric, we conducted an empirical analysis of 24 widely used PTLMs. Our
analysis provides insights into the correlation between the proposed metric in
this work and other related metrics for representational harm. We observe that
our metric correlates with most of the gender-specific metrics in the
literature. Through extensive experiments, we explore the connections between
PTLM architectures and representational harms across two dimensions: the depth
and width of the networks. We find that prioritizing depth over width mitigates
representational harms in some PTLMs. Our code and data can be found at
https://github.com/microsoft/SafeNLP.

Comment: 17 pages
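
The abstract does not spell out the metric itself, but one way a language-modeling-based probe of this kind can be set up is sketched below: compare the loss a causal LM assigns to a harmful template instantiated with different group mentions against a neutral baseline. The model, template, groups, and scoring rule are illustrative assumptions; the authors' actual metric and data are in the linked repository.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def sentence_loss(text):
        """Average token-level cross-entropy the LM assigns (lower = more likely)."""
        inputs = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            out = model(**inputs, labels=inputs["input_ids"])
        return out.loss.item()

    template = "{} people are dangerous."   # assumed probe sentence
    groups = ["young", "old", "disabled"]   # assumed subset of the 13 demographics

    baseline = sentence_loss("People are dangerous.")
    for g in groups:
        # A lower loss than the baseline suggests the model finds the harmful
        # statement more plausible for that group.
        print(g, sentence_loss(template.format(g.capitalize())) - baseline)
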
Mitigating Spurious Correlations in Multi-modal Models during Fine-tuning
Spurious correlations that degrade model generalization or lead the model to
be right for the wrong reasons are among the main robustness concerns for
real-world deployments. However, mitigating these correlations during
pre-training for large-scale models can be costly and impractical, particularly
for those without access to high-performance computing resources. This paper
proposes a novel approach to address spurious correlations during fine-tuning
for a given domain of interest. With a focus on multi-modal models (e.g.,
CLIP), the proposed method leverages different modalities in these models to
detect and explicitly separate spurious attributes from the affected class;
this is achieved through a multi-modal contrastive loss function that expresses
spurious relationships through language. Our experimental results and in-depth
visualizations on CLIP show that such an intervention can effectively i)
improve the model's accuracy when spurious attributes are not present, and ii)
direct the model's activation maps towards the actual class rather than the
spurious attribute when it is present. In particular, on the Waterbirds dataset, our
algorithm achieved a worst-group accuracy 23% higher than ERM on CLIP with a
ResNet-50 backbone, and 32% higher on CLIP with a ViT backbone, while
maintaining the same average accuracy as ERM.
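
The sketch below illustrates one way a contrastive term can express a spurious relationship through language: image features are pulled toward the text embedding of their true class and pushed away from the text embedding of the spurious attribute (e.g., CLIP encodings of "a photo of a waterbird" versus "a photo of a water background"). The loss form, temperature, and prompts are illustrative assumptions, not the paper's exact objective.

    import torch
    import torch.nn.functional as F

    def spurious_contrastive_loss(img_emb, class_txt_emb, spurious_txt_emb, tau=0.07):
        """img_emb: (B, D) image features for one class;
        class_txt_emb / spurious_txt_emb: (D,) text features of the class prompt
        and of the spurious-attribute prompt."""
        img = F.normalize(img_emb, dim=-1)
        pos = F.normalize(class_txt_emb, dim=-1)
        neg = F.normalize(spurious_txt_emb, dim=-1)
        logits = torch.stack([img @ pos, img @ neg], dim=-1) / tau  # (B, 2)
        targets = torch.zeros(img.size(0), dtype=torch.long)        # class prompt is positive
        return F.cross_entropy(logits, targets)

    # Toy usage with random stand-ins for CLIP image/text features.
    loss = spurious_contrastive_loss(torch.randn(8, 512), torch.randn(512), torch.randn(512))
    print(loss.item())
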
Diversity of Thought Improves Reasoning Abilities of LLMs
Large language models (LLMs) are documented to struggle in settings that
require complex reasoning. Nevertheless, instructing the model to break down
the problem into smaller reasoning steps, or ensembling multiple generations
by modifying the decoding strategy, boosts performance. However, these methods
assume that the input prompt is fixed and expect the decoding strategies to
introduce the diversity needed for ensembling. In this work, we discuss how one
can create and leverage variations of the input prompt as a means of diversity
of thought. We propose a method that automatically improves prompt diversity by
soliciting feedback from the LLM to ideate approaches that are apt for the
problem. We then ensemble the diverse prompts in our method DIV-SE (DIVerse
reasoning path Self-Ensemble) across multiple inference calls, or use diverse
approaches within a single inference call; we call the latter IDIV-SE (In-call
DIVerse reasoning path Self-Ensemble). Beyond both approaches outperforming
prior work, DIV-SE in particular advances state-of-the-art performance on the
challenging planning and graph coloring benchmarks. Our results improve the
Pareto frontier of the accuracy-cost trade-off.
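
A minimal sketch of the two ensembling modes, with query_llm as a hypothetical stand-in for a real model call and a hand-written approach list in place of the approaches the paper solicits from the LLM itself.

    from collections import Counter

    APPROACHES = [  # assumed reasoning approaches; the method elicits these from the LLM
        "Solve this step by step.",
        "Solve this by working backwards from a candidate answer.",
        "Solve this by relating it to a simpler, analogous problem.",
    ]

    def query_llm(prompt: str) -> str:
        """Hypothetical LLM call; replace with a real API client."""
        return "42"  # constant toy answer so the sketch runs end to end

    def div_se(question: str) -> str:
        """DIV-SE-style ensemble: one inference call per diverse prompt, then vote."""
        answers = [query_llm(f"{a}\n\nQuestion: {question}\nAnswer:") for a in APPROACHES]
        return Counter(answers).most_common(1)[0][0]

    def idiv_se(question: str) -> str:
        """IDIV-SE-style variant: all approaches are packed into a single call."""
        bundled = "\n".join(f"Approach {i + 1}: {a}" for i, a in enumerate(APPROACHES))
        return query_llm(f"{bundled}\n\nAnswer the question with each approach, then "
                         f"give the most consistent final answer.\nQuestion: {question}")

    print(div_se("What is 6 * 7?"))

DIV-SE trades extra inference calls for accuracy, while IDIV-SE keeps a single call, which is the trade-off behind the accuracy-cost Pareto frontier mentioned above.
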
Improving Pre-trained Language Models' Generalization
The reusability of state-of-the-art Pre-trained Language Models (PLMs) is
often limited by their generalization problem, where their performance
drastically decreases when evaluated on examples that differ from the training
dataset, known as Out-of-Distribution (OOD)/unseen examples. This limitation
arises from PLMs' reliance on spurious correlations, which work well for
frequent example types but not for general examples. To address this issue, we
propose a training approach called Mask-tuning, which integrates Masked
Language Modeling (MLM) training objectives into the fine-tuning process to
enhance PLMs' generalization. Comprehensive experiments demonstrate that
Mask-tuning surpasses current state-of-the-art techniques and enhances PLMs'
generalization on OOD datasets while improving their performance on
in-distribution datasets. The findings suggest that Mask-tuning improves the
reusability of PLMs on unseen data, making them more practical and effective
for real-world applications.
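
The sketch below shows the general shape of such a combined objective, assuming a toy encoder with a classification head and a token-reconstruction head; the architecture, mask rate, and loss weighting are illustrative assumptions rather than the paper's actual setup.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    vocab_size, d_model, n_classes, mask_id = 1000, 64, 2, 0

    embed = nn.Embedding(vocab_size, d_model)
    cls_head = nn.Linear(d_model, n_classes)   # downstream task head
    mlm_head = nn.Linear(d_model, vocab_size)  # masked-token reconstruction head

    def mask_tuning_loss(token_ids, label, mask_rate=0.15, mlm_weight=1.0):
        # Randomly mask a fraction of tokens, as in MLM pre-training.
        mask = torch.rand(token_ids.shape) < mask_rate
        corrupted = token_ids.masked_fill(mask, mask_id)

        hidden = embed(corrupted)                         # (T, d_model)
        task_logits = cls_head(hidden.mean(dim=0))        # pooled task prediction
        task_loss = F.cross_entropy(task_logits.unsqueeze(0), label.unsqueeze(0))

        # The MLM term asks the model to recover the original masked tokens,
        # which is intended to discourage reliance on spurious shortcuts.
        mlm_logits = mlm_head(hidden)
        mlm_loss = F.cross_entropy(mlm_logits[mask], token_ids[mask]) if mask.any() else 0.0
        return task_loss + mlm_weight * mlm_loss

    loss = mask_tuning_loss(torch.randint(1, vocab_size, (32,)), torch.tensor(1))
    print(loss.item())
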