On the Value of Out-of-Distribution Testing: An Example of Goodhart's Law
Out-of-distribution (OOD) testing is increasingly popular for evaluating a
machine learning system's ability to generalize beyond the biases of a training
set. OOD benchmarks are designed to present a different joint distribution of
data and labels between training and test time. VQA-CP has become the standard
OOD benchmark for visual question answering, but we discovered three troubling
practices in its current use. First, most published methods rely on explicit
knowledge of the construction of the OOD splits. They often exploit the
"inverted" distribution of labels, e.g., answering mostly 'yes' when the
common training answer is 'no'. Second, the OOD test set is used for model
selection. Third, a model's in-domain performance is assessed after retraining
it on in-domain splits (VQA v2) that exhibit a more balanced distribution of
labels. These three practices defeat the objective of evaluating
generalization and call into question the value of methods specifically
designed for this dataset. We show that embarrassingly simple methods,
including one that generates answers at random, surpass the state of the art
on some question types. We provide short- and long-term solutions to avoid
these pitfalls and realize the benefits of OOD evaluation.
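For concreteness, the kind of trivially simple baseline the abstract alludes to can be sketched as follows. This is a minimal illustration, not the authors' code; the data layout, the field names (`question_type`, `question_id`), and the per-type candidate answer lists are assumptions made for the example.

```python
import random

# Minimal sketch of an "answer at random" baseline for a VQA-CP-style
# evaluation.  All field names and the candidate lists are illustrative
# assumptions, not taken from any released code.

def random_baseline(test_questions, candidates_by_type):
    """Pick an answer uniformly at random from the plausible answers
    for each question's type (e.g. {'yes', 'no'} for binary questions)."""
    predictions = {}
    for q in test_questions:
        candidates = candidates_by_type[q["question_type"]]
        predictions[q["question_id"]] = random.choice(candidates)
    return predictions

# Toy usage: on an OOD split that inverts the label prior, such a baseline
# is right about half the time on binary questions, regardless of the skew.
questions = [
    {"question_id": 1, "question_type": "yes/no"},
    {"question_id": 2, "question_type": "yes/no"},
]
print(random_baseline(questions, {"yes/no": ["yes", "no"]}))
```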
Look at the First Sentence: Position Bias in Question Answering
Many extractive question answering models are trained to predict start and
end positions of answers. The choice of predicting answers as positions is
mainly due to its simplicity and effectiveness. In this study, we hypothesize
that when the distribution of the answer positions is highly skewed in the
training set (e.g., answers lie only in the k-th sentence of each passage), QA
models predicting answers as positions can learn spurious positional cues and
fail to give answers in different positions. We first illustrate this position
bias in popular extractive QA models such as BiDAF and BERT and thoroughly
examine how position bias propagates through each layer of BERT. To safely
deliver position information without position bias, we train models with
various de-biasing methods including entropy regularization and bias
ensembling. Among them, we found that using the prior distribution of answer
positions as a bias model is very effective at reducing position bias,
recovering the performance of BERT from 37.48% to 81.64% when trained on a
biased SQuAD dataset.
Comment: 13 pages, EMNLP 2020
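The bias-model idea from the abstract above can be illustrated with a short sketch. Assumptions: a PyTorch implementation, a log-space product-of-experts combination, and a fixed prior over answer start positions estimated by counting gold answer starts in the training set; this is an illustration, not the authors' released code.

```python
import torch
import torch.nn.functional as F

# Sketch of bias ensembling with an answer-position prior.  The prior is a
# fixed (non-trainable) distribution over start positions, so gradients
# reach only the QA model's logits: the model is pushed to explain what
# position alone cannot.

def bias_product_loss(start_logits, position_prior, gold_start):
    """Cross-entropy over the product of the model and the positional
    prior, computed in log space."""
    combined = start_logits + torch.log(position_prior + 1e-12)
    return F.cross_entropy(combined, gold_start)

# Toy usage: a 10-token passage whose training answers cluster at the start.
prior = torch.tensor([0.5, 0.3, 0.1] + [0.1 / 7] * 7)   # sums to 1.0
start_logits = torch.randn(2, 10, requires_grad=True)    # batch of 2
loss = bias_product_loss(start_logits, prior, torch.tensor([4, 7]))
loss.backward()
print(loss.item())
```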
A negative case analysis of visual grounding methods for VQA
Existing Visual Question Answering (VQA) methods tend to exploit dataset
biases and spurious statistical correlations, instead of producing right
answers for the right reasons. To address this issue, recent bias mitigation
methods for VQA propose to incorporate visual cues (e.g., human attention maps)
to better ground the VQA models, showcasing impressive gains. However, we show
that the performance improvements are not a result of improved visual
grounding, but of a regularization effect that prevents overfitting to
linguistic priors. For instance, we find that it is not actually necessary to
provide proper, human-based cues; random, insensible cues also result in
similar improvements. Based on this observation, we propose a simpler
regularization scheme that does not require any external annotations and yet
achieves near state-of-the-art performance on VQA-CPv2.
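One plausible instantiation of such an annotation-free regularizer is sketched below; the paper's exact scheme may differ. The idea: when the visual input is replaced by noise, the model should be maximally uncertain, i.e. close to uniform over the answer vocabulary. `ToyVQA` and all shapes are hypothetical stand-ins.

```python
import torch
import torch.nn.functional as F

class ToyVQA(torch.nn.Module):
    """Hypothetical stand-in for a real VQA model: fuses pooled image and
    question features and maps them to answer logits."""
    def __init__(self, dim=16, n_answers=10):
        super().__init__()
        self.head = torch.nn.Linear(2 * dim, n_answers)

    def forward(self, image_feats, question_feats):
        return self.head(torch.cat([image_feats, question_feats], dim=-1))

def regularized_loss(model, image_feats, question_feats, answers, lam=1.0):
    # Standard supervised VQA loss on the true image features.
    task_loss = F.cross_entropy(model(image_feats, question_feats), answers)

    # Regularizer: replace the image with noise and penalize deviation from
    # a uniform answer distribution (average cross-entropy against uniform).
    noisy_logits = model(torch.randn_like(image_feats), question_feats)
    uniform_penalty = -F.log_softmax(noisy_logits, dim=-1).mean()

    return task_loss + lam * uniform_penalty

# Toy usage with random features and labels.
model = ToyVQA()
img, qst = torch.randn(4, 16), torch.randn(4, 16)
ans = torch.randint(0, 10, (4,))
loss = regularized_loss(model, img, qst, ans)
loss.backward()
```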