Visual Question Answering (VQA) models are prone to learning the shortcut
solution formed by dataset biases rather than the intended solution. To
evaluate VQA models' reasoning ability beyond shortcut learning, the VQA-CP
v2 dataset introduces a distribution shift between the training and test sets
for each question type. In this way, a model cannot exploit the training-set
shortcut (from question type to answer) to perform well on the test set.
However, VQA-CP v2 only considers one type of shortcut and thus still cannot
guarantee that the model relies on the intended solution rather than a solution
specific to this shortcut. To overcome this limitation, we propose a new
dataset that considers varying types of shortcuts by constructing different
distribution shifts in multiple OOD test sets. In addition, we address three
troubling practices in the use of VQA-CP v2, e.g., selecting models using
OOD test sets, and further standardize the OOD evaluation procedure. Our benchmark
provides a more rigorous and comprehensive testbed for shortcut learning in
VQA. We benchmark recent methods and find that methods specifically designed
for particular shortcuts fail to simultaneously generalize to our varying OOD
test sets. We also systematically study the varying shortcuts and provide
several valuable findings, which may promote the exploration of shortcut
learning in VQA.

Comment: Findings of EMNLP 2022