While large pre-trained language models are powerful, their predictions often
lack logical consistency across test inputs. For example, a state-of-the-art
Macaw question-answering (QA) model answers 'Yes' to 'Is a sparrow a bird?' and
'Does a bird have feet?' but answers 'No' to 'Does a sparrow have feet?'. To
address this failure mode, we propose a framework, Consistency Correction
through Relation Detection, or ConCoRD, for boosting the consistency and
accuracy of pre-trained NLP models using pre-trained natural language inference
(NLI) models without fine-tuning or re-training. Given a batch of test inputs,
ConCoRD samples several candidate outputs for each input and instantiates a
factor graph that accounts for both the model's belief about the likelihood of
each answer choice in isolation and the NLI model's beliefs about pairwise
answer choice compatibility. We show that a weighted MaxSAT solver can
efficiently compute high-quality answer choices under this factor graph,
improving over the raw model's predictions. Our experiments demonstrate that
ConCoRD consistently boosts accuracy and consistency of off-the-shelf
closed-book QA and VQA models using off-the-shelf NLI models, notably
increasing accuracy of LXMERT on ConVQA by 5% absolute. See
https://ericmitchell.ai/emnlp-2022-concord/ for code and data.
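To make the factor-graph-to-MaxSAT step concrete, the following is a minimal sketch of the inference described above using the PySAT library's RC2 weighted MaxSAT solver (pip install python-sat). The toy questions, candidate confidences, NLI edge list, weight scaling, and variable names are illustrative assumptions, not the paper's exact configuration: each (question, answer) pair becomes a Boolean variable, the base model's confidence becomes a soft unary clause, NLI relations become soft pairwise clauses, and a hard constraint enforces exactly one answer per question.

```python
# Sketch of ConCoRD-style inference with weighted MaxSAT (PySAT's RC2).
# Toy data and weights are hypothetical; only the clause structure mirrors
# the formulation described in the abstract.
from itertools import combinations
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

# Base model's candidate answers and confidences for a small test batch.
candidates = {
    "Is a sparrow a bird?":      {"yes": 0.9, "no": 0.1},
    "Does a bird have feet?":    {"yes": 0.8, "no": 0.2},
    "Does a sparrow have feet?": {"yes": 0.4, "no": 0.6},
}

# Hypothetical NLI judgments over pairs of answer statements:
# ((q1, a1), (q2, a2), relation, confidence).
nli_edges = [
    (("Is a sparrow a bird?", "yes"), ("Does a sparrow have feet?", "yes"),
     "entailment", 0.85),
]

SCALE = 1000  # RC2 expects integer weights, so scale probabilities.

wcnf = WCNF()
var_of = {}  # (question, answer) -> SAT variable id
for q, answers in candidates.items():
    for a, p in answers.items():
        v = len(var_of) + 1
        var_of[(q, a)] = v
        # Unary soft clause: prefer answers the base model is confident in.
        wcnf.append([v], weight=int(p * SCALE))
    vs = [var_of[(q, a)] for a in answers]
    # Hard constraints: exactly one answer is selected per question.
    wcnf.append(vs)                   # at least one
    for u, w in combinations(vs, 2):
        wcnf.append([-u, -w])         # at most one

for (qa1, qa2, rel, p) in nli_edges:
    v1, v2 = var_of[qa1], var_of[qa2]
    if rel == "entailment":
        # Selecting qa1 should imply selecting qa2: soft clause (-v1 v v2).
        wcnf.append([-v1, v2], weight=int(p * SCALE))
    elif rel == "contradiction":
        # qa1 and qa2 should not both be selected: soft clause (-v1 v -v2).
        wcnf.append([-v1, -v2], weight=int(p * SCALE))

solver = RC2(wcnf)
model = set(solver.compute())
solver.delete()
for (q, a), v in var_of.items():
    if v in model:
        print(f"{q} -> {a}")
```

Under these illustrative weights, the solver flips the base model's low-confidence 'No' for 'Does a sparrow have feet?' to 'Yes', because satisfying the entailment clause (weight 850) more than offsets the 200-point confidence gap between the two candidate answers.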