NatLogAttack: A Framework for Attacking Natural Language Inference Models with Natural Logic
Reasoning has been a central topic in artificial intelligence from the
beginning. The recent progress made on distributed representation and neural
networks continues to improve the state-of-the-art performance of natural
language inference. However, it remains an open question whether the models
perform real reasoning to reach their conclusions or rely on spurious
correlations. Adversarial attacks have proven to be an important tool for
probing the Achilles' heel of victim models. In this study, we explore the
fundamental problem of developing attack models based on logic formalism. We
propose NatLogAttack to perform systematic attacks centring around natural
logic, a classical logic formalism that traces back to Aristotle's syllogisms
and has been developed in close connection with natural language inference.
The proposed framework supports both label-preserving and label-flipping
attacks. We
show that, compared to existing attack models, NatLogAttack generates better
adversarial examples with fewer visits to the victim models. The victim models
are found to be more vulnerable under the label-flipping setting. NatLogAttack
provides a tool to probe the capacity of existing and future NLI models from a
key viewpoint, and we hope more logic-based attacks will be explored to further
our understanding of the desired properties of reasoning.

Comment: Published as a conference paper at ACL 2023
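
As a purely illustrative sketch (not the authors' implementation), the Python
snippet below shows the basic shape such an attack can take: candidate
hypotheses are produced by lexical substitutions licensed by simple
monotonicity relations (hypernym swaps for label-preserving attacks, hyponym
swaps for label-flipping ones), and each candidate costs one query to the
victim model. The LEXICON, victim_model, and attack names are hypothetical
stand-ins; NatLogAttack itself relies on a full natural-logic calculus and
real NLI models.

# A minimal, hypothetical sketch of the idea behind natural-logic-guided
# attacks on NLI models. The lexicon, victim model, and labels below are toy
# stand-ins for illustration; they are not the NatLogAttack implementation.

ENTAILMENT, NEUTRAL = "entailment", "neutral"

# Toy substitution lexicon: word -> (hypernym, hyponym). In an upward-monotone
# context, replacing a hypothesis word with a hypernym keeps entailment
# (label-preserving), while a hyponym makes the hypothesis more specific and
# typically turns entailment into neutral (label-flipping).
LEXICON = {
    "dog": ("animal", "puppy"),
    "ran": ("moved", "sprinted"),
}


def victim_model(premise: str, hypothesis: str) -> str:
    """Hypothetical victim NLI model: a naive lexical-overlap heuristic."""
    return ENTAILMENT if set(hypothesis.split()) <= set(premise.split()) else NEUTRAL


def attack(premise: str, hypothesis: str, gold: str, flip_label: bool):
    """Try single-word substitutions and return the first candidate the victim
    mislabels, its expected gold label, and the number of victim queries used
    (mirroring the paper's "visits to the victim models" efficiency measure)."""
    queries = 1
    if victim_model(premise, hypothesis) != gold:
        return None, None, queries  # victim is already wrong on the seed pair
    tokens = hypothesis.split()
    for i, word in enumerate(tokens):
        if word not in LEXICON:
            continue
        hypernym, hyponym = LEXICON[word]
        substitute, expected = (hyponym, NEUTRAL) if flip_label else (hypernym, gold)
        candidate = " ".join(tokens[:i] + [substitute] + tokens[i + 1:])
        queries += 1
        if victim_model(premise, candidate) != expected:
            return candidate, expected, queries  # adversarial example found
    return None, None, queries


if __name__ == "__main__":
    premise, hypothesis = "the dog ran fast", "the dog ran"
    # Against this toy victim, the label-preserving attack succeeds while the
    # label-flipping attack happens not to; real victims and lexicons differ.
    print(attack(premise, hypothesis, ENTAILMENT, flip_label=False))
    print(attack(premise, hypothesis, ENTAILMENT, flip_label=True))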