Robustifying multi-hop question answering through pseudo-evidentiality training
Authors
Sang-Eun Han
Seung-Won Hwang
Dohyeon Lee
Kyungjae Lee
Publication date
1 August 2021
Publisher
Association for Computational Linguistics (ACL)
Abstract
© 2021 Association for Computational Linguistics. This paper studies the bias problem of multi-hop question answering models: answering correctly without correct reasoning. One way to robustify these models is to supervise them not only to answer correctly, but also to do so with correct reasoning chains. An existing direction annotates reasoning chains to train models, requiring expensive additional annotation. In contrast, we propose a new approach to learn evidentiality, i.e., deciding whether the answer prediction is supported by correct evidence, without such annotations. Instead, we compare counterfactual changes in answer confidence with and without evidence sentences to generate "pseudo-evidentiality" annotations. We validate our proposed model on the original set and a challenge set of HotpotQA, showing that our method is accurate and robust in multi-hop reasoning.
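The abstract's core idea, comparing answer confidence with and without a candidate evidence sentence to produce "pseudo-evidentiality" labels, can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: `answer_confidence` is a hypothetical stand-in for a trained reader model returning P(answer | question, context), and the threshold value is an assumption for the demo.

```python
def answer_confidence(question, sentences, answer):
    # Hypothetical placeholder: a real implementation would run a QA model
    # and return its probability of the gold answer given this context.
    # Here we fake a model that is only confident when "Paris" is present.
    return 0.9 if "Paris" in " ".join(sentences) else 0.2

def pseudo_evidentiality_labels(question, sentences, answer, threshold=0.3):
    """Label each sentence 1 (pseudo-evidence) if removing it drops answer
    confidence by more than `threshold`, else 0 -- a counterfactual
    comparison with and without the sentence, as the abstract describes."""
    base = answer_confidence(question, sentences, answer)
    labels = []
    for i in range(len(sentences)):
        reduced = sentences[:i] + sentences[i + 1:]  # context without sentence i
        drop = base - answer_confidence(question, reduced, answer)
        labels.append(1 if drop > threshold else 0)
    return labels

context = [
    "Paris is the capital of France.",
    "The Eiffel Tower opened in 1889.",
]
labels = pseudo_evidentiality_labels(
    "What is the capital of France?", context, "Paris"
)
# labels -> [1, 0]: confidence collapses without the first sentence,
# so only it is labeled as pseudo-evidence.
```

The design point is that no human evidence annotation is needed: the labels come entirely from how the model's own confidence reacts to removing each sentence.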
Available Versions
SNU Open Repository and Archive
oai:s-space.snu.ac.kr:10371/18...
Last updated on 06/07/2022