Recent studies have revealed that reading comprehension (RC) systems learn to
exploit annotation artifacts and other biases in current datasets. This
prevents the community from reliably measuring the progress of RC systems. To
address this issue, we introduce R4C, a new task for evaluating RC systems'
internal reasoning. R4C requires giving not only answers but also derivations:
explanations that justify predicted answers. We present a reliable,
crowdsourced framework for scalably annotating RC datasets with derivations. We
create and publicly release the R4C dataset, the first, quality-assured dataset
consisting of 4.6k questions, each of which is annotated with 3 reference
derivations (i.e. 13.8k derivations). Experiments show that our automatic
evaluation metrics using multiple reference derivations are reliable, and that
R4C assesses different skills from an existing benchmark.

Comment: Accepted by ACL2020. See https://naoya-i.github.io/r4c/ for more
information.