Tokenization Consistency Matters for Generative Models on Extractive NLP Tasks
Generative models have been widely applied to solve extractive tasks, where
parts of the input are extracted to form the desired output, and have achieved
significant success. For example, in extractive question answering (QA),
generative models have constantly yielded state-of-the-art results. In this
work, we identify the issue of tokenization inconsistency that is commonly
neglected in training these models. When the input and output are tokenized
inconsistently by the tokenizer, the extractive nature of these tasks is
compromised, leading to a performance drop as well as hallucination. We
propose a simple yet effective fix to this issue and conduct a case study on
extractive QA. We show that, with consistent tokenization, the model performs
better on both in-domain and out-of-domain datasets, with a notable average
gain of +1.7 F2 when a BART model is trained on SQuAD and evaluated on 8 QA
datasets. Further, the model converges faster, and becomes less likely to
generate out-of-context answers. With these findings, we would like to call
for more attention to how tokenization should be done when solving extractive
tasks, and recommend applying consistent tokenization during training.

Comment: Findings of EMNLP 202