Manual and Automatic Paraphrases for MT Evaluation
Paraphrasing of reference translations has been shown to improve the correlation with human judgements in automatic evaluation of
machine translation (MT) outputs. In this work, we present a new dataset for evaluating English-Czech translation based on automatic
paraphrases. We compare this dataset with an existing set of manually created paraphrases and find that even automatic paraphrases can
improve MT evaluation. We also propose and evaluate several criteria for selecting suitable reference translations from a larger set.