
    Measuring the adequacy of cross-lingual paraphrases in a Machine Translation setting

    Following the growing trend in the semantics community towards models adapted to specific applications, the SemEval-2 Cross-Lingual Lexical Substitution and Word Sense Disambiguation tasks address the disambiguation needs of Machine Translation (MT). The experiments conducted in this study aim at assessing whether the proposed evaluation protocol and methodology provide a fair estimate of the adequacy of cross-lingual predictions in translations. For this purpose, the gold SemEval paraphrases are fed into a state-of-the-art MT system, and the resulting translations are compared against paraphrase quality judgments based on the source context. The results show that the adequacy of cross-lingual paraphrases depends strongly on the translation context, and they cast doubt on the contribution that systems performing well under existing evaluation schemes would make to MT. These empirical findings highlight the importance of complementing the current evaluation schemes with translation information, so as to allow a more accurate estimation of the systems' impact on end-to-end applications.