Comparing Intrinsic and Extrinsic Evaluation of MT Output in a Dialogue System
We present an exploratory study to assess machine translation output for application in a dialogue system using an intrinsic and an extrinsic evaluation method. For the intrinsic evaluation we developed an annotation scheme to determine the quality of the translated utterances in isolation. For the extrinsic evaluation we employed the Wizard of Oz technique to assess the quality of the translations in the context of a dialogue application. Results differ and we discuss the possible reasons for this outcome.
Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation
This paper surveys the current state of the art in Natural Language
Generation (NLG), defined as the task of generating text or speech from
non-linguistic input. A survey of NLG is timely in view of the changes that the
field has undergone over the past decade or so, especially in relation to new
(usually data-driven) methods, as well as new applications of NLG technology.
This survey therefore aims to (a) give an up-to-date synthesis of research on
the core tasks in NLG and the architectures adopted in which such tasks are
organised; (b) highlight a number of relatively recent research topics that
have arisen partly as a result of growing synergies between NLG and other areas
of artificial intelligence; (c) draw attention to the challenges in NLG
evaluation, relating them to similar challenges faced in other areas of Natural
Language Processing, with an emphasis on different evaluation methods and the
relationships between them.
Comment: Published in Journal of AI Research (JAIR), volume 61, pp 75-170. 118 pages, 8 figures, 1 table.
RankME: Reliable Human Ratings for Natural Language Generation
Human evaluation for natural language generation (NLG) often suffers from
inconsistent user ratings. While previous research tends to attribute this
problem to individual user preferences, we show that the quality of human
judgements can also be improved by experimental design. We present a novel
rank-based magnitude estimation method (RankME), which combines the use of
continuous scales and relative assessments. We show that RankME significantly
improves the reliability and consistency of human ratings compared to
traditional evaluation methods. In addition, we show that it is possible to
evaluate NLG systems according to multiple, distinct criteria, which is
important for error analysis. Finally, we demonstrate that RankME, in
combination with Bayesian estimation of system quality, is a cost-effective
alternative for ranking multiple NLG systems.
Comment: Accepted to NAACL 2018 (The 2018 Conference of the North American Chapter of the Association for Computational Linguistics).
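The core of the protocol described above is that annotators give continuous magnitude-estimation scores relative to a reference while comparing system outputs side by side. The sketch below shows, under stated assumptions, how such ratings might be normalised per annotator and turned into a system ranking; the toy data, normalisation scheme, and names are illustrative, not the authors' implementation, and the Bayesian estimation of system quality mentioned in the abstract is omitted.

```python
# Hypothetical sketch of aggregating rank-based magnitude-estimation ratings.
# The data and the reference-based normalisation are illustrative assumptions.
from collections import defaultdict
from statistics import mean

# ratings[annotator][system] = magnitude estimate, with the reference output
# fixed at 100 for each annotator (toy data).
ratings = {
    "a1": {"reference": 100, "sys_A": 120, "sys_B": 80},
    "a2": {"reference": 100, "sys_A": 150, "sys_B": 90},
}

# Normalise each annotator's scores by their reference score so that
# magnitude estimates from different annotators become comparable.
normalised = defaultdict(list)
for annotator, scores in ratings.items():
    ref = scores["reference"]
    for system, score in scores.items():
        if system != "reference":
            normalised[system].append(score / ref)

# Rank systems by mean normalised score (higher is better).
ranking = sorted(normalised, key=lambda s: mean(normalised[s]), reverse=True)
print(ranking)  # e.g. ['sys_A', 'sys_B']
```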
Producing Monolingual and Parallel Web Corpora at the Same Time – SpiderLing and Bitextor’s Love Affair
This paper presents an approach for building large monolingual corpora and, at the same time, extracting parallel data by crawling the top-level domain of a given language of interest. For gathering linguistically relevant data from top-level domains we use the SpiderLing crawler, modified to crawl data written in multiple languages. The output of this process is then fed to Bitextor, a tool for harvesting parallel data from a collection of documents. We call the system combining these two tools Spidextor, a blend of the names of its two crucial parts. We evaluate the described approach intrinsically by measuring the accuracy of the extracted bitexts from the Croatian top-level domain .hr and the Slovene top-level domain .si, and extrinsically on the English–Croatian language pair by comparing an SMT system built from the crawled data with third-party systems. We finally present parallel datasets collected with our approach for the English–Croatian, English–Finnish, English–Serbian and English–Slovene language pairs.
This research is supported by the European Union Seventh Framework Programme FP7/2007-2013 under grant agreement PIAP-GA-2012-324414 (AbuMaTran).
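The abstract describes a two-stage pipeline: a crawler covers a whole top-level domain to yield monolingual corpora, and a bitext harvester then extracts parallel sentences from the same crawled collection. The sketch below only illustrates that control flow; crawl_tld and extract_bitext are hypothetical placeholders standing in for SpiderLing and Bitextor, not the real tools' interfaces.

```python
# Schematic sketch of the crawl-then-align pipeline described above.
# crawl_tld() and extract_bitext() are hypothetical placeholders, not the
# actual SpiderLing or Bitextor APIs.
from typing import Dict, Iterable, List, Tuple

def crawl_tld(tld: str, languages: Iterable[str]) -> List[Dict]:
    """Placeholder for the crawl step: return documents from the top-level
    domain, each tagged with its detected language."""
    return []  # a real crawler would yield {"lang": ..., "text": ...} records

def extract_bitext(documents: List[Dict], pair: Tuple[str, str]) -> List[Tuple[str, str]]:
    """Placeholder for the alignment step: return aligned sentence pairs
    harvested from the crawled document collection."""
    return []

def build_corpora(tld: str, pair: Tuple[str, str]):
    # 1. Crawl the whole top-level domain (e.g. ".hr"), keeping pages in
    #    either language of interest: these form the monolingual corpora.
    documents = crawl_tld(tld, languages=pair)
    monolingual = {lang: [d for d in documents if d["lang"] == lang] for lang in pair}
    # 2. Feed the same document collection to the bitext harvester to
    #    obtain the parallel corpus for the language pair.
    parallel = extract_bitext(documents, pair)
    return monolingual, parallel

monolingual, parallel = build_corpora(".hr", ("en", "hr"))
```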
Extrinsic Evaluation of Machine Translation Metrics
Automatic machine translation (MT) metrics are widely used to distinguish the
translation qualities of machine translation systems across relatively large
test sets (system-level evaluation). However, it is unclear if automatic
metrics are reliable at distinguishing good translations from bad translations
at the sentence level (segment-level evaluation). In this paper, we investigate
how useful MT metrics are at detecting the success of a machine translation
component when placed in a larger platform with a downstream task. We evaluate
the segment-level performance of the most widely used MT metrics (chrF, COMET,
BERTScore, etc.) on three downstream cross-lingual tasks (dialogue state
tracking, question answering, and semantic parsing). For each task, we only
have access to a monolingual task-specific model. We calculate the correlation
between the metric's ability to predict a good/bad translation with the
success/failure on the final task for the Translate-Test setup. Our experiments
demonstrate that all metrics exhibit negligible correlation with the extrinsic
evaluation of the downstream outcomes. We also find that the scores provided by
neural metrics are not interpretable mostly because of undefined ranges. We
synthesise our analysis into recommendations for future MT metrics to produce
labels rather than scores for more informative interaction between machine
translation and multilingual language understanding.
Comment: ACL 2023 Camera Ready.
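For the Translate-Test setup described above, the analysis boils down to correlating segment-level metric scores with a binary indicator of downstream success. The sketch below shows a minimal version of that computation; the toy scores and labels are invented for illustration, and scipy's pearsonr (which, against a binary outcome, gives the point-biserial correlation) stands in for whichever correlation statistic the paper reports.

```python
# Minimal sketch of the segment-level correlation analysis described above:
# correlate an MT metric's scores with binary downstream task success.
# The scores and labels are toy data, not the paper's results.
from scipy.stats import pearsonr

# Per-segment metric scores (e.g. chrF or COMET) for the translated inputs.
metric_scores = [0.82, 0.45, 0.91, 0.30, 0.67, 0.55]
# Whether the monolingual downstream model (e.g. a dialogue state tracker
# run in the Translate-Test setup) succeeded on the corresponding segment.
task_success = [1, 0, 1, 0, 1, 1]

# A correlation near zero would mirror the finding that metric scores say
# little about downstream success.
r, p_value = pearsonr(metric_scores, task_success)
print(f"correlation = {r:.3f} (p = {p_value:.3f})")
```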