
    Findings of the 2018 Conference on Machine Translation (WMT18)

    This paper presents the results of the premier shared task organized alongside the Conference on Machine Translation (WMT) 2018. Participants were asked to build machine translation systems for any of 7 language pairs in both directions, to be evaluated on a test set of news stories. The main metric for this task is human judgment of translation quality. This year, we also opened up the task to additional test suites to probe specific aspects of translation.

    Findings of the WMT 2018 Shared Task on Parallel Corpus Filtering


    "A Little is Enough": Few-Shot Quality Estimation based Corpus Filtering improves Machine Translation

    Quality Estimation (QE) is the task of evaluating the quality of a translation when a reference translation is not available. The goal of QE aligns with the task of corpus filtering, where we assign a quality score to each sentence pair in a pseudo-parallel corpus. We propose a Quality Estimation based filtering approach to extract high-quality parallel data from the pseudo-parallel corpus. To the best of our knowledge, this is a novel adaptation of the QE framework to extracting a quality parallel corpus from a pseudo-parallel corpus. Training with the filtered corpus improves the Machine Translation (MT) system's performance by up to 1.8 BLEU points over the baseline model for the English-Marathi, Chinese-English, and Hindi-Bengali language pairs, where the baseline model is trained on the whole pseudo-parallel corpus. Our few-shot QE model, transfer learned from the English-Marathi QE model and fine-tuned on only 500 Hindi-Bengali training instances, shows an improvement of up to 0.6 BLEU points for the Hindi-Bengali language pair compared to the baseline model, demonstrating the promise of transfer learning in this setting. QE systems typically require on the order of 7K-25K training instances; our Hindi-Bengali QE model is trained on only 500 instances, i.e., 1/40th of the usual requirement, and achieves comparable performance. All the scripts and datasets utilized in this study will be publicly available.
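
    The filtering step itself is simple once a sentence-level QE scorer exists. Below is a minimal sketch, assuming a hypothetical qe_score(src, tgt) function standing in for the paper's few-shot QE model; the threshold and the dummy scorer in the usage example are illustrative, not the authors' settings.

    from typing import Callable, Iterable

    def filter_pseudo_parallel(
        pairs: Iterable[tuple[str, str]],
        qe_score: Callable[[str, str], float],
        threshold: float = 0.5,
    ) -> list[tuple[str, str]]:
        """Keep only sentence pairs whose estimated quality clears the
        threshold; the filtered corpus is then used to train the MT system."""
        return [(src, tgt) for src, tgt in pairs if qe_score(src, tgt) >= threshold]

    # Usage with a dummy length-ratio scorer standing in for a trained QE model:
    corpus = [("How are you?", "Tu kasa ahes?"), ("Hello", "completely unrelated text")]
    dummy_scorer = lambda s, t: 0.9 if len(t.split()) <= 2 * len(s.split()) else 0.1
    print(filter_pseudo_parallel(corpus, dummy_scorer))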

    Findings of the WMT 2018 shared task on quality estimation

    © 2018 The Authors. Published by the Association for Computational Linguistics. This is an open access article available under a Creative Commons licence. The published version can be accessed on the publisher's website: http://dx.doi.org/10.18653/v1/W18-6451

    We report the results of the WMT18 shared task on Quality Estimation, i.e. the task of predicting the quality of the output of machine translation systems at various granularity levels: word, phrase, sentence and document. This year we include four language pairs, three text domains, and translations produced by both statistical and neural machine translation systems. Participating teams from ten institutions submitted a variety of systems to different task variants and language pairs.

    The data and annotations collected for Tasks 1, 2 and 3 were supported by the EC H2020 QT21 project (grant agreement no. 645452). The shared task organisation was also supported by the QT21 project, national funds through Fundação para a Ciência e a Tecnologia (FCT), with references UID/CEC/50021/2013 and UID/EEA/50008/2013, and by the European Research Council (ERC StG DeepSPIN 758969). We would also like to thank Julie Beliao and the Unbabel Quality Team for coordinating the annotation of the dataset used in Task 4.
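
    For context, sentence-level systems in the WMT QE shared tasks are primarily ranked by Pearson correlation between predicted quality scores and gold HTER labels. A minimal sketch of that evaluation follows; the scores are invented placeholders, not shared-task data.

    from scipy.stats import pearsonr

    gold_hter = [0.12, 0.40, 0.05, 0.33, 0.70]   # gold post-editing effort labels
    predicted = [0.10, 0.35, 0.11, 0.30, 0.65]   # a system's predicted scores

    r, p_value = pearsonr(gold_hter, predicted)
    print(f"Pearson r = {r:.3f} (p = {p_value:.3g})")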

    Mismatching-Aware Unsupervised Translation Quality Estimation For Low-Resource Languages

    Translation Quality Estimation (QE) is the task of predicting the quality of machine translation (MT) output without any reference. This task has gained increasing attention as an important component in practical applications of MT. In this paper, we first propose XLMRScore, a cross-lingual counterpart of BERTScore computed via the XLM-RoBERTa (XLMR) model. This metric can be used as a simple unsupervised QE method, but employing it raises two issues: first, untranslated tokens lead to unexpectedly high translation scores, and second, mismatching errors arise between source and hypothesis tokens when applying greedy matching in XLMRScore. To mitigate these issues, we suggest, respectively, replacing untranslated words with the unknown token and cross-lingually aligning the pre-trained model so that aligned words are represented closer to each other. We evaluate the proposed method on four low-resource language pairs from the WMT21 QE shared task, as well as on a new English-Farsi test dataset introduced in this paper. Experiments show that our method achieves results comparable to the supervised baseline in two zero-shot scenarios, i.e., with less than a 0.01 difference in Pearson correlation, while outperforming unsupervised rivals by more than 8% on average across all the low-resource language pairs. (Comment: Submitted to Language Resources and Evaluation)
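
    Below is a sketch of the greedy matching at the core of a BERTScore-style metric such as XLMRScore, computed from XLM-RoBERTa embeddings. This is an illustrative reimplementation under assumptions (base model, last hidden layer), not the authors' code, and it omits the paper's two fixes: unknown-token replacement and cross-lingual alignment of the pre-trained model.

    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
    model = AutoModel.from_pretrained("xlm-roberta-base")
    model.eval()

    def embed(sentence: str) -> torch.Tensor:
        """L2-normalized token embeddings from the last hidden layer."""
        enc = tokenizer(sentence, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state[0]  # (num_tokens, dim)
        return torch.nn.functional.normalize(hidden, dim=-1)

    def xlmr_score(source: str, hypothesis: str) -> float:
        """Greedy matching: every token pairs with its most similar counterpart."""
        src, hyp = embed(source), embed(hypothesis)
        sim = hyp @ src.T                         # cosine similarity matrix
        precision = sim.max(dim=1).values.mean()  # best source match per hyp token
        recall = sim.max(dim=0).values.mean()     # best hyp match per source token
        return (2 * precision * recall / (precision + recall)).item()

    print(xlmr_score("The cat sat on the mat.", "Le chat est assis sur le tapis."))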

    Datuen Ustiapena Itzulpen Automatikorako (Data Exploitation for Machine Translation)

    Data-driven machine translation is the paradigm that has come to dominate in recent years. These systems are fed with data during a training process. Their main advantage is that they automatically extract the knowledge needed to produce new translations; at the same time, however, their ability to generalize is limited by the examples in the training corpus. The main goal of this thesis is to improve corpus quality by working on three aspects: increasing corpus size, adapting corpus data to the domain, and filtering noisy datasets. To this end, research along four lines is presented. First, in document alignment, documents in two different languages are aligned. In a second step, sentence alignment, parallel sentences are identified within the document pairs from the previous step. To adapt the corpus to the domain, data selection is used to find additional in-domain data in larger out-of-domain corpora. Finally, parallel sentence filtering discards translations that are harmful for training.
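
    As an illustration of the data-selection step, here is a sketch of cross-entropy difference ranking (Moore & Lewis, 2010), a standard criterion for finding in-domain sentences in a larger out-of-domain corpus. The thesis's exact method is not specified in this abstract, and the language-model interface below is an assumption.

    def moore_lewis_rank(sentences, in_domain_lm, general_lm):
        """Rank sentences by H_in(s) - H_gen(s); lower means more in-domain.
        Both arguments are assumed to be LMs exposing per-word cross-entropy."""
        scored = [
            (in_domain_lm.cross_entropy(s) - general_lm.cross_entropy(s), s)
            for s in sentences
        ]
        return [s for _, s in sorted(scored, key=lambda x: x[0])]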

    Low-Resource Unsupervised NMT: Diagnosing the Problem and Providing a Linguistically Motivated Solution

    Unsupervised Machine Translation has been advancing our ability to translate without parallel data, but state-of-the-art methods assume an abundance of monolingual data. This paper investigates the scenario where monolingual data is limited as well, finding that current unsupervised methods suffer in performance under this stricter setting. We find that the performance loss originates from the poor quality of the pretrained monolingual embeddings, and we propose using linguistic information in the embedding training scheme. To support this, we look at two linguistic features that may help improve alignment quality: dependency information and sub-word information. Using dependency-based embeddings results in a complementary word representation which offers a boost in performance of around 1.5 BLEU points compared to standard WORD2VEC when monolingual data is limited to 1 million sentences per language. We also find that the inclusion of sub-word information is crucial to improving the quality of the embeddings.
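
    A sketch of how dependency-based (word, context) training pairs can be extracted, in the style of the dependency embeddings the paper contrasts with standard WORD2VEC (Levy & Goldberg, 2014). spaCy and the en_core_web_sm model are illustrative assumptions, not the paper's tooling; the resulting pairs would then be fed to a word2vec variant that accepts arbitrary contexts, such as word2vecf.

    import spacy

    nlp = spacy.load("en_core_web_sm")  # assumes the model is installed

    def dependency_contexts(sentence: str) -> list[tuple[str, str]]:
        """Emit (word, context) pairs where contexts are labeled dependency
        neighbors, e.g. ('discovers', 'nsubj_scientist') and the inverse
        ('scientist', 'nsubjI_discovers')."""
        pairs = []
        for tok in nlp(sentence):
            if tok.dep_ == "ROOT":
                continue
            pairs.append((tok.head.text, f"{tok.dep_}_{tok.text}"))   # head -> child
            pairs.append((tok.text, f"{tok.dep_}I_{tok.head.text}"))  # child -> head
        return pairs

    print(dependency_contexts("Australian scientist discovers star"))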