    Machine translation for institutional academic texts: Output quality, terminology translation and post-editor trust

    The present work is a feasibility study on the application of Machine Translation (MT) to institutional academic texts, specifically course catalogues, for Italian-English and German-English. The first research question focuses on the feasibility of profitably applying MT to such texts. Since the benefits of good-quality MT might be counteracted by translators' preconceptions towards the output, the second research question examines translator trainees' trust towards an MT output as compared to a human translation (HT). Training and test sets are created for both language combinations in the institutional academic domain. The MT systems used are ModernMT and Google Translate. Overall evaluations of the output quality are carried out using automatic metrics. Results show that applying neural MT to institutional academic texts can be beneficial even when bilingual data are not available; when small numbers of sentence pairs become available, MT quality improves. Then, a gold standard data set with manual annotations of terminology (MAGMATic) is created and used for an evaluation of the output focused on terminology translation. The gold standard was publicly released to stimulate research on terminology assessment. The assessment shows that domain adaptation improves the quality of term translation. To conclude, a method to measure trust in a post-editing task is proposed and results regarding translator trainees' trust towards MT are outlined. All participants are asked to work on the same text: half of them are told that it is an MT output to be post-edited, and the other half that it is an HT needing revision. Results show that there is no statistically significant difference between post-editing and HT revision in terms of number of edits and temporal effort, suggesting that a new generation of translators trained on MT and post-editing is not influenced by preconceptions against MT.
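
    The abstract leaves the automatic metrics unspecified. As a minimal sketch of what such an overall quality evaluation could look like, the snippet below scores MT hypotheses against references with sacrebleu's BLEU and chrF; the metric choice and the file names are assumptions, not the study's actual setup.

        # Hedged sketch: corpus-level automatic evaluation of MT output.
        # BLEU/chrF and the file paths are assumptions, not necessarily
        # the metrics and data used in the study.
        from sacrebleu.metrics import BLEU, CHRF

        def evaluate(hyp_path: str, ref_path: str) -> None:
            with open(hyp_path, encoding="utf-8") as f:
                hypotheses = [line.strip() for line in f]
            with open(ref_path, encoding="utf-8") as f:
                references = [line.strip() for line in f]
            # sacrebleu expects a list of reference streams
            print(BLEU().corpus_score(hypotheses, [references]))
            print(CHRF().corpus_score(hypotheses, [references]))

        # Hypothetical usage for one of the language pairs:
        # evaluate("modernmt.it-en.hyp", "catalogue.it-en.ref")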

    Do translator trainees trust machine translation? An experiment on post-editing and revision

    Despite the importance of trust in any work environment, this concept has rarely been investigated for MT. The present contribution aims at filling this gap by presenting a post-editing experiment carried out with translator trainees. An institutional academic text was translated from Italian into English. All participants worked on the same target text: half of them were told that the text was a human translation needing revision, while the other half were told that it was an MT output to be post-edited. Temporal and technical effort were measured based on words per second and HTER. Results were complemented with a manual analysis of a subset of the observations.
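
    For readers unfamiliar with the two effort measures, the sketch below shows one way to compute them, assuming sacrebleu's TER implementation as the edit-distance backend for HTER; the helper names and the throughput definition are illustrative, not the paper's exact protocol.

        # Hedged sketch of the two effort measures: HTER (edits from the
        # text a participant started from to their final text) and words
        # per second. sacrebleu's TER is an assumed stand-in for the
        # edit-distance computation.
        from sacrebleu.metrics import TER

        ter = TER()

        def hter(initial_text: str, final_text: str) -> float:
            """Technical effort: edits turning the initial text into the final one."""
            return ter.sentence_score(initial_text, [final_text]).score

        def words_per_second(text: str, seconds: float) -> float:
            """Temporal effort expressed as throughput on the edited text."""
            return len(text.split()) / seconds

        # Hypothetical single observation for one participant:
        # effort = (hter(mt_output, post_edited),
        #           words_per_second(post_edited, 412.0))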

    1949-2019: 70 anni di TA visti attraverso i dati utilizzati

    Machine translation (MT) has gone through several transformations from 1940 to the present day. As in many other areas of computer science and artificial intelligence, the field has moved from resources developed manually and ad hoc to approaches increasingly based on pre-existing data. This contribution offers an overview of the different MT architectures and of the data they require, starting from rule-based approaches and moving on to statistical, example-based and neural architectures. Each of these shifts has affected the type of data needed to build MT engines. While the earliest approaches did not require aligned sentences, with statistical MT it became essential to rely on large amounts of parallel data. Today, thanks to the use of neural networks, good-quality translation can be obtained even for language combinations for which no data are available in both languages.

    MAGMATic: A Multi-domain Academic Gold Standard with Manual Annotation of Terminology for Machine Translation Evaluation

    This paper presents MAGMATic (Multi-domain Academic Gold Standard with Manual Annotation of Terminology), a novel Italian–English benchmark which allows MT evaluation focused on terminology translation. The data set comprises 2,056 parallel sentences extracted from institutional academic texts, namely course unit and degree program descriptions. This text type is particularly interesting since it contains terminology from multiple domains, e.g. education and the different academic disciplines described in the texts. All terms in the English target side of the data set were manually identified and annotated with a domain label, for a total of 7,517 annotated terms. Due to their peculiar features, institutional academic texts represent an interesting test bed for MT. As a further contribution of this paper, we investigate the feasibility of exploiting MT for the translation of this type of document. To this aim, we evaluate two state-of-the-art Neural MT systems on MAGMATic, focusing on their ability to translate domain-specific terminology.
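
    The paper's evaluation protocol is not reproduced here, but a data set annotated like MAGMATic supports very direct terminology checks. The naive sketch below computes per-domain term accuracy by testing whether each annotated target term surfaces in the MT output; the record layout and the exact-substring matching rule are assumptions.

        # Hedged sketch: a naive terminology-focused evaluation over records
        # shaped like MAGMATic annotations. Field names and the matching
        # rule are assumptions, not the paper's protocol.
        from collections import defaultdict

        def term_accuracy(records: list[dict]) -> dict[str, float]:
            hits: dict[str, int] = defaultdict(int)
            totals: dict[str, int] = defaultdict(int)
            for rec in records:
                output = rec["mt_output"].lower()
                # each term e.g. {"text": "course unit", "domain": "education"}
                for term in rec["terms"]:
                    totals[term["domain"]] += 1
                    if term["text"].lower() in output:  # crude substring match
                        hits[term["domain"]] += 1
            return {d: hits[d] / totals[d] for d in totals}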

    La traduzione automatica e i language service provider: il caso di CTI

    This thesis originates from an advanced internship carried out at CTI (Communication Trend Italia) in Milan. The goals of the internship were to verify whether machine translation tools could be integrated into the company's workflow and to identify the text types and language combinations to which they can be applied. The thesis sets out from a theoretical analysis of the various aspects related to the use of MT and then describes its practical application in the processes that led to the creation of the custom systems. Chapter 1 offers a theoretical overview of the field of machine translation, leading to the most widespread way of using MT today: the translation provided by the system is corrected through post-editing, or the source text is adjusted through pre-editing to remove its most problematic elements. Chapter 2 moves from an overview of the main machine translation software currently in use to a description of Microsoft Translator Hub, the tool chosen to develop CTI's custom systems. The next step focuses on obtaining customized systems, with an extensive discussion of the methods used to retrieve and exploit the necessary resources. The thesis then describes the path that led to the creation and development of the two systems Bilanci IT_EN and Atto Costitutivo IT_EN in Microsoft Translator Hub. Finally, in the fourth and last chapter, the output of the two systems is reviewed to identify its characteristics and analysed with automatic evaluation tools. Based on the information gathered, some predictions are made about the future use of the systems at CTI.

    How do LSPs compute MT discounts? Presenting a company's pipeline and its use

    In this paper we present a pipeline developed at Acolad to test a Machine Translation (MT) engine and compute the discount to be applied when its output is used in production. Our pipeline includes three main steps where quality and productivity are measured through automatic metrics, manual evaluation, and by keeping track of editing and temporal effort during a post-editing task. Thanks to this approach, it is possible to evaluate the output quality and compute an engine-specific discount. Our test pipeline tackles the complexity of transforming productivity measurements into discounts by comparing the outcome of each of the above-mentioned steps to an estimate of the average productivity of translation from scratch. The discount is obtained by subtracting the resulting coefficient from the per-word rate. After a description of the pipeline, the paper presents its application on four engines, discussing its results and showing that our method to estimate post-editing effort through manual evaluation seems to capture the actual productivity. The pipeline relies heavily on the work of professional post-editors, with the aim of creating a mutually beneficial cooperation between users and developers.
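
    The abstract keeps the discount arithmetic at a high level. The following is one plausible reading with made-up numbers, in which the coefficient expresses post-editing effort relative to translation from scratch and the value of the effort saved is subtracted from the full per-word rate; Acolad's actual formula is not spelled out in the abstract.

        # One plausible reading of the discount arithmetic, with made-up
        # numbers; this is not Acolad's documented formula.
        def discounted_rate(per_word_rate: float,
                            scratch_words_per_hour: float,
                            pe_words_per_hour: float) -> float:
            coefficient = scratch_words_per_hour / pe_words_per_hour  # relative effort
            saving = per_word_rate * (1.0 - coefficient)  # value of effort saved
            return per_word_rate - saving                 # i.e. rate * coefficient

        # Hypothetical example: 0.10 EUR/word, 350 words/h from scratch,
        # 700 words/h when post-editing -> coefficient 0.5 -> 0.05 EUR/word.
        print(discounted_rate(0.10, 350.0, 700.0))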

    Assessing the Use of Terminology in Phrase-Based Statistical Machine Translation for Academic Course Catalogue Translation

    In this contribution we describe an approach to evaluate the use of terminology in a phrase-based machine translation system to translate course unit descriptions from Italian into English. The genre is very prominent among those requiring translation by universities in European countries where English is not a native language. Two MT engines are trained on an in-domain bilingual corpus and a subset of the Europarl corpus, and one of them is enhanced by adding a bilingual termbase to its training data. Overall systems’ performance is assessed through the BLEU score, whereas the f-score is used to focus the evaluation on term translation. Furthermore, a manual analysis of the terms is carried out. Results suggest that in some cases - despite the simplistic approach implemented to inject terms into the MT system - the termbase was able to bias the word choice of the engine.
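
    As an illustration of the term-focused part of this evaluation, the sketch below computes a micro-averaged term-level f-score; the bag-of-terms counting and the matching criterion are assumptions, since the paper's exact definition is not given in the abstract.

        # Hedged sketch of a term-level f-score: precision/recall of
        # reference terms recovered in the MT output, micro-averaged over
        # sentences. The counting scheme is an assumption.
        def term_f_score(pairs: list[tuple[set[str], set[str]]]) -> float:
            """pairs: (terms found in the MT output, terms in the reference)."""
            tp = sum(len(found & gold) for found, gold in pairs)
            found_total = sum(len(found) for found, _ in pairs)
            gold_total = sum(len(gold) for _, gold in pairs)
            precision = tp / found_total if found_total else 0.0
            recall = tp / gold_total if gold_total else 0.0
            if precision + recall == 0.0:
                return 0.0
            return 2 * precision * recall / (precision + recall)

        # Hypothetical example: two reference terms, one recovered -> ~0.67
        print(term_f_score([({"course unit"},
                             {"course unit", "degree programme"})]))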
