72 research outputs found

    MQM: A Framework for Declaring and Describing Translation Quality Metrics

    In recent years translation quality evaluation has emerged as a major, and at times contentious, topic. Despite a focus on systematizing the evaluation of translation quality, the industry landscape is still highly fragmented, in part because different kinds of translation projects require very different evaluations of quality. In addition, human and machine translation (MT) quality evaluation methods have been fundamentally different in kind, preventing comparison of MT with human translation. This lack of clarity has contributed to an environment in which requesters of translation often cannot be certain whether a translation meets their needs or the needs of end users, and in which providers are unclear about what requesters need and want. In response, the EU-funded QTLaunchPad project has developed the Multidimensional Quality Metrics (MQM) framework, an open and extensible system for declaring and describing translation quality metrics using a shared vocabulary of "issue types".
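    MQM declares a metric by selecting issue types from a shared vocabulary and attaching severity weights to annotated issues. A minimal sketch of that idea in Python; the issue-type names follow MQM's accuracy/fluency branches, but the severity weights and the penalty-based scoring formula are illustrative assumptions, not the framework's normative values.

```python
# Sketch of an MQM-style metric: a set of annotated issues with severities,
# scored as penalty points per word. Weights and formula are assumptions.
from dataclasses import dataclass

SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}  # assumed weights

@dataclass
class Issue:
    issue_type: str  # e.g. "accuracy/mistranslation", "fluency/spelling"
    severity: str    # "minor" | "major" | "critical"

def mqm_score(issues, word_count):
    """Penalty-based quality score: 100 means no penalties."""
    penalty = sum(SEVERITY_WEIGHTS[i.severity] for i in issues)
    return 100.0 * (1 - penalty / word_count)

issues = [Issue("accuracy/mistranslation", "major"),
          Issue("fluency/spelling", "minor")]
print(mqm_score(issues, word_count=120))  # 6 penalty points over 120 words -> 95.0
```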

    Using MT-ComparEval

    The paper showcases the MT-ComparEval tool for qualitative evaluation of machine translation (MT). MT-ComparEval is an open-source tool designed to help MT developers by providing a graphical user interface for comparing and evaluating different MT engines/experiments and settings.
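    The kind of pairwise, sentence-level comparison such a tool presents can be sketched as follows; the unigram-F1 metric here is a simplified stand-in for the BLEU-based statistics MT-ComparEval actually computes, and the function names are illustrative, not the tool's API.

```python
# Score two systems' outputs sentence by sentence against a reference and
# count per-sentence wins, the basic view a comparison tool offers.
from collections import Counter

def unigram_f1(hyp, ref):
    h, r = Counter(hyp.split()), Counter(ref.split())
    overlap = sum((h & r).values())
    if overlap == 0:
        return 0.0
    p, rec = overlap / sum(h.values()), overlap / sum(r.values())
    return 2 * p * rec / (p + rec)

def compare(sys_a, sys_b, refs):
    wins = {"A": 0, "B": 0, "tie": 0}
    for a, b, ref in zip(sys_a, sys_b, refs):
        fa, fb = unigram_f1(a, ref), unigram_f1(b, ref)
        wins["A" if fa > fb else "B" if fb > fa else "tie"] += 1
    return wins

refs  = ["the cat sat on the mat"]
sys_a = ["the cat sat on a mat"]
sys_b = ["a cat is on mat"]
print(compare(sys_a, sys_b, refs))  # -> {'A': 1, 'B': 0, 'tie': 0}
```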

    Machine Translation: Phrase-Based, Rule-Based and Neural Approaches with Linguistic Evaluation

    In this article we present a novel linguistically driven evaluation method and apply it to the main approaches of Machine Translation (Rule-based, Phrase-based, Neural) to gain insights into their strengths and weaknesses in much more detail than provided by current evaluation schemes. Translating between two languages requires substantial modelling of knowledge about the two languages, about translation, and about the world. Using English-German IT-domain translation as a case study, we also enhance the Phrase-based system by exploiting parallel treebanks for syntax-aware phrase extraction and by interfacing with Linked Open Data (LOD) for extracting named entity translations in a post-decoding framework.
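    The core of a linguistically driven evaluation is reporting accuracy per linguistic phenomenon rather than a single corpus-level score. A minimal sketch, with hypothetical categories and pass/fail judgments standing in for the authors' actual test suite:

```python
# Aggregate pass/fail judgments into per-phenomenon accuracy, the kind of
# breakdown a linguistic test suite reports instead of one corpus score.
def category_accuracy(test_items):
    """test_items: list of (category, passed) judgments."""
    stats = {}
    for cat, passed in test_items:
        n_pass, n_total = stats.get(cat, (0, 0))
        stats[cat] = (n_pass + int(passed), n_total + 1)
    return {cat: p / t for cat, (p, t) in stats.items()}

items = [("verb placement", True), ("verb placement", False),
         ("compound nouns", True), ("compound nouns", True)]
print(category_accuracy(items))  # -> {'verb placement': 0.5, 'compound nouns': 1.0}
```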

    Tools and Guidelines for Principled Machine Translation Development

    This work addresses the need to aid Machine Translation (MT) development cycles with a complete workflow of MT evaluation methods. Our aim is to assess, compare and improve MT system variants. We hereby report on novel tools and practices that support various measures, developed in order to support a principled and informed approach to MT development.

    Die intelligente ADAMAAS-Datenbrille – Chancen und Risiken des Einsatzes mobiler Assistiver Technologien für die Inklusion

    Essig K, Strenge B, Schack T. Die intelligente ADAMAAS-Datenbrille – Chancen und Risiken des Einsatzes mobiler Assistiver Technologien für die Inklusion. In: Burchardt A, Uszkoreit H, eds. IT für soziale Inklusion. Digitalisierung – Künstliche Intelligenz – Zukunft für alle. Berlin, Boston: De Gruyter; 2018: 33-40.

    Translation quality and productivity: a study on rich morphology languages

    © 2017 The Authors. Published by Asia-Pacific Association for Machine Translation. This is an open access article available under a Creative Commons licence. The published version can be accessed at the following link on the publisher's website: http://aamt.info/app-def/S-102/mtsummit/2017/wp-content/uploads/sites/2/2017/09/MTSummitXVI_ResearchTrack.pdf
    Specia, L., Blain, F., Harris, K., Burchardt, A. et al. (2017) Translation quality and productivity: a study on rich morphology languages. In: Machine Translation Summit XVI, Vol. 1: MT Research Track, Kurohashi, S. and Fung, P. (eds.), Nagoya, Aichi, Japan: Asia-Pacific Association for Machine Translation, pp. 55-71. This work was supported by the QT21 project (H2020 No. 645452).

    The TaraXÜ Corpus of Human-Annotated Machine Translations

    Human translators are the key to evaluating machine translation (MT) quality and also to addressing the so far unanswered question of when and how to use MT in professional translation workflows. This paper describes the corpus developed as a result of a detailed large-scale human evaluation consisting of three tightly connected tasks: ranking, error classification and post-editing.
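    One way to picture a corpus built from those three tasks is a per-segment record holding the system outputs together with their ranking, error labels, and post-edits. The field names and example below are assumptions for illustration, not the actual TaraXÜ schema:

```python
# Hypothetical record layout for one segment of a human-annotated MT corpus
# combining system ranking, error classification, and post-editing.
from dataclasses import dataclass, field

@dataclass
class Segment:
    source: str
    outputs: dict                                    # system name -> MT hypothesis
    ranking: list                                    # system names, best first
    errors: dict = field(default_factory=dict)       # system -> error labels
    post_edits: dict = field(default_factory=dict)   # system -> edited output

seg = Segment(
    source="Das Haus ist klein.",
    outputs={"rbmt": "The house is little.", "smt": "The house small is."},
    ranking=["rbmt", "smt"],
    errors={"smt": ["word order"]},
    post_edits={"smt": "The house is small."},
)
print(seg.ranking[0])  # best-ranked system -> rbmt
```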

    Machine translation quality in an audiovisual context

    The volume of Audiovisual Translation (AVT) is increasing to meet the rising demand for data that needs to be accessible around the world. Machine Translation (MT) is one of the most innovative technologies to be deployed in the field of translation, but it is still too early to predict how it can support the creativity and productivity of professional translators in the future. Currently, MT is more widely used in (non-AV) text translation than in AVT. In this article, we discuss MT technology and demonstrate why its use in AVT scenarios is particularly challenging. We also present some potentially useful methods and tools for measuring MT quality that have been developed primarily for text translation. The ultimate objective is to bridge the gap between the tech-savvy AVT community, on the one hand, and researchers and developers in the field of high-quality MT, on the other.
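    Part of what makes AVT scenarios challenging is that subtitle output must satisfy spatial and temporal constraints on top of translation quality. A sketch of checking two common subtitling conventions; the exact limits vary between broadcasters and guidelines, so the values here are assumptions:

```python
# Check MT output against typical subtitle constraints: line length and
# reading speed. Limits are illustrative, not a specific broadcaster's rules.
MAX_CHARS_PER_LINE = 42
MAX_CHARS_PER_SECOND = 17

def constraint_violations(subtitle_lines, duration_seconds):
    issues = []
    for line in subtitle_lines:
        if len(line) > MAX_CHARS_PER_LINE:
            issues.append(f"line too long ({len(line)} chars)")
    total_chars = sum(len(l) for l in subtitle_lines)
    if total_chars / duration_seconds > MAX_CHARS_PER_SECOND:
        issues.append("reading speed too high")
    return issues

# A fluent translation can still fail as a subtitle:
print(constraint_violations(
    ["This machine-translated subtitle line runs far too long for the screen"],
    duration_seconds=1.5))
```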