
    Exploring Prediction Uncertainty in Machine Translation Quality Estimation

    Machine Translation Quality Estimation is a notoriously difficult task, which lessens its usefulness in real-world translation environments. Such scenarios can be improved if quality predictions are accompanied by a measure of uncertainty. However, models in this task are traditionally evaluated only in terms of point-estimate metrics, which do not take prediction uncertainty into account. We investigate probabilistic methods for Quality Estimation that can provide well-calibrated uncertainty estimates and evaluate them in terms of their full posterior predictive distributions. We also show how this posterior information can be useful in an asymmetric risk scenario, which aims to capture typical situations in translation workflows. Comment: Proceedings of CoNLL 201
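
    The asymmetric-risk idea lends itself to a small worked sketch: given samples from a model's posterior predictive distribution over a segment's quality score, the point prediction that minimizes an asymmetric linear (pinball-style) loss is a quantile of that distribution rather than its mean. The loss weights, the Beta-distributed samples, and the grid search below are illustrative assumptions of this sketch, not details from the paper.

```python
import numpy as np

def asymmetric_loss(pred, true, w_over=3.0, w_under=1.0):
    """Asymmetric linear loss: over-estimating quality is penalized
    w_over/w_under times more heavily than under-estimating it."""
    diff = pred - true
    return np.where(diff > 0, w_over * diff, -w_under * diff)

# Illustrative posterior predictive samples of a segment's quality
# score (e.g. HTER in [0, 1]); a real model would supply these.
rng = np.random.default_rng(0)
samples = rng.beta(2.0, 5.0, size=10_000)

# Bayes-optimal point prediction under the asymmetric loss: minimize
# the expected loss over the posterior samples by a simple grid search.
grid = np.linspace(0.0, 1.0, 1001)
expected = [asymmetric_loss(g, samples).mean() for g in grid]
best = grid[int(np.argmin(expected))]

# For this loss the minimizer is the w_under/(w_under+w_over) quantile,
# here the 0.25 quantile; the two values should roughly agree.
print(best, np.quantile(samples, 0.25))
```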

    Automatic Quality Estimation for ASR System Combination

    Recognizer Output Voting Error Reduction (ROVER) has been widely used for system combination in automatic speech recognition (ASR). In order to select the most appropriate words to insert at each position in the output transcriptions, some ROVER extensions rely on critical information such as confidence scores and other ASR decoder features. This information, which is not always available, highly depends on the decoding process and sometimes tends to overestimate the real quality of the recognized words. In this paper we propose a novel variant of ROVER that takes advantage of ASR quality estimation (QE) for ranking the transcriptions at "segment level" instead of: i) relying on confidence scores, or ii) feeding ROVER with randomly ordered hypotheses. We first introduce an effective set of features to compensate for the absence of ASR decoder information. Then, we apply QE techniques to perform accurate hypothesis ranking at segment level before starting the fusion process. The evaluation is carried out on two different tasks, in which we respectively combine hypotheses coming from independent ASR systems and from multi-microphone recordings. In both tasks, it is assumed that the ASR decoder information is not available. The proposed approach significantly outperforms standard ROVER and is competitive with two strong oracles that exploit prior knowledge about the real quality of the hypotheses to be combined. Compared to standard ROVER, the absolute WER improvements in the two evaluation scenarios range from 0.5% to 7.3%.
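
    The QE-based ranking step can be sketched independently of the fusion itself: train a regressor to predict each hypothesis's WER from decoder-independent features, then hand the hypotheses to ROVER ordered from best to worst predicted quality. The features, the random-forest model, and the synthetic data below are assumptions of this sketch, not the paper's exact setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins for decoder-independent QE features of each
# hypothesis (e.g. LM perplexity, length ratio, lexicon coverage):
# 200 training hypotheses with known WER labels.
X_train = rng.random((200, 3))
y_train = X_train @ np.array([0.5, 0.3, 0.2]) + 0.05 * rng.standard_normal(200)

# Regressor predicting WER from features (the QE step; this specific
# model choice is an assumption of the sketch).
qe_model = RandomForestRegressor(n_estimators=100, random_state=0)
qe_model.fit(X_train, y_train)

def rank_hypotheses(hypotheses, features):
    """Order a segment's hypotheses from best to worst predicted WER,
    so the fusion starts from the most reliable transcript."""
    predicted_wer = qe_model.predict(features)
    order = np.argsort(predicted_wer)  # lower predicted WER first
    return [hypotheses[i] for i in order]

# One segment with three system outputs and their QE features.
hyps = ["the cat sat on the mat", "the cat sat on that", "a cat sat on the mat"]
feats = rng.random((3, 3))
print(rank_hypotheses(hyps, feats))  # feed this order to ROVER's fusion
```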

    Low-Resource Unsupervised NMT: Diagnosing the Problem and Providing a Linguistically Motivated Solution

    Unsupervised Machine Translation has been advancing our ability to translate without parallel data, but state-of-the-art methods assume an abundance of monolingual data. This paper investigates the scenario where monolingual data is limited as well, finding that current unsupervised methods suffer in performance under this stricter setting. We find that the performance loss originates from the poor quality of the pretrained monolingual embeddings, and we propose using linguistic information in the embedding training scheme. To support this, we look at two linguistic features that may help improve alignment quality: dependency information and sub-word information. Using dependency-based embeddings results in a complementary word representation which offers a boost in performance of around 1.5 BLEU points compared to standard WORD2VEC when monolingual data is limited to 1 million sentences per language. We also find that the inclusion of sub-word information is crucial to improving the quality of the embeddings.
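
    Dependency-based embeddings, in the spirit of Levy and Goldberg's word2vecf, replace linear bag-of-words contexts with syntactic ones. Below is a minimal sketch of the context-extraction step, assuming spaCy with its en_core_web_sm pipeline (both assumptions of this sketch); the resulting (word, context) pairs are what a word2vecf-style trainer would consume.

```python
import spacy

# Assumes the small English pipeline is installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def dependency_contexts(sentence):
    """Yield (word, context) pairs in the style of dependency-based
    embeddings: each word pairs with its head (labelled with the
    relation) and with each of its children (inverse-labelled)."""
    for tok in nlp(sentence):
        if tok.dep_ != "ROOT":
            # context from the governing head, labelled with the relation
            yield tok.text.lower(), f"{tok.head.text.lower()}/{tok.dep_}"
        for child in tok.children:
            # inverse context from each dependent
            yield tok.text.lower(), f"{child.text.lower()}/{child.dep_}-1"

pairs = list(dependency_contexts("australian scientist discovers star"))
# e.g. ('scientist', 'discovers/nsubj'), ('discovers', 'scientist/nsubj-1'), ...
print(pairs)
```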

    Google Translate as a Resource for Writing

    This action research was prompted by the question “Does using Google Translate improve students’ writing?” When implemented according to guidelines from previous research, students produced writing with fewer syntactic and semantic errors. Data was collected through student work samples, classroom observations, and a questionnaire. Three themes arose through data analysis: that students found value in Google Translate, that one instance of instruction was not sufficient for all students to acquire the necessary strategies, and that students’ lack of familiarity with both Google Translate and the Spanish language hampered their success. These findings demonstrate the importance of providing teachers and students with instruction on Google Translate, as well as the need for further research on the effects of translators on vocabulary acquisition.

    Some Contributions to Interactive Machine Translation and to the Applications of Machine Translation for Historical Documents

    Historical documents are an important part of our cultural heritage. However, due to the language barrier inherent in human language and the linguistic properties of these documents, their accessibility is mostly limited to scholars. On the one hand, human language evolves with the passage of time. On the other hand, spelling conventions were not created until recently and, thus, orthography changes depending on the time period and author. For these reasons, the work of scholars is needed for non-experts to gain a basic understanding of a given document. In this thesis, we tackle two tasks related to the processing of historical documents. The first task is language modernization which, in order to make historical documents more accessible to non-experts, aims to rewrite a document using the modern version of the document's original language. The second task is spelling normalization. The aforementioned linguistic properties of historical documents pose an additional challenge for the effective natural language processing of these documents. Thus, this task aims to adapt a document's spelling to modern standards in order to achieve orthographic consistency. We approach both tasks from a machine translation perspective, considering a document's original language as the source language, and its modern/normalized counterpart as the target language. We propose several approaches based on statistical and neural machine translation, and carry out a wide experimentation that shows the potential of our contributions, with the statistical approaches yielding equal or better results than the neural approaches in most cases. For the language modernization task, this experimentation includes a human evaluation conducted with the help of scholars and a user study that verifies that our proposals are able to help non-experts gain a basic understanding of a historical document without the intervention of a scholar. As with any machine translation problem, our applications are not error-free. Thus, to obtain perfect modernizations/normalizations, a scholar needs to supervise and correct the errors. This is a common procedure in the translation industry. The interactive machine translation framework aims to reduce the effort needed to obtain high-quality translations by embedding the human agent and the translation system in a cooperative correction process. However, most interactive protocols follow a left-to-right strategy. In this thesis, we developed a new interactive protocol that breaks this left-to-right barrier. We evaluated this new protocol in a machine translation environment, obtaining large reductions in human effort. Finally, since this interactive framework is of general application to any translation problem, we applied it, our new protocol together with one of the classic left-to-right protocols, to language modernization and spelling normalization.
As with machine translation, the interactive framework diminished the effort required for correcting the outputs of an automatic system. The research leading to this thesis has been partially funded by Ministerio de Economía y Competitividad (MINECO) under projects SmartWays (grant agreement RTC-2014-1466-4), CoMUN-HaT (grant agreement TIN2015-70924-C2-1-R) and MISMISFAKEnHATE (grant agreement PGC2018-096212-B-C31); Generalitat Valenciana under projects ALMAMATER (grant agreement PROMETEOII/2014/030) and DeepPattern (grant agreement PROMETEO/2019/121); the European Union through Programa Operativo del Fondo Europeo de Desarrollo Regional (FEDER) from Comunitat Valenciana (2014–2020) under project Sistemas de fabricación inteligentes para la indústria 4.0 (grant agreement ID-IFEDER/2018/025); and the PRHLT research center under the research line Machine Learning Applications. Domingo Ballester, M. (2022). Some Contributions to Interactive Machine Translation and to the Applications of Machine Translation for Historical Documents [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/181231
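
    The classic left-to-right protocol that the thesis contrasts against can be sketched as a prefix-completion loop: the user corrects the leftmost wrong word, and the system re-decodes everything after the validated prefix. In the sketch below, translate_with_prefix stands in for a prefix-constrained MT decoder, and the toy decoder and reference are invented for illustration; this is the left-to-right baseline, not the thesis's new protocol.

```python
def first_mismatch(hypothesis, reference):
    """Index of the first token where the hypothesis diverges."""
    for i, (h, r) in enumerate(zip(hypothesis, reference)):
        if h != r:
            return i
    return min(len(hypothesis), len(reference))

def interactive_translation(source, reference, translate_with_prefix):
    """Left-to-right IMT loop: the user corrects the leftmost error and
    the system re-decodes the suffix given the validated prefix.
    `reference` simulates the translation the user has in mind."""
    hypothesis = translate_with_prefix(source, [])
    keystrokes = 0
    while hypothesis != reference:
        i = first_mismatch(hypothesis, reference)
        if i >= len(reference):      # hypothesis too long: user truncates
            return reference, keystrokes
        prefix = reference[: i + 1]  # user types the correct next word
        keystrokes += 1
        hypothesis = translate_with_prefix(source, prefix)
    return hypothesis, keystrokes

# Toy prefix-constrained decoder: continues any prefix with a fixed
# (partly wrong) guess; a real system would run constrained search.
def toy_decoder(source, prefix):
    guess = "los documentos antiguos son importantes".split()
    return list(prefix) + guess[len(prefix):]

ref = "los documentos históricos son importantes".split()
final, k = interactive_translation("historical documents matter", ref, toy_decoder)
print(final, k)  # reference reached after one simulated correction
```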

    Fine-Grained Linguistic Soft Constraints on Statistical Natural Language Processing Models

    This dissertation focuses on the effective combination of data-driven natural language processing (NLP) approaches with linguistic knowledge sources that are based on manual text annotation or word grouping according to semantic commonalities. I gainfully apply fine-grained linguistic soft constraints -- of syntactic or semantic nature -- on statistical NLP models, evaluated in end-to-end state-of-the-art statistical machine translation (SMT) systems. The introduction of semantic soft constraints involves intrinsic evaluation on word-pair similarity ranking tasks, extension from words to phrases, application in a novel distributional paraphrase generation technique, and an introduction of a generalized framework of which these soft semantic and syntactic constraints can be viewed as instances, and in which they can potentially be combined. Fine granularity is key in the successful combination of these soft constraints, in many cases. I show how to softly constrain SMT models by adding fine-grained weighted features, each preferring translation of only a specific syntactic constituent. Previous attempts using coarse-grained features yielded negative results. I also show how to softly constrain corpus-based semantic models of words (“distributional profiles”) to effectively create word-sense-aware models, by using semantic word grouping information found in a manually compiled thesaurus. Previous attempts, using hard constraints and resulting in aggregated, coarse-grained models, yielded lower gains. A novel paraphrase generation technique incorporating these soft semantic constraints is then also evaluated in an SMT system. This paraphrasing technique is based on the Distributional Hypothesis. The main advantage of this novel technique over current “pivoting” techniques for paraphrasing is its independence from parallel texts, which are a limited resource. The evaluation is done by augmenting translation models with paraphrase-based translation rules, where fine-grained scoring of paraphrase-based rules yields significantly higher gains. The model augmentation includes a novel semantic reinforcement component: in many cases there are alternative paths of generating a paraphrase-based translation rule. Each of these paths reinforces a dedicated score for the “goodness” of the new translation rule. This augmented score is then used as a soft constraint, in a weighted log-linear feature, letting the translation model learn how much to “trust” the paraphrase-based translation rules. The work reported here is the first to use distributional semantic similarity measures to improve performance of an end-to-end phrase-based SMT system. The unified framework for statistical NLP models with soft linguistic constraints enables, in principle, the combination of both semantic and syntactic constraints -- and potentially other constraints, too -- in a single SMT model.
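
    The "soft constraint as a weighted log-linear feature" idea admits a compact sketch: a hypothesis is scored as a weighted sum of feature values, and a soft syntactic or semantic constraint is simply one more weighted feature whose tuned weight tells the model how much to trust it. All feature names, weights, and values below are invented for illustration.

```python
import math

# Illustrative log-linear scorer for an SMT hypothesis: each feature
# h_i gets a tuned weight lambda_i, and soft constraints are extra
# weighted features rather than hard filters.
WEIGHTS = {
    "log_p_tm": 1.0,             # translation model log-probability
    "log_p_lm": 0.8,             # language model log-probability
    "paraphrase_goodness": 0.3,  # soft semantic constraint: summed log
                                 # "goodness" of paraphrase-based rules used
    "np_constituent_match": 0.2, # soft syntactic constraint: reward for
                                 # translating a specific NP as one unit
}

def loglinear_score(features):
    """Score a hypothesis as the weighted sum of its feature values.
    Missing features default to 0, which keeps the constraints soft."""
    return sum(w * features.get(name, 0.0) for name, w in WEIGHTS.items())

hyp_a = {"log_p_tm": -4.2, "log_p_lm": -6.1, "paraphrase_goodness": math.log(0.7)}
hyp_b = {"log_p_tm": -4.0, "log_p_lm": -6.5, "np_constituent_match": 1.0}
best = max([hyp_a, hyp_b], key=loglinear_score)
print(best)
```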