7 research outputs found

    Online learning approaches in Computer Assisted Translation

    We present a novel online learning approach for statistical machine translation tailored to the computer-assisted translation scenario. With the introduction of a simple online feature, we are able to adapt the translation model on the fly to the corrections made by the translators. Additionally, we perform online adaptation of the feature weights with a large-margin algorithm. Our results show that our online adaptation technique outperforms the static phrase-based statistical machine translation system by 6 BLEU points absolute, and a standard incremental adaptation approach by 2 BLEU points absolute. Prashant, M.; Cettolo, Mauro; Federico, Marcello
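
    The large-margin online update of the feature weights can be pictured as a passive-aggressive (MIRA-style) step towards the translator's correction. Below is a minimal sketch of that idea, assuming numpy; the function name, the BLEU-based loss and the aggressiveness cap C are illustrative assumptions, not the authors' exact formulation.

```python
# A minimal sketch (not the authors' exact method) of a MIRA-style,
# passive-aggressive update of log-linear feature weights from a
# translator's correction. Feature vectors, the loss and C are illustrative.
import numpy as np

def mira_update(weights, feats_hyp, feats_postedit, loss, C=0.01):
    """One large-margin step pulling the post-edit above the system hypothesis."""
    delta = feats_postedit - feats_hyp        # direction towards the correction
    margin = float(np.dot(weights, delta))    # how far the post-edit already wins
    hinge = max(0.0, loss - margin)           # violated part of the margin
    denom = float(np.dot(delta, delta))
    if hinge == 0.0 or denom == 0.0:
        return weights                        # no violation: leave weights as-is
    tau = min(C, hinge / denom)               # clipped (aggressiveness-capped) step
    return weights + tau * delta

# Toy usage with three features (e.g. LM score, TM score, word penalty).
w = np.zeros(3)
w = mira_update(w,
                feats_hyp=np.array([1.0, 0.2, 0.5]),
                feats_postedit=np.array([0.8, 0.9, 0.4]),
                loss=0.6)                     # e.g. 1 - sentence-level BLEU
print(w)
```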

    Reinforcement Learning for Machine Translation: from Simulations to Real-World Applications

    If a machine translation is wrong, how can we tell the underlying model to fix it? Answering this question requires (1) a machine learning algorithm to define update rules, (2) an interface for feedback to be submitted, and (3) expertise on the side of the human who gives the feedback. This thesis investigates solutions for machine learning updates, the suitability of feedback interfaces, and the dependency on reliability and expertise for different types of feedback. We start with an interactive online learning scenario where a machine translation (MT) system receives bandit feedback (i.e. only once per source) instead of references for learning. Policy gradient algorithms for statistical and neural MT are developed to learn from absolute and pairwise judgments. Our experiments on domain adaptation with simulated online feedback show that the models can improve substantially under weak feedback, with variance reduction techniques being very effective. In production environments, offline learning is often preferred over online learning. We evaluate algorithms for counterfactual learning from human feedback in a study on eBay product title translations. Feedback is either collected via explicit star ratings from users or implicitly from the user interaction with cross-lingual product search. Leveraging implicit feedback turns out to be more successful due to lower levels of noise. We compare the reliability and learnability of absolute Likert-scale ratings with pairwise preferences in a smaller user study, and find that absolute ratings are overall more effective for improvements in downstream tasks. Furthermore, we discover that error markings provide a cheap and practical alternative to error corrections. In a generalized interactive learning framework, we propose a self-regulation approach where the learner, guided by a regulator module, decides which type of feedback to choose for each input. The regulator is reinforced to find a good trade-off between supervision effect and cost. In our experiments, it discovers strategies that are more efficient than active learning and standard fully supervised learning.
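
    As a rough illustration of the bandit setting described above, the sketch below implements a baseline-corrected policy-gradient (REINFORCE-style) update for a toy softmax policy over a handful of candidate translations; the simulated reward function and all names are assumptions made so the example is self-contained, whereas the thesis applies such updates to full statistical and neural MT models.

```python
# A toy sketch of learning from bandit feedback with a policy-gradient
# (REINFORCE) update and a running-average baseline for variance reduction.
# The softmax policy over a fixed candidate list and the simulated reward
# are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

scores = np.zeros(4)     # policy parameters for 4 candidate translations
baseline = 0.0           # running average of observed rewards

def simulated_user_reward(i):
    # Stand-in for human bandit feedback, e.g. a star rating mapped to [0, 1].
    return [0.2, 0.9, 0.4, 0.1][i]

for _ in range(200):
    probs = softmax(scores)
    i = rng.choice(len(scores), p=probs)          # sample one translation
    r = simulated_user_reward(i)                  # observe a single scalar reward
    baseline += 0.05 * (r - baseline)             # variance reduction
    grad_log_p = -probs                           # d log p(i) / d scores ...
    grad_log_p[i] += 1.0                          # ... for a softmax policy
    scores += 0.5 * (r - baseline) * grad_log_p   # REINFORCE step

print(softmax(scores))   # mass should concentrate on the best candidate (index 1)
```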

    On the effective deployment of current machine translation technology

    Machine translation is a fundamental technology that is gaining more importance each day in our multilingual society. Companies and individuals are turning their attention to machine translation since it dramatically cuts down their expenses on translation and interpreting. However, the output of current machine translation systems is still far from the quality of translations generated by human experts. The overall goal of this thesis is to narrow this quality gap by developing new methodologies and tools that enable a broader and more efficient deployment of machine translation technology. We start by proposing a new technique to improve the quality of the translations generated by fully-automatic machine translation systems. The key insight of our approach is that different translation systems, implementing different approaches and technologies, can exhibit different strengths and limitations. Therefore, a proper combination of the outputs of such different systems has the potential to produce translations of improved quality. We present minimum Bayes' risk system combination, an automatic approach that detects the best parts of the candidate translations and combines them to generate a consensus translation that is optimal with respect to a particular performance metric. We thoroughly describe the formalization of our approach as a weighted ensemble of probability distributions and provide efficient algorithms to obtain the optimal consensus translation according to the widespread BLEU score. Empirical results show that the proposed approach is indeed able to generate statistically better translations than the provided candidates. Compared to other state-of-the-art system combination methods, our approach reports similar performance while not requiring any additional data beyond the candidate translations. Then, we focus our attention on how to improve the utility of automatic translations for the end user of the system. Since automatic translations are not perfect, a desirable feature of machine translation systems is the ability to predict at run-time the quality of the generated translations. Quality estimation is usually addressed as a regression problem where a quality score is predicted from a set of features that represents the translation. However, although the concept of translation quality is intuitively clear, there is no consensus on which features actually account for it. As a consequence, quality estimation systems for machine translation have to utilize a large number of weak features to predict translation quality. This involves several learning problems related to feature collinearity and ambiguity, and to the 'curse' of dimensionality. We address these challenges by adopting a two-step training methodology. First, a dimensionality reduction method computes, from the original features, the reduced set of features that best explains translation quality. Then, a prediction model is built from this reduced set to finally predict the quality score. We study various reduction methods previously used in the literature and propose two new ones based on statistical multivariate analysis techniques. More specifically, the proposed dimensionality reduction methods are based on partial least squares regression. The results of a thorough experimentation show that quality estimation systems trained following the proposed two-step methodology obtain better prediction accuracy than systems trained on all the original features.
Moreover, one of the proposed dimensionality reduction methods obtained the best prediction accuracy with only a fraction of the original features. This feature reduction ratio is important because it implies a dramatic reduction of the operating times of the quality estimation system. An alternative use of current machine translation systems is to embed them within an interactive editing environment where the system and a human expert collaborate to generate error-free translations. This interactive machine translation approach has been shown to reduce the supervision effort of the user in comparison to the conventional decoupled post-editing approach. However, interactive machine translation considers the translation system as a passive agent in the interaction process. In other words, the system only suggests translations to the user, who then makes the necessary supervision decisions. As a result, the user is bound to exhaustively supervise every suggested translation. This passive approach ensures error-free translations, but it also demands a large amount of supervision effort from the user. Finally, we study different techniques to improve the productivity of current interactive machine translation systems. Specifically, we focus on the development of alternative approaches where the system becomes an active agent in the interaction process. We propose two different active approaches. On the one hand, we describe an active interaction approach where the system informs the user about the reliability of the suggested translations. The hope is that this information may help the user to locate translation errors, thus improving the overall translation productivity. We propose different scores to measure translation reliability at the word and sentence levels and study the influence of such information on the productivity of an interactive machine translation system. Empirical results show that the proposed active interaction protocol is able to achieve a large reduction in supervision effort while still generating translations of very high quality. On the other hand, we study an active learning framework for interactive machine translation. In this case, the system is not only able to inform the user of which suggested translations should be supervised, but it is also able to learn from the user-supervised translations to improve its future suggestions. We develop a value-of-information criterion to select which automatic translations undergo user supervision. However, given its high computational complexity, in practice we study different selection strategies that approximate this optimal criterion. Results of a large-scale experimentation show that the proposed active learning framework is able to obtain better compromises between the quality of the generated translations and the human effort required to obtain them. Moreover, in comparison to a conventional interactive machine translation system, our proposal obtained translations of twice the quality with the same supervision effort. González Rubio, J. (2014). On the effective deployment of current machine translation technology [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/37888
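
    The two-step quality-estimation methodology (dimensionality reduction followed by a prediction model) can be sketched with off-the-shelf partial least squares, as below; scikit-learn, the synthetic feature matrix and the Ridge predictor are assumptions chosen for illustration rather than the exact setup of the thesis.

```python
# A sketch of the two-step quality-estimation pipeline: (1) PLS-based
# dimensionality reduction of many weak, partly collinear features,
# (2) a regression model fit on the reduced representation.
# scikit-learn and the synthetic data are assumptions for illustration.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_sentences, n_features = 500, 80
X = rng.normal(size=(n_sentences, n_features))          # weak QE features
true_w = np.zeros(n_features)
true_w[:6] = rng.normal(size=6)                         # only a few matter
y = X @ true_w + 0.1 * rng.normal(size=n_sentences)     # synthetic quality scores

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: PLS finds a few latent directions that covary with the quality score.
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
Z_tr, Z_te = pls.transform(X_tr), pls.transform(X_te)

# Step 2: predict the quality score from the reduced features only.
reg = Ridge(alpha=1.0).fit(Z_tr, y_tr)
print("held-out R^2:", round(reg.score(Z_te, y_te), 3))
```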

    Preference Learning for Machine Translation

    Automatic translation of natural language is still (as of 2017) a long-standing but unmet promise. While advancing at a fast rate, the underlying methods are still far from actually being able to reliably capture the syntax or semantics of arbitrary utterances of natural language, let alone transfer the encoded meaning into a second language. However, it is possible to build useful translating machines when the target domain is well known and the machine is able to learn and adapt efficiently and promptly from new inputs. This is possible thanks to efficient and effective machine learning methods which can be applied to automatic translation. In this work we present and evaluate methods for three distinct scenarios: a) We develop algorithms that can learn from very large amounts of data by exploiting pairwise preferences defined over competing translations, which can be used to make a machine translation system robust to arbitrary texts from varied sources, but also enable it to learn effectively to adapt to new domains of data; b) We describe a method that is able to efficiently learn external models which adhere to fine-grained preferences extracted from a restricted selection of translated material, e.g. for adapting to users or groups of users in a computer-aided translation scenario; c) We develop methods for two machine translation paradigms, neural and traditional statistical machine translation, to directly adapt to user-defined preferences in an interactive post-editing scenario, learning precisely adapted machine translation systems. In all of these settings, we show that machine translation can be made significantly more useful by careful optimization via preference learning.
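
    Scenario (a), learning from pairwise preferences over competing translations, can be illustrated with a margin-based update on feature differences; the sketch below is a toy version under assumed feature vectors and a simulated preference oracle, not the algorithms evaluated in the thesis.

```python
# A toy sketch of learning from pairwise preferences over competing
# translations with a margin perceptron over feature differences.
# Feature vectors and the preference oracle are simulated assumptions.
import numpy as np

def preference_update(w, feats_preferred, feats_dispreferred, margin=1.0, lr=0.1):
    """Move w so the preferred translation outscores the dispreferred one."""
    delta = feats_preferred - feats_dispreferred
    if np.dot(w, delta) < margin:            # preference currently violated
        w = w + lr * delta
    return w

rng = np.random.default_rng(2)
true_w = np.array([1.0, -0.5, 0.3, 0.0])     # hidden "quality" direction
w = np.zeros(4)
for _ in range(1000):                        # stream of preference pairs
    a, b = rng.normal(size=4), rng.normal(size=4)
    better, worse = (a, b) if a @ true_w > b @ true_w else (b, a)
    w = preference_update(w, better, worse)
print(w / np.linalg.norm(w))                 # direction approaches normalized true_w
```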

    Interactivity, Adaptation and Multimodality in Neural Sequence-to-sequence Learning

    The sequence-to-sequence problem consists of transforming an input sequence into an output sequence. A variety of problems can be posed in these terms, including machine translation, speech recognition or multimedia captioning. In recent years, the application of deep neural networks has revolutionized these fields, achieving impressive advances. However, despite these improvements, the output of automatic systems is still far from perfect. To achieve high-quality predictions, fully-automatic systems need to be supervised by a human agent, who corrects the errors. This is a common procedure in the translation industry. This thesis is mainly framed within the machine translation problem, tackled using fully neural systems. Our main objective is to develop more efficient neural machine translation systems that allow for a more productive usage and deployment of the technology. To this end, we base our contributions on two main cornerstones: how to make better use of the system and how to better leverage the data generated along its usage. First, we apply the so-called interactive-predictive framework to neural machine translation. This embeds the human agent and the system into a cooperative correction process that seeks to reduce the human effort spent obtaining high-quality translations. We develop different interactive protocols for the neural machine translation technology, namely a prefix-based and a segment-based protocol. They are implemented by modifying the search space of the model. Moreover, we introduce mechanisms for achieving a fine-grained interaction while maintaining the decoding speed of the system. We carried out extensive experimentation that shows the potential of our contributions: the previous state of the art is surpassed by a large margin and the new systems react better to human interactions. Next, we study how to improve a neural system using the data generated as a byproduct of this correction process. To this end, we rely on two main learning paradigms: online and active learning. Under the first one, the system is updated on the fly, as soon as a sentence is corrected. Hence, the system continuously learns from the corrections, avoiding previous errors and specializing towards a given user or domain. Extensive experiments stressed the adaptive systems under different conditions and domains, demonstrating their capabilities. Moreover, we also carried out a human evaluation of the system, involving professional users.
They were very pleased with the adaptive system, and worked more efficiently using it. The second paradigm, active learning, is devised for the translation of huge amounts of data that are infeasible to supervise completely. In this scenario, the system selects samples that are worth supervising, and leaves the rest automatically translated. Applying this framework, we obtained reductions of approximately a quarter of the effort required to reach a desired translation quality. The neural approach also obtained large improvements compared with previous translation technologies. Finally, we address another challenging problem: visual captioning. It consists of generating a description in natural language from a visual object, namely an image or a video. We follow the sequence-to-sequence framework, under a multimodal perspective. We start by tackling the task of generating captions of videos from a general domain. Next, we move on to a more specific case: describing events from egocentric images, acquired throughout the day. Since these events are consecutive, we aim to extract inter-event relationships in order to generate more informed captions. The context-aware model improved the generation quality with respect to a regular one. As a final point, we apply the interactive-predictive protocol to these multimodal captioning systems, reducing the effort required for correcting the outputs. Section 5.4 describes a user evaluation of an adaptive translation system. This was done in collaboration with Miguel Domingo and the company Pangeanic, with funding from the Spanish Center for Technological and Industrial Development (Centro para el Desarrollo Tecnológico Industrial). [...] Most of Chapter 6 is the result of a collaboration with Marc Bolaños, supervised by Prof. Petia Radeva, from Universitat de Barcelona/CVC. This collaboration was supported by the R-MIPRCV network, under grant TIN2014-54728-REDC. Peris Abril, Á. (2019). Interactivity, Adaptation and Multimodality in Neural Sequence-to-sequence Learning [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/134058
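
    The prefix-based interactive-predictive protocol mentioned in the abstract can be sketched as forcing the decoder through the user-validated prefix and searching only over the suffix; the toy bigram table and greedy completion below are assumptions so the example is self-contained, whereas the thesis modifies the search of full neural models.

```python
# A toy sketch of prefix-based interactive-predictive decoding: the validated
# prefix is forced and only the suffix is searched (greedily here). The bigram
# table and vocabulary are invented for illustration.
from collections import defaultdict

# Hypothetical next-token scores, standing in for the translation model.
BIGRAMS = defaultdict(dict, {
    "<s>":   {"the": 0.6, "a": 0.4},
    "the":   {"cat": 0.5, "black": 0.3, "dog": 0.2},
    "black": {"cat": 0.7, "dog": 0.3},
    "a":     {"cat": 0.6, "dog": 0.4},
    "cat":   {"</s>": 1.0},
    "dog":   {"</s>": 1.0},
})

def interactive_complete(user_prefix, max_len=10):
    """Force the user-validated prefix, then extend greedily until </s>."""
    hyp = list(user_prefix)
    prev = hyp[-1] if hyp else "<s>"
    while len(hyp) < max_len:
        nxt = max(BIGRAMS[prev], key=BIGRAMS[prev].get)   # greedy suffix search
        if nxt == "</s>":
            break
        hyp.append(nxt)
        prev = nxt
    return hyp

print(interactive_complete([]))                  # initial suggestion: the cat
print(interactive_complete(["the", "black"]))    # after a prefix fix: the black cat
```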

    Dynamic topic adaptation for improved contextual modelling in statistical machine translation

    In recent years there has been an increased interest in domain adaptation techniques for statistical machine translation (SMT) to deal with the growing amount of data from different sources. Topic modelling techniques applied to SMT are closely related to the field of domain adaptation but more flexible in dealing with unstructured text. Topic models can capture latent structure in texts and are therefore particularly suitable for modelling structure between and beyond corpus boundaries, which are often arbitrary. In this thesis, the main focus is on dynamic translation model adaptation to texts of unknown origin, which is a typical scenario for an online MT engine translating web documents. We introduce a new bilingual topic model for SMT that takes the entire document context into account and, for the first time, directly estimates topic-dependent phrase translation probabilities in a Bayesian fashion. We demonstrate our model's ability to improve over several domain adaptation baselines and further provide evidence for the advantages of bilingual topic modelling for SMT over the more common monolingual topic modelling. We also show improved performance when deriving further adapted translation features from the same model which measure different aspects of topical relatedness. We introduce another new topic model for SMT which exploits the distributional nature of phrase pair meaning by modelling topic distributions over phrase pairs using their distributional profiles. Using this model, we explore combinations of local and global contextual information and demonstrate the usefulness of different levels of contextual information, which had not been previously examined for SMT. We also show that combining this model with a topic model trained at the document level further improves performance. Our dynamic topic adaptation approach performs competitively in comparison with two supervised domain-adapted systems. Finally, we shed light on the relationship between domain adaptation and topic adaptation and propose to combine multi-domain adaptation and topic adaptation in a framework that entails automatic prediction of domain labels at the document level. We show that while each technique provides complementary benefits to the overall performance, there is some overlap between domain and topic adaptation. This can be exploited to build systems that require less adaptation effort at runtime.
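
    The notion of topic-dependent phrase translation probabilities can be illustrated by mixing topic-specific translation tables according to the document's inferred topic distribution, i.e. p(e|f, doc) = sum_k p(k|doc) * p(e|f, k); the two-topic tables below are invented for illustration and are not the thesis' bilingual Bayesian model.

```python
# A small sketch of topic-dependent phrase translation probabilities:
# topic-specific tables are mixed according to the topic distribution
# inferred for the document, p(e|f,doc) = sum_k p(k|doc) * p(e|f,k).
# All topics and numbers here are invented for illustration.

# Hypothetical topic-specific tables for the English source phrase "bank".
P_E_GIVEN_F_TOPIC = {
    "finance": {"Bank": 0.9, "Ufer": 0.1},    # monetary sense (German targets)
    "nature":  {"Bank": 0.2, "Ufer": 0.8},    # river-bank sense
}

def adapted_phrase_prob(target, topic_dist):
    """p(target | source, doc) = sum_k p(k | doc) * p(target | source, k)."""
    return sum(topic_dist[k] * P_E_GIVEN_F_TOPIC[k].get(target, 0.0)
               for k in topic_dist)

doc_topics = {"finance": 0.15, "nature": 0.85}  # topic mix of the current document
for candidate in ("Bank", "Ufer"):
    print(candidate, round(adapted_phrase_prob(candidate, doc_topics), 3))
```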