
    Embedding Web-based Statistical Translation Models in Cross-Language Information Retrieval

    Although more and more language pairs are covered by machine translation services, many pairs still lack translation resources. Cross-language information retrieval (CLIR) is an application that needs translation functionality of a relatively low level of sophistication, since current models for information retrieval (IR) are still based on a bag of words. The Web provides a vast resource for the automatic construction of parallel corpora, which can be used to train statistical translation models automatically. The resulting translation models can be embedded in a retrieval model in several ways. In this paper, we investigate the problem of automatically mining parallel texts from the Web and different ways of integrating the translation models within the retrieval process. Our experiments on standard test collections for CLIR show that the Web-based translation models can surpass commercial MT systems in CLIR tasks. These results open the perspective of constructing a fully automatic query translation device for CLIR at a very low cost.
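    The core mechanism described above can be sketched as weighted query expansion: each source-language query term is replaced by target-language terms weighted by translation probabilities learned from the mined parallel text. The translation table and names below are illustrative toy data, not the paper's actual model.

    ```python
    from collections import defaultdict

    # Hypothetical table P(target_term | source_term), e.g. as estimated by a
    # word-based statistical translation model trained on Web-mined parallel text.
    trans_probs = {
        "maison": {"house": 0.7, "home": 0.25, "household": 0.05},
        "verte":  {"green": 0.9, "unripe": 0.1},
    }

    def translate_query(query_terms, table, top_k=3):
        """Expand a source-language query into weighted target-language terms."""
        weighted = defaultdict(float)
        for term in query_terms:
            # Keep only the top_k most probable translations of each term.
            for tgt, p in sorted(table.get(term, {}).items(),
                                 key=lambda kv: -kv[1])[:top_k]:
                weighted[tgt] += p
        return dict(weighted)

    weights = translate_query(["maison", "verte"], trans_probs)
    # Each target term carries the probability mass it received and can be fed
    # to any bag-of-words retrieval model as a weighted query.
    ```

    A bag-of-words retrieval model then scores documents against these weighted terms exactly as it would score an ordinary monolingual query.
    
    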

    Unveiling Biases in Word Embeddings: An Algorithmic Approach for Comparative Analysis Based on Alignment

    Word embeddings are state-of-the-art vector representations of words that aim to preserve semantic similarity. They are the result of specific learning algorithms trained on usually large corpora; consequently, they inherit all the biases of the corpora on which they were trained. The goal of this thesis is to devise and adapt an efficient algorithm for comparing two different word embeddings in order to highlight the biases they are subject to. Specifically, we look for an alignment between the two vector spaces, corresponding to the two word embeddings, that minimises the difference between the stable words, i.e. the ones that have not changed across the two embeddings, thus highlighting the differences between the ones that did change. In this work, we test this idea by adapting a machine translation framework called MUSE that, after some improvements, can run over multiple cores in an HPC environment managed with SLURM. We also provide an amplpy implementation of linear and convex programming algorithms adapted to our case. We then test these techniques on a corpus of text taken from Italian newspapers in order to identify which words are most subject to change among the different pairs of corpora.
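    The alignment step this abstract describes is commonly solved as an orthogonal Procrustes problem (the approach MUSE uses in its supervised mode). Below is a minimal sketch of that idea with synthetic data; it is not the thesis's actual pipeline, and the variable names are illustrative.

    ```python
    import numpy as np

    def align_embeddings(X, Y):
        """Orthogonal Procrustes: find the rotation W minimising ||XW - Y||_F.

        X, Y: (n_words, dim) matrices holding the vectors of the same anchor
        words (the words assumed to be stable) in the two embedding spaces.
        """
        U, _, Vt = np.linalg.svd(X.T @ Y)
        return U @ Vt  # orthogonal map from X's space into Y's

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 8))                 # first embedding space
    R, _ = np.linalg.qr(rng.normal(size=(8, 8))) # a hidden orthogonal map
    Y = X @ R                                    # second space = rotated first

    W = align_embeddings(X, Y)
    residual = np.linalg.norm(X @ W - Y)         # ~0: spaces align exactly here
    # With real corpora the residual is nonzero; words with a large per-word
    # residual ||x_i W - y_i|| are the ones that "changed" between corpora,
    # i.e. the candidates for differing bias.
    ```

    On real data the per-word residuals, rather than the global one, carry the signal: stable words anchor the map, and the outliers are the words whose usage shifted.
    
    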

    Corpus access for beginners: the W3Corpora project


    Rapid Resource Transfer for Multilingual Natural Language Processing

    Until recently, the focus of the Natural Language Processing (NLP) community has been on a handful of mostly European languages. However, the rapid changes taking place in the economic and political climate of the world precipitate a similar change in the relative importance given to various languages. The importance of rapidly acquiring NLP resources and computational capabilities in new languages is widely accepted. Statistical NLP models have a distinct advantage over rule-based methods in achieving this goal, since they require far less manual labor. However, statistical methods require two fundamental resources for training: (1) online corpora and (2) manual annotations. Creating these two resources can be as difficult as porting rule-based methods. This thesis demonstrates the feasibility of acquiring both corpora and annotations by exploiting existing resources for well-studied languages: basic resources for new languages can be acquired rapidly and cost-effectively by utilizing existing resources cross-lingually. Currently, the most viable method of obtaining online corpora is converting existing printed text into electronic form using Optical Character Recognition (OCR). Unfortunately, a language that lacks online corpora most likely lacks OCR as well. We tackle this problem by taking an existing OCR system that was designed for a specific language and applying it to a language with a similar script. We present a generative OCR model that allows us to post-process output from a non-native OCR system to achieve accuracy close to, or better than, a native one. Furthermore, we show that the performance of a native or trained OCR system can be improved by the same method. Next, we demonstrate cross-utilization of annotations on treebanks. We present an algorithm that projects dependency trees across parallel corpora. We also show that a reasonable-quality treebank can be generated by combining projection with a small amount of language-specific post-processing. The projected treebank allows us to train a parser that performs comparably to a parser trained on manually generated data.
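    The projection idea can be illustrated in its simplest form: given a source-side dependency tree and a one-to-one word alignment, copy each head relation over to the aligned target tokens. This toy sketch assumes a clean 1-to-1 alignment, which real data rarely provides (handling many-to-one links and unaligned words is where the thesis's language-specific post-processing comes in); all names here are illustrative.

    ```python
    def project_dependencies(src_heads, alignment):
        """Project a source dependency tree onto the target sentence.

        src_heads: src_heads[i] = index of token i's head (-1 for the root).
        alignment: dict mapping each source index to a target index (1-to-1).
        Returns tgt_heads for aligned target tokens; unaligned targets get None.
        """
        n_tgt = max(alignment.values()) + 1
        tgt_heads = [None] * n_tgt
        for s, t in alignment.items():
            h = src_heads[s]
            # The root stays the root; other heads map through the alignment.
            tgt_heads[t] = -1 if h == -1 else alignment.get(h)
        return tgt_heads

    # Toy example: "she eats apples" with heads she->eats, eats=root,
    # apples->eats, projected through a monotone 1-to-1 alignment.
    src_heads = [1, -1, 1]
    alignment = {0: 0, 1: 1, 2: 2}
    print(project_dependencies(src_heads, alignment))  # [1, -1, 1]
    ```

    The projected trees are noisy, which is why a small amount of language-specific cleanup is applied before training a parser on them.
    
    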

    Network-state dependent effects in naming and learning


    Document-level machine translation: ensuring translational consistency of non-local phenomena

    In this thesis, we study the automatic translation of documents taking into account cross-sentence phenomena. This document-level information is typically ignored by most standard state-of-the-art Machine Translation (MT) systems, which translate texts by processing each of their sentences in isolation. Translating each sentence without looking at its surrounding context can lead to certain types of translation errors, such as inconsistent translations for the same word or for elements in a coreference chain. We introduce methods to attend to document-level phenomena in order to avoid those errors and thus reach translations that properly convey the original meaning. Our research starts by identifying the translation errors related to such document-level phenomena that commonly appear in the output of state-of-the-art Statistical Machine Translation (SMT) systems. For two of those errors, namely inconsistent word translations and gender and number disagreements among words, we design simple yet effective post-processing techniques to tackle and correct them. Since these techniques are applied a posteriori, they can access the whole source and target documents, and hence are able to perform a global analysis and improve the coherence and consistency of the translation. Nevertheless, since such a two-pass decoding strategy is not optimal in terms of efficiency, we also focus on introducing context-awareness during the decoding process itself. To this end, we enhance a document-oriented SMT system with distributional semantic information in the form of bilingual and monolingual word embeddings. In particular, these embeddings are used as Semantic Space Language Models (SSLMs) and as a novel feature function. The goal of the former is to promote word translations that are semantically close to their preceding context, whereas the latter promotes the lexical choice that is closest to its surrounding context for those words that have varying translations throughout the document. In both cases, the context extends beyond sentence boundaries. Recently, the MT community has transitioned to the neural paradigm. The final step of our research proposes an extension of the decoding process for a Neural Machine Translation (NMT) framework, independent of the model architecture, by shallow-fusing the information from a neural translation model with the context semantics enclosed in the previously studied SSLMs. The aim of this modification is to introduce the benefits of context information into the decoding process of NMT systems as well, and to obtain an additional validation of the techniques we explored. The automatic evaluation of our approaches does not reflect significant variations. This is expected, since most automatic metrics are neither context- nor semantic-aware, and because the phenomena we tackle are rare, leading to few modifications with respect to the baseline translations. On the other hand, manual evaluations demonstrate the positive impact of our approaches, since human evaluators tend to prefer the translations produced by our document-aware systems. Therefore, the changes introduced by our enhanced systems are important, since they relate to how humans perceive translation quality for long texts.
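    The context-scoring idea behind the Semantic Space Language Models can be sketched as follows: rank a word's candidate translations by their embedding similarity to the preceding document context. The vectors and the "banco" example below are toy data invented for illustration, not the thesis's trained embeddings or feature function.

    ```python
    import numpy as np

    def context_score(candidate_vecs, context_vecs):
        """Score translation candidates by cosine similarity to the centroid
        of the preceding document context (a rough sketch of the SSLM idea)."""
        centroid = np.mean(context_vecs, axis=0)
        centroid /= np.linalg.norm(centroid)
        scores = {}
        for word, vec in candidate_vecs.items():
            scores[word] = float(vec @ centroid / np.linalg.norm(vec))
        return scores

    # Preceding context about finance; two candidate translations of "banco".
    context = np.array([[0.9, 0.1],    # vector for "money"  (toy)
                        [0.8, 0.2]])   # vector for "account" (toy)
    candidates = {"bank":  np.array([0.95, 0.05]),   # financial sense
                  "bench": np.array([0.10, 0.90])}   # furniture sense
    scores = context_score(candidates, context)
    # "bank" is closer to the financial context, so a context-aware decoder
    # would keep it consistent across the document.
    ```

    Used as an extra feature or shallow-fused score at decoding time, this kind of signal nudges the decoder toward the lexical choice that matches the surrounding document rather than the locally most probable one.
    
    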