3 research outputs found

    Data augmentation and subword segmentation for spell-checking in amazonian languages

    In Peru, 48 indigenous languages have been identified, according to the official Database of Indigenous or Native Peoples (BDPI). These languages are of oral tradition [BDPI, 2020], so there was no official way of teaching them. The Summer Institute of Linguistics (Instituto Lingüístico de Verano, ILV) collected and documented several native languages [Faust, 1973] as a first attempt at producing a formal document for teaching an indigenous language. Later, the Peruvian government, through its social-inclusion strategy "Incluir para crecer" ("Include to grow"), created an official guide for teaching indigenous languages in an effort to normalize their use [Jara Males, Gonzales Acer, 2015]. As noted in [Forcada, 2016], language technologies foster normalization, growth of written literature, standardization, and greater visibility. In the case of Peru, there have been initiatives such as morphological analyzers [Pereira-Noriega et al., 2017] and spell checkers [Alva, Oncevay, 2017] aimed at computationally low-resource indigenous languages, intended to support revitalization efforts, indigenous education, and language documentation [Zariquiey et al., 2019]. Focusing on Amazonian languages, one project used neural networks to develop a spell checker for indigenous languages, with good results in terms of precision [Lara, 2020]. In that work, because little data was available, synthetic data was generated with a random method; when evaluated with the CharacTER [Wang et al., 2016] and BLEU [Papineni et al., 2002] metrics, the results were quite low. Moreover, since Amazonian languages are morphologically rich and have extensive vocabularies, it is difficult to represent out-of-vocabulary words, so subword units are recommended as a middle ground [Wu, Zhao, 2018].
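The subword middle ground mentioned above can be made concrete with byte pair encoding. The sketch below is not the thesis's implementation, only the standard iterative procedure: count adjacent symbol pairs over a character-split word-frequency table and repeatedly merge the most frequent pair.

```python
from collections import Counter

def merge_word(symbols, pair):
    """Merge every adjacent occurrence of `pair` inside one symbol sequence."""
    out, i = [], 0
    while i < len(symbols):
        if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == pair:
            out.append(symbols[i] + symbols[i + 1])  # fuse the two symbols
            i += 2
        else:
            out.append(symbols[i])
            i += 1
    return out

def learn_bpe_merges(vocab, num_merges):
    """Learn BPE merge operations from a {"c h a r s": frequency} vocabulary.

    Each iteration counts adjacent symbol pairs weighted by word frequency
    and merges the most frequent one, yielding progressively larger subwords.
    """
    vocab = {tuple(w.split()): f for w, f in vocab.items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        vocab = {tuple(merge_word(list(s), best)): f for s, f in vocab.items()}
    return merges
```

For example, on the classic toy vocabulary `{"l o w": 5, "l o w e r": 2, "n e w e s t": 6, "w i d e s t": 3}`, the first two learned merges are `("e", "s")` and `("es", "t")`, producing the subword `est`.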
This project develops several data-generation methods, other than the random one, that are more robust because they model errors closer to those found in practice. In addition, to reduce computational cost while retaining the ability to handle an open vocabulary, neural networks are trained whose inputs are subword units such as syllables and segments produced by byte pair encoding (BPE). Finally, we conclude from the experiments that the proposed methods and segmentation yielded improvements, and that more computational resources are now available for our Amazonian languages.
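As a hedged illustration of what error generation "closer to reality" might look like (the thesis's actual methods are not specified here), the sketch below injects one synthetic misspelling per word using phonetic confusions, deletions, and adjacent transpositions; the confusion table is invented for illustration, not taken from the thesis.

```python
import random

# Hypothetical confusion pairs for illustration only; a real system would
# derive these from attested misspellings in the target language.
CONFUSIONS = {"b": "v", "v": "b", "s": "z", "z": "s", "k": "c", "c": "k"}

def inject_error(word, rng=random):
    """Generate one synthetic misspelling that mimics plausible errors
    (phonetic confusion, deletion, or adjacent transposition) rather than
    uniformly random character noise."""
    if len(word) < 2:
        return word
    i = rng.randrange(len(word))
    op = rng.choice(["confuse", "delete", "swap"])
    if op == "confuse" and word[i] in CONFUSIONS:
        return word[:i] + CONFUSIONS[word[i]] + word[i + 1:]
    if op == "delete":
        return word[:i] + word[i + 1:]
    # Fallback: transpose two adjacent characters.
    j = min(i, len(word) - 2)
    return word[:j] + word[j + 1] + word[j] + word[j + 2:]
```

Pairing each clean word with such corrupted variants yields (noisy, correct) training pairs for a sequence-to-sequence spell checker.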

    A hybrid neural machine translation technique for translating low resource languages

    © Springer International Publishing AG, part of Springer Nature 2018. Neural machine translation (NMT) has produced very promising results on various high-resource languages that have sizeable parallel datasets. However, low-resource languages that lack sufficient parallel datasets face challenges in the automated translation field. The main part of NMT is a recurrent neural network, which can work with sequential data at the word and sentence levels, provided that sequences are not too long. Due to the large number of word and sequence combinations, a parallel dataset is required, which unfortunately is not always available, particularly for low-resource languages. Therefore, we adapted a character-level neural translation model based on a combined structure of a recurrent neural network and a convolutional neural network. This model was trained on the IWSLT 2016 Arabic–English and the IWSLT 2015 English–Vietnamese datasets. The model produced encouraging results, particularly on the Arabic datasets, since Arabic is considered a morphologically rich language.

    Comparative Evaluation of Translation Memory (TM) and Machine Translation (MT) Systems in Translation between Arabic and English

    In general, advances in translation technology tools have enhanced translation quality significantly. Unfortunately, however, this is not the case for all language pairs. A concern arises when the users of translation tools want to work between different language families such as Arabic and English. The main problems facing Arabic–English translation tools lie in Arabic's characteristic free word order, richness of word inflection – including orthographic ambiguity – and optionality of diacritics, in addition to a lack of data resources. The aim of this study is to compare the performance of translation memory (TM) and machine translation (MT) systems in translating between Arabic and English. The research evaluates the two systems based on specific criteria relating to needs and expected results. The first part of the thesis evaluates the performance of a set of well-known TM systems when retrieving a segment of text that includes an Arabic linguistic feature. As it is widely known that TM matching metrics are based solely on edit-distance string measurements, it was expected that the aforementioned issues would lead to a low match percentage. The second part of the thesis evaluates the translation quality of multiple MT systems that use the mainstream neural machine translation (NMT) approach. Due to a lack of training data resources and Arabic's rich morphology, it was anticipated that Arabic features would reduce the translation quality of this corpus-based approach. The systems' output was evaluated using both automatic evaluation metrics, including BLEU and hLEPOR, and the TAUS human quality ranking criteria for adequacy and fluency. The study employed a black-box testing methodology to experimentally examine the TM systems through a test-suite instrument and to translate Arabic–English sentences to collect the MT systems' output.
A translation threshold was used to evaluate the fuzzy matches of TM systems, while an online survey was used to collect participants' responses to the quality of the MT systems' output. The experiments' input for both systems was extracted from Arabic–English corpora and examined by means of quantitative data analysis. The results show that, when retrieving translations, the current TM matching metrics are unable to recognise Arabic features and score them appropriately. In terms of automatic translation, MT produced good results for adequacy, especially when translating from Arabic to English, but the systems' output appeared to need post-editing for fluency. Moreover, when retrieving from Arabic, it was found that short sentences were handled much better by MT than by TM. The findings may be given as recommendations to software developers.
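The edit-distance matching the TM evaluation relies on can be sketched as a character-level Levenshtein distance converted into the familiar fuzzy-match percentage. Individual TM products use their own proprietary variants; this is only the common textbook form, shown to make the "low match percentage" concern concrete.

```python
def levenshtein(a, b):
    """Minimum number of insertions, deletions, and substitutions
    needed to turn string `a` into string `b` (dynamic programming,
    one row of the DP table kept at a time)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def fuzzy_match_score(segment, tm_entry):
    """Fuzzy-match percentage: 100% for identical strings, decreasing
    with edit distance relative to the longer string."""
    dist = levenshtein(segment, tm_entry)
    longest = max(len(segment), len(tm_entry), 1)
    return 100.0 * (1 - dist / longest)
```

Because a single Arabic inflectional affix or diacritic change already costs several character edits, even semantically near-identical segments can fall below a typical 70–75% retrieval threshold under this kind of metric.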