172 research outputs found

    Recent advances in Apertium, a free/open-source rule-based machine translation platform for low-resource languages

    This paper presents an overview of Apertium, a free and open-source rule-based machine translation platform. Translation in Apertium happens through a pipeline of modular tools, and the platform continues to be improved as more language pairs are added. Several advances have been implemented since the last publication, including some new optional modules: a module that allows rules to process recursive structures at the structural transfer stage, a module that deals with contiguous and discontiguous multi-word expressions, and a module that resolves anaphora to aid translation. Also highlighted is the hybridisation of Apertium through statistical modules that augment the pipeline, and statistical methods that augment existing modules. This includes morphological disambiguation, weighted structural transfer, and lexical selection modules that learn from limited data. The paper also discusses how a platform like Apertium can be a critical part of access to language technology for so-called low-resource languages, which might be ignored or deemed unapproachable by popular corpus-based translation technologies. Finally, the paper presents some of the released and unreleased language pairs, concluding with a brief look at some supplementary Apertium tools that prove valuable to users as well as language developers. All Apertium-related code, including language data, is free/open-source and available at https://github.com/apertium
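
    The pipeline design described above can be sketched in miniature. The stage names mirror Apertium's real modules (lt-proc for analysis and generation, apertium-tagger for disambiguation, apertium-transfer for structural transfer), but the functions below are hypothetical placeholders standing in for those tools, not Apertium's API:

```python
# Minimal sketch of Apertium's modular pipeline: each stage is a filter
# that consumes and produces a text stream, Unix-pipe style. The stage
# bodies are hypothetical placeholders, not calls into Apertium itself.
from functools import reduce

def analyse(stream):       # morphological analysis (lt-proc in Apertium)
    return stream + " | analysed"

def disambiguate(stream):  # morphological disambiguation (apertium-tagger)
    return stream + " | disambiguated"

def transfer(stream):      # structural transfer (apertium-transfer)
    return stream + " | transferred"

def generate(stream):      # morphological generation (lt-proc -g)
    return stream + " | generated"

PIPELINE = [analyse, disambiguate, transfer, generate]

def translate(text, stages=PIPELINE):
    """Pass the text through every stage in order."""
    return reduce(lambda stream, stage: stage(stream), stages, text)

print(translate("un ejemplo"))
```

    Because every module reads and writes the same stream format, optional modules such as the anaphora resolver or the multi-word module can be spliced into the stage list without changing the rest of the pipeline.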

Sääntöpohjaista kieliteknologiaa Afrikan kielille (Rule-based language technology for African languages)

Africa is a language area where rule-based language technology could strongly influence the status of local languages. Whereas statistical and neural approaches require large masses of text to train a language model, rule-based methods can be applied even to languages with no traditional language resources. Developing language technology for minor languages would not only provide useful tools for their speakers; it would also raise the status of those languages and thus help keep them alive. The chapter surveys the current situation in Africa, particularly from the viewpoint of rule-based language technology.

    Discontinuous grammar as a foreign language

In order to achieve deep natural language understanding, syntactic constituent parsing is a vital step, highly demanded by many artificial intelligence systems to process both text and speech. One of the most recent proposals is to use standard sequence-to-sequence models to perform constituent parsing as a machine translation task, instead of applying task-specific parsers. While they show competitive performance, these text-to-parse transducers still lag behind classic techniques in terms of accuracy, coverage and speed. To close the gap, we extend the sequence-to-sequence framework for constituent parsing, not only by providing a more powerful neural architecture that improves performance, but also by enlarging its coverage to handle the most complex syntactic phenomena: discontinuous structures. To that end, we design several novel linearizations that can fully produce discontinuities and, for the first time, test a sequence-to-sequence model on the main discontinuous benchmarks, obtaining competitive results on par with task-specific discontinuous constituent parsers and achieving state-of-the-art scores on the (discontinuous) English Penn Treebank.

    Funding: European Research Council under the European Union's Horizon 2020 research and innovation programme (FASTPARSE, grant agreement No 714150) and the Horizon Europe research and innovation programme (SALSA, grant agreement No 101100615); ERDF/MICINN-AEI (SCANNER-UDC, PID2020-113230RB-C21); Xunta de Galicia (ED431C 2020/11); and Centro de Investigación de Galicia "CITIC", funded by Xunta de Galicia and the European Union (ERDF - Galicia 2014-2020 Program) through grant ED431G 2019/01. Funding for open access charge: Universidade da Coruña/CISUG.
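
    The parsing-as-translation idea rests on linearizing each tree into a token sequence that a sequence-to-sequence model can emit. The toy linearizer below handles only continuous trees and illustrates the general idea, not the paper's own linearization scheme:

```python
# Toy linearization of a constituent tree into a bracket-token sequence,
# the kind of target string a sequence-to-sequence parser is trained to
# produce (the paper's linearizations are richer and also encode
# discontinuous constituents).

def linearize(tree):
    """Turn a nested (label, children...) tuple into a token list."""
    if isinstance(tree, str):            # leaf: a word
        return [tree]
    label, *children = tree
    tokens = [f"({label}"]
    for child in children:
        tokens.extend(linearize(child))
    tokens.append(")")
    return tokens

# "The cat sleeps" as (S (NP The cat) (VP sleeps))
tree = ("S", ("NP", "The", "cat"), ("VP", "sleeps"))
print(" ".join(linearize(tree)))
# (S (NP The cat ) (VP sleeps ) )
```

    Encoding a discontinuous constituent additionally requires recording where displaced words attach, for instance with index tokens; designing sequences that can express this fully is the core of the paper's contribution.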

    Japanese word prediction

This report deals with the implementation of a Japanese word prediction engine written by the author. As this type of software does not seem to exist for Japanese at the time of writing, it could prove useful in Japanese augmentative and alternative communication (AAC) as a software tool for improving typing speed and reducing the number of keystrokes needed to produce text. Word prediction, in contrast to the word-completion software commonly found in mobile phones and word-processor IntelliSense engines, is a technique for suggesting a follow-up word after a word has just been completed. This is usually done by presenting the user with a list of the most probable words, sorted by commonality (general and user-specific frequency). Combined with good word completion software and a responsive user interface, word prediction is one of the most powerful assistive tools available to movement-impaired users today. The main goals of the thesis are to: 1. answer as many of the questions raised by the language differences as possible; 2. investigate further avenues of research on the subject; 3. build a functional word prediction prototype for Japanese. All project code is in the public domain and is currently hosted at: http://www.mediafire.com/?rrhqtqsgp6ei6m
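
    The prediction mechanism described above can be sketched with a simple bigram model: after each completed word, suggest the words most often seen to follow it. This is a generic illustration of the technique as defined in the report, not the author's engine; real Japanese input would also need tokenization first, since Japanese is written without spaces:

```python
# Minimal bigram word predictor: suggest follow-up words ranked by how
# often they were seen after the word just completed. A toy sketch of
# the technique described in the report, not the author's engine.
from collections import Counter, defaultdict

class WordPredictor:
    def __init__(self):
        self.followers = defaultdict(Counter)

    def train(self, tokens):
        for prev, nxt in zip(tokens, tokens[1:]):
            self.followers[prev][nxt] += 1

    def predict(self, word, n=3):
        """Return up to n most common follow-up words for `word`."""
        return [w for w, _ in self.followers[word].most_common(n)]

p = WordPredictor()
p.train("今日 は 天気 が いい です ね 今日 は 寒い です".split())
print(p.predict("今日"))   # ['は']
```

    The "general and user-specific frequency" ranking mentioned above could be obtained by keeping a second, per-user counter and blending the two counts when sorting suggestions.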

    Quality in human post-editing of machine-translated texts : error annotation and linguistic specifications for tackling register errors

During the last decade, machine translation has played an important role in the translation market and has become an essential tool for speeding up the translation process and reducing the time and costs needed. Nevertheless, the quality of the results is not completely satisfactory, as it varies considerably depending on numerous factors. Given this, it is necessary to combine MT with human intervention, by post-editing the machine-translated texts, in order to reach high-quality translations. This work describes the MT process used at Unbabel, a Portuguese start-up that combines MT with post-editing by online editors. The main objective of the study is to contribute to improving the quality of the translated text by analyzing annotated translations from English into Italian, in order to define linguistic specifications that improve the tools used at the start-up to aid human editors and annotators. The guidelines provided to annotators to steer their editing process were also analyzed, a task that contributed to improving inter-annotator agreement and thus the reliability of the annotated data. Accomplishing these goals allowed for the identification and categorization of the most frequent errors in translated texts, namely errors whose resolution is bound to significantly improve the efficacy and quality of the translation. The data collected identified register as the most frequent error category and also the one with the most impact on translation quality, and for these reasons this category is analyzed in more detail throughout the work. From the analysis of errors in this category, it was possible to define and implement a set of rules in Smartcheck, a tool used at Unbabel to automatically detect errors in the target text produced by the MT system, so as to guarantee higher quality of the translated texts after post-editing.

    Over the last few decades, machine translation has been an important research area in which researchers have steadily improved their results, even achieving positive ones. Today, machine translation plays a very important role in the translation market, owing to the ever-growing number of texts to translate, short deadlines, and constant pressure to reduce costs. Although machine translation is used ever more frequently, its results are variable and translation quality is not always satisfactory, depending on the paradigm of the chosen MT system, the domain of the text to be translated, and the syntax and lexicon of the source text. More specifically, the MT systems developed to date can be divided into systems based on linguistic knowledge, data-driven systems, and hybrid systems that combine different paradigms. Recently, the neural paradigm has been very widely adopted, even calling the continued existence of the other paradigms into question. Since the quality of MT output depends on different factors, improving it requires human intervention, through pre-editing or post-editing.
    This work builds on the activities carried out during a curricular internship at the start-up Unbabel, focusing specifically on the analysis of the machine translation process implemented at Unbabel, with a view to helping improve the quality of the translations obtained, in particular translations from English into Italian. Unbabel is a Portuguese start-up that offers near real-time translation services, combining machine translation with a community of editors who post-edit its output. The corpus used in this work consists of English-to-Italian machine translations of customer-support e-mails, post-edited by human editors. The annotation process aims to identify and categorize errors in machine-translated texts, which at Unbabel is done by human annotators. We analyzed the annotation process, the tools used to analyze and annotate texts, the system that computes the quality metric, and the guidelines the annotator must follow during revision. This made it possible to identify and categorize the most frequent errors in our corpus. A further objective of this work is to analyze instances of the most frequent error types, in order to understand the causes of their frequency and to establish generalizations from which rules can be formulated and implemented in the tool used at Unbabel, so as to support human editors and annotators with automatic notifications. In particular, our work focuses on errors in the register category, the most frequent in the annotated texts considered. More specifically, our study defines a set of rules to extend the coverage of Smartcheck, the tool used at Unbabel to automatically detect errors in translated texts, to phenomena related to the expression of register, guaranteeing better results after post-editing. The work is divided into eight chapters. Chapter 1 presents the object of study, the methodology used, and the organization of this report. Chapter 2 gives a theoretical overview of machine translation, underlining the characteristics and purposes of these systems, with a brief history of the field from its beginnings to the present day and a survey of the different MT paradigms. Chapter 3 presents the host institution of the internship on which this work is based, the Portuguese start-up Unbabel; it explains the translation process used at the company and its phases, describes the human post-editing and annotation processes in detail, and gives some information about the tools used at the company to support the translation process, Smartcheck and Turbo Tagger. Chapter 4 presents the annotation process in place at Unbabel, how it works, and the guidelines the annotator must follow, also describing some aspects that could be improved. Chapter 5 discusses inter-annotator agreement, describing its importance for measuring consistency between annotators and, consequently, the reliability of using annotation data to measure the effectiveness and quality of MT systems.
    Chapter 6 identifies the most frequent errors by error category and singles out register, the most frequent category and one with evident repercussions on the fluency and quality of a translation, since it represents the client's voice and image. It describes a set of rules that can be implemented in Smartcheck to reduce the frequency of these errors and increase the quality of the target texts, and it verifies that the implemented rules work correctly, presenting illustrative examples of Smartcheck's performance, in its test version, on relevant data. The final chapter presents the conclusions and the future work envisaged on the basis of this project. In conclusion, this work aims to contribute to improving the quality of the texts translated at the host institution; concretely, it makes a tangible contribution to increasing the precision of the human annotation process and to extending the coverage of the tools that support the human editors and annotators at the start-up Unbabel.
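
    The kind of register rule the work adds to Smartcheck can be illustrated as follows. Smartcheck's actual rule format is not public, so this is a hypothetical regex-based check for one typical English-to-Italian register error: informal second-person (tu) forms appearing where customer-support text calls for formal address (Lei):

```python
# Hypothetical sketch of a register check in the spirit of the rules
# described above; Smartcheck's real rule format is not public. Flags
# informal Italian second-person forms where formal address (Lei) is
# expected, a frequent register error in English-to-Italian MT output.
import re

INFORMAL_PATTERNS = [
    (r"\btu\b", "informal subject pronoun 'tu'"),
    (r"\bpuoi\b", "informal verb form 'puoi' (expected 'può')"),
    (r"\b(il tuo|la tua)\b", "informal possessive (expected 'il Suo'/'la Sua')"),
]

def check_register(target_text):
    """Return (span, message) pairs for suspected register errors."""
    issues = []
    for pattern, message in INFORMAL_PATTERNS:
        for m in re.finditer(pattern, target_text, re.IGNORECASE):
            issues.append((m.span(), message))
    return issues

for span, msg in check_register("Puoi controllare il tuo ordine qui."):
    print(span, msg)
```

    A notification driven by such a rule is only a hint for the human editor: forms like "tu" can be legitimate when the client's tone of voice is deliberately informal, which is why register is tied to the client's voice and image above.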

Disambiguoiva morfologinen jäsennys probabilistisilla sekvenssimalleilla (Disambiguating morphological analysis with probabilistic sequence models)

A morphological tagger is a computer program that provides complete morphological descriptions of sentences. Morphological taggers find applications in many NLP fields; for example, they can be used as a pre-processing step for syntactic parsers, in information retrieval and in machine translation. The task of morphological tagging is closely related to POS tagging, but morphological taggers provide more fine-grained morphological information than POS taggers. Therefore, they are often applied to morphologically complex languages, which extensively utilize inflection, derivation and compounding for encoding structural and semantic information. This thesis presents work on data-driven morphological tagging for Finnish and other morphologically complex languages. There is very little previous work on data-driven morphological tagging for Finnish, owing to the lack of freely available, manually prepared, morphologically tagged corpora. The work presented in this thesis is made possible by the recently published Finnish dependency treebanks FinnTreeBank and Turku Dependency Treebank. Additionally, the Finnish open-source morphological analyzer OMorFi is extensively utilized in the experiments presented in the thesis. The thesis presents methods for improving tagging accuracy, estimation speed and tagging speed in the presence of the large, structured morphological label sets that are typical of morphologically complex languages. More specifically, it presents a novel formulation of generative morphological taggers using weighted finite-state machines and applies finite-state taggers to context-sensitive spelling correction of Finnish. The thesis also explores discriminative morphological tagging. It presents structured sub-label dependencies that can be used to improve tagging accuracy. Additionally, the thesis presents a cascaded variant of the averaged perceptron tagger: in the presence of large label sets, the cascaded design substantially reduces estimation time compared to a standard perceptron tagger. Moreover, the thesis explores pruning strategies for perceptron taggers. Finally, the thesis presents the FinnPos toolkit for morphological tagging. FinnPos is an open-source, state-of-the-art averaged perceptron tagger implemented by the author.

    A disambiguating morphological analyzer is a program that produces unambiguous morphological descriptions for the words of a sentence. Such analyzers can be exploited in many areas of language processing, for example as a pre-processing step for a syntactic parser or a machine translation system. As a language-technology task, disambiguating morphological analysis resembles traditional part-of-speech tagging, but it produces more fine-grained morphological information than a traditional POS tagger. For this reason, disambiguating morphological analyzers are mainly used for morphologically complex languages, such as Finnish, which make heavy use of word-formation devices such as inflection, derivation and compounding. The research presented in this dissertation concerns machine-learning methods for the disambiguating morphological analysis of morphologically rich languages. Although the disambiguating morphological analysis of Finnish has been studied before (e.g. within the Constraint Grammar formalism), machine-learning methods have hardly been applied, because the high-quality morphologically annotated corpora needed for training an analyzer have not been openly available.
    The research presented in this dissertation draws on the recently published Finnish dependency treebanks FinnTreeBank and Turku Dependency Treebank, as well as on the open Finnish morphological analyzer OMorFi. The dissertation introduces methods for improving tagging accuracy and for increasing training and tagging speed. It presents a new way of building generative taggers using weighted finite-state machines and applies such taggers to context-sensitive spelling correction of Finnish. It also treats discriminative tagging models, showing how parts of morphological analyses can be exploited to improve tagging accuracy, presenting a cascaded model that shortens training time considerably, and describing ways of shrinking tagger models. Finally, it presents FinnPos, an open-source tool, implemented by the author, for training disambiguating morphological taggers.
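
    The structured sub-label dependencies mentioned above can be illustrated with a toy scorer. This is a simplification of the idea, not FinnPos code: each full morphological label is a tuple of sub-labels (e.g. POS, number, case), and every feature is weighted per sub-label, so rare full labels share statistics with labels they partially agree with:

```python
# Toy perceptron scoring with sub-label features (a simplification of
# the thesis's structured sub-label dependencies, not FinnPos code).
# A full label such as Noun+Sg+Nom is a tuple of sub-labels; features
# fire once per sub-label, so statistics are shared across full labels.
from collections import defaultdict

weights = defaultdict(float)            # (feature, sub_label) -> weight

def score(features, label):
    """Score a full label as the sum of its per-sub-label weights."""
    return sum(weights[(f, sub)] for f in features for sub in label)

def perceptron_update(features, gold, predicted):
    """Standard perceptron update, applied sub-label by sub-label."""
    if gold == predicted:
        return
    for f in features:
        for sub in gold:
            weights[(f, sub)] += 1.0
        for sub in predicted:
            weights[(f, sub)] -= 1.0

labels = [("Noun", "Sg", "Nom"), ("Verb", "Sg", "Prs")]
feats = ["word=kissa", "suffix=a"]
perceptron_update(feats, gold=labels[0], predicted=labels[1])
print(max(labels, key=lambda l: score(feats, l)))  # ('Noun', 'Sg', 'Nom')
```

    A cascaded design can then first narrow the candidate set with a coarse model (say, POS only) before scoring full labels, which is one way training time shrinks when label sets are large.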

Statistical language models for alternative sequence selection


    Predicting Linguistic Structure with Incomplete and Cross-Lingual Supervision

Contemporary approaches to natural language processing are predominantly based on statistical machine learning from large amounts of text that have been manually annotated with the linguistic structure of interest. However, such complete supervision is currently only available for the world's major languages, in a limited number of domains and for a limited range of tasks. As an alternative, this dissertation considers methods for linguistic structure prediction that can make use of incomplete and cross-lingual supervision, with the prospect of making linguistic processing tools more widely available at a lower cost. An overarching theme of this work is the use of structured discriminative latent variable models for learning with indirect and ambiguous supervision; as instantiated, these models admit rich model features while retaining efficient learning and inference properties. The first contribution to this end is a latent-variable model for fine-grained sentiment analysis with coarse-grained indirect supervision. The second is a model for cross-lingual word-cluster induction and its application to cross-lingual model transfer. The third is a method for adapting multi-source discriminative cross-lingual transfer models to target languages, by means of typologically informed selective parameter sharing. The fourth is an ambiguity-aware self- and ensemble-training algorithm, applied to target-language adaptation and relexicalization of delexicalized cross-lingual transfer parsers. The fifth is a set of sequence-labeling models that combine constraints at the level of tokens and types, and an instantiation of these models for part-of-speech tagging with incomplete cross-lingual and crowdsourced supervision. In addition to these contributions, comprehensive overviews are provided of structured prediction with no or incomplete supervision, as well as of learning in the multilingual and cross-lingual settings. Through careful empirical evaluation, it is established that the proposed methods can be used to create substantially more accurate tools for linguistic processing, compared both to unsupervised methods and to recently proposed cross-lingual methods. The empirical support for this claim is particularly strong in the latter case; our models for syntactic dependency parsing and part-of-speech tagging achieve the best published results to date for a wide range of target languages, in the setting where no annotated training data is available in the target language.
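
    The token- and type-level constraints of the fifth contribution can be pictured as pruning the tagger's label lattice. The sketch below is a schematic reading of that setting, not the dissertation's models: a crowdsourced tag dictionary supplies type constraints, and cross-lingually projected tags supply noisier token constraints:

```python
# Schematic sketch of combining type constraints (a tag dictionary per
# word type) with token constraints (cross-lingually projected tags),
# as an illustration of the setting, not the dissertation's models.
ALL_TAGS = {"NOUN", "VERB", "ADJ", "ADP", "DET"}

# Type constraints, e.g. from crowdsourced dictionaries.
tag_dict = {"walk": {"NOUN", "VERB"}, "the": {"DET"}}

def allowed_tags(word, projected_tag=None):
    """Prune the candidate tags for one token.

    Falls back gracefully: unknown words keep all tags, and a projected
    tag that clashes with the dictionary is treated as noise."""
    candidates = tag_dict.get(word, ALL_TAGS)
    if projected_tag in candidates:
        return {projected_tag}       # both supervision sources agree
    return candidates

print(allowed_tags("walk", "VERB"))  # {'VERB'}
print(allowed_tags("walk", "ADJ"))   # {'NOUN', 'VERB'}
print(allowed_tags("unknown"))       # all five tags
```

    A sequence model trained over the pruned lattices then only has to disambiguate among the remaining candidates, which is what makes such incomplete supervision usable.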

    Ensemble Morphosyntactic Analyser for Classical Arabic

Classical Arabic (CA) is an influential language for Muslim lives around the world. It is the language of the two sources of Islamic law: the Quran and the Sunnah, the collection of traditions and sayings attributed to the prophet Mohammed. However, Classical Arabic in general, and the Sunnah in particular, is underexplored and under-resourced in the field of computational linguistics. This study examines possible directions for adapting existing tools, specifically morphological analysers, designed for Modern Standard Arabic (MSA) to Classical Arabic. Morphological analysers for CA are limited, as are the data for evaluating them. In this study, we adapt existing analysers and create a validation dataset from the Sunnah books. Inspired by the advances in deep learning and the promising results of ensemble methods, we developed a systematic method for transferring morphological analysis that is capable of handling different labelling systems and various sequence lengths. We handpicked the best four open-access MSA morphological analysers, and the data they generate are evaluated before and after adaptation against the existing Quranic Corpus and the Sunnah Arabic Corpus. The findings are as follows. First, it is feasible to analyse under-resourced languages using existing resources for a comparable language, given a small but sufficient set of annotated text. Second, the analysers typically make different errors, and this can be exploited. Third, explicit alignment of sequences and mapping of labels are not necessary to achieve comparable accuracies, given a sufficiently large training dataset. Adapting existing tools is easier than creating tools from scratch; the resulting quality depends on the size of the training data and on the number and quality of the input taggers. A pipeline architecture performs less well than an end-to-end neural network architecture, due to error propagation and limitations on the output format. A valuable tool and dataset for annotating Classical Arabic are made freely available.
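
    The benefit of ensembling analysers with different error profiles can be seen in a minimal voting sketch. The study's actual system learns the combination with a neural model and copes with differing labelling systems and sequence lengths; the toy below assumes the outputs are already aligned token by token:

```python
# Minimal sketch of combining morphological analysers by majority vote
# per token. The study's actual ensemble is learned (and handles
# mismatched label sets and sequence lengths); this toy assumes the
# analysers' outputs are aligned token by token.
from collections import Counter

def ensemble_tags(analyser_outputs):
    """analyser_outputs: one tag sequence per analyser, same length."""
    voted = []
    for token_tags in zip(*analyser_outputs):   # all tags for one token
        tag, _ = Counter(token_tags).most_common(1)[0]
        voted.append(tag)
    return voted

outputs = [
    ["NOUN", "VERB", "PART"],   # analyser A
    ["NOUN", "NOUN", "PART"],   # analyser B
    ["NOUN", "VERB", "PREP"],   # analyser C
]
print(ensemble_tags(outputs))   # ['NOUN', 'VERB', 'PART']
```

    The second finding above, that the analysers typically make different errors, is precisely the condition under which such a vote outperforms any single analyser.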