
    To Normalize, or Not to Normalize: The Impact of Normalization on Part-of-Speech Tagging

    Does normalization help Part-of-Speech (POS) tagging accuracy on noisy, non-canonical data? To the best of our knowledge, little is known about the actual impact of normalization in a real-world scenario, where gold error detection is not available. We investigate the effect of automatic normalization on POS tagging of tweets. We also compare normalization to strategies that leverage large amounts of unlabeled data kept in its raw form. Our results show that normalization helps, but does not consistently add improvements beyond word embedding layer initialization alone. The latter approach yields a tagging model that is competitive with a state-of-the-art Twitter tagger.
    Comment: In WNUT 2017.
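    The normalization strategy the abstract compares against can be pictured as a simple preprocessing step: noisy tokens are mapped to canonical forms before tagging. A minimal sketch (the lexicon and tokens are invented for illustration, not taken from the paper):

    ```python
    # Illustrative sketch of lexical normalization as a preprocessing step
    # before POS tagging. NORM_LEXICON and the example tweet are hypothetical.

    # A tiny normalization lexicon mapping noisy tweet tokens to canonical forms.
    NORM_LEXICON = {
        "u": "you",
        "gr8": "great",
        "2morrow": "tomorrow",
    }

    def normalize(tokens):
        """Replace each token with its canonical form when one is known."""
        return [NORM_LEXICON.get(t.lower(), t) for t in tokens]

    tweet = ["u", "look", "gr8", "2morrow", "!"]
    print(normalize(tweet))  # ['you', 'look', 'great', 'tomorrow', '!']
    ```

    The alternative the paper favors leaves tokens raw and instead initializes the tagger's embedding layer from vectors trained on large amounts of unlabeled tweets.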

    Natural language processing for similar languages, varieties, and dialects: A survey

    There has been a lot of recent interest in the natural language processing (NLP) community in the computational processing of language varieties and dialects, with the aim of improving the performance of applications such as machine translation, speech recognition, and dialogue systems. Here, we attempt to survey this growing field of research, with a focus on computational methods for processing similar languages, varieties, and dialects. In particular, we discuss the most important challenges when dealing with diatopic language variation, and we present some of the available datasets, the process of data collection, and the most common data collection strategies used to compile datasets for similar languages, varieties, and dialects. We further present a number of studies on computational methods developed and/or adapted for preprocessing, normalization, part-of-speech tagging, and parsing similar languages, language varieties, and dialects. Finally, we discuss relevant applications such as language and dialect identification and machine translation for closely related languages, language varieties, and dialects.
    Non peer reviewed.

    PoS Tagging, Lemmatization and Dependency Parsing of West Frisian

    We present a lemmatizer/PoS tagger/dependency parser for West Frisian using a corpus of 44,714 words in 3,126 sentences that were annotated according to the guidelines of Universal Dependencies version 2. PoS tags were assigned to words by using a Dutch PoS tagger that was applied to a Dutch word-by-word translation, or to sentences of a Dutch parallel text. Best results were obtained when using word-by-word translations that were created with the previous version of the Frisian translation program Oersetter. Morphological and syntactic annotations were generated on the basis of a Dutch word-by-word translation as well. The performance of the lemmatizer/tagger/annotator trained with default parameters was compared to its performance with the parameter values that were used for training the LassySmall UD 2.5 corpus. We study the effects of different hyperparameter settings on the accuracy of the annotation pipeline. The Frisian lemmatizer/PoS tagger/dependency parser is released as a web app and as a web service.
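    The annotation-projection idea described above can be sketched in a few lines: translate each Frisian word into Dutch, tag the Dutch side, and copy each tag back onto the aligned Frisian word. The lexicon, the stand-in tagger, and the example sentence below are invented for illustration; the actual pipeline uses a trained Dutch tagger and the Oersetter translations.

    ```python
    # Hypothetical sketch of word-by-word annotation projection.
    # FY_TO_NL, tag_dutch, and the example are toy stand-ins.

    # 1-to-1 Frisian -> Dutch word translations (toy examples).
    FY_TO_NL = {"de": "de", "kat": "kat", "rint": "loopt"}

    # A stand-in for a trained Dutch POS tagger.
    def tag_dutch(tokens):
        NL_TAGS = {"de": "DET", "kat": "NOUN", "loopt": "VERB"}
        return [NL_TAGS.get(t, "X") for t in tokens]

    def project_tags(frisian_tokens):
        """Translate word-by-word, tag the Dutch side, project tags back."""
        dutch = [FY_TO_NL.get(t, t) for t in frisian_tokens]
        return list(zip(frisian_tokens, tag_dutch(dutch)))

    print(project_tags(["de", "kat", "rint"]))
    # [('de', 'DET'), ('kat', 'NOUN'), ('rint', 'VERB')]
    ```

    Because the translation is word-by-word, the alignment is trivially 1-to-1, which is what makes the tag projection straightforward.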

    New Developments in Tagging Pre-modern Orthodox Slavic Texts

    Pre-modern Orthodox Slavic texts pose certain difficulties when it comes to part-of-speech and full morphological tagging. Orthographic and morphological heterogeneity makes it hard to apply resources that rely on normalized data, which is why previous attempts to train part-of-speech (POS) taggers for pre-modern Slavic often apply normalization routines. In the current paper, we further explore the normalization path; at the same time, we use the statistical CRF tagger MarMoT and a newly developed neural network tagger, both of which cope better with variation than previously applied rule-based or statistical taggers. Furthermore, we conduct transfer experiments to apply Modern Russian resources to pre-modern data. Our experiments show that while transfer experiments could not improve tagging performance significantly, state-of-the-art taggers reach between 90% and more than 95% tagging accuracy and thus approach the tagging accuracy of modern standard languages with rich morphology. Remarkably, these results are achieved without the need for normalization, which makes our research of practical relevance to the Paleoslavistic community.
    Peer reviewed.
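    Accuracy figures like the 90-95% quoted above are standardly computed as token-level agreement between predicted and gold tags. A minimal sketch (the tag sequences are invented):

    ```python
    # Minimal sketch of token-level tagging accuracy; data is hypothetical.

    def tagging_accuracy(gold, predicted):
        """Fraction of tokens whose predicted tag matches the gold tag."""
        assert len(gold) == len(predicted)
        correct = sum(g == p for g, p in zip(gold, predicted))
        return correct / len(gold)

    gold = ["NOUN", "VERB", "DET", "NOUN", "PUNCT"]
    pred = ["NOUN", "VERB", "DET", "ADJ", "PUNCT"]
    print(tagging_accuracy(gold, pred))  # 0.8
    ```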

    PoS Tagging, Lemmatization and Dependency Parsing of West Frisian

    We present a lemmatizer/POS tagger/dependency parser for West Frisian using a corpus of 44,714 words in 3,126 sentences that were annotated according to the guidelines of Universal Dependencies version 2. POS tags were assigned to words by using a Dutch POS tagger that was applied to a literal word-by-word translation, or to sentences of a Dutch parallel text. Best results were obtained when using literal translations that were created with the Frisian translation program Oersetter. Morphological and syntactic annotations were generated on the basis of a literal Dutch translation as well. The performance of the lemmatizer/tagger/annotator trained with default parameters was compared to its performance with the parameter values that were used for training the LassySmall UD 2.5 corpus. A significant improvement was found for `lemma'. The Frisian lemmatizer/PoS tagger/dependency parser is released as a web app and as a web service.
    Comment: 6 pages, 2 figures, 6 tables.

    Teaching Specialised Translation: Error-Tagged Text Corpora of Trainee Translators

    This paper describes the method used in teaching specialised translation in the English Language Translation Master’s programme at Masaryk University. After a brief description of the courses, the focus shifts to translation learner corpora (TLC) compiled in the new Hypal interface, which can be integrated in Moodle. Student translations are automatically aligned (with possible adjustments), PoS (part-of-speech) tagged, and manually error-tagged. Personal student reports based on error statistics for individual translations can be easily generated to show students’ progress throughout the term or during their studies in the four-semester programme. Using the data from the pilot run of the new software, the paper concludes with the first results of the research examining a learner corpus of translations from Czech into English.

    Teaching Specialized Translation: Error-Tagged Translation Learner Corpora

    This paper describes the method used in teaching specialised translation in the English Language Translation Master’s programme at Masaryk University. After a brief description of the courses, the focus shifts to translation learner corpora (TLC) compiled in the new Hypal interface, which can be integrated in Moodle. Student translations are automatically aligned (with possible adjustments), PoS (part-of-speech) tagged, and manually error-tagged. Personal student reports based on error statistics for individual translations can be easily generated to show students’ progress throughout the term or during their studies in the four-semester programme. Using the data from the pilot run of the new software, the paper concludes with the first results of the research examining a learner corpus of translations from Czech into English.
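    The per-student error statistics described above amount to counting manually assigned error tags per learner. A hypothetical sketch of such a report (the error categories, student names, and records are invented; this is not Hypal's actual data model):

    ```python
    # Hypothetical sketch of per-student error statistics from an
    # error-tagged translation learner corpus. All data is invented.
    from collections import Counter

    # Each record: (student, error_tag) from a manually error-tagged translation.
    annotations = [
        ("anna", "terminology"), ("anna", "grammar"), ("anna", "terminology"),
        ("ben", "spelling"),
    ]

    def error_report(records):
        """Count error categories per student."""
        report = {}
        for student, tag in records:
            report.setdefault(student, Counter())[tag] += 1
        return report

    print(error_report(annotations))
    ```

    Comparing such counts across successive translations is what lets the reports show a student's progress over the term.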