12 research outputs found

    Part-of-speech tagger for Malay social media texts

    Processing the meaning of words in social media texts, such as tweets, is challenging in natural language processing. Malay tweets are no exception because they exhibit distinct linguistic phenomena, such as the use of dialects from each state in Malaysia, the borrowing of foreign-language terms into Malay, and the use of mixed languages, abbreviations, spelling errors and mistakes in sentence structure. Tagging the word class of tweets is an arduous task because tweets are characterised by their distinctive style, linguistic sounds and errors. Existing work on Malay part-of-speech (POS) tagging is based only on standard Malay and formal texts and is thus unsuitable for tagging tweet texts, so a POS tagging model for non-standardised Malay must be developed. This study aims to design and implement a non-standardised Malay POS model for tweets and assesses it by the word-tagging accuracy achieved on unnormalised and normalised tweet test data. A solution that adopts a probabilistic POS tagger called QTAG is proposed. Results show that the Malay QTAG achieves best average POS tagging accuracies of 90% and 88.8% for the normalised and unnormalised test datasets, respectively.
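    The abstract describes QTAG only as a probabilistic tagger, so the following is a minimal sketch of that general idea: collect tag-transition and word-emission counts from a small hand-tagged corpus and greedily pick the highest-scoring tag per token. The toy Malay tokens, tag labels and smoothing constants are invented for illustration; they are not the paper's data, tag set or algorithm.

        from collections import Counter, defaultdict

        # Tiny hand-tagged corpus (hypothetical tokens and tags, for illustration only).
        train = [
            [("saya", "PRON"), ("suka", "VERB"), ("makan", "VERB"), ("nasi", "NOUN")],
            [("dia", "PRON"), ("makan", "VERB"), ("roti", "NOUN")],
        ]

        emit = defaultdict(Counter)   # counts of word given tag
        trans = defaultdict(Counter)  # counts of tag given previous tag
        for sent in train:
            prev = "<S>"
            for word, tag in sent:
                emit[tag][word] += 1
                trans[prev][tag] += 1
                prev = tag

        def tag_sentence(words, vocab_size=1000):
            """Greedily pick, per token, the tag maximising transition * emission score."""
            all_tags = list(emit)
            tags, prev = [], "<S>"
            for w in words:
                def score(t):
                    # add-one smoothing so unseen words and transitions still score > 0
                    p_trans = (trans[prev][t] + 1) / (sum(trans[prev].values()) + len(all_tags))
                    p_emit = (emit[t][w] + 1) / (sum(emit[t].values()) + vocab_size)
                    return p_trans * p_emit
                best = max(all_tags, key=score)
                tags.append(best)
                prev = best
            return list(zip(words, tags))

        print(tag_sentence(["dia", "suka", "nasi"]))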

    Slavic corpus and computational linguistics

    In this paper, we focus on corpus-linguistic studies that address theoretical questions and on computational-linguistic work on corpus annotation, which makes corpora useful for linguistic work. First, we discuss why the corpus-linguistic approach was discredited by generative linguists in the second half of the 20th century, and how it made a comeback through advances in computing and was adopted by usage-based linguistics at the beginning of the 21st century. Then, we move on to an overview of necessary and common annotation layers and the issues that are encountered when performing automatic annotation, with special emphasis on Slavic languages. Finally, we survey the types of research requiring corpora that Slavic linguists worldwide are involved in, and the resources they have at their disposal.

    Development of part-of-speech tagger for Xhosa

    Part-of-Speech (POS) tagging is the process of assigning an appropriate part of speech, or lexical category, to each word in a given sentence of a particular natural language. Natural languages are languages that human beings use to communicate with one another, be it Xhosa, Zulu, English, etc. POS tagging plays a huge and important role in natural language processing applications; its main applications include machine translation, parsing, text chunking, spell checking and grammar. Xhosa (sometimes referred to as isiXhosa) is one of the eleven official languages of South Africa and is spoken by over 8 million South Africans. The language is mainly spoken in the Eastern Cape and Western Cape provinces of the country. It is the second most widely spoken native language in South Africa after Zulu (sometimes called isiZulu). Although the number of speakers might seem high, Xhosa is considerably under-resourced: there are very few publications in Xhosa, very few books have been published in the language, and the domains that use the language as a medium of instruction are very limited. However, the language is gaining momentum. An Oxford-approved Xhosa dictionary has been developed recently, and Xhosa newspapers that did not exist in the recent past are now published. Text from the previously mentioned sources can be combined into a larger text that can be used to train the tagger. This work aims to develop an effective POS tagger for Xhosa. This thesis describes the work that was needed to produce an automatic POS tagger for Xhosa. A tagset consisting of 36 POS tags/labels for the language was used for this purpose, and these are listed. A total of 5000 words were manually tagged for the purpose of training the tagger. Another 3000 words, disjoint from the manually tagged training data, were used for testing the tagger. The open-source Stanford CoreNLP toolkit was used to create the tagger. The toolkit implements a Maximum Entropy machine learning model, which was applied in the development of the tagger presented in this thesis. The thesis describes the implementation and testing processes of the model in detail. The results show that the development of the Xhosa POS tagging model was successful: the model obtained a tagging accuracy of 87.71 percent.
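    A minimal sketch of how a tagger trained with the Stanford toolkit might be loaded and scored, assuming NLTK's StanfordPOSTagger wrapper; the model path, jar path, example sentence and tag labels are hypothetical placeholders, and the thesis itself used the CoreNLP toolkit directly rather than this wrapper.

        from nltk.tag.stanford import StanfordPOSTagger

        # Hypothetical paths: a model produced by Stanford's MaxentTagger training,
        # plus the tagger jar shipped with the Stanford POS tagger distribution.
        tagger = StanfordPOSTagger(
            model_filename="models/xhosa-maxent.tagger",
            path_to_jar="stanford-postagger.jar",
        )

        def accuracy(gold_sents, tagger):
            """Token-level accuracy: share of test tokens whose predicted tag matches gold."""
            correct = total = 0
            for sent in gold_sents:
                words = [w for w, _ in sent]
                for (_, gold_tag), (_, pred_tag) in zip(sent, tagger.tag(words)):
                    correct += gold_tag == pred_tag
                    total += 1
            return correct / total

        # Invented gold data purely to show the expected (word, tag) format;
        # the tag labels are placeholders, not the thesis's 36-tag tagset.
        gold = [[("Molo", "INT"), ("mhlobo", "N"), ("wam", "POSS")]]
        print(tagger.tag(["Molo", "mhlobo", "wam"]))
        print(f"accuracy = {accuracy(gold, tagger):.2%}")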

    Statistical modeling of agglutinative languages

    Ankara: Department of Computer Engineering and the Institute of Engineering and Science of Bilkent University, 2000. Thesis (Ph.D.) -- Bilkent University, 2000. Includes bibliographical references (leaves 107-116). Hakkani-Tür, Dilek Z. Ph.D.

    A morphological-syntactical analysis approach for Arabic textual tagging

    Part-of-Speech (POS) tagging is the process of labeling or classifying each word in written text with its grammatical category or part of speech, i.e. noun, verb, preposition, adjective, etc. It is the most common disambiguation process in the field of Natural Language Processing (NLP), and POS tagging systems are often preprocessors in many NLP applications. The Arabic language has a valuable and important feature, called diacritics, which are marks placed above and below the letters of a word. An Arabic text is partially vocalised (vocalisation is also referred to as diacritisation or vowelisation) when a diacritical mark is assigned to one or at most two letters in the word. Diacritics in Arabic texts are extremely important, especially at the end of a word. They help to determine not only the correct POS tag for each word in the sentence, but also full information regarding inflectional features, such as tense, number and gender, for the sentence's words. They add semantic information to words, which helps with resolving ambiguity in the meaning of words. Furthermore, diacritics ascribe grammatical functions to words, differentiating a word from other words and determining its syntactic position in the sentence. This thesis presents a rule-based Part-of-Speech tagging system called AMT, short for Arabic Morphosyntactic Tagger. The main function of the AMT system is to assign the correct tag to each word in an untagged, raw, partially vocalised Arabic corpus, and to produce a POS-tagged corpus without using a manually tagged or untagged lexicon (dictionary) for training. Two different techniques were used in this work: the pattern-based technique and the lexical and contextual technique. The rules in the pattern-based technique are based on the pattern of the word being tested. A novel algorithm, the Pattern-Matching Algorithm (PMA), has been designed and introduced in this work; its aim is to match the test word with its correct pattern in the pattern lexicon. The lexical and contextual technique, on the other hand, is used to assist the pattern-based technique in assigning the correct tag to those words that do not have a pattern to follow. The rules in the lexical and contextual technique are based on the word's character(s), its last diacritical mark, the word itself, and the tags of the surrounding words. The importance of utilizing the diacritic feature of the Arabic language to reduce lexical ambiguity in POS tagging has been addressed. In addition, a new Arabic tag set and a new partially vocalised Arabic corpus for testing AMT have been compiled and presented in this work. The AMT system has achieved an average accuracy of 91%.
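    A toy illustration of the kind of lexical and contextual rule sketched above (the word itself, its final diacritical mark, and the tag of the preceding word). The rules, tag names and example phrase are simplified stand-ins invented here; they do not reproduce the AMT rule set or tag set.

        # Unicode escapes are used for clarity; they spell the Arabic characters named
        # in the comments. Tag names and rules are simplified stand-ins, not AMT's.
        TANWIN = {"\u064B", "\u064C", "\u064D"}    # fathatan, dammatan, kasratan
        CLOSED_CLASS = {"\u0641\u064A": "PREP"}    # "fi" (in), a known preposition

        def tag_word(word, prev_tag=None):
            """Guess a coarse tag from a tiny lexicon, the final diacritic and context."""
            if word in CLOSED_CLASS:
                return CLOSED_CLASS[word]
            if word and word[-1] in TANWIN:
                # Nunation (tanwin) occurs only on nominals, so treat the word as a noun.
                return "NOUN"
            if prev_tag == "PREP":
                # A word immediately following a preposition is expected to be a noun.
                return "NOUN"
            return "UNKNOWN"   # in AMT, such words would fall to the pattern-based rules

        # Toy partially vocalised phrase: "fi baytin kabirin" ("in a big house").
        sentence = ["\u0641\u064A", "\u0628\u064A\u062A\u064D", "\u0643\u0628\u064A\u0631\u064D"]
        prev = None
        for w in sentence:
            prev = tag_word(w, prev)
            print(w, prev)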

    Open-source resources and standards for Arabic word structure analysis: Fine grained morphological analysis of Arabic text corpora

    Morphological analyzers are preprocessors for text analysis, and many text analytics applications need them to perform their tasks. The aim of this thesis is to develop standards, tools and resources that widen the scope of Arabic word structure analysis, particularly morphological analysis, to process Arabic text corpora of different domains, formats and genres, of both vowelized and non-vowelized text. We want to morphologically tag our Arabic corpus, but evaluation of existing morphological analyzers has highlighted shortcomings and shown that more research is required. Tag assignment is significantly more complex for Arabic than for many languages. The morphological analyzer should add the appropriate linguistic information to each part or morpheme of the word (proclitic, prefix, stem, suffix and enclitic); in effect, instead of a tag for the word, we need a subtag for each part. Very fine-grained distinctions may cause problems for automatic morphosyntactic analysis, particularly for probabilistic taggers which require training data, if some words can change grammatical tag depending on function and context; on the other hand, fine-grained distinctions may actually help to disambiguate other words in the local context. The SALMA – Tagger is a fine-grained morphological analyzer which depends mainly on linguistic information extracted from traditional Arabic grammar books and on prior knowledge from a broad-coverage lexical resource, the SALMA – ABCLexicon. More fine-grained tag sets may be more appropriate for some tasks. The SALMA – Tag Set is a standard for encoding which captures long-established, traditional, fine-grained morphological features of Arabic in a notation format intended to be compact yet transparent. The SALMA – Tagger has been used to lemmatize the 176-million-word Arabic Internet Corpus. It has been proposed as a language-engineering toolkit for Arabic lexicography and for phonetically annotating the Qur'an with syllable and primary-stress information, as well as for fine-grained morphological tagging.
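    The per-morpheme subtagging described above suggests a simple nested record: one subtag per morphological slot instead of a single tag per word. The sketch below is a hypothetical illustration; the field names, subtag strings and the approximate segmentation of the example word are invented and do not follow the SALMA – Tag Set notation.

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Morpheme:
            form: str     # surface form of this part of the word
            subtag: str   # fine-grained morphological subtag for this part

        @dataclass
        class WordAnalysis:
            """One analysis of a word: a subtag for each part instead of one word tag."""
            proclitics: list[Morpheme]
            prefix: Optional[Morpheme]
            stem: Morpheme
            suffix: Optional[Morpheme]
            enclitic: Optional[Morpheme]

        # Approximate, illustrative segmentation of "wa-sa-yaktubuuna-haa"
        # ("and they will write it"); subtag labels are invented placeholders.
        analysis = WordAnalysis(
            proclitics=[Morpheme("\u0648", "CONJ"), Morpheme("\u0633", "FUT")],
            prefix=Morpheme("\u064A", "IMPF-3"),
            stem=Morpheme("\u0643\u062A\u0628", "VERB-STEM"),
            suffix=Morpheme("\u0648\u0646", "MASC-PL"),
            enclitic=Morpheme("\u0647\u0627", "PRON-3FS"),
        )
        print(analysis)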

    Argumentative zoning information extraction from scientific text

    Let me tell you, writing a thesis is not always a barrel of laughs, and strange things can happen, too. For example, at the height of my thesis paranoia, I had a recurrent dream in which my cat Amy gave me detailed advice on how to restructure the thesis chapters, which was awfully nice of her. But I also had a lot of human help throughout this time, whether things were going fine or berserk. Most of all, I want to thank Marc Moens: I could not have had a better or more knowledgeable supervisor. He always took time for me, however busy he might have been, reading chapters thoroughly in two days. He had both the calmness of mind to give me lots of freedom in research and the right judgement to guide me away, tactfully but determinedly, from the occasional catastrophe or other waiting along the way. He was great fun to work with and also became a good friend. My work has profited from the interdisciplinary, interactive and enlightened atmosphere at the Human Communication Centre and the Centre for Cognitive Science (which is now called something else). The Language Technology Group was a great place to work in, as my research was grounded in practical applications develope