Initial Experiments on Russian to Kazakh SMT
We present our initial experiments on Russian to Kazakh phrase-based
statistical machine translation. Following a common approach to SMT between
morphologically rich languages, we employ morphological processing techniques.
Namely, for our initial experiments, we perform source-side lemmatization. Given
the rather modest-sized parallel corpus at hand, we also put some effort into data
cleaning and investigate the impact of the data quality vs. quantity trade-off on the
overall performance. Although our experiments mostly focus on source-side preprocessing, we achieve a substantial, statistically significant improvement over the
baseline that operates on raw, unprocessed data.
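The source-side lemmatization step described above can be sketched as a preprocessing pass that replaces each Russian token with its lemma before the SMT system sees it. This is a minimal toy illustration: a real pipeline would use a Russian morphological analyzer (e.g. pymorphy2 or Mystem), and the lookup table below is an invented stand-in for one.

```python
# Toy sketch of source-side lemmatization as an SMT preprocessing step.
# TOY_LEMMAS is a hypothetical stand-in for a real morphological analyzer.

TOY_LEMMAS = {
    "книги": "книга",   # "books" -> "book"
    "читал": "читать",  # "was reading" -> "to read"
}

def lemmatize_source(sentence: str) -> str:
    """Replace each source-side token with its lemma before training/decoding."""
    tokens = sentence.lower().split()
    return " ".join(TOY_LEMMAS.get(tok, tok) for tok in tokens)

print(lemmatize_source("Он читал книги"))  # -> "он читать книга"
```

Running the same lemmatizer over both the training corpus and the decoder input keeps the source vocabulary consistent, which is the point of the normalization.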
DEEP LEARNING MODEL FOR BILINGUAL SENTIMENT CLASSIFICATION OF SHORT TEXTS
Sentiment analysis of short texts such as Twitter messages and comments in news portals is challenging due to the lack of contextual information. We propose a deep neural network model that uses bilingual word embeddings to effectively solve the sentiment classification problem for a given pair of languages. We apply our approach to two corpora of two different language pairs: English-Russian and Russian-Kazakh. We show how to train a classifier in one language and predict in another. Our approach achieves 73% accuracy for English and 74% accuracy for Russian. For Kazakh sentiment analysis, we propose a baseline method that achieves 60% accuracy, and a method to learn bilingual embeddings from a large unlabeled corpus using bilingual word pairs.
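One common way to obtain bilingual embeddings from a seed list of word pairs, which the abstract's cross-lingual setup could build on, is to learn a linear mapping between the two monolingual embedding spaces (translation-matrix style). The sketch below uses random toy vectors in place of real word2vec/fastText embeddings, so the dimensions and data are illustrative only.

```python
import numpy as np

# Sketch: learn a linear map W between two embedding spaces from seed
# bilingual word pairs, so a classifier trained on one language's
# vectors can be applied to mapped vectors from the other language.
# All vectors here are random toy data, not real embeddings.

rng = np.random.default_rng(0)
d = 8                          # toy embedding dimension
n_pairs = 50                   # number of seed bilingual word pairs

X = rng.normal(size=(n_pairs, d))    # source-language word vectors
W_true = rng.normal(size=(d, d))     # "true" mapping (toy construction)
Y = X @ W_true                       # aligned target-language vectors

# Least-squares fit: find W minimizing ||X W - Y||^2
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

err = np.linalg.norm(X @ W - Y)
print(f"fit residual: {err:.2e}")
```

With the mapping in hand, sentiment labels only need to exist in one language: source-language texts are embedded, mapped through `W`, and fed to the target-language classifier.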
Creating a morphological and syntactic tagged corpus for the Uzbek language
Nowadays, the creation of tagged corpora is becoming one of the most
important tasks in Natural Language Processing (NLP). There are not enough
tagged corpora to build machine learning models for the low-resource Uzbek
language. In this paper, we try to fill that gap by developing a novel
part-of-speech (POS) and syntactic tagset for creating a syntactically and
morphologically tagged corpus of the Uzbek language. This work also includes a
detailed description and presentation of a web-based annotation application.
Based on the developed annotation tool and software, we share the
results of the first stage of the tagged corpus creation.
Method for Determining the Similarity of Text Documents for the Kazakh language, Taking Into Account Synonyms: Extension to TF-IDF
The task of determining the similarity of text documents has received
considerable attention in many areas such as Information Retrieval, Text
Mining, Natural Language Processing (NLP) and Computational Linguistics.
Converting text into numeric vectors is a complex task in which algorithms such as
tokenization, stopword filtering, stemming, and term weighting are used.
The term frequency - inverse document frequency (TF-IDF) is the most widely
used term weighting method to facilitate the search for relevant documents. To
improve term weighting, a large number of TF-IDF extensions have been proposed.
In this paper, another extension of the TF-IDF method is proposed where
synonyms are taken into account. The effectiveness of the method is confirmed
by experiments on functions such as Cosine, Dice and Jaccard to measure the
similarity of text documents for the Kazakh language.
Comment: 2022 International Conference on Smart Information Systems and Technologies (SIST)
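The core idea of a synonym-aware TF-IDF extension can be sketched by folding each synonym onto a canonical term before counting, so that documents using different variants still share weight mass. The synonym map, documents, and canonicalization scheme below are invented for illustration, not the paper's actual resources.

```python
import math
from collections import Counter

# Toy sketch of synonym-aware TF-IDF: synonyms are mapped to one
# canonical term before term frequencies and document frequencies
# are computed. SYNONYMS and docs are illustrative only.

SYNONYMS = {"car": "auto", "automobile": "auto"}

def canon(token: str) -> str:
    return SYNONYMS.get(token, token)

def tf_idf(docs):
    docs = [[canon(t) for t in d.lower().split()] for d in docs]
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))
    vectors = []
    for d in docs:
        tf = Counter(d)
        vectors.append({t: (c / len(d)) * math.log(n / df[t]) for t, c in tf.items()})
    return vectors

def cosine(u, v):
    dot = sum(u.get(k, 0.0) * v.get(k, 0.0) for k in set(u) | set(v))
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = ["the car is fast", "the automobile is fast", "birds sing"]
vecs = tf_idf(docs)
print(cosine(vecs[0], vecs[1]))  # -> 1.0: identical after synonym folding
```

Without the folding step, "car" and "automobile" would be distinct dimensions and the first two documents would score lower; Dice and Jaccard can be substituted for cosine on the same vectors.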
Character-based Deep Learning Models for Token and Sentence Segmentation
In this work we address the problems of sentence segmentation and tokenization. Informally, the task of sentence segmentation involves splitting a given text into units that satisfy a certain definition (or a number of definitions) of a sentence. Similarly, tokenization has as its goal splitting a text into chunks that constitute basic units of operation for a certain task, e.g. words, digits, punctuation marks and other symbols for part-of-speech tagging. As seen from the definition, tokenization is an absolute prerequisite for virtually every natural language processing (NLP) task. Many so-called downstream NLP applications with a higher level of sophistication, e.g. machine translation, additionally require sentence segmentation. Thus both of the problems that we address are the very basic steps in NLP and, as such, are widely regarded as solved problems. Indeed, there is a large body of work devoted to these problems, and there are a number of popular, highly accurate off-the-shelf solutions for them. Nevertheless, the problems of sentence segmentation and tokenization persist, and in practice one often faces difficulties when confronted with raw text that needs to be tokenized and/or split into sentences. This happens because existing approaches, if they are unsupervised, rely heavily on hand-crafted rules and lexicons, or, if they are supervised, rely on extraction of hand-engineered features. Such systems are not easy to maintain and adapt to new domains and languages, because doing so may require revising the rules and feature definitions.
In order to address the aforementioned challenges, we develop character-based deep learning models which require neither rule nor feature engineering. The only resource required is a training set, where each character is labeled with an IOB-like (Inside-Outside-Beginning) tag. Such training sets are easily obtainable from existing tokenized and sentence-segmented corpora or, in the absence of those, have to be created (but the same is true for rules, lexicons, and hand-crafted features). The IOB-like annotation allows us to solve both the tokenization and sentence segmentation problems simultaneously, casting them as a single sequence-labeling task where each character has to be tagged with one of four tags: beginning of a sentence (S), beginning of a token (T), inside of a token (I), and outside of a token (O). To this end we design three models based on artificial neural networks: (i) a fully connected feed-forward network; (ii) a long short-term memory (LSTM) network; (iii) a bi-directional version of the LSTM. The proposed models utilize character embeddings, i.e. they represent characters as vectors in a multidimensional continuous space.
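The conversion from a tokenized, sentence-segmented corpus to the four-tag character labeling described above can be sketched directly. The raw text and token lists below are toy inputs; the function name is ours, not the paper's.

```python
# Sketch: derive character-level S/T/I/O labels from already
# tokenized, sentence-segmented text, as the abstract describes.

def label_characters(raw: str, sentences):
    """Tag each character of `raw`:
    S = first char of a sentence, T = first char of a later token,
    I = inside a token, O = outside any token (e.g. whitespace)."""
    labels = ["O"] * len(raw)
    pos = 0
    for sent in sentences:
        first_token = True
        for token in sent:
            pos = raw.index(token, pos)        # locate token in raw text
            labels[pos] = "S" if first_token else "T"
            for i in range(1, len(token)):
                labels[pos + i] = "I"
            pos += len(token)
            first_token = False
    return labels

raw = "Hi there. Bye."
sents = [["Hi", "there", "."], ["Bye", "."]]
print("".join(label_characters(raw, sents)))  # -> "SIOTIIIITOSIIT"
```

A sequence labeler trained on such data recovers both token and sentence boundaries in one pass: every S starts a sentence, every S or T starts a token.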
We evaluate our approach on three typologically distant languages, namely English, Italian, and Kazakh. In terms of evaluation metrics, we use standard precision, recall, and F-measure scores, as well as a combined error rate for sentence and token boundary detection. We use two state-of-the-art supervised systems as baselines and show that our models consistently outperform both of them in terms of error rate.
A free/open-source hybrid morphological disambiguation tool for Kazakh
This paper presents the results of developing a
morphological disambiguation tool for Kazakh. Starting with a
previously developed rule-based approach, we tried to cope with
the complex morphology of Kazakh by breaking up lexical forms
across their derivational boundaries into inflectional groups
and modeling their behavior with statistical methods. A hybrid
rule-based/statistical approach appears to benefit morphological
disambiguation, demonstrating a per-token accuracy of 91% on
running text.
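The hybrid idea of scoring candidate analyses statistically after rule-based pruning can be sketched with a smoothed bigram model over inflectional-group (IG) tags. All tags, counts, and candidate analyses below are invented for illustration; they are not the tool's actual tagset or statistics.

```python
import math

# Toy sketch of the hybrid approach: a rule-based analyzer proposes
# candidate analyses, each split into inflectional groups (IGs), and a
# bigram model over IG tags picks among the survivors. All data here
# is hypothetical.

BIGRAM_COUNTS = {("NOUN", "CASE:loc"): 30, ("NOUN", "POSS:3"): 10,
                 ("VERB", "TENSE:past"): 25}
UNIGRAM_COUNTS = {"NOUN": 40, "VERB": 25}

def score(analysis):
    """Log-probability of an IG-tag sequence under an add-one-smoothed
    bigram model."""
    s = 0.0
    for prev, cur in zip(analysis, analysis[1:]):
        c = BIGRAM_COUNTS.get((prev, cur), 0) + 1
        s += math.log(c / (UNIGRAM_COUNTS.get(prev, 0) + len(BIGRAM_COUNTS)))
    return s

candidates = [["NOUN", "CASE:loc"], ["NOUN", "POSS:3"], ["VERB", "TENSE:past"]]
best = max(candidates, key=score)
print(best)
```

In a full disambiguator the rules would first eliminate analyses that violate hard constraints, and the statistical score would only arbitrate among the remaining ambiguity.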