n-Gram-based text compression
We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It achieves a significantly better compression ratio than state-of-the-art methods on the same dataset. Given a text, the proposed method first splits it into n-grams and then encodes them using n-gram dictionaries. In the encoding phase, we use a sliding window whose size ranges from bigrams to five-grams to obtain the best encoding stream. Each n-gram is encoded with two to four bytes, depending on its corresponding n-gram dictionary. We collected a 2.5 GB text corpus from several Vietnamese news agencies to build n-gram dictionaries from unigrams to five-grams, yielding dictionaries with a total size of 12 GB. To evaluate our method, we collected a test set of 10 text files of different sizes. The experimental results indicate that our method achieves a compression ratio of around 90% and outperforms state-of-the-art methods. (Web of Science, art. no. 948364)
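The sliding-window encoding described above can be sketched as a greedy longest-match lookup: at each position, the encoder tries the longest n-gram (five-gram down to bigram) found in a dictionary and falls back to unigrams. This is a hypothetical illustration with toy dictionary contents, not the thesis implementation; it assumes every syllable appears in the unigram dictionary.

```python
def encode(syllables, dictionaries):
    """Greedily encode a syllable sequence against n-gram dictionaries.

    dictionaries: dict mapping n -> {n-gram tuple: code}.
    Returns a list of (n, code) pairs; a real compressor would emit
    each code as a two- to four-byte sequence.
    """
    out = []
    i = 0
    while i < len(syllables):
        for n in (5, 4, 3, 2, 1):            # prefer the longest match
            gram = tuple(syllables[i:i + n])
            code = dictionaries.get(n, {}).get(gram)
            if code is not None:
                out.append((n, code))
                i += n
                break
    return out

# Toy dictionaries: one bigram plus unigram fallbacks.
dicts = {
    2: {("xin", "chao"): 0},
    1: {("xin",): 0, ("chao",): 1, ("ban",): 2},
}
print(encode(["xin", "chao", "ban"], dicts))  # [(2, 0), (1, 2)]
```

The bigram "xin chao" is matched as a single code, while "ban" falls back to its unigram code.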
Named Entity Recognition and Text Compression
In recent years, social networks have become very popular, and it is easy for users
to share their data on them. Since data on social networks are idiomatic,
irregular, and brief, and include acronyms and spelling errors, such data are
more challenging to deal with than news or other formal texts. Given the huge
volume of posts each day, effective extraction and processing of these data will
bring great benefit to information-extraction applications.
This thesis proposes a method to normalize Vietnamese informal text on social
networks. The method identifies and normalizes informal text based on the
structure of Vietnamese words, Vietnamese syllable rules, and a trigram model.
After normalization, the data are processed by a named entity recognition (NER)
model that identifies and classifies the named entities in these data. Our NER
model uses six different types of features to recognize named entities in three
predefined classes: Person (PER), Location (LOC), and Organization (ORG).
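One way to picture the trigram model's role in normalization is candidate ranking: for an informal token, each candidate replacement is scored by how frequent the resulting trigram is, and the best-scoring candidate wins. The function name, candidates, and counts below are hypothetical toy values, not the thesis's actual model.

```python
def best_candidate(prev2, prev1, candidates, trigram_counts):
    """Return the candidate that forms the most frequent trigram
    with the two preceding words."""
    return max(candidates,
               key=lambda w: trigram_counts.get((prev2, prev1, w), 0))

# Toy trigram counts: "toi di hoc" is far more common than "toi di hop".
counts = {("toi", "di", "hoc"): 15, ("toi", "di", "hop"): 3}
print(best_candidate("toi", "di", ["hoc", "hop"], counts))  # hoc
```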
Examining social network data, we found that these data are very large and grow
daily, which raises the challenge of reducing their size. Because of the volume
of data to be normalized, the trigram dictionary we use is also quite big, so
its size must be reduced as well. To address this challenge, this thesis
proposes three methods for compressing text files, particularly Vietnamese
text. The first is a syllable-based method relying on the structure of
Vietnamese morphosyllables: consonants, syllables, and vowels. The second is
trigram-based Vietnamese text compression using a trigram dictionary. The last
is based on an n-gram sliding window, in which we use five dictionaries, for
unigrams, bigrams, trigrams, four-grams, and five-grams. This method achieves
a promising compression ratio of around 90% and can be used for text files of any size.
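The syllable-based idea can be illustrated as follows: a Vietnamese morphosyllable decomposes into an initial consonant, a rhyme, and a tone, each drawn from a small closed inventory, so the triple can be packed into a fixed-width code smaller than the raw UTF-8 string. The component lists below are abbreviated toy inventories (the real Vietnamese sets are larger), and the packing scheme is a hypothetical sketch, not the thesis's actual encoding.

```python
INITIALS = ["", "b", "c", "ch", "d", "h", "t", "th"]   # ~27 in the full set
RHYMES = ["a", "an", "ao", "oc", "oi", "uong"]         # 150+ in the full set
TONES = ["level", "acute", "grave", "hook", "tilde", "dot"]

def pack(initial, rhyme, tone):
    """Pack the three component indices into one integer, a stand-in
    for the fixed-width code a real compressor would write."""
    code = INITIALS.index(initial) * len(RHYMES) + RHYMES.index(rhyme)
    return code * len(TONES) + TONES.index(tone)

def unpack(code):
    """Invert pack(): recover (initial, rhyme, tone) from the code."""
    code, t = divmod(code, len(TONES))
    i, r = divmod(code, len(RHYMES))
    return INITIALS[i], RHYMES[r], TONES[t]

c = pack("th", "oi", "level")
assert unpack(c) == ("th", "oi", "level")  # round-trips losslessly
print(c)
```

With the full inventories, 27 × ~160 × 6 combinations still fit comfortably in two bytes per syllable.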
Topic Modeling on an Online News Portal Using Latent Dirichlet Allocation (LDA)
The volume of news displayed on online news portals often does not itself indicate which topics are being discussed, but by reading and analyzing the news one can find the main issues and trends under discussion. A quick and efficient way to find trending topics in the news is therefore needed. One method that can be used to solve this problem is topic modeling, which allows users to understand the development of current themes easily and quickly. One of the algorithms for topic modeling is Latent Dirichlet Allocation (LDA). This research proceeds through data collection, preprocessing, n-gram formation, dictionary representation, weighting, topic-model validation, topic-model formation, and interpretation of the topic-modeling results. Based on the topic evaluation, the best coherence value of the topic model was related to the number of passes: the modeling produced 20 keys in five cases with a coherence value of 0.53, which can be considered relatively stable against the standard coherence value.
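To make the LDA step concrete, here is a minimal collapsed Gibbs sampler for LDA over a toy corpus. This is an illustrative sketch, not the study's pipeline: the corpus, topic count, hyperparameters, and iteration budget are arbitrary toy choices, and a real analysis would use a library implementation and a coherence metric for evaluation.

```python
import random

def lda_gibbs(docs, n_topics, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Collapsed Gibbs sampling for LDA; returns the top word per topic."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    widx = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)
    # z[d][i]: topic assignment of word i in doc d, initialised at random
    z = [[rng.randrange(n_topics) for _ in d] for d in docs]
    ndk = [[0] * n_topics for _ in docs]       # doc-topic counts
    nkw = [[0] * V for _ in range(n_topics)]   # topic-word counts
    nk = [0] * n_topics                        # words per topic
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d][k] += 1; nkw[k][widx[w]] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k, wid = z[d][i], widx[w]
                # remove this word, then resample its topic from the
                # full conditional p(z_i = k | rest)
                ndk[d][k] -= 1; nkw[k][wid] -= 1; nk[k] -= 1
                weights = [(ndk[d][t] + alpha) * (nkw[t][wid] + beta)
                           / (nk[t] + V * beta) for t in range(n_topics)]
                k = rng.choices(range(n_topics), weights)[0]
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][wid] += 1; nk[k] += 1
    top = [max(range(V), key=lambda w: nkw[t][w]) for t in range(n_topics)]
    return [vocab[w] for w in top]

# Toy "news" corpus with two obvious themes.
docs = [["news", "portal", "news"], ["topic", "model", "topic"],
        ["news", "portal"], ["topic", "model"]]
print(lda_gibbs(docs, 2))
```

On a corpus this small the sampler typically separates the two word clusters; real portals would feed in preprocessed n-grams, as the pipeline above describes.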