Parsimonious Morpheme Segmentation with an Application to Enriching Word Embeddings
Traditionally, many text-mining tasks treat individual word-tokens as the
finest meaningful semantic granularity. However, in many languages and
specialized corpora, words are formed by concatenating semantically
meaningful subword structures. Word-level analysis cannot leverage the semantic
information present in such subword structures. For word embedding
techniques, this results not only in poor embeddings for infrequent words in
long-tailed text corpora but also in a weak ability to handle
out-of-vocabulary words. In this paper we propose MorphMine for unsupervised
morpheme segmentation. MorphMine applies a parsimony criterion to
hierarchically segment words into the fewest morphemes at each level
of the hierarchy. This leads to longer shared morphemes at each level of
segmentation. Experiments show that MorphMine segments words in a variety of
languages into human-verified morphemes. Additionally, we experimentally
demonstrate that utilizing MorphMine morphemes to enrich word embeddings
consistently improves embedding quality on a variety of embedding
evaluations and on a downstream language modeling task.
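
As a rough illustration of the parsimony criterion, the sketch below segments a word into the fewest pieces drawn from a candidate morpheme vocabulary using dynamic programming. The vocabulary, function name, and single-level segmentation are illustrative assumptions; the paper's actual method generates its own morpheme candidates and segments hierarchically, producing longer shared morphemes at each level.

    # Hypothetical sketch of a parsimony-style segmentation: split a word into
    # the fewest pieces drawn from a candidate morpheme vocabulary, via dynamic
    # programming. Single-level only; MorphMine itself segments hierarchically.

    def parsimonious_segment(word, morphemes):
        """Return a segmentation of `word` using the fewest morphemes, or None."""
        n = len(word)
        # best[i] = (piece count, segmentation) for the prefix word[:i]
        best = [None] * (n + 1)
        best[0] = (0, [])
        for i in range(1, n + 1):
            for j in range(i):
                piece = word[j:i]
                if best[j] is not None and piece in morphemes:
                    candidate = (best[j][0] + 1, best[j][1] + [piece])
                    if best[i] is None or candidate[0] < best[i][0]:
                        best[i] = candidate
        return best[n][1] if best[n] is not None else None

    if __name__ == "__main__":
        vocab = {"un", "break", "able", "unbreak", "breakable"}
        print(parsimonious_segment("unbreakable", vocab))
        # -> ['un', 'breakable'] (2 pieces beats ['un', 'break', 'able'])

Preferring the fewest pieces at each level is what yields the longer shared morphemes the abstract describes: coarse splits first, which can then be segmented further at deeper levels of the hierarchy.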
Taking Notes on the Fly Helps BERT Pre-training
How to make unsupervised language pre-training more efficient and less
resource-intensive is an important research direction in NLP. In this paper, we
focus on improving the efficiency of language pre-training methods by
providing better data utilization. It is well known that in language corpora,
word frequencies follow a heavy-tailed distribution. A large proportion of words
appear only a few times, and the embeddings of these rare words are usually poorly
optimized. We argue that such embeddings carry inadequate semantic signals,
which could make the data utilization inefficient and slow down the
pre-training of the entire model. To mitigate this problem, we propose Taking
Notes on the Fly (TNF), which takes notes for rare words on the fly during
pre-training to help the model understand them the next time they occur.
Specifically, TNF maintains a note dictionary and saves a rare word's
contextual information in it as notes when the rare word occurs in a sentence.
When the same rare word occurs again during training, the note information
saved beforehand can be employed to enhance the semantics of the current
sentence. By doing so, TNF provides better data utilization since
cross-sentence information is employed to compensate for the inadequate semantics
caused by rare words. We implement TNF on both BERT and ELECTRA to
check its efficiency and effectiveness. Experimental results show that TNF
requires less training time than its backbone pre-training models to reach
the same performance. When trained for the same number of iterations,
TNF outperforms its backbone methods on most downstream tasks and on the
average GLUE score. Source code is attached in the supplementary material.

Comment: Qiyu Wu and Chen Xing contribute equally.
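
The note-dictionary mechanism lends itself to a compact sketch. Below is a minimal, hypothetical rendering in PyTorch: a note is kept as an exponential moving average of the context vectors observed around a rare word, and is mixed back into the word's input embedding on later occurrences. The class name, momentum, and mixing weight are assumptions for illustration, not the paper's exact formulation.

    import torch

    class NoteDictionary:
        """Sketch of TNF-style notes: one averaged context vector per rare word."""

        def __init__(self, dim, momentum=0.9):
            self.dim = dim            # embedding dimensionality
            self.momentum = momentum  # EMA weight for refreshing notes
            self.notes = {}           # rare-word id -> averaged context vector

        def update(self, word_id, context_vec):
            """Save or refresh the note for a rare word from its current context."""
            assert context_vec.shape[-1] == self.dim
            old = self.notes.get(word_id)
            if old is None:
                self.notes[word_id] = context_vec.detach().clone()
            else:
                # Exponential moving average keeps per-word memory constant.
                self.notes[word_id] = (self.momentum * old
                                       + (1 - self.momentum) * context_vec.detach())

        def enhance(self, word_id, embedding, weight=0.5):
            """Mix a previously saved note into the word's input embedding."""
            note = self.notes.get(word_id)
            if note is None:          # no note yet: leave the embedding unchanged
                return embedding
            return (1 - weight) * embedding + weight * note

    if __name__ == "__main__":
        notes = NoteDictionary(dim=4)
        notes.update(word_id=42, context_vec=torch.ones(4))  # first occurrence: save
        print(notes.enhance(42, torch.zeros(4)))             # later: tensor([0.5, 0.5, 0.5, 0.5])

Because the note is consulted only when the same rare word reappears, the extra cost is a dictionary lookup plus a vector mix, which is consistent with the abstract's claim that the gains come from better data utilization rather than a larger model.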