Second-Order Word Embeddings from Nearest Neighbor Topological Features
We introduce second-order vector representations of words, induced from
nearest neighbor topological features in pre-trained contextual word
embeddings. We then analyze the effects of using second-order embeddings as
input features in two deep natural language processing models, for named entity
recognition and recognizing textual entailment, as well as a linear model for
paraphrase recognition. Surprisingly, we find that nearest neighbor information
alone is sufficient to capture most of the performance benefits derived from
using pre-trained word embeddings. Furthermore, second-order embeddings are
able to handle highly heterogeneous data better than first-order
representations, though at the cost of some specificity. Additionally,
augmenting contextual embeddings with second-order information further improves
model performance in some cases. Due to variance in the random initializations
of word embeddings, utilizing nearest neighbor features from multiple
first-order embedding samples can also contribute to downstream performance
gains. Finally, we identify intriguing characteristics of second-order
embedding spaces for further research, including much higher density and
different semantic interpretations of cosine similarity.
Comment: Submitted to NIPS 2017. (8 pages + 4 reference
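The abstract does not spell out how the nearest-neighbor sets are turned into vectors, so the sketch below shows only one plausible reading: collect each word's k nearest neighbors in a pre-trained embedding matrix and encode them as binary indicator features over the vocabulary. The use of scikit-learn's NearestNeighbors, the indicator encoding, and the choice of k are illustrative assumptions, not the paper's actual construction.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def second_order_embeddings(first_order, k=10):
    # first_order: (V, d) matrix of pre-trained word vectors.
    # Returns a (V, V) matrix whose row i is a binary indicator vector
    # marking word i's k nearest neighbors (one possible "topological" feature).
    nn = NearestNeighbors(n_neighbors=k + 1, metric="cosine").fit(first_order)
    _, idx = nn.kneighbors(first_order)           # k+1 neighbors, self included
    vocab_size = first_order.shape[0]
    second_order = np.zeros((vocab_size, vocab_size), dtype=np.float32)
    for i, neighbors in enumerate(idx):
        for j in neighbors:
            if j != i:                            # drop the word itself
                second_order[i, j] = 1.0
    return second_order

# Toy usage: similarity in this space reflects neighborhood overlap,
# one reason its interpretation differs from the first-order space.
rng = np.random.default_rng(0)
vectors = rng.normal(size=(1000, 50)).astype(np.float32)
so = second_order_embeddings(vectors, k=10)
overlap = float(so[3] @ so[7]) / 10.0             # fraction of shared neighbors
print(f"neighborhood overlap between words 3 and 7: {overlap:.2f}")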
Morphological Skip-Gram: Using morphological knowledge to improve word representation
Natural language processing models have attracted much interest in the deep
learning community. The field spans applications such as machine translation,
sentiment analysis, named entity recognition, question answering, and others.
Word embeddings are continuous word representations; they are an essential
module for these applications and are generally used as the input word
representation for deep learning models. Word2Vec and GloVe are two popular
methods for learning word embeddings. They produce good word representations,
but the representations carry limited information because these methods ignore
the morphology of words and assign only one vector to each word, leaving
Word2Vec and GloVe unaware of a word's inner structure. To mitigate this
problem, the FastText model represents each word as a bag of character
n-grams: each n-gram has a continuous vector representation, and the final
word representation is the sum of its character n-gram vectors. Nevertheless,
using all character n-grams of a word is a weak approach, since some n-grams
have no semantic relation to the word and add potentially useless information;
it also lengthens training time. In this work, we propose a new method for
training word embeddings whose goal is to replace the FastText bag of
character n-grams with a bag of word morphemes obtained through morphological
analysis of the word. Thus, words with similar contexts and morphemes are
represented by vectors close to each other. To evaluate our new approach, we
performed intrinsic evaluations on 15 different tasks, and the results show
competitive performance compared to FastText.
Comment: 11 page
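As a rough illustration of the difference between the two subword inventories, the sketch below contrasts FastText-style character n-gram extraction with composing a word vector from a bag of morphemes. The morpheme segmentation of "unhappiness" is hypothetical, and in the actual models the subword vectors are learned jointly inside a skip-gram objective rather than looked up from a fixed dictionary.

import numpy as np

def fasttext_char_ngrams(word, n_min=3, n_max=6):
    # FastText-style subwords: every character n-gram of the word,
    # with '<' and '>' marking the word boundaries.
    marked = f"<{word}>"
    return [marked[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(marked) - n + 1)]

def compose_word_vector(subwords, subword_vectors, dim=100):
    # Word representation = sum of its subword vectors
    # (character n-grams for FastText, morphemes for the proposed model).
    vec = np.zeros(dim, dtype=np.float32)
    for sw in subwords:
        vec += subword_vectors.get(sw, np.zeros(dim, dtype=np.float32))
    return vec

# Hypothetical morphological analysis of "unhappiness".
morphemes = ["un", "happi", "ness"]
ngrams = fasttext_char_ngrams("unhappiness")
print(len(ngrams), "character n-grams vs", len(morphemes), "morphemes")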