Characterizing the impact of geometric properties of word embeddings on task performance
Analysis of word embedding properties to inform their use in downstream NLP
tasks has largely been studied by assessing nearest neighbors. However,
geometric properties of the continuous feature space contribute directly to the
use of embedding features in downstream models, and are largely unexplored. We
consider four properties of word embedding geometry, namely: position relative
to the origin, distribution of features in the vector space, global pairwise
distances, and local pairwise distances. We define a sequence of
transformations to generate new embeddings that expose subsets of these
properties to downstream models and evaluate change in task performance to
understand the contribution of each property to NLP models. We transform
publicly available pretrained embeddings from three popular toolkits (word2vec,
GloVe, and FastText) and evaluate on a variety of intrinsic tasks, which model
linguistic information in the vector space, and extrinsic tasks, which use
vectors as input to machine learning models. We find that intrinsic evaluations
are highly sensitive to absolute position, while extrinsic tasks rely primarily
on local similarity. Our findings suggest that future embedding models and
post-processing techniques should focus primarily on similarity to nearby
points in vector space.

Comment: Appearing in the Third Workshop on Evaluating Vector Space
Representations for NLP (RepEval 2019). 7 pages + references
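The kind of transformation the abstract describes can be illustrated with a minimal sketch (the toy data and function names below are ours, not the paper's actual pipeline): mean-centering changes position relative to the origin while preserving all pairwise Euclidean distances, so a downstream performance change after centering can be attributed to absolute position alone.

```python
import numpy as np

# Toy embedding matrix (4 "words" x 3 dimensions) standing in for a
# pretrained model; illustrative only, not the paper's transformations.
emb = np.array([
    [1.0, 2.0, 0.0],
    [2.0, 3.0, 1.0],
    [0.0, 1.0, 2.0],
    [3.0, 0.0, 1.0],
])

def mean_center(vectors):
    """Translate all vectors so their centroid sits at the origin.

    This alters position relative to the origin while leaving every
    pairwise Euclidean distance (global and local) unchanged.
    """
    return vectors - vectors.mean(axis=0, keepdims=True)

def pairwise_dists(vectors):
    """All-pairs Euclidean distance matrix via broadcasting."""
    diff = vectors[:, None, :] - vectors[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

centered = mean_center(emb)

# Absolute positions change, pairwise distances do not:
assert not np.allclose(emb, centered)
assert np.allclose(pairwise_dists(emb), pairwise_dists(centered))
```

Evaluating the original and centered embeddings on the same task then isolates the contribution of one geometric property at a time.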
Relation Extraction Datasets in the Digital Humanities Domain and their Evaluation with Word Embeddings
In this research, we manually create high-quality datasets in the digital
humanities domain for the evaluation of language models, specifically word
embedding models. The first step comprises the creation of unigram and n-gram
datasets for two fantasy novel book series for two task types each, analogy and
doesn't-match. This is followed by the training of models on the two book
series with various popular word embedding model types, such as word2vec,
GloVe, fastText, and LexVec. Finally, we evaluate the suitability of word embedding
models for such specific relation extraction tasks in a situation of comparably
small corpus sizes. In the evaluations, we also investigate and analyze
particular aspects such as the impact of corpus term frequencies and task
difficulty on accuracy. The datasets, the underlying system, and the word
embedding models are available on GitHub; they can easily be extended with new
datasets and tasks, used to reproduce the presented results, or transferred to
other domains.
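The analogy task type mentioned above is commonly scored with the 3CosAdd scheme ("a is to b as c is to ?"). A minimal sketch, using a hand-made toy vocabulary rather than any of the trained models from the paper:

```python
import numpy as np

# Tiny hypothetical vocabulary with hand-crafted vectors (not trained).
vocab = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.5, 0.9, 0.1]),
    "woman": np.array([0.5, 0.1, 0.9]),
    "apple": np.array([0.1, 0.5, 0.5]),
}

def unit(v):
    """Normalize a vector to unit length."""
    return v / np.linalg.norm(v)

def analogy(a, b, c, vocab):
    """3CosAdd: answer "a is to b as c is to ?" by maximizing cosine
    similarity to (b - a + c), excluding the three query words."""
    target = unit(vocab[b]) - unit(vocab[a]) + unit(vocab[c])
    best, best_sim = None, -np.inf
    for word, vec in vocab.items():
        if word in (a, b, c):
            continue
        sim = float(unit(vec) @ unit(target))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

print(analogy("man", "woman", "king", vocab))  # -> "queen" on this toy data
```

In a small-corpus setting like the one the abstract studies, accuracy on such queries is sensitive to how often the query terms appear in the training text, which is exactly the term-frequency effect the authors analyze.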