Hypothesis Testing based Intrinsic Evaluation of Word Embeddings
We introduce the cross-match test - an exact, distribution free,
high-dimensional hypothesis test as an intrinsic evaluation metric for word
embeddings. We show that cross-match is an effective means of measuring
distributional similarity between different vector representations and of
evaluating the statistical significance of different vector embedding models.
Additionally, we find that cross-match can be used to provide a quantitative
measure of linguistic similarity for selecting bridge languages for machine
translation. We demonstrate that the results of the hypothesis test align with
our expectations and note that the framework of two sample hypothesis testing
is not limited to word embeddings and can be extended to all vector
representations.
Comment: Accepted to RepEval 2017: The Second Workshop on Evaluating Vector
Space Representations for NLP
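The cross-match test described above can be sketched in a few lines: pool the two embedding samples, pair all points by a minimum-distance non-bipartite matching, count pairs that span the two samples, and compute an exact p-value from Rosenbaum's null distribution. The brute-force pairing and toy inputs below are illustrative assumptions, not the authors' implementation:

```python
# Hedged sketch of the cross-match two-sample test (Rosenbaum, 2005) as an
# intrinsic comparison of two sets of embedding vectors. The brute-force
# pairing is an assumption for clarity; real use needs a proper
# minimum-weight perfect matching algorithm.
import math
import numpy as np

def _min_pairing(dist, nodes):
    """Brute-force minimum-cost perfect pairing; fine for toy sizes only."""
    if not nodes:
        return 0.0, []
    i, rest = nodes[0], nodes[1:]
    best_cost, best_pairs = math.inf, []
    for k, j in enumerate(rest):
        cost, pairs = _min_pairing(dist, rest[:k] + rest[k + 1:])
        cost += dist[i][j]
        if cost < best_cost:
            best_cost, best_pairs = cost, pairs + [(i, j)]
    return best_cost, best_pairs

def cross_match_pvalue(X, Y):
    """Exact one-sided p-value that samples X, Y share a distribution."""
    Z = np.vstack([X, Y])
    N = len(Z)
    assert N % 2 == 0, "cross-match needs an even total sample size"
    labels = [0] * len(X) + [1] * len(Y)
    dist = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)

    _, pairs = _min_pairing(dist, list(range(N)))
    a1 = sum(1 for i, j in pairs if labels[i] != labels[j])  # cross pairs

    n, m = len(X), len(Y)
    def prob(a):
        # Null probability of exactly `a` cross-matched pairs.
        if (n - a) % 2:
            return 0.0
        a0, b0 = (n - a) // 2, (m - a) // 2
        if a0 < 0 or b0 < 0:
            return 0.0
        return (2 ** a * math.factorial(N // 2)
                / (math.comb(N, n)
                   * math.factorial(a0) * math.factorial(a)
                   * math.factorial(b0)))

    # Few cross-matches = evidence the samples differ, so sum the lower tail.
    return sum(prob(a) for a in range(a1 + 1))
```

Because the null distribution is exact and distribution-free, the p-value needs no assumptions about the embedding space, which is what makes the test usable in high dimensions.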
The Interplay of Semantics and Morphology in Word Embeddings
We explore the ability of word embeddings to capture both semantic and
morphological similarity, as affected by the different types of linguistic
properties (surface form, lemma, morphological tag) used to compose the
representation of each word. We train several models, where each uses a
different subset of these properties to compose its representations. By
evaluating the models on semantic and morphological measures, we reveal some
useful insights on the relationship between semantics and morphology.
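The setup above composes each word's representation from a subset of its linguistic properties. A minimal sketch of one such composition scheme, assuming a simple additive combination and a toy vocabulary (both assumptions, not the paper's exact model):

```python
# Hedged sketch: compose a word vector from subsets of its properties
# (surface form, lemma, morphological tag). The additive scheme and the
# toy lookup tables are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
DIM = 8
# One embedding table per property type (toy data).
tables = {
    "form": {"walked": rng.normal(size=DIM)},
    "lemma": {"walk": rng.normal(size=DIM)},
    "tag": {"VBD": rng.normal(size=DIM)},
}

def compose(props, use):
    """Sum the embeddings of the selected property types."""
    return sum(tables[p][props[p]] for p in use)

word = {"form": "walked", "lemma": "walk", "tag": "VBD"}
full = compose(word, ["form", "lemma", "tag"])   # all properties
lemma_only = compose(word, ["lemma"])            # semantics-leaning subset
```

Training one model per property subset and scoring each on semantic and morphological benchmarks is then what isolates each property's contribution.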
Characterizing the impact of geometric properties of word embeddings on task performance
Analysis of word embedding properties to inform their use in downstream NLP
tasks has largely been studied by assessing nearest neighbors. However,
geometric properties of the continuous feature space contribute directly to the
use of embedding features in downstream models, and are largely unexplored. We
consider four properties of word embedding geometry, namely: position relative
to the origin, distribution of features in the vector space, global pairwise
distances, and local pairwise distances. We define a sequence of
transformations to generate new embeddings that expose subsets of these
properties to downstream models and evaluate change in task performance to
understand the contribution of each property to NLP models. We transform
publicly available pretrained embeddings from three popular toolkits (word2vec,
GloVe, and FastText) and evaluate on a variety of intrinsic tasks, which model
linguistic information in the vector space, and extrinsic tasks, which use
vectors as input to machine learning models. We find that intrinsic evaluations
are highly sensitive to absolute position, while extrinsic tasks rely primarily
on local similarity. Our findings suggest that future embedding models and
post-processing techniques should focus primarily on similarity to nearby
points in vector space.
Comment: Appearing in the Third Workshop on Evaluating Vector Space
Representations for NLP (RepEval 2019). 7 pages + references
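The kind of transformation the abstract describes can be illustrated with two simple cases, mean-centering and uniform scaling; these specific transforms are assumptions for illustration, not the paper's exact definitions. Translation changes absolute position while preserving all pairwise distances, yet it changes origin-based measures such as cosine similarity:

```python
# Illustrative sketch of geometric transformations of an embedding matrix;
# mean-centering and scaling are assumed examples, not the paper's full set.
import numpy as np

rng = np.random.default_rng(0)
E = rng.normal(loc=3.0, scale=1.0, size=(5, 4))  # toy embedding matrix

def pairwise_dists(M):
    """All pairwise Euclidean distances between rows of M."""
    return np.linalg.norm(M[:, None, :] - M[None, :, :], axis=-1)

def cos(u, v):
    """Cosine similarity, which is measured from the origin."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

centered = E - E.mean(axis=0)  # moves the point cloud to the origin
scaled = 2.0 * E               # uniform scaling

# Translation preserves local and global pairwise distances exactly, so
# downstream models that rely on local similarity should be unaffected.
assert np.allclose(pairwise_dists(E), pairwise_dists(centered))

# But cosine similarity depends on absolute position, so intrinsic
# evaluations built on cosine change under translation.
print(cos(E[0], E[1]), cos(centered[0], centered[1]))
```

This separation, distance-preserving versus origin-sensitive, is the intuition behind the finding that intrinsic evaluations track absolute position while extrinsic tasks track local similarity.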