Inferring Concept Hierarchies from Text Corpora via Hyperbolic Embeddings
We consider the task of inferring is-a relationships from large text corpora.
For this purpose, we propose a new method combining hyperbolic embeddings and
Hearst patterns. This approach allows us to set appropriate constraints for
inferring concept hierarchies from distributional contexts while also being
able to predict missing is-a relationships and to correct wrong extractions.
Moreover -- and in contrast with other methods -- the hierarchical nature of
hyperbolic space allows us to learn highly efficient representations and to
improve the taxonomic consistency of the inferred hierarchies. Experimentally,
we show that our approach achieves state-of-the-art performance on several
commonly used benchmarks.
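To make the two ingredients concrete, here is a minimal sketch pairing one classic Hearst pattern with the Poincare distance that hyperbolic embeddings use; the pattern, function names, and toy vectors are illustrative assumptions, not the authors' implementation:

```python
# Illustrative sketch only: one classic Hearst pattern ("X such as Y")
# plus the Poincare distance used by hyperbolic embeddings. Pattern list,
# names, and toy vectors are assumptions, not the authors' implementation.
import re
import numpy as np

SUCH_AS = re.compile(r"(\w+) such as (\w+)")

def extract_is_a(text):
    """Return (hyponym, hypernym) candidates from the 'such as' pattern."""
    return [(m.group(2), m.group(1)) for m in SUCH_AS.finditer(text)]

def poincare_distance(u, v, eps=1e-9):
    """Distance in the Poincare ball; general concepts embed near the
    origin, so norms and distances together encode the hierarchy."""
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq / max(denom, eps))

print(extract_is_a("animals such as dogs"))  # [('dogs', 'animals')]
print(poincare_distance(np.array([0.1, 0.0]), np.array([0.0, 0.5])))
```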
Investigating Multi-source Active Learning for Natural Language Inference
In recent years, active learning has been successfully applied to an array of
NLP tasks. However, prior work often assumes that training and test data are
drawn from the same distribution. This is problematic, as in real-life settings
data may stem from several sources of varying relevance and quality. We show
that four popular active learning schemes fail to outperform random selection
when applied to unlabelled pools comprised of multiple data sources on the task
of natural language inference. We reveal that uncertainty-based strategies
perform poorly due to the acquisition of collective outliers, i.e.,
hard-to-learn instances that hamper learning and generalization. When outliers
are removed, strategies are found to recover and outperform random baselines.
In further analysis, we find that collective outliers vary in form between
sources, and show that hard-to-learn data is not always categorically harmful.
Lastly, we leverage dataset cartography to introduce difficulty-stratified
testing and find that different strategies are affected differently by example
learnability and difficulty.
Comment: 23 pages. Accepted for publication at the European Chapter of the Association for Computational Linguistics (EACL) 2023.
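As a concrete reference point, the sketch below implements least-confidence acquisition over a pooled, multi-source unlabelled set, with an optional mask standing in for the removal of collective outliers; the scoring rule and the mask are illustrative assumptions, not the paper's exact procedure:

```python
# Illustrative sketch only: least-confidence acquisition over a pooled,
# multi-source unlabelled set. The scoring rule and the optional outlier
# mask are assumptions standing in for the strategies the paper analyzes.
import numpy as np

def least_confidence(probs):
    """Uncertainty score: 1 minus the top class probability per example."""
    return 1.0 - probs.max(axis=1)

def acquire(probs, k, outlier_mask=None):
    """Pick the k most uncertain pool examples, optionally skipping
    examples flagged as collective outliers."""
    scores = least_confidence(probs)
    if outlier_mask is not None:
        scores = np.where(outlier_mask, -np.inf, scores)
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(0)
pool_probs = rng.dirichlet(np.ones(3), size=100)  # toy NLI class probabilities
print(acquire(pool_probs, k=10))                  # indices to send for labelling
```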
Anchor Points: Benchmarking Models with Much Fewer Examples
Modern language models often exhibit powerful but brittle behavior, leading
to the development of larger and more diverse benchmarks to reliably assess
their behavior. Here, we suggest that model performance can be benchmarked and
elucidated with much smaller evaluation sets. We first show that in six popular
language classification benchmarks, model confidence in the correct class on
many pairs of points is strongly correlated across models. We build upon this
phenomenon to propose Anchor Point Selection, a technique to select small
subsets of datasets that capture model behavior across the entire dataset.
Anchor points reliably rank models: across 87 diverse language model-prompt
pairs, evaluating models using 1-30 anchor points outperforms uniform sampling
and other baselines at accurately ranking models. Moreover, just a few anchor
points can be used to estimate a model's per-class predictions on all other points
in a dataset with low mean absolute error, sufficient for gauging where the
model is likely to fail. Lastly, we present Anchor Point Maps for visualizing
these insights and facilitating comparisons of the performance of different
models on various regions within the dataset distribution.
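For a concrete reading of how such a subset might be chosen, the sketch below represents each example by its correct-class confidence across models, clusters those vectors, and keeps the example nearest each cluster center; the k-means-style procedure is an assumption for illustration, not necessarily the paper's exact algorithm:

```python
# Illustrative sketch only: pick anchor examples by clustering per-example
# confidence vectors across models and keeping one representative per
# cluster. The k-means-style selection is an assumption, not the paper's
# exact algorithm.
import numpy as np

def select_anchors(conf, k, iters=50, seed=0):
    """conf: (n_examples, n_models) correct-class confidences.
    Returns indices of anchor examples closest to cluster centers."""
    rng = np.random.default_rng(seed)
    centers = conf[rng.choice(len(conf), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(conf[:, None] - centers[None], axis=2)
        assign = d.argmin(axis=1)
        centers = np.stack([
            conf[assign == c].mean(axis=0) if np.any(assign == c) else centers[c]
            for c in range(k)
        ])
    d = np.linalg.norm(conf[:, None] - centers[None], axis=2)
    return np.unique(d.argmin(axis=0))  # one representative example per center

rng = np.random.default_rng(1)
conf = rng.random((200, 8))      # 200 examples scored by 8 models
print(select_anchors(conf, k=5))  # small subset spanning model behavior
```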