SemEval-2016 Task 13: Taxonomy Extraction Evaluation (TExEval-2)
This paper describes the second edition of the shared task on Taxonomy Extraction Evaluation (TExEval-2), organised as part of SemEval 2016. The task is to extract hypernym-hyponym relations between terms from a given list of domain-specific terms and then to construct a domain taxonomy based on them. TExEval-2 introduced a multilingual setting, covering four languages (English, Dutch, Italian and French) and domains as diverse as environment, food and science. A total of 62 runs submitted by 5 different teams were evaluated using structural measures, by comparison with gold standard taxonomies, and by manual quality assessment of novel relations.
Funding: Science Foundation Ireland (SFI) under Grant Number SFI/12/RC/2289 (INSIGHT).
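A minimal sketch of taxonomy construction from a flat term list helps make the task concrete. The heuristic below is the simple compound/substring baseline that TExEval systems struggled to beat: a multi-word term is attached as a hyponym of the longest other term it ends with. The term list is an illustrative assumption, not data from the task.

```python
# Toy term list (illustrative, not the TExEval data).
terms = ["science", "computer science", "food", "fast food",
         "restaurant", "fast food restaurant"]

edges = set()  # (hyponym, hypernym) pairs forming the taxonomy
for t in terms:
    # Attach t to the longest other term that t ends with, if any.
    heads = [h for h in terms if h != t and t.endswith(" " + h)]
    if heads:
        edges.add((t, max(heads, key=len)))

print(sorted(edges))
# [('computer science', 'science'), ('fast food', 'food'),
#  ('fast food restaurant', 'restaurant')]
```

The resulting edge set can then be scored structurally (e.g. connectedness, cycles) or compared edge-by-edge against a gold taxonomy, as in the evaluation described above.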
Unsupervised Sense-Aware Hypernymy Extraction
In this paper, we show how unsupervised sense representations can be used to
improve hypernymy extraction. We present a method for extracting disambiguated
hypernymy relationships that propagates hypernyms to sets of synonyms
(synsets), constructs embeddings for these sets, and establishes sense-aware
relationships between matching synsets. Evaluation on two gold standard
datasets for English and Russian shows that the method successfully recognizes
hypernymy relationships that cannot be found with standard Hearst patterns and
Wiktionary datasets for the respective languages.
Comment: In Proceedings of the 14th Conference on Natural Language Processing (KONVENS 2018). Vienna, Austria.
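The pipeline described above can be sketched in a few lines: word-level hypernym extractions are propagated to synsets, each synset gets an embedding (here simply the mean of its members' vectors), and a hypernym word is resolved to the candidate synset whose embedding is closest to the source synset. All synsets, vectors, and extractions below are toy assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(3)
vec = {w: rng.normal(size=8) for w in
       ["jaguar", "panther", "leopard", "cat", "feline"]}

synsets = {
    "big_cat": {"jaguar", "panther", "leopard"},
    "feline":  {"cat", "feline"},
}
# Noisy word-level hypernym extractions (e.g. from Hearst patterns).
word_hypernyms = {"jaguar": {"cat"}, "panther": {"feline"}}

def synset_embedding(members):
    # Synset embedding = mean of member word vectors.
    return np.mean([vec[w] for w in members], axis=0)

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

emb_s = {sid: synset_embedding(m) for sid, m in synsets.items()}

links = set()
for sid, members in synsets.items():
    # Propagate: a synset inherits hypernyms from all of its members.
    hypers = set().union(*(word_hypernyms.get(w, set()) for w in members))
    for h in hypers:
        # Sense-aware step: resolve the hypernym word to the synset
        # whose embedding is closest to the source synset.
        candidates = [t for t, m in synsets.items() if h in m and t != sid]
        if candidates:
            best = max(candidates, key=lambda t: cos(emb_s[sid], emb_s[t]))
            links.add((sid, best))

print(links)  # {('big_cat', 'feline')}
```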
A supervised approach to taxonomy extraction using word embeddings
Large collections of texts are commonly generated by large organizations, and making sense of these collections is a significant challenge. One way to handle this is to organize the concepts into a hierarchical structure so that similar concepts can be discovered and easily browsed. This approach was the subject of a recent evaluation campaign, TExEval; however, the results of this task showed that none of the systems consistently outperformed a relatively simple baseline. To address this issue, we propose a new method that uses supervised learning to combine multiple features, including the baseline features, with a support vector machine classifier. We show that this outperforms the baseline and thus provides a stronger method for identifying taxonomic relations than previous methods.
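The idea of combining the strong baseline with embedding features in an SVM can be sketched as follows. The feature set here (a substring-baseline indicator plus cosine similarity of word embeddings) and all terms, vectors, and labels are illustrative assumptions, not the paper's actual feature set or data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Toy embeddings; real systems would use pretrained vectors.
emb = {t: rng.normal(size=8) for t in
       ["science", "computer science", "biology", "fruit", "apple"]}

def features(hyper, hypo):
    # Feature 1: the substring heuristic that served as the strong baseline.
    substring = float(hyper in hypo)
    # Feature 2: cosine similarity of the two term embeddings.
    cos = emb[hyper] @ emb[hypo] / (
        np.linalg.norm(emb[hyper]) * np.linalg.norm(emb[hypo]))
    return [substring, cos]

pairs = [("science", "computer science"), ("fruit", "apple"),
         ("biology", "fruit"), ("apple", "science")]
y = [1, 1, 0, 0]  # gold: does the first term subsume the second?

clf = SVC(kernel="linear").fit([features(a, b) for a, b in pairs], y)
pred = clf.predict([features("science", "computer science")])
print(pred)
```

The point of the supervised combination is that the classifier can learn when to trust the substring signal and when the distributional evidence should override it.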
Improving Hypernymy Extraction with Distributional Semantic Classes
In this paper, we show how distributionally-induced semantic classes can be
helpful for extracting hypernyms. We present methods for inducing sense-aware
semantic classes using distributional semantics and using these induced
semantic classes for filtering noisy hypernymy relations. Denoising of
hypernyms is performed by labeling each semantic class with its hypernyms. On
the one hand, this allows us to filter out wrong extractions using the global
structure of distributionally similar senses. On the other hand, we infer
missing hypernyms via label propagation to cluster terms. We conduct a
large-scale crowdsourcing study showing that processing of automatically
extracted hypernyms using our approach improves the quality of the hypernymy
extraction in terms of both precision and recall. Furthermore, we show the
utility of our method in the domain taxonomy induction task, achieving the
state-of-the-art results on a SemEval'16 task on taxonomy induction.
Comment: In Proceedings of the 11th Conference on Language Resources and Evaluation (LREC 2018). Miyazaki, Japan.
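The denoising step described above can be illustrated with a toy example: members of an induced semantic class vote on a class-level hypernym label, which is then used both to filter inconsistent extractions and to propagate the label to members with no extraction. The terms, extractions, and majority-vote rule below are simplifying assumptions.

```python
from collections import Counter

# Noisy hypernym extractions: term -> hypernyms from Hearst-style patterns.
extracted = {
    "apple":  ["fruit", "company"],   # "company" is a wrong extraction here
    "mango":  ["fruit"],
    "cherry": ["fruit", "color"],
    "plum":   [],                     # missing extraction
}
semantic_class = ["apple", "mango", "cherry", "plum"]  # one induced class

# Label the class by majority vote over its members' hypernyms.
counts = Counter(h for t in semantic_class for h in extracted[t])
label, _ = counts.most_common(1)[0]   # -> "fruit"

# Filter: keep only hypernyms consistent with the class label;
# propagate: assign the label to members left with no hypernym.
cleaned = {t: [h for h in extracted[t] if h == label] or [label]
           for t in semantic_class}
print(cleaned)
# {'apple': ['fruit'], 'mango': ['fruit'],
#  'cherry': ['fruit'], 'plum': ['fruit']}
```

This mirrors the two effects claimed in the abstract: wrong extractions ("company", "color") are filtered using the global class structure, and the missing hypernym for "plum" is recovered by propagation.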
The CogALex-V Shared Task on the Corpus-Based Identification of Semantic Relations
The shared task of the 5th Workshop on Cognitive Aspects of the Lexicon (CogALex-V) aims at providing a common benchmark for testing current corpus-based methods for the identification of lexical semantic relations (synonymy, antonymy, hypernymy, part-whole meronymy) and at gaining a better understanding of their respective strengths and weaknesses. The shared task uses a challenging dataset extracted from EVALution 1.0 (Santus et al., 2015b), which contains word pairs holding the above-mentioned relations as well as semantically unrelated control items (random). The task is split into two subtasks: (i) identification of related word pairs vs. unrelated ones; (ii) classification of the word pairs according to their semantic relation. This paper describes the subtasks, the dataset, the evaluation metrics, the seven participating systems and their results. The best performing system in subtask 1 is GHHH (F1 = 0.790), while the best system in subtask 2 is LexNet (F1 = 0.445). The dataset and the task description are available at https://sites.google.com/site/cogalex2016/home/shared-task
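The two-subtask setup can be sketched with a generic pairwise classifier. The pair-feature choice (concatenating the two word vectors with their difference), the toy embeddings, and the use of logistic regression are illustrative assumptions, not any participating system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Toy word embeddings; real systems use corpus-derived vectors.
emb = {w: rng.normal(size=8)
       for w in ["dog", "animal", "cat", "hot", "cold", "car"]}

def pair_features(w1, w2):
    # A common pairwise representation: both vectors plus their difference.
    return np.concatenate([emb[w1], emb[w2], emb[w1] - emb[w2]])

pairs = [("dog", "animal"), ("hot", "cold"), ("dog", "cat"), ("dog", "car")]
rel = ["hypernymy", "antonymy", "synonymy", "random"]  # gold labels

X = np.stack([pair_features(a, b) for a, b in pairs])

# Subtask 1: related vs. unrelated (binary decision).
y1 = [lab != "random" for lab in rel]
clf1 = LogisticRegression(max_iter=1000).fit(X, y1)

# Subtask 2: classify the specific relation (multi-class).
clf2 = LogisticRegression(max_iter=1000).fit(X, rel)
print(clf2.predict(X))
```

The large gap between the best F1 scores (0.790 vs. 0.445) reflects how much harder the multi-class subtask is than the binary one.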
Meemi: A Simple Method for Post-processing and Integrating Cross-lingual Word Embeddings
Word embeddings have become a standard resource in the toolset of any Natural
Language Processing practitioner. While monolingual word embeddings encode
information about words in the context of a particular language, cross-lingual
embeddings define a multilingual space where word embeddings from two or more
languages are integrated together. Current state-of-the-art approaches learn
these embeddings by aligning two disjoint monolingual vector spaces through an
orthogonal transformation which preserves the structure of the monolingual
counterparts. In this work, we propose to apply an additional transformation
after this initial alignment step, which aims to bring the vector
representations of a given word and its translations closer to their average.
Since this additional transformation is non-orthogonal, it also affects the
structure of the monolingual spaces. We show that our approach both improves
the integration of the monolingual spaces as well as the quality of the
monolingual spaces themselves. Furthermore, because our transformation can be
applied to an arbitrary number of languages, we are able to effectively obtain
a truly multilingual space. The resulting (monolingual and multilingual) spaces
show consistent gains over the current state-of-the-art in standard intrinsic
tasks, namely dictionary induction and word similarity, as well as in extrinsic
tasks such as cross-lingual hypernym discovery and cross-lingual natural
language inference.
Comment: 22 pages, 2 figures, 9 tables. Preprint submitted to Natural Language Engineering.
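The two-step idea can be sketched numerically: an orthogonal Procrustes alignment of two toy embedding spaces, followed by an unconstrained least-squares map that pulls each aligned word and its translation toward their average. The dimensions, noise level, and least-squares formulation below are simplifying assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))                  # source-language embeddings
W_true = np.linalg.qr(rng.normal(size=(4, 4)))[0]
Y = X @ W_true + 0.05 * rng.normal(size=(100, 4))  # target-language embeddings

# Step 1: orthogonal alignment (Procrustes):
# W = argmin ||X W - Y||_F subject to W^T W = I.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt
X_aligned = X @ W

# Step 2: Meemi-style refinement: map both (aligned) spaces toward the
# average of each translation pair with unconstrained linear transforms.
# Being non-orthogonal, these maps also reshape the monolingual spaces.
A = (X_aligned + Y) / 2
Mx, *_ = np.linalg.lstsq(X_aligned, A, rcond=None)
My, *_ = np.linalg.lstsq(Y, A, rcond=None)
X_final, Y_final = X_aligned @ Mx, Y @ My

print(np.linalg.norm(X_aligned - Y))      # residual after orthogonal step
print(np.linalg.norm(X_final - Y_final))  # smaller after the averaging step
```

Because the second transform is ordinary least squares rather than a rotation, the translation pairs end up closer together than the orthogonal step alone allows, which is the integration gain the abstract reports.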