Taking antonymy mask off in vector space
Automatic detection of antonymy is an important task in Natural Language Processing (NLP) for Information Retrieval (IR), Ontology Learning (OL) and many other semantic applications. However, current unsupervised approaches to antonymy detection are still not fully effective because they cannot discriminate antonyms from synonyms. In this paper, we introduce APAnt, a new Average-Precision-based measure for the unsupervised discrimination of antonymy from synonymy using Distributional Semantic Models (DSMs). APAnt uses Average Precision to estimate the extent and salience of the intersection between the most descriptive contexts of two target words. Evaluation shows that the proposed method distinguishes antonyms from synonyms with high accuracy across different parts of speech, including nouns, adjectives and verbs. APAnt outperforms the vector cosine and a baseline model implementing the co-occurrence hypothesis.
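The core idea of the abstract — scoring how strongly two words share their most descriptive contexts, with Average Precision rewarding shared contexts that sit near the top of each ranked list — can be sketched as follows. The function name and the toy context lists are illustrative assumptions, not taken from the APAnt implementation.

```python
def ap_context_overlap(ranked_contexts_a, ranked_contexts_b, k=3):
    """Average Precision of word A's ranked contexts, treating word B's
    top-k contexts as the 'relevant' set: shared contexts appearing
    early in A's ranking contribute more to the score."""
    relevant = set(ranked_contexts_b[:k])
    hits, precisions = 0, []
    for rank, ctx in enumerate(ranked_contexts_a, start=1):
        if ctx in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant) if relevant else 0.0

# Toy data: near-synonyms tend to share salient contexts; unrelated or
# opposed words share fewer, and those they share rank lower.
quick = ["fast", "speedy", "runner", "pace"]
rapid = ["fast", "pace", "speedy", "growth"]
slow  = ["snail", "gradual", "pace", "lazy"]
```

On this toy data, `ap_context_overlap(quick, rapid)` is high (three shared contexts, two of them at the top of `quick`'s ranking) while `ap_context_overlap(quick, slow)` is low — the ordering the measure is designed to exploit.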
Beyond definition: Organising semantic information in bilingual dictionaries
This paper considers the process of organising semantic information in bilingual dictionaries with diachronic coverage, from selecting the textual source-material to designing the entries. The discussion centres on practical aspects of ancient Greek lexicography. First, the traditional semantic frameworks are described. Then, more recent approaches are noted, notably those of Adrados and of Chadwick, both of which aim to integrate contextual data within a semantic framework. Since the relevance of contextual information varies with the part of speech of the lemma, different configurations are required for entries describing nouns, adjectives, and verbs. These are illustrated by three entries from a Greek-English dictionary currently being written at Cambridge. In order to organise data to this level of specificity, stylistic templates are indispensable, and digital software provides a means of supplying them. However, systems designed for writing new dictionaries require different features from those designed for encoding pre-existing texts. A description is given of how the lexicographic requirements of the Cambridge dictionary were met by a user-designed system.
A comparison of collocations and word associations in Estonian from the perspective of parts of speech
The paper provides a comparative study of collocational and associative structures in Estonian with respect to the role of parts of speech. The lists of collocations and associations for an equal set of nouns, verbs and adjectives, drawn from the respective dictionaries, are analysed to establish the range of both coincidences and differences. The results show a moderate overlap, the largest of which occurs between adjectival associates and collocates. Nouns predominate overall among the associated and collocated items. The coincidental sets of relations are tentatively explained by the influence of grammatical relations, i.e. the patterns of local grammar binding together the collocations and motivating the associations. The results are discussed with respect to possible reasons for the association-collocation mismatch and in relation to the application of these findings in lexicography and second language acquisition.
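The list comparison the abstract describes — measuring how far a word's collocates and associates coincide — reduces, in its simplest form, to a set-overlap ratio computed per part of speech. The helper and the Estonian-flavoured word lists below are a hypothetical sketch, not the study's actual material or method.

```python
def overlap_ratio(collocates, associates):
    """Jaccard-style overlap between a word's collocate and associate
    lists: shared items divided by all distinct items in either list."""
    c, a = set(collocates), set(associates)
    return len(c & a) / len(c | a) if c | a else 0.0

# Hypothetical example for the adjective 'suur' (big): two of the six
# distinct items are shared, giving a moderate overlap of 1/3.
collocates = ["maja", "osa", "hulk", "vahe"]
associates = ["maja", "väike", "suurus", "osa"]
```

Averaging such ratios separately over nouns, verbs and adjectives would yield the kind of per-POS overlap comparison the study reports.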
Enhancing Word Embeddings with Knowledge Extracted from Lexical Resources
In this work, we present an effective method for the semantic specialization of word vector representations. To this end, we take traditional word embeddings and apply specialization methods to better capture semantic relations between words. In our approach, we leverage external knowledge from rich lexical resources such as BabelNet. We also show that our proposed post-specialization method, based on an adversarial neural network with the Wasserstein distance, yields improvements over state-of-the-art methods on two tasks: word similarity and dialog state tracking.
Comment: Accepted to ACL 2020 SR
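The general attract/repel idea behind embedding specialization — pulling vectors of words linked by a lexical resource together and pushing vectors of opposed words apart — can be illustrated with a deliberately simplified, gradient-free sketch. This is not the paper's adversarial Wasserstein post-specialization; the function, learning rate and toy vectors are all assumptions for illustration.

```python
import numpy as np

def specialize(vecs, synonyms, antonyms, lr=0.1, steps=10):
    """Simplified attract/repel specialization: nudge synonym vectors
    toward each other and antonym vectors apart, renormalising to the
    unit sphere after every step."""
    v = {w: np.array(x, dtype=float) for w, x in vecs.items()}
    for _ in range(steps):
        for a, b in synonyms:          # attract: move the pair closer
            delta = v[b] - v[a]
            v[a] += lr * delta
            v[b] -= lr * delta
        for a, b in antonyms:          # repel: move the pair apart
            delta = v[b] - v[a]
            v[a] -= lr * delta
            v[b] += lr * delta
        for w in v:                    # keep vectors unit-length
            v[w] /= np.linalg.norm(v[w]) or 1.0
    return v

def cosine(x, y):
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

# Toy embeddings where the antonym pair starts out *closer* than the
# synonym pair -- the situation specialization is meant to correct.
vecs = {"good": [1.0, 0.0], "great": [0.6, 0.8], "bad": [0.8, 0.6]}
out = specialize(vecs, synonyms=[("good", "great")], antonyms=[("good", "bad")])
```

After specialization, `cosine(out["good"], out["great"])` rises above its initial 0.6 while `cosine(out["good"], out["bad"])` falls below its initial 0.8, narrowing exactly the synonym/antonym confusion that purely distributional vectors exhibit.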
Distinguishing Antonyms and Synonyms in a Pattern-based Neural Network
Distinguishing between antonyms and synonyms is a key task for achieving high performance in NLP systems. While the two relations are notoriously difficult to distinguish with distributional co-occurrence models, pattern-based methods have proven effective at differentiating between them. In this paper, we present AntSynNET, a novel neural network model that exploits lexico-syntactic patterns from syntactic parse trees. In addition to the lexical and syntactic information, we successfully integrate the distance between the related words along the syntactic path as a new pattern feature. The results from classification experiments show that AntSynNET improves the performance over prior pattern-based methods.
Comment: EACL 2017, 10 pages
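The "distance between the related words along the syntactic path" feature can be illustrated on a parse tree represented as a child-to-parent map. The tree, token names and helper below are invented for the example; AntSynNET itself extracts full lexico-syntactic patterns from parsed corpora.

```python
def syntactic_path(parents, a, b):
    """Path between tokens a and b in a parse tree given as a
    child -> parent map (the root maps to None); assumes both
    tokens lie in the same tree."""
    # Collect a's chain of ancestors up to the root.
    chain, node = [], a
    while node is not None:
        chain.append(node)
        node = parents.get(node)
    # Climb from b until the lowest common ancestor is reached.
    tail, node = [], b
    while node not in chain:
        tail.append(node)
        node = parents[node]
    lca = node
    return chain[:chain.index(lca) + 1] + tail[::-1]

# Toy parse of "hot is as cold": the path feature is the number of
# edges separating the two related words along the tree.
parents = {"hot": "is", "cold": "as", "as": "is", "is": None}
path = syntactic_path(parents, "hot", "cold")
distance = len(path) - 1
```

Here `path` is `["hot", "is", "as", "cold"]` and `distance` is 3; encoding that integer alongside the lexical and syntactic pattern is the kind of extra feature the abstract describes.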
Derivational morphology in the German mental lexicon: A dual mechanism account
The Dual Mechanism Model posits two different cognitive mechanisms for morphologically complex word forms: decomposition of regulars into stems and exponents, and full-form storage for irregulars. Most of the research in this framework has focused on contrasts between productive and non-productive inflection. In this paper, we extend the model to derivational morphology. Our studies indicate that productive derivation shows affinities with both productive and non-productive inflection. We argue that these results support the linguistic distinction between derivation and inflection, particularly as it is represented in realization-based models of morphology.
Signals of contrastiveness: but, oppositeness and formal similarity in parallel contexts
By examining contexts in which ‘emergent’ oppositions appear, we consider the relative contribution of formal parallelism, connective type and semantic relation (considered as an indicator of relative semantic parallelism) in generating contrast. The data set is composed of cases of ancillary antonymy – the use of an established antonym pair to help support and/or accentuate contrast between a less established pair. Having devised measures for formal and semantic parallelism, we find that but is less likely to appear in contexts with high levels of formal parallelism than non-contrastive connectives such as and or punctuation. With respect to semantic parallelism, we find that contrastive connectives are less likely to occur with pairs that stand in traditional paradigmatic relations (‘NYM relations’: antonymy, co-hyponymy, synonymy). The paper’s main hypothesis – that non-paradigmatic relations need more contextual sustenance for their opposition – was therefore supported. Indeed, pairs in NYM relations were found to be more than twice as likely to be joined by a non-contrastive connective as by a contrastive one.
Paradigms regained
The volume discusses the breadth of applications for an extended notion of paradigm. Paradigms in this sense are not only tools of morphological description but constitute the inherent structure of grammar. Grammatical paradigms are structural sets forming holistic, semiotic structures with an informational value of their own. We argue that as such, paradigms are a part of speaker knowledge and provide necessary structuring for grammaticalization processes. The papers discuss theoretical as well as conceptual questions and explore different domains of grammatical phenomena, ranging from grammaticalization, morphology, and cognitive semantics to modality, aiming to illustrate what the concept of grammatical paradigms can and cannot (yet) explain.
Theoretical and empirical arguments for the reassessment of the notion of paradigm