Morphological word structure in English and Swedish: the evidence from prosody
Trubetzkoy's recognition of a delimitative function of phonology, serving to signal boundaries between morphological units, is expressed in terms of alignment constraints in Optimality Theory, where the relevant constraints require specific morphological boundaries to coincide with phonological structure (Trubetzkoy 1936, 1939; McCarthy & Prince 1993). The approach pursued in the present article is to investigate the distribution of phonological boundary signals to gain insight into the criteria underlying morphological analysis. The evidence from English and Swedish suggests that necessary and sufficient conditions for word-internal morphological analysis concern the recognizability of head constituents, which include the rightmost members of compounds and head affixes. The claim is that the stability of word-internal boundary effects in historical perspective cannot in general be sufficiently explained in terms of memorization and imitation of phonological word form. Rather, these effects indicate a morphological parsing mechanism based on the recognition of word-internal head constituents. Head affixes can be shown to contrast systematically with modifying affixes with respect to syntactic function, semantic content, and prosodic properties. That is, head affixes, which cannot be omitted, often lack inherent meaning and have relatively unmarked boundaries, which can be obscured entirely under specific phonological conditions. By contrast, modifying affixes, which can be omitted, consistently have inherent meaning and have stronger boundaries, which resist prosodic fusion in all phonological contexts. While these correlations are hardly specific to English and Swedish, it remains to be investigated to what extent they hold cross-linguistically. The observation that some of the constituents identified on the basis of prosodic evidence lack inherent meaning raises the issue of compositionality.
I will argue that certain systematic aspects of word meaning cannot be captured with reference to the syntagmatic level, but require reference to the paradigmatic level instead. The assumption is then that there are two dimensions of morphological analysis: syntagmatic analysis, which centers on the criteria for decomposing words in terms of labelled constituents, and paradigmatic analysis, which centers on the criteria for establishing relations among (whole) words in the mental lexicon. While meaning is intrinsically connected with paradigmatic analysis (e.g. base relations, oppositeness), it is not essential to syntagmatic analysis.
Taking antonymy mask off in vector space
Automatic detection of antonymy is an important task in Natural Language Processing (NLP) for Information Retrieval (IR), Ontology Learning (OL) and many other semantic applications. However, current unsupervised approaches to antonymy detection are still not fully effective because they cannot discriminate antonyms from synonyms. In this paper, we introduce APAnt, a new Average-Precision-based measure for the unsupervised discrimination of antonymy from synonymy using Distributional Semantic Models (DSMs). APAnt makes use of Average Precision to estimate the extent and salience of the intersection among the most descriptive contexts of two target words. Evaluation shows that the proposed method is able to distinguish antonyms and synonyms with high accuracy across different parts of speech, including nouns, adjectives and verbs. APAnt outperforms the vector cosine and a baseline model implementing the co-occurrence hypothesis.
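The intuition the abstract describes, scoring the overlap between the most descriptive contexts of two target words with Average Precision, can be illustrated with a toy sketch. Everything below (the ranked context lists, the symmetrised scoring function, all names) is invented for illustration and is not the paper's actual APAnt formula or data.

```python
# Toy sketch of an Average-Precision-style overlap measure over each
# word's most salient contexts. Under the intuition described above,
# near-synonyms should share more of their top contexts than antonyms.

def average_precision(relevant, ranked):
    """Standard average precision of the list `ranked` against the set `relevant`."""
    hits, score = 0, 0.0
    for i, ctx in enumerate(ranked, start=1):
        if ctx in relevant:
            hits += 1
            score += hits / i
    return score / len(relevant) if relevant else 0.0

def context_overlap_score(contexts_a, contexts_b):
    """Symmetrised AP over two words' ranked context lists (an assumption,
    not the published APAnt definition)."""
    ap_ab = average_precision(set(contexts_b), contexts_a)
    ap_ba = average_precision(set(contexts_a), contexts_b)
    return (ap_ab + ap_ba) / 2

# Invented ranked context lists (most salient first), purely illustrative.
hot_ctxs  = ["weather", "water", "summer", "coffee"]
cold_ctxs = ["weather", "winter", "water", "ice"]
warm_ctxs = ["weather", "water", "summer", "blanket"]

print(context_overlap_score(hot_ctxs, cold_ctxs))  # antonym-like pair
print(context_overlap_score(hot_ctxs, warm_ctxs))  # synonym-like pair
```

On this toy data the synonym-like pair scores a higher overlap than the antonym-like pair, which is the direction of the effect the abstract reports.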
Derivational morphology in the German mental lexicon: A dual mechanism account
The Dual Mechanism Model posits two different cognitive mechanisms for morphologically complex word forms: decomposition of regulars into stems and exponents, and full-form storage for irregulars. Most of the research in this framework has focused on contrasts between productive and non-productive inflection. In this paper, we extend the model to derivational morphology. Our studies indicate that productive derivation shows affinities with both productive and non-productive inflection. We argue that these results support the linguistic distinction between derivation and inflection, particularly as it is represented in realization-based models of morphology.
Distinguishing Antonyms and Synonyms in a Pattern-based Neural Network
Distinguishing between antonyms and synonyms is a key task to achieve high performance in NLP systems. While they are notoriously difficult to distinguish by distributional co-occurrence models, pattern-based methods have proven effective to differentiate between the relations. In this paper, we present a novel neural network model, AntSynNET, that exploits lexico-syntactic patterns from syntactic parse trees. In addition to the lexical and syntactic information, we successfully integrate the distance between the related words along the syntactic path as a new pattern feature. The results from classification experiments show that AntSynNET improves the performance over prior pattern-based methods. Comment: EACL 2017, 10 pages
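The "distance between the related words along the syntactic path" feature can be read as counting the edges between two words in a dependency parse. The sketch below illustrates that reading on a hand-built toy parse; the head map, token indices, and function are assumptions for illustration, not AntSynNET's actual pipeline.

```python
# Toy sketch: syntactic path length between two tokens in a dependency
# tree, given as a head map {child_index: head_index}, root's head = -1.

def path_length(heads, i, j):
    """Number of edges on the path from token i to token j,
    via their lowest common ancestor."""
    def ancestors(k):
        chain = [k]
        while heads[k] != -1:
            k = heads[k]
            chain.append(k)
        return chain
    anc_i, anc_j = ancestors(i), ancestors(j)
    depth_j = {tok: d for d, tok in enumerate(anc_j)}
    for d_i, tok in enumerate(anc_i):
        if tok in depth_j:  # lowest common ancestor reached
            return d_i + depth_j[tok]
    return None  # disconnected (should not happen in a well-formed tree)

# Invented toy parse for "hot and cold weather":
# 0=hot 1=and 2=cold 3=weather; "cold" and "and" attach to "hot",
# "hot" attaches to the root "weather".
heads = {0: 3, 1: 0, 2: 0, 3: -1}
print(path_length(heads, 0, 2))  # edges between "hot" and "cold"
```

In a real system this count would be computed over parser output and fed to the classifier alongside the lexical and syntactic pattern features the abstract mentions.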
Compounds and multi-word expressions in Dutch
Part of book or chapter of book
- …