
    Dynamic verbs in the Wordnet of Polish

    The paper presents patterns of co-occurrence of wordnet relations involving verb lexical units in plWordNet, a large wordnet of Polish. The discovered patterns reveal tendencies of selected synset and lexical relations to form regular circular structures with clear semantic interpretations. They involve several types of relations, e.g., presupposition, cause, processuality and antonymy. These co-occurrences are not obligatory (there are exceptions), but they can be used in wordnet diagnostics and in guidelines for wordnet editors. The analysis is illustrated with numerous positive and negative examples, as well as statistics for verb relations in plWordNet 4.0 emo. Some attempts at a more general, linguistic explanation of the observed phenomena are also made. As background, the linguistic model underlying plWordNet is briefly recalled, with special attention given to the verb part. In addition, the description of dynamic verbs by relations and features is discussed in detail, including relation definitions and substitution tests.
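The circular structures described above can be pictured as sequences of relation labels that return to their starting synset. A minimal, hypothetical sketch of how such a pattern might be detected in a relation graph (the relation names and the zasnac/spac example are illustrative toy data, not actual plWordNet content or tooling):

```python
from collections import defaultdict

def find_relation_cycles(edges, pattern):
    """Return node paths whose consecutive links carry the relation
    labels in `pattern` and end where they started (a circular structure)."""
    by_rel = defaultdict(list)
    for src, rel, dst in edges:
        by_rel[rel].append((src, dst))
    # seed with all edges carrying the first relation in the pattern
    paths = [[s, d] for s, d in by_rel[pattern[0]]]
    # extend each path by one edge per remaining relation label
    for rel in pattern[1:]:
        paths = [p + [d] for p in paths for s, d in by_rel[rel] if s == p[-1]]
    # keep only paths that close the cycle
    return [p for p in paths if p[0] == p[-1]]

# Toy data: 'zasnac' (fall asleep) causes 'spac' (sleep),
# which in turn presupposes 'zasnac'.
edges = [
    ("zasnac", "cause", "spac"),
    ("spac", "presupposition", "zasnac"),
    ("spac", "hyponymy", "drzemac"),
]
print(find_relation_cycles(edges, ["cause", "presupposition"]))
```

A pattern such as cause followed by presupposition closing back on the starting unit is exactly the kind of regular structure the diagnostics could flag, while the exceptions mentioned above would simply appear as open (non-returning) paths.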

    Towards Semantic Validation of a Derivational Lexicon

    Derivationally related lemmas like friend (N) - friendly (A) - friendship (N) are derived from a common stem. Frequently, their meanings are also systematically related. However, there are also many examples of derivationally related lemma pairs whose meanings differ substantially, e.g., object (N) - objective (N). Most broad-coverage derivational lexicons do not reflect this distinction, mixing up semantically related and unrelated word pairs. In this paper, we investigate strategies to recover the above distinction by recognizing semantically related lemma pairs, a process we call semantic validation. We make two main contributions: first, we perform a detailed data analysis on the basis of a large German derivational lexicon. It reveals two promising sources of information (distributional semantics and structural information about derivational rules), but also systematic problems with these sources. Second, we develop a classification model for the task that reflects the noisy nature of the data. It achieves an improvement of 13.6% in precision and 5.8% in F1-score over a strong majority-class baseline. Our experiments confirm that both information sources contribute to semantic validation, and that they are complementary enough that the best results are obtained from a combined model.
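The combination of distributional and structural evidence described above can be sketched, in a deliberately simplified form, as a weighted score over two signals. All numbers, rule identifiers and the fixed linear decision rule below are invented for illustration; the paper's actual classifier is a trained model over richer features:

```python
# Hypothetical sketch of "semantic validation": deciding whether a
# derivationally related lemma pair is also semantically related, by
# combining a distributional similarity score with a per-rule prior.

def rule_prior(rule, priors, default=0.5):
    """How often pairs produced by this derivation rule tend to be
    semantically transparent (structural evidence)."""
    return priors.get(rule, default)

def validate_pair(cos_sim, rule, priors, weight=0.5, threshold=0.5):
    """Linear combination of distributional and structural evidence;
    returns True if the pair is predicted semantically related."""
    score = weight * cos_sim + (1 - weight) * rule_prior(rule, priors)
    return score >= threshold

# Made-up rule identifiers and priors, for illustration only.
priors = {"N+lich->A": 0.9, "N+iv->A": 0.4}

# Freund -> freundlich: high similarity, reliable rule -> accept.
print(validate_pair(0.7, "N+lich->A", priors))
# Objekt -> objektiv: low similarity, unreliable rule -> reject.
print(validate_pair(0.1, "N+iv->A", priors))
```

The point of the sketch is the complementarity noted in the abstract: either signal alone misclassifies some pairs, so the decision combines both.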

    The -ing suffix in French

    One striking characteristic of modern French is the increasingly large number of words that contain the English -ing suffix. This phenomenon stands in contrast to the stereotype of the French as purists with regard to language choice and use. Indeed, there is a variety of evidence that this suffix has been integrated into French as a productive derivational suffix, and does not simply occur as an accident resulting from the borrowing of English words that happen to include it. Though many studies have been carried out on loanwords in French, and some have paid specific attention to the importation of -ing into French, none has yet focused solely on the -ing suffix. This paper considers four major ways in which the suffix has been integrated into French grammatical structure: phonological, morphological, syntactic, and semantic. It is based on a corpus of approximately 730 French words containing -ing, of which a subset of individual words was studied intensively in their use on the internet. Words containing -ing are categorized according to a typology that distinguishes loanwords from native creations. This distinction highlights the use of -ing words in French as instances of a very productive process of borrowing from English, heavily integrated into French in all four areas mentioned above. In addition, the suffix appears to be acquiring the status of an independent morpheme, with both a derivational use as a nominalizer and an inflectional use to create participles. As a side effect, the velar nasal [ŋ] has entered the inventory of French phonemes. The suffix's infiltration into French grammar is not uniform: there is, for example, a tendency toward greater use in connection with modern trends and hip culture, as well as in certain functions within the clause. This uneven penetration sheds light on patterns of language change and will be useful in documenting a snapshot of current usage as the suffix continues to make its way further into the language.

    Induction, Semantic Validation and Evaluation of a Derivational Morphology Lexicon for German

    This thesis is about computational morphology for German derivation. Derivation is a word formation process that creates new words from existing ones, where the base and the derived word share the same stem. Mostly, derivation is conducted by means of relatively regular affixation rules, as in to bake - bakery. In German, derivation is highly productive, leading to a high language variability which can be employed to express similar facts in different ways, as derivationally related words are often also semantically related (or transparent). However, linguistic variance is a challenge for computational applications, particularly in semantic processing: it makes it more difficult to automatically grasp the meaning of texts and to match similar information onto each other. Thus, computational systems require linguistic knowledge. We develop methods to induce and represent derivational knowledge, and to apply it in language processing. The main outcome of our study is DErivBase, a German derivational lexicon. It groups derivationally related words (words that are derived from the same stem) into derivational families. To achieve high quality and high coverage, we induce DErivBase by combining rule-based and data-driven methods: we implement linguistic derivation rules to define derivational processes, and feed lemmas extracted from a German corpus into the rules to derive new lemmas. All words that are connected - directly or indirectly - by such rules are considered a derivational family. As mentioned above, a derivational relationship often implies a semantic relationship, but this is not always the case. Semantic drift can cause semantically unrelated (opaque) derivational relations, such as to depart - department. Capturing the difference between transparent and opaque relations is important from a linguistic as well as a practical point of view. Thus, we conduct a semantic refinement of DErivBase, i.e., we determine which lemma pairs are derivationally and semantically related, and which are not. We establish a second, semantically validated version of our lexicon, where families are sub-clustered according to semantic coherence, using supervised machine learning: we learn a binary classifier based on features that arise from structural information about the derivation rules and from distributional information about the semantic relatedness of lemmas. Accordingly, the derivational families are subdivided into semantically coherent clusters. To demonstrate the utility of the two lexicon versions, we evaluate them on three extrinsic - and in the broadest sense, semantic - tasks. The underlying assumption for applying DErivBase to semantic tasks is that derivational relatedness is a reasonable approximation of semantic relatedness, since derivation is often semantically transparent. Our three experiments are the following: first, we incorporate DErivBase into distributional semantic models to overcome sparsity problems and to improve the prediction quality of the underlying model. We test this method, which we call derivational smoothing, for semantic similarity prediction and for synonym choice. Second, we employ DErivBase to model a psycholinguistic experiment that examines priming effects of transparent and opaque derivations to draw conclusions about the mental lexical representation in German. Derivational information is again incorporated into a distributional model, but this time it introduces a kind of morphological generalisation. Third, in order to solve the task of Recognising Textual Entailment, we integrate DErivBase into a matching-based entailment system by means of query expansion. Assuming that derivational relationships between two texts suggest that they are entailing rather than non-entailing, this expansion increases the chance of a lexical overlap, which should improve the system's entailment predictions. The incorporation of DErivBase indeed improves the performance of the underlying system in each task; however, its suitability differs across settings. In the first experiment, the semantically validated lexicon yields improvements over the purely morphological lexicon, and the more coarse-grained similarity prediction profits more from DErivBase than synonym choice does. In the second experiment, purely morphological information clearly outperforms the validated lexicon version, as the latter cannot model opaque derivations. On the entailment task in the third experiment, DErivBase has only minor impact, because textual entailment is hard to solve by addressing only one linguistic phenomenon. In sum, our findings show that the induction of a high-quality, high-coverage derivational lexicon is beneficial for very different applications in computational linguistics. It might be worthwhile to further investigate the semantic aspects of derivation to better understand its impact on language and thus on language processing.
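The derivational smoothing idea from the first experiment can be sketched as backing a lemma's vector off toward the centroid of its derivational family, so that rare or unseen lemmas inherit information from their relatives. The toy vectors, lemmas and family below are invented for illustration; DErivBase itself stores only the family structure, and the actual smoothing scheme in the thesis may differ:

```python
# Minimal sketch of derivational smoothing over toy 2-dimensional vectors.

def smooth(lemma, vectors, families, alpha=0.5):
    """Mix a lemma's own vector with the centroid of its derivational
    family; lemmas with no vector fall back to the centroid entirely."""
    # family members for which we actually have a vector
    family = [m for m in families.get(lemma, [lemma]) if m in vectors]
    dim = len(next(iter(vectors.values())))
    centroid = [sum(vectors[m][i] for m in family) / len(family)
                for i in range(dim)]
    if lemma not in vectors:          # unseen lemma: pure back-off
        return centroid
    own = vectors[lemma]
    return [alpha * own[i] + (1 - alpha) * centroid[i] for i in range(dim)]

# Toy family: backen (to bake), Baecker (baker), Baeckerei (bakery),
# where 'Baeckerei' has no corpus vector of its own.
vectors = {"backen": [1.0, 0.0], "Baecker": [0.0, 1.0]}
families = {"Baeckerei": ["backen", "Baecker", "Baeckerei"]}
print(smooth("Baeckerei", vectors, families))
```

This also illustrates why the semantically validated lexicon helps for similarity prediction but hurts the priming experiment: sub-clustering families removes opaque relatives from the centroid, which is desirable for similarity but loses exactly the opaque derivations that experiment needed to model.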

    Word Knowledge and Word Usage

    Word storage and processing define a multi-factorial domain of scientific inquiry whose thorough investigation goes well beyond the boundaries of traditional disciplinary taxonomies and requires the synergic integration of a wide range of methods, techniques, and empirical and experimental findings. The present book approaches a few central issues concerning the organization, structure and functioning of the Mental Lexicon by asking domain experts to look at common, central topics from complementary standpoints and to discuss the advantages of developing converging perspectives. The book explores the connections between computational and algorithmic models of the mental lexicon, word frequency distributions and information-theoretical measures of word families, statistical correlations across psycholinguistic and cognitive evidence, principles of machine learning, and integrative brain models of word storage and processing. The main goal of the book is to map out the landscape of future research in this area, to foster the development of interdisciplinary curricula, and to help single-domain specialists understand and address issues and questions as they are raised in other disciplines.

    First International Workshop on Lexical Resources

    Lexical resources are one of the main sources of linguistic information for research and applications in Natural Language Processing and related fields. In recent years, advances have been achieved in both symbolic aspects of lexical resource development (lexical formalisms, rule-based tools) and statistical techniques for the acquisition and enrichment of lexical resources, both monolingual and multilingual. The latter have allowed for faster development of large-scale morphological, syntactic and/or semantic resources, for widely used as well as resource-scarce languages. Moreover, the notion of a dynamic lexicon is increasingly used to take into account the fact that the lexicon undergoes permanent evolution. This workshop aims at sketching a large picture of the state of the art in the domain of lexical resource modeling and development. It is also dedicated to research on the application of lexical resources for improving corpus-based studies and language processing tools, both in NLP and in other language-related fields, such as linguistics, translation studies, and didactics.

    Investigating the universality of a semantic web-upper ontology in the context of the African languages

    Ontologies are foundational to, and upper ontologies provide semantic integration across, the Semantic Web. Multilingualism has been shown to be a key challenge to the development of the Semantic Web, and is a particular challenge to the universality requirement of upper ontologies. Universality implies a qualitative mapping from lexical ontologies, like WordNet, to an upper ontology, such as SUMO. Are a given natural language family's core concepts currently included in an existing, accepted upper ontology? Does SUMO preserve an ontological non-bias with respect to the multilingual challenge, particularly in the context of the African languages? The approach of developing WordNets mapped to shared core concepts in the non-Indo-European language families has highlighted these challenges, and this is examined in a unique new context: the Southern African languages. This is achieved through a new mapping from African language core concepts to SUMO. It is shown that SUMO has no significant natural language ontology bias.
    M.Sc. (Computer Science) dissertation, Computing.

    Handbook of Lexical Functional Grammar

    Lexical Functional Grammar (LFG) is a nontransformational theory of linguistic structure, first developed in the 1970s by Joan Bresnan and Ronald M. Kaplan, which assumes that language is best described and modeled by parallel structures representing different facets of linguistic organization and information, related by means of functional correspondences. This volume has seven parts. Part I, Overview and Introduction, provides an introduction to core syntactic concepts and representations. Part II, Grammatical Phenomena, reviews LFG work on a range of grammatical phenomena or constructions. Part III, Grammatical modules and interfaces, provides an overview of LFG work on semantics, argument structure, prosody, information structure, and morphology. Part IV, Linguistic disciplines, reviews LFG work in the disciplines of historical linguistics, learnability, psycholinguistics, and second language learning. Part V, Formal and computational issues and applications, provides an overview of computational and formal properties of the theory, implementations, and computational work on parsing, translation, grammar induction, and treebanks. Part VI, Language families and regions, reviews LFG work on languages spoken in particular geographical areas or in particular language families. The final part, Comparing LFG with other linguistic theories, discusses LFG work in relation to other theoretical approaches.