5 research outputs found

    High-level methodologies for grammar engineering, introduction to the special issue


    Extensible Dependency Grammar: a modular grammar formalism based on multigraph description

    This thesis develops Extensible Dependency Grammar (XDG), a new grammar formalism combining dependency grammar, model-theoretic syntax, and Jackendoff's parallel grammar architecture. The design of XDG is strongly geared towards modularity: grammars can be modularly extended by any linguistic aspect such as grammatical functions, word order, predicate-argument structure, scope, information structure and prosody, where each aspect is modeled largely independently on a separate dimension. The intersective demands of the dimensions make many complex linguistic phenomena such as extraction in syntax, scope ambiguities in the semantics, and control and raising in the syntax-semantics interface simply fall out as by-products without further stipulation. This thesis makes three main contributions: 1. the first formalization of XDG as a multigraph description language in higher order logic, and investigations of its expressivity and computational complexity; 2. the first implementation of XDG, the XDG Development Kit (XDK), an extensive grammar development environment built around a constraint parser for XDG; 3. the first application of XDG to natural language, modularly modeling a fragment of English.
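    The multigraph idea described above can be sketched in a few lines: one shared set of word nodes, with each linguistic dimension holding its own labeled edge set over those nodes. This is an illustrative sketch only, assuming a toy control sentence; the dimension names and edge encoding are hypothetical and not the XDK's actual API.

```python
# Hypothetical sketch of an XDG-style multigraph (not the XDK's API):
# all dimensions share one set of word nodes; each dimension has its
# own labeled edges, modeled independently of the others.

words = ["Mary", "wants", "to", "laugh"]

# Edges are (head_index, dependent_index, label) triples.
multigraph = {
    "SYN": {(1, 0, "subj"), (1, 3, "vinf"), (3, 2, "part")},    # grammatical functions
    "SEM": {(1, 0, "agent"), (1, 3, "theme"), (3, 0, "agent")}, # predicate-argument structure
}

def nodes_used(dimension_edges):
    """Collect the word indices mentioned by one dimension's edge set."""
    return {i for head, dep, _ in dimension_edges for i in (head, dep)}

# A toy well-formedness principle: every dimension may only describe
# the shared word nodes.
for name, edges in multigraph.items():
    assert nodes_used(edges) <= set(range(len(words))), name
```

    Note how control shows up here: on the SEM dimension "Mary" is the agent of both "wants" and "laugh", while SYN has only one subj edge, illustrating how an analysis can differ across largely independent dimensions.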

    The semantics of English -ment nominalizations

    Synopsis: It is well-known that derivational affixes can be highly polysemous, producing a range of different, often related, meanings. For example, English deverbal nouns with the suffix -er can denote instruments (opener), agents (writer), locations (diner), or patients (loaner). It is commonly assumed that this polysemy arises through a compositional process in which the affix interacts with the semantics of the base. Yet, despite intensive research in recent years, a workable model for this interaction is still under debate. In order to study and model the semantic contributions of the base and of the affix, a framework is needed in which meanings can be composed and decomposed. In this book, I formalize the semantic input and output of derivation by means of frames, that is, recursive attribute-value structures that serve to model mental representations of concepts. In my approach, the input frame offers an array of semantic elements from which an affix may select to construct the derivative's meaning. The relationship between base and derivative is made explicit by integrating their respective frame-semantic representations into lexical rules and inheritance hierarchies. I apply this approach to a qualitative corpus study of the productive relationship between the English nominalizing suffix -ment and a semantically delimited set of verbal bases. My data set consists of 40 neologisms with base verbs from two semantic classes, namely change-of-state verbs and verbs of psychological state. I analyze 369 attestations which were elicited from various corpora with a purposeful sampling approach, and which were hand-coded using common semantic categories such as event, state, patient and stimulus. My results show that -ment can target a systematically restricted set of elements in the frame of a given base verb. It thereby produces a range of possible readings in each derivative, which becomes ultimately interpretable only within a specific context. 
The derivational process is governed by an interaction of the semantic elements provided by the base on the one hand, with properties of the affix (e.g. -ment's aversion to [+animate] readings) on the other. For instance, a shift from the verb annoy to a result-state reading in annoyment is possible because the input frame of verbs of psychological state offers a RESULT-STATE attribute, which, as is fixed in the inheritance hierarchy, is compatible with -ment. Meanwhile, a shift from annoy to an experiencer reading in annoyment fails because the value range of the attribute EXPERIENCER is fixed to [+animate] entities, so that -ment's animacy constraint blocks the inheritance mechanism. Furthermore, a quantitative exploration of my data set reveals a likely blocking effect for some -ment readings. Thus, while I have found most expected combinations of nominalization and reading attested, there are pronounced gaps for readings like instrument or stimulus. Such readings are likely to be produced by standardly subject-denoting suffixes such as -er or -ant, which may reduce the probability for -ment derivation. The quantitative analysis furthermore shows that, within the subset of attested combinations, ambiguity is widespread, with 43% of all combinations of nominalization and reading being only attested ambiguously. This book shows how a derivational process acts on the semantics of a given verbal base by reporting on an in-depth qualitative study of the semantic contributions of both the base and the affix. Furthermore, it demonstrates that an explicit semantic decomposition of the base is essential for the analysis of the resulting derivative's semantics.
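    The mechanism described above, frames as attribute-value structures with the affix selecting compatible attributes, can be sketched compactly. This is a minimal illustration, not the book's formalism: the frame for "annoy" and the filter modeling -ment's animacy constraint are simplified stand-ins for the lexical rules and inheritance hierarchy.

```python
# Hypothetical sketch (not the book's implementation): a verb frame as a
# nested attribute-value structure, and -ment's animacy constraint as a
# filter over the attributes it may target.

# Simplified frame for the psych verb 'annoy'.
annoy_frame = {
    "EVENT":        {"type": "psych-state", "animate": False},
    "RESULT-STATE": {"type": "state",       "animate": False},
    "EXPERIENCER":  {"type": "entity",      "animate": True},  # value range fixed to [+animate]
    "STIMULUS":     {"type": "entity",      "animate": False},
}

def ment_readings(base_frame):
    """Return the attributes of the base frame that -ment may target,
    modeling its aversion to [+animate] readings as a filter."""
    return [attr for attr, value in base_frame.items()
            if not value["animate"]]

print(ment_readings(annoy_frame))
# → ['EVENT', 'RESULT-STATE', 'STIMULUS']
```

    The result-state reading of annoyment survives the filter, while the experiencer reading is blocked by the [+animate] value of EXPERIENCER, mirroring the inheritance-blocking described in the abstract.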

    Handbook of Lexical Functional Grammar

    Lexical Functional Grammar (LFG) is a nontransformational theory of linguistic structure, first developed in the 1970s by Joan Bresnan and Ronald M. Kaplan, which assumes that language is best described and modeled by parallel structures representing different facets of linguistic organization and information, related by means of functional correspondences. This volume is organized into six parts and a final comparative section. Part I, Overview and Introduction, provides an introduction to core syntactic concepts and representations. Part II, Grammatical Phenomena, reviews LFG work on a range of grammatical phenomena or constructions. Part III, Grammatical modules and interfaces, provides an overview of LFG work on semantics, argument structure, prosody, information structure, and morphology. Part IV, Linguistic disciplines, reviews LFG work in the disciplines of historical linguistics, learnability, psycholinguistics, and second language learning. Part V, Formal and computational issues and applications, provides an overview of computational and formal properties of the theory, implementations, and computational work on parsing, translation, grammar induction, and treebanks. Part VI, Language families and regions, reviews LFG work on languages spoken in particular geographical areas or in particular language families. The final section, Comparing LFG with other linguistic theories, discusses LFG work in relation to other theoretical approaches.