Token-based typology and word order entropy: A study based on universal dependencies
The present paper discusses the benefits and challenges of token-based typology, which takes into account the frequencies of words and constructions in language use. This approach makes it possible to introduce new criteria for language classification, which would be difficult or impossible to achieve with the traditional, type-based approach. This point is illustrated by several quantitative studies of word order variation, which can be measured as entropy at different levels of granularity. I argue that this variation can be explained by general functional mechanisms and pressures, which manifest themselves in language use, such as optimization of processing (including avoidance of ambiguity) and grammaticalization of predictable units occurring in chunks. The case studies are based on multilingual corpora, which have been parsed using the Universal Dependencies annotation scheme.
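As a rough illustration of the kind of measure involved, word order entropy can be computed from the frequencies of the two possible orders of a dependency relation. The sketch below uses invented counts, not data from the paper:

```python
import math
from collections import Counter

def order_entropy(orders):
    """Shannon entropy (in bits) of head-dependent order choices."""
    counts = Counter(orders)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Hypothetical object-verb order tokens from two corpora:
rigid = ["OV"] * 98 + ["VO"] * 2   # nearly fixed order
free = ["OV"] * 55 + ["VO"] * 45   # highly variable order

print(round(order_entropy(rigid), 3))  # 0.141: close to 0 bits
print(round(order_entropy(free), 3))   # 0.993: close to the 1-bit maximum
```

A rigid language scores near 0 and a free-order language near 1 for a binary choice; computing this per relation type, per language, gives the token-based classification criterion the abstract describes.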
A Bi-Polar Theory of Nominal and Clause Structure and Function
It is taken as axiomatic that grammar encodes meaning. Two key dimensions of meaning that get grammatically encoded are referential meaning and relational meaning. The key claim is that, in English, these two dimensions of meaning are typically encoded in distinct grammatical poles (a referential pole and a relational pole), with a specifier functioning as the locus of the referential pole and a head functioning as the locus of the relational pole. Specifiers and heads combine to form referring expressions corresponding to the syntactic notion of a maximal projection. Lexical items and expressions functioning as modifiers are preferentially attracted to one pole or the other. If the head of an expression describes a relation, one or more complements may be associated with the head. The four grammatical functions specifier, head, modifier and complement are generally adequate to represent much of the basic structure and function of nominals and clauses. These terms are borrowed from X-Bar Theory, but they are motivated on semantic grounds having to do with their grammatical function to encode referential and relational meaning.
Against a Davidsonian analysis of copula sentences
Semantic research over the past three decades has provided impressive confirmation of Donald Davidson's famous claim that "there is a lot of language we can make systematic sense of if we suppose events exist" (Davidson 1980:137). Nowadays, Davidsonian event arguments are no longer reserved only for action verbs (as Davidson originally proposed) or even only for the category of verbs, but instead are widely assumed to be associated with any kind of predicate (e.g. Higginbotham 2000, Parsons 2000). The following quotation from Higginbotham and Ramchand (1997) illustrates the reasoning that motivates this move: "Once we assume that predicates (or their verbal, etc. heads) have a position for events, taking the many consequences that stem therefrom, as outlined in publications originating with Donald Davidson (1967), and further applied in Higginbotham (1985, 1989), and Terence Parsons (1990), we are not in a position to deny an event-position to any predicate; for the evidence for, and applications of, the assumption are the same for all predicates" (Higginbotham and Ramchand 1997:54). In fact, since Davidson's original proposal the burden of proof for postulating event arguments seems to have shifted completely, leading Raposo and Uriagereka (1995), for example, to the following verdict: "it is unclear what it means for a predicate not to have a Davidsonian argument" (Raposo and Uriagereka 1995:182). That is, Davidsonian eventuality arguments apparently have become something like a trademark for predicates in general. The goal of the present paper is to subject this view of the relationship between predicates and events to real scrutiny. By taking a closer look at the simplest independent predicational structure, viz. copula sentences, I will argue that current Davidsonian approaches tend to stretch the notion of events too far, thereby giving up much of its linguistic and ontological usefulness.
More specifically, the paper will tackle the following three questions: 1. Do copula sentences support the current view of the inherent event-relatedness of predicates? 2. If not, what is a possible alternative to an event-based analysis of copula sentences? 3. What does this tell us about Davidsonian events? The paper is organized as follows: Section 2 first reviews current event-based analyses of copula sentences and then gives a brief summary of the Davidsonian notion of events. Section 3 examines the behavior of copula sentences with respect to some standard (as well as some new) eventuality diagnostics. Copula expressions will turn out to fail all eventuality tests. They differ sharply from state verbs like stand, sit, sleep in this respect. (The latter pass all eventuality tests and therefore qualify as true "Davidsonian state" expressions.) On the basis of these observations, section 4 provides an alternative account of copula sentences that combines Kim's (1969, 1976) notion of property exemplifications with Asher's (1993, 2000) conception of abstract objects. Specifically, I will argue that the copula introduces a referential argument for a temporally bound property exemplification (= "Kimian state"). The proposal is implemented within a DRT framework. Finally, section 5 offers some concluding remarks and suggests that supplementing Davidsonian eventualities with Kimian states not only yields a more adequate analysis for copula expressions and the like but may also improve our treatment of events.
Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning.
A classic debate in cognitive science revolves around understanding how children learn complex linguistic patterns, such as restrictions on verb alternations and contractions, without negative evidence. Recently, probabilistic models of language learning have been applied to this problem, framing it as a statistical inference from a random sample of sentences. These probabilistic models predict that learners should be sensitive to the way in which sentences are sampled. There are two main types of sampling assumptions that can operate in language learning: strong and weak sampling. Strong sampling, which these probabilistic models assume, holds that the learning input is drawn from a distribution of grammatical samples from the underlying language, and the learner aims to learn this distribution. Thus, under strong sampling, the absence of a sentence construction from the input provides evidence that it has low or zero probability of grammaticality. Weak sampling does not make assumptions about the distribution from which the input is drawn, and thus the absence of a construction from the input is not used as evidence of its ungrammaticality. We demonstrate in a series of artificial language learning experiments that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented. These results suggest that people use information about the way in which linguistic input is sampled to guide their learning.
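The inferential difference between the two assumptions can be sketched in a toy Bayesian comparison of two grammar hypotheses. The grammars, priors, and counts below are invented for illustration and are not from the reported experiments:

```python
# Toy model: H_small licenses only construction "a"; H_big licenses "a" and "b".
# The input consists of n tokens, all of construction "a".

def posterior_big(n, strong_sampling, prior_big=0.5):
    """Posterior probability of the larger grammar after n 'a'-only inputs."""
    if strong_sampling:
        # Inputs are drawn from the grammar's own distribution (assumed
        # uniform here), so H_small assigns "a" probability 1, H_big only 1/2.
        like_small, like_big = 1.0 ** n, 0.5 ** n
    else:
        # Weak sampling: inputs are generated independently of the grammar;
        # each hypothesis merely checks consistency, so likelihoods are equal.
        like_small = like_big = 1.0
    num = prior_big * like_big
    return num / (num + (1 - prior_big) * like_small)

print(posterior_big(5, strong_sampling=True))   # ~0.03: "b" now looks ungrammatical
print(posterior_big(5, strong_sampling=False))  # 0.5: absence of "b" is uninformative
```

Under strong sampling the persistent absence of "b" rapidly shrinks the posterior of the grammar that licenses it (indirect negative evidence); under weak sampling the posterior never moves, which is exactly the behavioral contrast the experiments probe.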
On the limits of the Davidsonian approach : the case of copula sentences
Since Donald Davidson's seminal work "The Logical Form of Action Sentences" (1967), event arguments have become an integral component of virtually every semantic theory. Over the past years Davidson's proposal has been continuously extended such that nowadays event(uality) arguments are generally associated not only with action verbs but with predicates of all sorts. The reasons for such an extension are seldom explicitly justified. Most problematic in this respect is the case of stative expressions. By taking a closer look at copula sentences, the present study assesses the legitimacy of stretching the Davidsonian notion of events and discusses its consequences. A careful application of some standard eventuality diagnostics (perception reports, combination with locative modifiers and manner adverbials) as well as some new diagnostics (behavior of certain degree adverbials) reveals that copular expressions do not behave as expected under a Davidsonian perspective: they fail all eventuality tests, regardless of whether they represent stage-level or individual-level predicates. In this respect, copular expressions pattern with stative verbs like know, hate, and resemble, which in turn differ sharply from state verbs like stand, sit, and sleep. The latter pass all of the eventuality tests and therefore qualify as true "Davidsonian state" expressions. On the basis of these empirical observations, and taking up ideas of Kim (1969, 1976) and Asher (1993, 2000), an alternative account of copular expressions (and stative verbs) is provided, according to which the copula introduces a referential argument for a temporally bound property exemplification (= "Kimian state"). Considerations of some logical properties, viz. closure conditions and the latent infinite regress of eventualities, suggest that supplementing Davidsonian eventualities with Kimian states may yield not only a more adequate analysis of copula sentences but also a better understanding of eventualities in general.
Concurrent Lexicalized Dependency Parsing: A Behavioral View on ParseTalk Events
The behavioral specification of an object-oriented grammar model is considered. The model is based on full lexicalization, head-orientation via valency constraints and dependency relations, inheritance as a means for non-redundant lexicon specification, and concurrency of computation. The computation model relies upon the actor paradigm, with concurrency entering through asynchronous message passing between actors. In particular, we here elaborate on principles of how the global behavior of a lexically distributed grammar and its corresponding parser can be specified in terms of event type networks and event networks, respectively.
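The actor-based setup can be illustrated with a minimal sketch in which each word is an actor with its own mailbox and a dependency is attached when an offered dependent satisfies the head's valency constraint. The class, message format, and lexical entries below are invented simplifications, not the ParseTalk system itself:

```python
import queue
import threading

class WordActor(threading.Thread):
    """One lexical actor: a word with a mailbox and valency constraints."""

    def __init__(self, word, valency, results):
        super().__init__()
        self.word = word
        self.valency = valency      # hypothetical format: {role: required category}
        self.mailbox = queue.Queue()
        self.results = results      # shared list of established dependencies

    def run(self):
        while True:
            msg = self.mailbox.get()            # asynchronous message receipt
            if msg is None:                     # sentinel: shut the actor down
                return
            role, dep_word, dep_cat = msg
            if self.valency.get(role) == dep_cat:   # valency check succeeds
                self.results.append((self.word, role, dep_word))

results = []
verb = WordActor("eats", {"obj": "NOUN"}, results)
verb.start()
verb.mailbox.put(("obj", "apples", "NOUN"))     # a dependent offers itself
verb.mailbox.put(None)
verb.join()
print(results)  # [('eats', 'obj', 'apples')]
```

The point of the sketch is only the control structure: no actor blocks another while waiting, and the global parse emerges from local message exchanges rather than from a central parsing procedure.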
Event-internal modifiers : semantic underspecification and conceptual interpretation
The article offers evidence that there are two variants of adverbial modification that differ with respect to the way in which a modifier is linked to the verb's eventuality argument. So-called event-external modifiers relate to the full eventuality, whereas event-internal modifiers relate to some integral part of it. The choice between external and internal modification is shown to be dependent on the modifier's syntactic base position. Event-external modifiers are base-generated at the VP periphery, whereas event-internal modifiers are base-generated at the V periphery. These observations are accounted for by a refined version of the standard Davidsonian approach to adverbial modification according to which modification is mediated by a free variable. In the case of external modification, the grammar takes responsibility for identifying the free variable with the verb's eventuality argument, whereas in the case of internal modification, a value for the free variable is determined by the conceptual system on the basis of contextually salient world knowledge. For the intriguing problem that certain locative modifiers occasionally seem to have nonlocative (instrumental, positional, or manner) readings, the advocated approach can provide a rather simple solution.