7 research outputs found
Complementing quantitative typology with behavioral approaches: Evidence for typological universals
Two main classes of theory have been advanced to explain correlations between linguistic features like those observed by Greenberg (1963). Arbitrary constraint theories argue that certain sets of features pattern together because they have a single underlying cause in the innate language faculty (e.g., the Principles and Parameters program; see Chomsky & Lasnik 1993). Functional theories argue that languages are less likely to have certain combinations of properties because, although possible in principle, they are harder to learn or to process, or less suitable for efficient communication (Hockett 1960, Bates & MacWhinney 1989, Hawkins 2004, Dryer 2007, Christiansen & Chater 2008; for further discussion see Hawkins 2007 and Jaeger & Tily 2011). The failure of Dunn, Greenhill, Levinson & Gray (2011) to find systematic feature correlations using their novel computational phylogenetic methods calls into question both of these classes of theory. Alfred P. Sloan Foundation Fellowship.
Random effects structure in mixed-effects models: Keep it maximal
Abstract Linear mixed-effects models (LMEMs) are rapidly advancing as a candidate to replace ANOVA as the standard for inferential analyses in psycholinguistics and associated fields. However, because of the relative novelty of this approach, there are few clear standards regarding its correct use, as well as much uncertainty about whether it truly offers an advantage over traditional approaches. In this paper, we argue that many of the traditional standards for accounting for observational dependencies in the design also apply to the correct use of LMEMs. We argue that valid statistical inferences using LMEMs require maximal random-effects structures wherever possible; that is, including condition-specific random effects by subjects/items for every fixed effect of theoretical interest that is measured in mor
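A minimal sketch of this modeling advice in practice, using Python's statsmodels on simulated data (an illustration, not the paper's own analysis code): the by-subject random structure includes both an intercept and a slope for the within-subject condition. A fully maximal model would additionally cross in by-item random effects, which statsmodels can only approximate with variance components, so only the by-subject part is shown.

```python
# Sketch: by-subject random intercepts and condition slopes on simulated data.
# All numbers and variable names here are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_item = 24, 16
subj = np.repeat(np.arange(n_subj), n_item)
item = np.tile(np.arange(n_item), n_subj)
cond = (item % 2).astype(float)                 # two within-subject conditions

subj_int = rng.normal(0, 50, n_subj)[subj]      # random intercepts by subject
subj_slope = rng.normal(0, 30, n_subj)[subj]    # random condition slopes by subject
rt = 400 + 40 * cond + subj_int + subj_slope * cond + rng.normal(0, 60, len(subj))

df = pd.DataFrame({"rt": rt, "cond": cond, "subject": subj, "item": item})

# Random intercept *and* condition slope by subject (re_formula), mirroring
# the "keep it maximal" advice for a within-subject fixed effect.
model = smf.mixedlm("rt ~ cond", df, groups=df["subject"], re_formula="~cond")
print(model.fit().summary())
```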
The Communicative Lexicon Hypothesis
Recent work suggests that variation in online language production reflects the fact that speech is information-theoretically efficient for communication. We apply this idea to studying the offline, structural properties of language, asking whether lexical properties may similarly reflect communicative pressures. We present evidence for the Communicative Lexicon Hypothesis (CLH): human lexical systems are efficient solutions to the problem of communication for the human language processor. While the relationship between sounds and meanings may be arbitrary, pressure for concise and error-correcting communication, within the constraints imposed by human articulatory, perceptual, and cognitive abilities, has influenced which sets of phonological forms have emerged in the lexicons of human languages. We present two tests of the CLH. First, we show that word lengths are better predicted by a word's average predictability in context than by its overall frequency. Second, we show that salient (lexically stressed) parts of words are more informative about a word's identity in English, German, Dutch, Hawaiian, and Spanish.
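A toy illustration of the quantity behind the first test: a word's average in-context predictability, i.e. its mean surprisal across occurrences. The corpus, the bigram model, and the smoothing below are placeholders chosen for brevity, not the materials or estimator used in the work.

```python
# Sketch: average surprisal of each word under an add-one smoothed bigram model,
# shown next to plain log frequency and word length. Toy corpus; illustration only.
import math
from collections import Counter, defaultdict

corpus = "the dog chased the cat and the cat chased the small dog".split()

unigram = Counter(corpus)
bigram = Counter(zip(corpus[:-1], corpus[1:]))
vocab = len(unigram)

def surprisal(prev, word):
    # add-one smoothed bigram surprisal, in bits
    p = (bigram[(prev, word)] + 1) / (unigram[prev] + vocab)
    return -math.log2(p)

per_word = defaultdict(list)
for prev, word in zip(corpus[:-1], corpus[1:]):
    per_word[word].append(surprisal(prev, word))

for word, values in per_word.items():
    avg_info = sum(values) / len(values)          # average in-context informativeness
    log_freq = math.log2(unigram[word])           # the frequency-based predictor
    print(f"{word:6s} len={len(word)} avg_surprisal={avg_info:.2f} log_freq={log_freq:.2f}")

# The CLH claim, on real corpora, is that word length tracks avg_surprisal more
# closely than log_freq; this toy corpus only demonstrates the computation.
```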
The learnability of constructed languages reflects typological patterns
A small number of the logically possible word order configurations account for a large proportion of actual human languages. To explain this distribution, typologists often invoke principles of human cognition which might make certain orders easier or harder to learn or use. We present a novel method for carrying out very large-scale artificial language learning tasks over the internet, which allows us to test large batteries of systematically designed languages for differential learnability. An exploratory study of the learnability of all possible configurations of subject, verb, and object finds that the two most frequent orders in human languages are the most easily learned, and yields suggestive evidence compatible with other typological and psycholinguistic observations.
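For concreteness, a small sketch of the design space being tested: the six logically possible orderings of subject, verb, and object, each paired with toy training sentences for a constructed language. The vocabulary is invented, and the web-based learning task itself is not reproduced here.

```python
# Sketch: enumerate all orderings of S, V, O and generate toy training sentences
# for each constructed language. Nonce words are invented for illustration.
from itertools import permutations

subjects = ["blicket", "dax"]
verbs = ["zibbed", "toma"]
objects = ["wug", "fep"]

def sentence(order, s, v, o):
    slot = {"S": s, "V": v, "O": o}
    return " ".join(slot[c] for c in order)

for order in permutations("SVO"):            # SVO, SOV, VSO, VOS, OSV, OVS
    label = "".join(order)
    examples = [sentence(order, s, v, o)
                for s in subjects for v in verbs for o in objects]
    print(label, examples[:2], f"... ({len(examples)} sentences)")
```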
The Natural Stories corpus: a reading-time corpus of English texts containing rare syntactic constructions
It is now a common practice to compare models of human language processing by comparing how well they predict behavioral and neural measures of processing difficulty, such as reading times, on corpora of rich naturalistic linguistic materials. However, many of these corpora, which are based on naturally occurring text, do not contain many of the low-frequency syntactic constructions that are often required to distinguish between processing theories. Here we describe a new corpus consisting of English texts edited to contain many low-frequency syntactic constructions while still sounding fluent to native speakers. The corpus is annotated with hand-corrected Penn Treebank-style parse trees and includes self-paced reading time data and aligned audio recordings. We give an overview of the content of the corpus, review recent work using the corpus, and release the data. National Science Foundation (Grants 0844472 and 1534318).
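A hypothetical usage sketch for a reading-time corpus of this kind: loading per-word self-paced reading times and summarizing them by region. The file name and column names below are assumptions made for illustration, not the corpus's documented schema; consult the released data for the actual layout.

```python
# Sketch: aggregate self-paced reading times by story and word position.
# "processed_RTs.tsv", "item", "zone", and "RT" are assumed names, not a spec.
import pandas as pd

rts = pd.read_csv("processed_RTs.tsv", sep="\t")

# Mean reading time per word position ("zone") within each story ("item"),
# the kind of by-region profile typically compared against model predictions.
by_region = (rts.groupby(["item", "zone"])["RT"]
                .agg(["mean", "std", "count"])
                .reset_index())
print(by_region.head())
```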