4 research outputs found

    A dynamic network analysis of emergent grammar

    For languages to survive as complex cultural systems, they need to be learnable. According to traditional approaches, learning is made possible by constraining the degrees of freedom in advance of experience and by the construction of complex structure during development. This article explores a third contributor to complexity: the extent to which syntactic structure can be an emergent property of how simpler entities – words – interact with one another. The authors found that when naturalistic child-directed speech was instantiated in a dynamic network, communities formed around words that were more densely connected with one another than with the rest of the network. This mirrors what we know about distributional patterns in natural language: the network communities represented the syntactic hubs of the semi-formulaic slot-and-frame patterns characteristic of early speech. The network itself was blind to grammatical information; its organization reflected only (a) the frequency with which a word was used and (b) the probabilities of transitioning from one word to another. The authors show that grammatical patterns in the input dissociate by community structure in the emergent network. These communities provide coherent hubs that could be a reliable source of syntactic information for the learner. The initial findings are presented as proof of concept, in the hope that other researchers will explore the possibilities and limitations of this approach on a larger scale and with more languages. The implications of a dynamic network approach are discussed for the learnability burden and the development of an adult-like grammar.
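    The abstract's core idea – a network built only from word frequencies and word-to-word transition probabilities, in which communities emerge around densely interconnected words – can be illustrated with a minimal sketch. The toy utterances, the use of label propagation for community detection, and all names below are illustrative assumptions, not the authors' actual corpus or algorithm.

    ```python
    from collections import Counter, defaultdict

    # Toy child-directed utterances (hypothetical data, not the paper's corpus).
    utterances = [
        "where is the ball",
        "where is the dog",
        "look at the dog",
        "look at the ball",
        "more milk",
        "more juice",
        "want more milk",
    ]

    # The network is blind to grammar: it records only (a) word frequency
    # and (b) bigram transition counts, as the abstract describes.
    freq = Counter()
    trans = Counter()
    for utt in utterances:
        words = utt.split()
        freq.update(words)
        trans.update(zip(words, words[1:]))

    # Symmetrize transitions into a weighted adjacency structure.
    adj = defaultdict(Counter)
    for (a, b), w in trans.items():
        adj[a][b] += w
        adj[b][a] += w

    # Simple label propagation (one of many community-detection methods):
    # each word repeatedly adopts the label it is most strongly connected to,
    # so densely interlinked words converge onto a shared community.
    labels = {w: i for i, w in enumerate(sorted(freq))}
    for _ in range(10):
        changed = False
        for w in sorted(freq):
            if not adj[w]:
                continue
            scores = Counter()
            for nbr, wt in adj[w].items():
                scores[labels[nbr]] += wt
            best = scores.most_common(1)[0][0]
            if best != labels[w]:
                labels[w] = best
                changed = True
        if not changed:
            break

    communities = defaultdict(set)
    for w, lab in labels.items():
        communities[lab].add(w)
    print(sorted(map(sorted, communities.values())))
    ```

    On this toy input the "where is the X" / "look at the X" frame words cluster apart from the "more X" frame words, loosely mirroring the slot-and-frame hubs the abstract describes.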

    Early word learning through communicative inference

    Thesis (Ph.D.) – Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2010. By Michael C. Frank. Cataloged from PDF version of thesis. Includes bibliographical references (p. 109-122).
    How do children learn their first words? Do they do it by gradually accumulating information about the co-occurrence of words and their referents over time, or are words learned via quick social inferences linking what speakers are looking at, pointing to, and talking about? Both of these conceptions of early word learning are supported by empirical data. This thesis presents a computational and theoretical framework for unifying these two different ideas by suggesting that early word learning can best be described as a process of joint inference about speakers' referential intentions and the meanings of words. Chapter 1 describes previous empirical and computational research on "statistical learning" – the ability of learners to use distributional patterns in their language input to learn about the elements and structure of language – and argues that capturing this ability requires models of learning that describe inferences over structured representations, not just simple statistics. Chapter 2 argues that social signals of speakers' intentions, even eye-gaze and pointing, are at best noisy markers of reference and that in order to take full advantage of these signals, learners must integrate information across time. Chapter 3 describes the kinds of inferences that learners can make by assuming that speakers are informative with respect to their intended meaning, introducing and testing a formalization of how Grice's pragmatic maxims can be used for word learning. Chapter 4 presents a model of cross-situational intentional word learning that both learns words and infers speakers' referential intentions from labeled corpus data.
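    The cross-situational side of the thesis – accumulating word-referent co-occurrence statistics across many ambiguous situations – can be sketched with a minimal counting model. This is only the statistical half of the picture (it omits the inference over speakers' intentions that the thesis adds), and the situations, object names, and `best_referent` helper below are hypothetical.

    ```python
    from collections import Counter, defaultdict

    # Hypothetical labeled situations: (words spoken, objects present).
    # Each situation is referentially ambiguous on its own; meanings
    # emerge only by aggregating across situations.
    situations = [
        ({"look", "a", "ball"}, {"BALL", "DOG"}),
        ({"the", "ball", "rolls"}, {"BALL"}),
        ({"a", "dog"}, {"DOG", "BALL"}),
        ({"the", "dog", "barks"}, {"DOG"}),
    ]

    # Accumulate word-object co-occurrence counts over time.
    cooc = defaultdict(Counter)
    word_count = Counter()
    for words, objects in situations:
        for w in words:
            word_count[w] += 1
            for o in objects:
                cooc[w][o] += 1

    def best_referent(word):
        # Score each object by how reliably it is present when the
        # word is uttered, and pick the most reliable one.
        scores = {o: c / word_count[word] for o, c in cooc[word].items()}
        return max(scores, key=scores.get)

    print(best_referent("ball"))  # BALL is present in every "ball" situation
    print(best_referent("dog"))
    ```

    Even this bare counting scheme resolves "ball" to BALL and "dog" to DOG, despite no single situation being unambiguous; the thesis's contribution is to embed such statistics in a joint inference over what the speaker intends to refer to.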