
    No frills: Simple regularities in language can go a long way in the development of word knowledge

    Recent years have seen a flourishing of Natural Language Processing (NLP) models that can mimic many aspects of human language fluency. These models harness a simple, decades-old idea: it is possible to learn a lot about word meanings just from exposure to language, because words similar in meaning are used in language in similar ways. The successes of these models raise the intriguing possibility that exposure to word use in language also shapes the word knowledge that children amass during development. However, this possibility is strongly challenged by the fact that the models rely on language input and learning mechanisms that may be unavailable to children. Across three studies, we found that unrealistically complex input and learning mechanisms are unnecessary: simple regularities of word use in children's language input, which children have the capacity to learn, can foster knowledge about word meanings. Thus, exposure to language may play a simple but powerful role in children's growing word knowledge. A video abstract of this article can be viewed at https://youtu.be/dT83dmMffnM.

    RESEARCH HIGHLIGHTS:
    - Natural Language Processing (NLP) models can learn that words are similar in meaning from higher-order statistical regularities of word use.
    - Unlike NLP models, infants and children may primarily learn only simple co-occurrences between words.
    - We show that infants' and children's language input is rich in simple co-occurrences that can support learning similarities in meaning between words.
    - We find that simple co-occurrences can explain infants' and children's knowledge that words are similar in meaning.
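    The distributional idea this abstract describes can be made concrete in a few lines of code. The sketch below is a hypothetical illustration, not the authors' model, corpus, or analysis: it builds simple first-order co-occurrence counts from a toy corpus and compares words' co-occurrence profiles with cosine similarity, showing that words used in overlapping contexts ("cat" and "dog") come out more similar than words that are not ("cat" and "rain").

    # Minimal sketch (illustrative assumptions only: toy corpus, window size,
    # and word choices are invented here, not taken from the paper).
    from collections import defaultdict
    from math import sqrt

    corpus = [
        "kids love the small cat",
        "kids feed the hungry dog",
        "the cat chased a mouse",
        "the dog chased a ball",
        "rain fell on the hills",
    ]

    WINDOW = 2  # count words appearing within two positions of the target
    counts = defaultdict(lambda: defaultdict(int))

    for sentence in corpus:
        words = sentence.split()
        for i, target in enumerate(words):
            lo, hi = max(0, i - WINDOW), min(len(words), i + WINDOW + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[target][words[j]] += 1

    def cosine(u, v):
        """Cosine similarity between two sparse count vectors (dicts)."""
        dot = sum(u[w] * v[w] for w in set(u) & set(v))
        norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
        return dot / norm if norm else 0.0

    # "cat" and "dog" occur in overlapping contexts, so their co-occurrence
    # profiles are similar; "cat" and "rain" share no context words here.
    print(cosine(counts["cat"], counts["dog"]))   # ~0.86
    print(cosine(counts["cat"], counts["rain"]))  # 0.0

    Even these raw first-order counts, with no embeddings or higher-order statistics, separate the related pair from the unrelated one, which is the kind of simple regularity the abstract argues is within children's learning capacity.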

    Sources of Interference in Memory across Development
