75 research outputs found

    Bilingual and Monolingual Children Attend to Different Cues When Learning New Words

    The way in which children learn language can vary depending on their language environment. Previous work suggests that bilingual children may be more sensitive to pragmatic cues from a speaker when learning new words than monolingual children are. On the other hand, monolingual children may rely more heavily on object properties than bilingual children do. In this study we manipulated these two sources of information within the same paradigm, using eye gaze as a pragmatic cue and similarity along different dimensions as an object cue. In the crucial condition, the object and pragmatic cues were inconsistent with each other. Our results showed that in this ambiguous condition monolingual children attended more to object property cues, whereas bilingual children attended more to pragmatic cues. Control conditions showed that monolingual children were sensitive to eye gaze and bilingual children were sensitive to similarity by shape; it was only when the cues were inconsistent that children's preference for one cue or the other became apparent. Our results suggest that children learn to weigh different cues depending on their relative informativeness in their environment.

    On the Automatic Generation and Simplification of Children's Stories

    With recent advances in large language models (LLMs), the concept of automatically generating children's educational materials has become increasingly realistic. Working toward the goal of age-appropriate simplicity in generated educational texts, we first examine the ability of several popular LLMs to generate stories with properly adjusted lexical and readability levels. We find that, in spite of the growing capabilities of LLMs, they do not yet possess the ability to limit their vocabulary to levels appropriate for younger age groups. As a second experiment, we explore the ability of state-of-the-art lexical simplification models to generalize to the domain of children's stories and, thus, to form an efficient pipeline for their automatic generation. To test these models, we develop a dataset of child-directed lexical simplification instances, with examples taken from the LLM-generated stories in our first experiment. We find that the strongest-performing current lexical simplification models do not perform as well on material designed for children, because they rely on large language models behind the scenes. However, some models that achieve fairly strong results on general data can match or even improve their performance on child-directed data with proper fine-tuning, which we conduct using our newly created child-directed simplification dataset.
    Comment: Accepted to EMNLP 2023 (main conference).
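
    The first experiment in this paper hinges on measuring whether generated text matches a target lexical and readability level. A minimal sketch of such a check follows, assuming the third-party Python package textstat; the function, the grade-3 target, and the sample story are illustrative stand-ins, not the authors' actual evaluation setup.

        # Readability check for a generated children's story (illustrative sketch).
        # Assumes: pip install textstat
        import textstat

        def readability_report(story: str, target_grade: float = 3.0) -> dict:
            """Score a story and flag whether it reads at or below the target U.S. grade."""
            grade = textstat.flesch_kincaid_grade(story)  # estimated school-grade level
            hard = textstat.difficult_words(story)        # words outside a common easy-word list
            return {
                "flesch_kincaid_grade": grade,
                "difficult_word_count": hard,
                "within_target": grade <= target_grade,
            }

        story = ("Mila found a tiny boat by the river. "
                 "She pushed it into the water and watched it float away.")
        print(readability_report(story))

    A pipeline of the kind the paper describes would run a check like this on each LLM draft and route stories that score above the target grade to a lexical simplification step.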
