Onomatopoeia, gestures, actions and words: how do caregivers use multimodal cues in their communication to children?

Most research on how children learn the mapping between words and the world has assumed that language is arbitrary, and has investigated language learning in contexts in which the objects referred to are present in the environment. Here, we report analyses of a semi-naturalistic corpus of caregivers talking to their 2- to 3-year-old children. We focus on caregivers' use of non-arbitrary cues across different expressive channels, both iconic (onomatopoeia and representational gestures) and indexical (points and actions with objects). We ask whether these cues are used differently when talking about objects known or unknown to the child, and when the referred-to objects are present or absent. We hypothesize that caregivers use these cues more often with objects novel to the child, and that they use iconic cues especially when objects are absent, because iconic cues bring properties of referents to the mind's eye. We find that cue distribution differs: all cues except points are more common for unknown objects, indicating their potential role in learning; onomatopoeia and representational gestures are more common in displaced contexts, whereas indexical cues are more common when objects are present. Thus, caregivers provide multimodal non-arbitrary cues to support children's vocabulary learning, and iconicity in particular can support linking mental representations of objects and labels.