
    Infants’ intentionally communicative vocalisations elicit responses from caregivers and are the best predictors of the transition to language: a longitudinal investigation of infants’ vocalisations, gestures, and word production

    What aspects of infants’ prelinguistic communication are most valuable for learning to speak, and why? We test whether early vocalisations and gestures drive the transition to word use because, in addition to indicating motoric readiness, they (1) are early instances of intentional communication and (2) elicit verbal responses from caregivers. In Study 1, 11-month-olds (N = 134) were observed to coordinate vocalisations and gestures with gaze to their caregiver’s face at above-chance rates, indicating that they are plausibly intentionally communicative. Study 2 tested whether those infant communicative acts that were gaze-coordinated best predicted later expressive vocabulary. We report a novel procedure for predicting vocabulary via multi-model inference over a comprehensive set of infant behaviours produced at 11 and 12 months (n = 58). This makes it possible to establish the relative predictive value of different behaviours that are hierarchically organised by level of granularity. Gaze-coordinated vocalisations were the most valuable predictors of expressive vocabulary size up to 24 months. Study 3 established that caregivers were more likely to respond to gaze-coordinated behaviours. Moreover, the dyadic combination of infant gaze-coordinated vocalisation and caregiver response was by far the best predictor of later vocabulary size. We conclude that practice with prelinguistic intentional communication facilitates the leap to symbol use. Learning is optimised when caregivers respond to intentional vocalisations with appropriate language.
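
    The abstract does not spell out the multi-model inference procedure; in the usual Burnham-and-Anderson sense, the term means fitting all candidate predictor subsets and weighting them by AIC. Below is a minimal sketch along those lines using statsmodels. The predictor names, synthetic data, and flat model set are hypothetical illustrations; the paper's actual procedure organises behaviours hierarchically.

        # Sketch: multi-model inference as AIC-based model averaging.
        # Predictor names and data are hypothetical stand-ins.
        from itertools import combinations

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        predictors = ["gaze_coord_vocalisation", "gaze_coord_gesture",
                      "plain_vocalisation", "plain_gesture"]

        def akaike_weights(aics):
            # Convert AIC values into relative model weights.
            delta = np.asarray(aics) - min(aics)
            w = np.exp(-0.5 * delta)
            return w / w.sum()

        def rank_predictors(df, outcome="vocab_24m"):
            # Fit every non-empty subset of predictors, then score each
            # predictor by the summed weight of the models containing it.
            models, aics = [], []
            for k in range(1, len(predictors) + 1):
                for subset in combinations(predictors, k):
                    X = sm.add_constant(df[list(subset)])
                    models.append(set(subset))
                    aics.append(sm.OLS(df[outcome], X).fit().aic)
            weights = akaike_weights(aics)
            importance = {p: weights[[p in m for m in models]].sum()
                          for p in predictors}
            return sorted(importance.items(), key=lambda kv: -kv[1])

        # Synthetic data with the study's n = 58, purely for illustration.
        rng = np.random.default_rng(0)
        df = pd.DataFrame(rng.poisson(3, size=(58, 4)), columns=predictors)
        df["vocab_24m"] = 10 * df["gaze_coord_vocalisation"] + rng.normal(0, 5, 58)
        print(rank_predictors(df))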

    Interactive Language Learning by Robots: The Transition from Babbling to Word Forms

    The advent of humanoid robots has enabled a new approach to investigating the acquisition of language, and we report on the development of robots able to acquire rudimentary linguistic skills. Our work focuses on early stages analogous to some characteristics of a human child of about 6 to 14 months, the transition from babbling to first word forms. We investigate one mechanism among many that may contribute to this process, a key factor being the sensitivity of learners to the statistical distribution of linguistic elements. As well as being necessary for learning word meanings, the acquisition of anchor word forms facilitates the segmentation of an acoustic stream through other mechanisms. In our experiments, some salient one-syllable word forms are learnt by a humanoid robot in real-time interactions with naive participants. Words emerge from random syllabic babble through a learning process based on a dialogue between the robot and the human participant, whose speech is perceived by the robot as a stream of phonemes. Numerous ways of representing the speech as syllabic segments are possible. Furthermore, the pronunciation of many words in spontaneous speech is variable. However, in line with research elsewhere, we observe that salient content words are more likely than function words to have consistent canonical representations; thus their relative frequency increases, as does their influence on the learner. Variable pronunciation may contribute to early word-form acquisition. The importance of contingent interaction in real time between teacher and learner is reflected by a reinforcement process, with variable success. The examination of individual cases may be more informative than group results. Nevertheless, word forms are usually produced by the robot after a few minutes of dialogue, employing a simple, real-time, frequency-dependent mechanism. This work shows the potential of human-robot interaction systems in studies of the dynamics of early language acquisition.
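
    The abstract describes, but does not specify, the frequency-dependent mechanism. A minimal sketch under the stated assumptions: syllables heard more often in the teacher's speech become more likely to be babbled, and an approving response reinforces the form just produced. The class, its methods, and the reinforcement increment are all hypothetical.

        # Sketch: frequency-dependent emergence of word forms from babble.
        # All names and the reinforcement increment are hypothetical.
        import random
        from collections import Counter

        class BabbleLearner:
            def __init__(self, reinforcement=5):
                self.counts = Counter()      # syllable -> observed frequency
                self.reinforcement = reinforcement
                self.last_produced = None

            def hear(self, syllables):
                # Update counts from an utterance perceived as syllables.
                self.counts.update(syllables)

            def babble(self):
                # Produce a syllable in proportion to its frequency;
                # fall back to random babble before any input is heard.
                if not self.counts:
                    self.last_produced = random.choice(["ba", "da", "gu"])
                else:
                    syls, freqs = zip(*self.counts.items())
                    self.last_produced = random.choices(syls, weights=freqs)[0]
                return self.last_produced

            def reinforce(self):
                # Teacher responded approvingly: boost the last form.
                if self.last_produced is not None:
                    self.counts[self.last_produced] += self.reinforcement

    Over repeated hear/babble/reinforce cycles, frequent salient forms come to dominate production, mirroring the emergence of word forms from babble described above.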

    Computer applications


    Acoustic correlates of stress in young children’s speech

    This study examined the acoustic correlates of stress in children’s productions of familiar words. Previous research has employed experimental words rather than familiar words to examine children’s phonetic marking of stress, or has not adequately controlled for phonetic environment. Subjects in this study included 22 children, aged 18-30 months, and 6 adults. Fundamental frequency, duration, and amplitude measures were extracted from stressed and unstressed syllables in two types of comparisons: one that controlled phonetic environment and syllable position (interword) and one that measured the relative effects of stress within the same word (intraword). When the tokens were analyzed on the basis of target stress pattern, results revealed no differences between adults and children in their acoustic marking of stress. Listener judgments showed that approximately 30% of children’s two-syllable productions were coded unreliably or were perceived as inaccurately stressed. Overall findings indicate that children control fundamental frequency, amplitude, and duration to derive perceptually identifiable stress contrasts in the majority of their productions, but they are not completely adult-like in their marking of stress.
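
    The abstract names three acoustic measures but not how they were extracted. A minimal sketch of per-syllable measurement, assuming hand-labelled syllable boundaries and the librosa library; the file name and boundary times are hypothetical, and the study's actual measurement pipeline is not described here.

        # Sketch: per-syllable F0, duration, and amplitude with librosa.
        # The file name and syllable boundaries are hypothetical.
        import librosa
        import numpy as np

        def syllable_measures(path, start_s, end_s):
            y, sr = librosa.load(path, sr=None)
            seg = y[int(start_s * sr):int(end_s * sr)]
            # Probabilistic YIN pitch tracking; unvoiced frames come back
            # as NaN and are excluded from the mean. 100-600 Hz roughly
            # spans the child F0 range.
            f0, _, _ = librosa.pyin(seg, fmin=100, fmax=600, sr=sr)
            return {"f0_hz": float(np.nanmean(f0)),
                    "duration_s": end_s - start_s,
                    "rms": float(librosa.feature.rms(y=seg).mean())}

        # Intraword comparison: stressed vs. unstressed syllable of one word.
        stressed = syllable_measures("child_word.wav", 0.10, 0.35)
        unstressed = syllable_measures("child_word.wav", 0.35, 0.52)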