
    Visual speech contributes to phonetic learning in 6-month-old infants

    Previous research has shown that infants match vowel sounds to facial displays of vowel articulation [Kuhl, P. K., & Meltzoff, A. N. (1982). The bimodal perception of speech in infancy. Science, 218, 1138–1141; Patterson, M. L., & Werker, J. F. (1999). Matching phonetic information in lips and voice is robust in 4.5-month-old infants. Infant Behaviour & Development, 22, 237–247], and integrate seen and heard speech sounds [Rosenblum, L. D., Schmuckler, M. A., & Johnson, J. A. (1997). The McGurk effect in infants. Perception & Psychophysics, 59, 347–357; Burnham, D., & Dodd, B. (2004). Auditory-visual speech integration by prelinguistic infants: Perception of an emergent consonant in the McGurk effect. Developmental Psychobiology, 45, 204–220]. However, the role of visual speech in language development remains unknown. Our aim was to determine whether seen articulations enhance phoneme discrimination, thereby playing a role in phonetic category learning. We exposed 6-month-old infants to speech sounds from a restricted range of a continuum between /ba/ and /da/, following a unimodal frequency distribution. Synchronously with these speech sounds, one group of infants (the two-category group) saw a visual articulation of a canonical /ba/ or /da/, with the two alternative visual articulations, /ba/ and /da/, being presented according to whether the auditory token was on the /ba/ or /da/ side of the midpoint of the continuum. Infants in a second (one-category) group were presented with the same unimodal distribution of speech sounds, but every token for any particular infant was always paired with the same syllable, either a visual /ba/ or a visual /da/. A stimulus-alternation preference procedure following the exposure revealed that infants in the former, and not in the latter, group discriminated the /ba/–/da/ contrast. 
These results show not only that visual information about speech articulation enhances phoneme discrimination, but also that it may contribute to the learning of phoneme boundaries in infancy.
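The exposure design above can be sketched in code. The following is an illustrative simulation, not the authors' materials: continuum positions, the unimodal (triangular) sampling distribution, and the condition names are assumptions chosen to mirror the description of the two-category and one-category pairings.

```python
import random

def sample_token(rng):
    """Draw a /ba/-/da/ continuum position (0 = /ba/, 1 = /da/) from a
    unimodal distribution peaked at the midpoint, as in the exposure phase."""
    return rng.triangular(0.0, 1.0, 0.5)

def visual_pairing(token, condition, fixed_syllable="ba"):
    """Return the visual articulation paired with an auditory token.

    two_category: visual /ba/ or /da/ depending on which side of the
                  continuum midpoint the token falls (informative pairing).
    one_category: every token is paired with the same syllable for a given
                  infant (uninformative pairing).
    """
    if condition == "two_category":
        return "ba" if token < 0.5 else "da"
    return fixed_syllable

rng = random.Random(0)
tokens = [sample_token(rng) for _ in range(10)]
two_cat = [visual_pairing(t, "two_category") for t in tokens]
one_cat = [visual_pairing(t, "one_category") for t in tokens]
```

In the two-category condition the visual syllable covaries with the auditory token's side of the midpoint, so vision carries category information; in the one-category condition it is constant and carries none, which is the contrast the study exploits.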

    Indexical and linguistic processing by 12-month-olds: discrimination of speaker, accent and vowel differences

    Infants preferentially discriminate between speech tokens that cross native category boundaries prior to acquiring a large receptive vocabulary, implying a major role for unsupervised distributional learning strategies in phoneme acquisition in the first year of life. Multiple sources of between-speaker variability contribute to children’s language input and thus complicate the problem of distributional learning. Adults resolve this type of indexical variability by adjusting their speech processing for individual speakers. For infants to handle indexical variation in the same way, they must be sensitive to both linguistic and indexical cues. To assess infants’ sensitivity to and relative weighting of indexical and linguistic cues, we familiarized 12-month-old infants to tokens of a vowel produced by one speaker, and tested their listening preference to trials containing a vowel category change produced by the same speaker (linguistic information), and the same vowel category produced by another speaker of the same or a different accent (indexical information). Infants noticed both linguistic and indexical differences, suggesting that both are salient in infant speech processing. Future research should explore how infants weight these cues in a distributional learning context that contains both phonetic and indexical variation.

    On the influence of sensor morphology on vergence

    In the field of developmental robotics, a lot of attention has been devoted to algorithms that allow agents to build up skills through sensorimotor interaction. Such interaction is largely affected by the agent's morphology, that is, its shape and limb articulation, as well as the position and density of sensors on its body surface. Despite its importance, the impact of morphology on behavior has not been systematically addressed. In this paper, we take inspiration from the human vision system and demonstrate, using a binocular active vision platform, why sensor morphology, in combination with other properties of the body, is an essential condition for achieving coordinated visual behavior (here, vergence). Specifically, to evaluate the effect of sensor morphology on behavior, we present an information-theoretic analysis quantifying the statistical regularities induced through sensorimotor interaction. Our results show that only for an adequate sensor morphology does vergence increase the amount of information structure in the sensorimotor loop.
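One common way to quantify "information structure in the sensorimotor loop" is the mutual information between motor and sensor variables. The sketch below is an assumption, not the paper's analysis code: it shows a plug-in histogram estimate of mutual information between a discretised motor variable (e.g. a vergence angle) and a discretised sensor reading, where stronger sensorimotor coupling yields a higher estimate.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in mutual-information estimate (in bits) for two discrete
    sequences of equal length, using empirical joint and marginal counts."""
    n = len(xs)
    px = Counter(xs)          # marginal counts of the motor variable
    py = Counter(ys)          # marginal counts of the sensor variable
    pxy = Counter(zip(xs, ys))  # joint counts
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

# Perfectly coupled binary motor/sensor samples carry 1 bit of shared
# information; a constant (uncoupled) motor signal shares none.
motor = [0, 1, 0, 1, 0, 1, 0, 1]
sensor = [0, 1, 0, 1, 0, 1, 0, 1]
```

In practice, continuous camera and motor signals would first be binned, and the plug-in estimate corrected for finite-sample bias; the point here is only the shape of the measure used to compare morphologies.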