
    Can you see what I am talking about? Human speech triggers referential expectation in four-month-old infants

    Infants’ sensitivity to selectively attend to human speech and to process it in a unique way has been widely reported in the past. However, in order to successfully acquire language, one must also understand that speech is referential, and that words can stand for other entities in the world. While there has been some evidence showing that young infants can make inferences about the communicative intentions of a speaker, whether they also appreciate the direct relationship between a specific word and its referent is still unknown. In the present study we tested four-month-old infants to see whether they would expect to find a referent when they hear human speech. Our results showed that, compared to other auditory stimuli or to silence, infants listening to speech were more prepared to find visual referents of the words, as signalled by their faster orienting towards the visual objects. Hence, our study is the first to report evidence that infants at a very young age already understand the referential relationship between auditory words and physical objects, thus showing a precursor to appreciating the symbolic nature of language, even if they do not yet understand the meanings of words.

    Gaze following in human infants depends on communicative signals

    Humans are extremely sensitive to ostensive signals, like eye contact or having their name called, that indicate someone's communicative intention toward them [1,2,3]. Infants also pay attention to these signals [4,5,6], but it is unknown whether they appreciate their significance in the initiation of communicative acts. In two experiments, we employed video presentation of an actor turning toward one of two objects and recorded infants' gaze-following behavior [7,8,9,10,11,12,13] with eye-tracking techniques [11,12]. We found that 6-month-old infants followed the adult's gaze (a potential communicative-referential signal) toward an object only when such an act was preceded by ostensive cues such as direct gaze (experiment 1) or infant-directed speech (experiment 2). Such a link between the presence of ostensive signals and gaze following suggests that this behavior serves a functional role in assisting infants to respond effectively to referential communication directed to them. Whereas gaze following in many nonhuman species supports social information gathering [14,15,16,17,18], in humans it initially appears to reflect the expectation of a more active, communicative role from the information source.

    Labels direct infants’ attention to commonalities during novel category learning

    Recent studies have provided evidence that labeling can influence the outcome of infants’ visual categorization. However, what exactly happens during learning remains unclear. Using eye-tracking, we examined infants’ attention to object parts during learning. Our analysis of looking behaviors during learning provides insights going beyond merely observing the learning outcome. Both labeling and non-labeling phrases facilitated category formation in 12-month-olds but not 8-month-olds (Experiment 1). Non-linguistic sounds did not produce this effect (Experiment 2). Detailed analyses of infants’ looking patterns during learning revealed that only infants who heard labels exhibited a rapid focus on the object part that successive exemplars had in common. Although other linguistic stimuli may also be beneficial for learning, we therefore conclude that labels have a unique impact on categorization.

    The joint role of trained, untrained, and observed actions at the origins of goal recognition

    Recent findings across a variety of domains reveal the benefits of self-produced experience on object exploration, object knowledge, attention, and action perception. The influence of active experience may be particularly important in infancy, when motor development is undergoing great changes. Despite the importance of self-produced experience, we know that infants and young children are eventually able to gain knowledge through purely observational experience. In the current work, three-month-old infants were given experience with object-directed actions in one of three forms, and their recognition of the goal of grasping actions was then assessed in a habituation paradigm. All infants were given the chance to manually interact with the toys without assistance (a difficult task for most three-month-olds). Two of the three groups were then given additional experience with object-directed actions, either through active training (in which Velcro mittens helped infants act more efficiently) or observational training. Findings support the conclusion that self-produced experience is uniquely informative for action perception and suggest that individual differences in spontaneous motor activity may interact with observational experience to inform action perception early in life.

    Effects of simultaneous speech and sign on infants’ attention to spoken language

    Objectives: To examine the hypothesis that infants receiving a degraded auditory signal have more difficulty segmenting words from fluent speech if familiarized with the words presented in both speech and sign compared to familiarization with the words presented in speech only. Study Design: Experiment utilizing an infant-controlled visual preference procedure. Methods: Twenty 8.5-month-old normal-hearing infants completed testing. Infants were familiarized with repetitions of words in either the speech + sign (n = 10) or the speech only (n = 10) condition. Results: Infants were then presented with four six-sentence passages using an infant-controlled visual preference procedure. Every sentence in two of the passages contained the words presented in the familiarization phase, whereas none of the sentences in the other two passages contained familiar words. Infants exposed to the speech + sign condition looked at familiar word passages for 15.3 seconds and at nonfamiliar word passages for 15.6 seconds, t (9) = -0.130, p = .45. Infants exposed to the speech only condition looked at familiar word passages for 20.9 seconds and at nonfamiliar word passages for 15.9 seconds. This difference was statistically significant, t (9) = 2.076, p = .03. Conclusions: Infants' ability to segment words from degraded speech is negatively affected when these words are initially presented in simultaneous speech and sign. The current study suggests that a decreased ability to segment words from fluent speech may contribute towards the poorer performance of pediatric cochlear implant recipients in total communication settings on a wide range of spoken language outcome measures.
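    The within-subject comparisons above are paired t-tests on looking times, with df = n − 1 = 9 for each group of 10 infants. As a minimal sketch of how such a statistic is computed (the per-infant looking times below are hypothetical illustrations, not data from the study, which reports only group means):

    ```python
    import math
    from statistics import mean, stdev

    def paired_t(condition_a, condition_b):
        """Paired t-test: t = mean(d) / (sd(d) / sqrt(n)), with df = n - 1,
        where d is the per-subject difference between conditions."""
        diffs = [a - b for a, b in zip(condition_a, condition_b)]
        n = len(diffs)
        t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
        return t, n - 1

    # Hypothetical looking times (seconds) for 10 infants, loosely modeled on
    # the speech-only condition; the abstract does not report per-infant data.
    familiar    = [22.1, 19.5, 24.0, 18.2, 21.7, 20.3, 25.1, 17.9, 20.5, 19.7]
    nonfamiliar = [16.0, 15.2, 18.1, 14.9, 16.5, 15.8, 17.2, 14.5, 15.9, 15.4]

    t, df = paired_t(familiar, nonfamiliar)
    print(f"t({df}) = {t:.3f}")
    ```

    A positive t with df = 9 exceeding the one-tailed critical value (about 1.83 at p = .05) would indicate longer looking at familiar-word passages, the pattern reported for the speech-only group.
    
    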

    The listening talker: A review of human and algorithmic context-induced modifications of speech

    Speech output technology is finding widespread application, including in scenarios where intelligibility might be compromised - at least for some listeners - by adverse conditions. Unlike most current algorithms, talkers continually adapt their speech patterns as a response to the immediate context of spoken communication, where the type of interlocutor and the environment are the dominant situational factors influencing speech production. Observations of talker behaviour can motivate the design of more robust speech output algorithms. Starting with a listener-oriented categorisation of possible goals for speech modification, this review article summarises the extensive set of behavioural findings related to human speech modification, identifies which factors appear to be beneficial, and goes on to examine previous computational attempts to improve intelligibility in noise. The review concludes by tabulating 46 speech modifications, many of which have yet to be perceptually or algorithmically evaluated. Consequently, the review provides a roadmap for future work in improving the robustness of speech output.

    Development of neural responses to hearing their own name in infants at low and high risk for autism spectrum disorder

    One's own name is a salient stimulus, used by others to initiate social interaction. Typically developing infants orient towards the sound of their own name and exhibit enhanced event-related potentials (ERPs) at 5 months. The lack of orientation to one's own name is considered to be one of the earliest signs of autism spectrum disorder (ASD). In this study, we investigated ERPs to hearing one's own name in infants at high and low risk for ASD, at 10 and 14 months. We hypothesized that low-risk infants would exhibit enhanced frontal ERP responses to their own name compared to an unfamiliar name, while high-risk infants were expected to show attenuation or absence of this difference in their ERP responses. Contrary to expectations, we did not find enhanced ERPs to the own name in the low-risk group. However, the high-risk group exhibited attenuated frontal positive-going activity to their own name compared to an unfamiliar name and compared to the low-risk group, at the age of 14 months. These results suggest that infants at high risk for ASD start to process their own name differently shortly after one year of age, a period when frontal brain development is happening at a fast rate.

    Newborns' preference for face-relevant stimuli: effects of contrast polarity

    There is currently no agreement as to how specific or general the mechanisms underlying newborns' face preferences are. We address this issue by manipulating the contrast polarity of schematic and naturalistic face-related images and assessing the preferences of newborns. We find that for both schematic and naturalistic face images, contrast polarity is important. Newborns did not show a preference for an upright face-related image unless it was composed of darker areas around the eyes and mouth. This result is consistent with sensitivity to the shadowed areas of a face under overhead (natural) illumination, with detection of eye contact, or with both.

    Domain general learning: Infants use social and non-social cues when learning object statistics.

    Previous research has shown that infants can learn from social cues. But is a social cue more effective at directing learning than a non-social cue? This study investigated whether 9-month-old infants (N = 55) could learn a visual statistical regularity in the presence of a distracting visual sequence when attention was directed by either a social cue (a person) or a non-social cue (a rectangle). The results show that both social and non-social cues can guide infants' attention to a visual shape sequence (and away from a distracting sequence). The social cue directed attention more effectively than the non-social cue during the familiarization phase, but it did not result in significantly stronger learning. The findings suggest that domain-general attention mechanisms allow for the comparable learning seen in both conditions.