
    Statistical distributions of consonant variants in infant-directed speech: evidence that /t/ may be exceptional

    Statistical distributions of phonetic variants in spoken language influence speech perception for both language learners and mature users. We theorized that patterns of phonetic variant processing of consonants demonstrated by adults might stem in part from patterns of early exposure to statistics of phonetic variants in infant-directed (ID) speech. In particular, we hypothesized that ID speech might involve greater proportions of canonical /t/ pronunciations compared to adult-directed (AD) speech in at least some phonological contexts. This possibility was tested using a corpus of spontaneous speech of mothers speaking to other adults, or to their typically-developing infant. Tokens of word-final alveolar stops – including /t/, /d/, and the nasal stop /n/ – were examined in assimilable contexts (i.e., those followed by a word-initial labial and/or velar); these were classified as canonical, assimilated, deleted, or glottalized. Results confirmed that there were significantly more canonical pronunciations in assimilable contexts in ID compared with AD speech, an effect which was driven by the phoneme /t/. These findings suggest that, at least in phonological contexts involving possible assimilation, children are exposed to more canonical /t/ variant pronunciations than adults are. This raises the possibility that perceptual processing of canonical /t/ may be partly attributable to exposure to canonical /t/ variants in ID speech. Results support the need for further research into how statistics of variant pronunciations in early language input may shape speech processing across the lifespan.
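
    The analysis above is essentially a comparison of categorical variant counts across registers: each token of word-final /t/, /d/, or /n/ in an assimilable context falls into one of four classes, and the question is whether the canonical share is larger in ID than in AD speech. The sketch below illustrates that kind of tabulation with a chi-square test; the token counts are hypothetical placeholders, and the test is a stand-in for whatever statistical model the authors actually used.

```python
# Illustrative sketch only: hypothetical counts and a simple chi-square test,
# standing in for the authors' actual statistical analysis.
from scipy.stats import chi2_contingency

# Rows = register, columns = variant class for word-final /t/ in assimilable contexts:
# [canonical, assimilated, deleted, glottalized] -- all counts are made up.
counts = {
    "ID": [120, 40, 25, 35],
    "AD": [80, 70, 45, 55],
}

chi2, p, dof, _ = chi2_contingency([counts["ID"], counts["AD"]])

for register, row in counts.items():
    print(f"{register}: {row[0] / sum(row):.1%} canonical ({sum(row)} tokens)")
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```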

    Are you talking to me? Neural activations in 6-month-old infants in response to being addressed during natural interactions

    Human interactions are guided by continuous communication among the parties involved, in which verbal communication plays a primary role. However, speech does not necessarily reveal to whom it is addressed, especially for young infants who are unable to decode its semantic content. To overcome this difficulty, adults often explicitly mark their communication as infant-directed. In the present study we investigated whether ostensive signals, which would disambiguate the infant as the addressee of a communicative act, would modulate the brain responses of 6-month-old infants to speech and gestures in an ecologically valid setting. In Experiment 1, we tested whether the gaze direction of the speaker modulates cortical responses to infant-directed speech. To provide a naturalistic environment, two infants and their parents participated at the same time. In Experiment 2, we tested whether a similar modulation of the cortical response would be obtained by varying the intonation (infant- versus adult-directed speech) of the speech during face-to-face, one-on-one communication. The results of both experiments indicated that only the combination of ostensive signals (infant-directed speech and direct gaze) led to enhanced brain activation. This effect was indicated by responses localized in regions known to be involved in processing auditory and visual aspects of social communication. This study also demonstrated the potential of fNIRS as a tool for studying neural responses in naturalistic scenarios, and for simultaneous measurement of brain function in multiple participants.

    The listening talker: A review of human and algorithmic context-induced modifications of speech

    Speech output technology is finding widespread application, including in scenarios where intelligibility might be compromised - at least for some listeners - by adverse conditions. Unlike most current algorithms, talkers continually adapt their speech patterns in response to the immediate context of spoken communication, where the type of interlocutor and the environment are the dominant situational factors influencing speech production. Observations of talker behaviour can motivate the design of more robust speech output algorithms. Starting with a listener-oriented categorisation of possible goals for speech modification, this review article summarises the extensive set of behavioural findings related to human speech modification, identifies which factors appear to be beneficial, and goes on to examine previous computational attempts to improve intelligibility in noise. The review concludes by tabulating 46 speech modifications, many of which have yet to be perceptually or algorithmically evaluated. Consequently, the review provides a roadmap for future work in improving the robustness of speech output.

    Infants segment words from songs - an EEG study

    Children’s songs are omnipresent and highly attractive stimuli in infants’ input. Previous work suggests that infants process linguistic–phonetic information from simplified sung melodies. The present study investigated whether infants learn words from ecologically valid children’s songs. Testing 40 Dutch-learning 10-month-olds in a familiarization-then-test electroencephalography (EEG) paradigm, this study asked whether infants can segment repeated target words embedded in songs during familiarization and subsequently recognize those words in continuous speech in the test phase. To replicate previous speech work and compare segmentation across modalities, infants participated in both song and speech sessions. Results showed a positive event-related potential (ERP) familiarity effect to the final compared to the first target occurrences during both song and speech familiarization. No evidence was found for word recognition in the test phase following either song or speech. Comparisons across the stimuli of the present and a comparable previous study suggested that acoustic prominence and speech rate may have contributed to the polarity of the ERP familiarity effect and its absence in the test phase. Overall, the present study provides evidence that 10-month-old infants can segment words embedded in songs, and it raises questions about the acoustic and other factors that enable or hinder infant word segmentation from songs and speech.

    Are words easier to learn from infant- than adult-directed speech? A quantitative corpus-based investigation

    We investigate whether infant-directed speech (IDS) could facilitate word form learning when compared to adult-directed speech (ADS). To study this, we examine the distribution of word forms at two levels, acoustic and phonological, using a large database of spontaneous speech in Japanese. At the acoustic level we show that, as has been documented before for phonemes, the realizations of words are more variable and less discriminable in IDS than in ADS. At the phonological level, we find an effect in the opposite direction: the IDS lexicon contains more distinctive words (such as onomatopoeias) than the ADS counterpart. Combining the acoustic and phonological metrics into a global discriminability score reveals that the greater separation of lexical categories in the phonological space does not compensate for the opposite effect observed at the acoustic level. As a result, IDS word forms are still globally less discriminable than ADS word forms, even though the effect is numerically small. We discuss the implications of these findings for the view that the functional role of IDS is to improve language learnability.
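
    To make the notion of a global discriminability score concrete, the sketch below computes a toy version: an acoustic component (how far apart tokens of different words are, relative to tokens of the same word) and a phonological component (mean edit distance between word forms), combined by simple addition. The data, distance measures, and combination rule are illustrative assumptions, not the metrics used in the paper.

```python
# Toy "global discriminability" score: illustrative assumptions only,
# not the acoustic/phonological metrics used in the paper.
import numpy as np
from itertools import combinations

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two phonological word forms."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def acoustic_discriminability(tokens):
    """Mean between-word token distance minus mean within-word token distance."""
    words = list(tokens)
    within = np.mean([np.linalg.norm(x - y)
                      for w in words for x, y in combinations(tokens[w], 2)])
    between = np.mean([np.linalg.norm(x - y)
                       for w1, w2 in combinations(words, 2)
                       for x in tokens[w1] for y in tokens[w2]])
    return between - within

def phonological_discriminability(forms):
    """Mean pairwise edit distance between word forms in the lexicon."""
    return float(np.mean([edit_distance(a, b) for a, b in combinations(forms, 2)]))

# Hypothetical acoustic token embeddings (e.g. per-token MFCC means) for two words.
rng = np.random.default_rng(0)
tokens = {
    "wanwan": rng.normal(0.0, 1.0, size=(20, 12)),  # made-up example word
    "inu":    rng.normal(0.7, 1.0, size=(20, 12)),  # made-up example word
}

acoustic = acoustic_discriminability(tokens)
phonological = phonological_discriminability(list(tokens))
global_score = acoustic + phonological  # placeholder combination rule
print(f"acoustic = {acoustic:.2f}, phonological = {phonological:.2f}, "
      f"global = {global_score:.2f}")
```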

    More is more in language learning: reconsidering the less-is-more hypothesis

    The Less-is-More hypothesis was proposed to explain age-of-acquisition effects in first language (L1) acquisition and second language (L2) attainment. We scrutinize different renditions of the hypothesis by examining how learning outcomes are affected by (1) limited cognitive capacity, (2) reduced interference resulting from less prior knowledge, and (3) simplified language input. While there is little-to-no evidence of benefits of limited cognitive capacity, there is ample support for a More-is-More account linking enhanced capacity with better L1- and L2-learning outcomes, and reduced capacity with childhood language disorders. Instead, reduced prior knowledge (relative to adults) may afford children greater flexibility in inductive inference; this contradicts the idea that children benefit from a more constrained hypothesis space. Finally, studies of child-directed speech (CDS) confirm benefits from less complex input at early stages, but also emphasize how greater lexical and syntactic complexity of the input confers benefits in L1 attainment.

    Can you see what I am talking about? Human speech triggers referential expectation in four-month-old infants

    Infants’ tendency to selectively attend to human speech and to process it in a unique way has been widely reported. However, in order to successfully acquire language, one must also understand that speech is referential, and that words can stand for entities in the world. While there is some evidence that young infants can make inferences about the communicative intentions of a speaker, it is still unknown whether they also appreciate the direct relationship between a specific word and its referent. In the present study we tested four-month-old infants to see whether they would expect to find a referent when they hear human speech. Our results showed that, compared to other auditory stimuli or to silence, infants listening to speech were more prepared to find visual referents of the words, as signalled by their faster orienting towards the visual objects. Hence, our study is the first to report evidence that infants at a very young age already understand the referential relationship between spoken words and physical objects, thus showing a precursor to appreciating the symbolic nature of language, even if they do not yet understand the meanings of words.

    Development of neural responses to hearing their own name in infants at low and high risk for autism spectrum disorder

    One’s own name is a salient stimulus, used by others to initiate social interaction. Typically developing infants orient towards the sound of their own name and exhibit enhanced event-related potentials (ERPs) at 5 months. A lack of orientation to one’s own name is considered one of the earliest signs of autism spectrum disorder (ASD). In this study, we investigated ERPs to hearing their own name in infants at high and low risk for ASD, at 10 and 14 months. We hypothesized that low-risk infants would exhibit enhanced frontal ERP responses to their own name compared to an unfamiliar name, while high-risk infants were expected to show attenuation or absence of this difference in their ERP responses. Contrary to expectations, we did not find enhanced ERPs to the own name in the low-risk group. However, the high-risk group exhibited attenuated frontal positive-going activity to their own name compared to an unfamiliar name and compared to the low-risk group, at the age of 14 months. These results suggest that infants at high risk for ASD start to process their own name differently shortly after one year of age, a period of rapid frontal brain development.

    Effects of simultaneous speech and sign on infants’ attention to spoken language

    Objectives: To examine the hypothesis that infants receiving a degraded auditory signal have more difficulty segmenting words from fluent speech if familiarized with the words presented in both speech and sign compared to familiarization with the words presented in speech only. Study Design: Experiment utilizing an infant-controlled visual preference procedure. Methods: Twenty 8.5-month-old normal-hearing infants completed testing. Infants were familiarized with repetitions of words in either the speech + sign (n = 10) or the speech-only (n = 10) condition. Results: Infants were then presented with four six-sentence passages using an infant-controlled visual preference procedure. Every sentence in two of the passages contained the words presented in the familiarization phase, whereas none of the sentences in the other two passages contained familiar words. Infants exposed to the speech + sign condition looked at familiar word passages for 15.3 seconds and at nonfamiliar word passages for 15.6 seconds, t(9) = -0.130, p = .45. Infants exposed to the speech-only condition looked at familiar word passages for 20.9 seconds and at nonfamiliar word passages for 15.9 seconds. This difference was statistically significant, t(9) = 2.076, p = .03. Conclusions: Infants’ ability to segment words from degraded speech is negatively affected when these words are initially presented in simultaneous speech and sign. The current study suggests that a decreased ability to segment words from fluent speech may contribute towards the poorer performance of pediatric cochlear implant recipients in total communication settings on a wide range of spoken language outcome measures.
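
    The two t-tests reported above are within-group comparisons of each infant's looking time to familiar-word versus nonfamiliar-word passages (df = 9 with ten infants per condition). The sketch below shows how such a paired comparison would be computed; the looking-time values are hypothetical, and the paired t-test is an assumption about the exact test used.

```python
# Within-group comparison of looking times (hypothetical data, not the study's).
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-infant looking times in seconds (10 infants in one condition).
familiar    = np.array([22.1, 19.5, 24.0, 18.7, 21.3, 20.8, 23.5, 17.9, 20.2, 21.0])
nonfamiliar = np.array([16.4, 15.2, 17.8, 14.9, 16.0, 15.7, 18.1, 13.8, 15.5, 15.6])

t_stat, p_two_tailed = ttest_rel(familiar, nonfamiliar)
print(f"mean familiar = {familiar.mean():.1f} s, "
      f"mean nonfamiliar = {nonfamiliar.mean():.1f} s")
print(f"t({len(familiar) - 1}) = {t_stat:.3f}, two-tailed p = {p_two_tailed:.3f}")
```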