
    Are words easier to learn from infant- than adult-directed speech? A quantitative corpus-based investigation

    We investigate whether infant-directed speech (IDS) could facilitate word form learning when compared to adult-directed speech (ADS). To study this, we examine the distribution of word forms at two levels, acoustic and phonological, using a large database of spontaneous speech in Japanese. At the acoustic level we show that, as has been documented before for phonemes, the realizations of words are more variable and less discriminable in IDS than in ADS. At the phonological level, we find an effect in the opposite direction: the IDS lexicon contains more distinctive words (such as onomatopoeias) than the ADS counterpart. Combining the acoustic and phonological metrics into a global discriminability score reveals that the greater separation of lexical categories in the phonological space does not compensate for the opposite effect observed at the acoustic level. As a result, IDS word forms are still globally less discriminable than ADS word forms, even though the effect is numerically small. We discuss the implications of these findings for the view that the functional role of IDS is to improve language learnability.
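    The abstract does not spell out the metrics used, so the following is only a minimal sketch, under stated assumptions, of how an acoustic and a phonological discriminability measure for a word list might be computed: nearest-centroid classification accuracy over acoustic tokens and mean normalized edit distance between phonological forms. The function names and input formats are illustrative, not taken from the paper.

```python
# Illustrative sketch only; the paper's actual acoustic and phonological
# metrics are not given in the abstract. Assumed inputs:
#   tokens: dict mapping each word to a list of fixed-length acoustic
#           feature vectors (one per realization of the word)
#   forms:  list of phonological word forms as phoneme strings
import numpy as np

def acoustic_discriminability(tokens):
    """Nearest-centroid accuracy over word tokens (higher = more discriminable)."""
    words = sorted(tokens)
    centroids = {w: np.mean(tokens[w], axis=0) for w in words}
    correct = total = 0
    for w in words:
        for x in tokens[w]:
            nearest = min(words, key=lambda v: np.linalg.norm(np.asarray(x) - centroids[v]))
            correct += int(nearest == w)
            total += 1
    return correct / total

def phonological_distinctiveness(forms):
    """Mean pairwise normalized edit distance between phonological forms."""
    def edit(a, b):
        d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
        d[:, 0] = np.arange(len(a) + 1)
        d[0, :] = np.arange(len(b) + 1)
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1,
                              d[i - 1, j - 1] + (a[i - 1] != b[j - 1]))
        return d[-1, -1] / max(len(a), len(b))
    dists = [edit(a, b) for i, a in enumerate(forms) for b in forms[i + 1:]]
    return float(np.mean(dists))
```

    A global score could then be formed, for instance, by standardizing the two measures and averaging them; the combination actually used by the authors may differ.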

    Prosodic Bootstrapping of Clauses: Is it Language-Specific?

    According to the Prosodic Bootstrapping Hypothesis, infants use prosody to support syntax acquisition (Morgan, 1986). Our previous work provides evidence that infants treat prosodically-marked units as moveable constituents. In order to investigate the mechanism underlying this effect, we tested Japanese-acquiring infants on their ability to use prosody to locate clauses in an English-based artificial grammar. The Japanese infants were able to learn from English prosody, suggesting that prosodic bootstrapping relies on prosody's general acoustic properties. It appears that prosodic cues to syntax are robust enough across languages to be used without extensive knowledge of language-specific prosody.

    The development of a phonological illusion: a cross-linguistic study with Japanese and French infants

    In adults, native language phonology has strong perceptual effects. Previous work has shown that Japanese speakers, unlike French speakers, break up illegal sequences of consonants with illusory vowels: they report hearing abna as abuna. To study the development of phonological grammar, we compared Japanese and French infants in a discrimination task. In Experiment 1, we observed that 14-month-old Japanese infants, in contrast to French infants, failed to discriminate phonetically varied sets of abna-type and abuna-type stimuli. In Experiment 2, 8-month-old French and Japanese infants did not differ significantly from each other. In Experiment 3, we found that, like adults, Japanese infants can discriminate abna from abuna when phonetic variability is reduced (single item). These results show that the phonologically induced /u/ illusion is already experienced by Japanese infants at the age of 14 months. Hence, before having acquired many words of their language, they have grasped enough of their native phonological grammar to constrain their perception of speech sound sequences.

    Language specificity in cortical tracking of speech rhythm at the mora, syllable, and foot levels

    Recent research shows that adults' neural oscillations track the rhythm of the speech signal. However, the extent to which this tracking is driven by the acoustics of the signal, or by language-specific processing, remains unknown. Here adult native listeners of three rhythmically different languages (English, French, Japanese) were compared on their cortical tracking of speech envelopes synthesized in their three native languages, which allowed for coding at each language's dominant rhythmic unit: the foot (2.5 Hz), syllable (5 Hz), or mora (10 Hz), respectively. The three language groups were also tested with a sequence in a non-native language, Polish, and a non-speech vocoded equivalent, to investigate possible differential speech/non-speech processing. The results first showed that cortical tracking was most prominent at 5 Hz (syllable rate) for all three groups, but the French listeners showed enhanced tracking at 5 Hz compared to the English and the Japanese groups. Second, across groups, there were no differences in responses for speech versus non-speech at 5 Hz (syllable rate), but there was better tracking for speech than for non-speech at 10 Hz (not the syllable rate). Together these results provide evidence for both language-general and language-specific influences on cortical tracking. In Australia, this work was supported by a Transdisciplinary and Innovation Grant from the Australian Research Council Centre of Excellence in Dynamics of Language. In France, the work was funded by an ANR-DFG grant (ANR-16-FRAL-0007) and funds from LABEX EFL (ANR-10-LABX-0083) to TN, and a DIM Cerveau et Pensées grant (2013 MOBIBRAIN). In Japan, the work was funded by JSPS grants-in-aid for Scientific Research S (16H06319) and for Specially Promoted Research (20H05617), and a MEXT grant for Innovative Areas #4903 (17H06382) to RM.
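    As a rough illustration of the kind of measure involved (not the study's actual analysis pipeline), tracking could be quantified as the coherence between the speech amplitude envelope and a neural signal at the foot, syllable, and mora rates. All names and parameter choices below are assumptions.

```python
# Rough illustration, not the study's pipeline: coherence between the speech
# amplitude envelope and a neural signal at the foot (2.5 Hz), syllable (5 Hz),
# and mora (10 Hz) rates. `speech` and `eeg` are assumed 1-D arrays sampled at
# a common rate `fs` (Hz); all names here are placeholders.
import numpy as np
from scipy.signal import coherence, hilbert

def amplitude_envelope(speech):
    """Broadband amplitude envelope via the Hilbert transform."""
    return np.abs(hilbert(speech))

def tracking_at_rates(speech, eeg, fs, rates=(2.5, 5.0, 10.0)):
    """Envelope/EEG coherence at each target rhythmic rate."""
    env = amplitude_envelope(speech)
    nperseg = int(4 * fs)  # ~0.25 Hz frequency resolution
    f, coh = coherence(env, eeg, fs=fs, nperseg=nperseg)
    return {rate: float(coh[np.argmin(np.abs(f - rate))]) for rate in rates}
```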

    Functional Lateralization of Speech Processing in Adults and Children Who Stutter

    Developmental stuttering is a disorder of speech fluency characterized by repetitions, prolongations, and silent blocks, especially in the initial parts of utterances. Although their symptoms are motor related, people who stutter show abnormal patterns of cerebral hemispheric dominance in both anterior and posterior language areas. It is unknown whether the abnormal functional lateralization in the posterior language area starts during childhood or emerges as a consequence of many years of stuttering. To address this issue, we measured the lateralization of hemodynamic responses in the auditory cortex during auditory speech processing in adults and children who stutter, including preschoolers, using near-infrared spectroscopy. We used an analysis-resynthesis technique to prepare two types of stimuli: (i) a phonemic contrast embedded in Japanese spoken words (/itta/ vs. /itte/) and (ii) a prosodic contrast (/itta/ vs. /itta?/). In the baseline blocks, only /itta/ tokens were presented. In phonemic contrast blocks, /itta/ and /itte/ tokens were presented pseudo-randomly, and /itta/ and /itta?/ tokens in prosodic contrast blocks. In adults and children who do not stutter, there was a clear left-hemispheric advantage for the phonemic contrast compared to the prosodic contrast. Adults and children who stutter, however, showed no significant difference between the two stimulus conditions. A subject-by-subject analysis revealed that not a single subject who stutters showed a left advantage for the phonemic contrast over the prosodic contrast condition. These results indicate that the functional lateralization for auditory speech processing is disrupted among those who stutter, even at preschool age. These results shed light on the neural pathophysiology of developmental stuttering.
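    The abstract does not state how lateralization was quantified; a conventional option, shown here purely as an assumed illustration rather than the authors' method, is a laterality index LI = (L - R) / (L + R) computed from channel-averaged response amplitudes in each hemisphere.

```python
# Assumed illustration only: a conventional laterality index,
# LI = (L - R) / (L + R), computed from channel-averaged response amplitudes
# (e.g., oxy-Hb changes) in the left and right hemispheres. The paper's exact
# quantification may differ.
import numpy as np

def laterality_index(left_channels, right_channels):
    """Positive values indicate a left-hemisphere advantage."""
    l = float(np.mean(left_channels))
    r = float(np.mean(right_channels))
    return (l - r) / (l + r) if (l + r) != 0 else 0.0
```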

    Nasal Consonant Discrimination in Infant- and Adult-Directed Speech

    Ludusan B, Jorschick A, Mazuka R. Nasal Consonant Discrimination in Infant- and Adult-Directed Speech. In: Interspeech 2019. ISCA; 2019.
    Infant-directed speech (IDS) is thought to play a facilitating role in language acquisition by simplifying the input infants receive. In particular, the hypothesis that the acoustic level is enhanced to make the input clearer for infants has been extensively studied in the case of vowels, but less so in the case of consonants. We investigated how nasal consonants can be discriminated in infant- compared to adult-directed speech (ADS), on a corpus of Japanese mother-infant spontaneous conversations, by examining all bilabial and alveolar nasals occurring in intervocalic position. The Pearson correlation between corresponding spectrum slices of nasal consonants, in identical vowel contexts, was employed as a similarity measure, and a statistical model was fit using this information. It revealed a decrease in similarity between the nasal classes in IDS compared to ADS, although the effect was not statistically significant. We confirmed these results using an unsupervised machine learning algorithm to discriminate between the two nasal classes, obtaining similar classification performance in IDS and ADS. We discuss our findings in the context of the current literature on infant-directed speech.
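    A minimal sketch of the similarity measure described above, with implementation details (spectrogram settings, time normalization) assumed rather than taken from the paper: the mean Pearson correlation between corresponding spectrum slices of two nasal tokens.

```python
# Minimal sketch of the similarity measure described above, with details
# (spectrogram settings, time normalization) assumed rather than taken from
# the paper: mean Pearson correlation between corresponding spectrum slices
# of two nasal tokens, after resampling both spectrograms to `n_frames` frames.
import numpy as np
from scipy.signal import spectrogram
from scipy.stats import pearsonr

def log_spectrogram(x, fs, nperseg=512):
    """Log-magnitude spectrogram, shape (freq_bins, frames)."""
    _, _, s = spectrogram(x, fs=fs, nperseg=nperseg)
    return np.log(s + 1e-10)

def nasal_similarity(x1, x2, fs, n_frames=20):
    """Mean Pearson correlation over time-aligned spectrum slices."""
    def time_normalize(spec):
        idx = np.linspace(0, spec.shape[1] - 1, n_frames)
        return np.stack([spec[:, int(round(i))] for i in idx], axis=1)
    s1 = time_normalize(log_spectrogram(x1, fs))
    s2 = time_normalize(log_spectrogram(x2, fs))
    return float(np.mean([pearsonr(s1[:, k], s2[:, k])[0] for k in range(n_frames)]))
```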

    Does Infant-Directed Speech Help Phonetic Learning? A Machine Learning Investigation

    Ludusan B, Mazuka R, Dupoux E. Does Infant-Directed Speech Help Phonetic Learning? A Machine Learning Investigation. Cognitive Science. 2021;45(5): e12946.
    A prominent hypothesis holds that by speaking to infants in infant-directed speech (IDS) as opposed to adult-directed speech (ADS), parents help them learn phonetic categories. Specifically, two characteristics of IDS have been claimed to facilitate learning: hyperarticulation, which makes the categories more separable, and variability, which makes the generalization more robust. Here, we test the separability and robustness of vowel category learning on acoustic representations of speech uttered by Japanese adults in ADS, IDS (addressed to 18- to 24-month-olds), or read speech (RS). Separability is determined by means of a distance measure computed between the five short vowel categories of Japanese, while robustness is assessed by testing the ability of six different machine learning algorithms, trained to classify vowels, to generalize to stimuli spoken by a novel speaker in ADS. Using two different speech representations, we find that hyperarticulated speech, in the case of RS, can yield better separability, and that increased between-speaker variability in ADS can yield, for some algorithms, more robust categories. However, these conclusions do not apply to IDS, which turned out to yield neither more separable nor more robust categories compared to ADS inputs. We discuss the usefulness of machine learning algorithms run on real data to test hypotheses about the functional role of IDS.
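    The following is a hedged sketch, not the authors' exact setup, of the two measures described: a simple between-category distance for separability and a train-on-one-register, test-on-a-novel-ADS-speaker accuracy for robustness. Feature matrices, labels, and the choice of classifier are placeholders.

```python
# Hedged sketch, not the authors' exact setup: a simple separability measure
# (mean distance between vowel-category centroids) and a robustness test
# (train a classifier on one register, evaluate on a held-out ADS speaker).
# X_* are assumed feature matrices (rows = vowel tokens), y_* vowel labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def separability(X, y):
    """Mean pairwise Euclidean distance between vowel-category centroids."""
    X, y = np.asarray(X), np.asarray(y)
    cents = np.array([X[y == c].mean(axis=0) for c in np.unique(y)])
    dists = [np.linalg.norm(a - b) for i, a in enumerate(cents) for b in cents[i + 1:]]
    return float(np.mean(dists))

def robustness(X_train, y_train, X_novel_ads, y_novel_ads):
    """Accuracy of a register-trained classifier on a novel ADS speaker."""
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    clf.fit(X_train, y_train)
    return accuracy_score(y_novel_ads, clf.predict(X_novel_ads))
```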

    Development of fricative sound perception in Korean infants: The role of language experience and infants' initial sensitivity.

    In this paper, we report data on the development of Korean infants' perception of a rare fricative phoneme distinction. Korean fricative consonants have received much interest in the linguistic community due to the language's distinct categorization of sounds. Unlike many fricative contrasts utilized in most of the world's languages, Korean fricatives (/s*/-/s/) are all voiceless. Moreover, compared with other sound categories, fricatives have received very little attention in the speech perception development field, and no studies thus far have examined Korean infants' development of native phonology in this domain. Using a visual habituation paradigm, we tested 4- to 6-month-old and 7- to 9-month-old Korean infants on their ability to discriminate the Korean fricative pair in the [a] vowel context, /s*a/-/sa/, which can be distinguished based on acoustic cues such as the durations of aspiration and frication noise. Korean infants older than 7 months were able to reliably discriminate the fricative pair, but younger infants did not show clear signs of such discrimination. These results add to the growing evidence that there are native sound contrasts infants cannot discriminate early on without a certain amount of language exposure, providing further data to help delineate the specific nature of early perceptual capacity.