
    Language specificity in cortical tracking of speech rhythm at the mora, syllable, and foot levels

    Published: 05 August 2022

    Recent research shows that adults’ neural oscillations track the rhythm of the speech signal. However, the extent to which this tracking is driven by the acoustics of the signal or by language-specific processing remains unknown. Here, adult native listeners of three rhythmically different languages (English, French, Japanese) were compared on their cortical tracking of speech envelopes synthesized in these three languages, which allowed for coding at each language’s dominant rhythmic unit: the foot (2.5 Hz), syllable (5 Hz), or mora (10 Hz) level, respectively. The three language groups were also tested with a sequence in a non-native language, Polish, and a non-speech vocoded equivalent, to investigate possible differential speech/non-speech processing. The results first showed that cortical tracking was most prominent at 5 Hz (the syllable rate) for all three groups, but the French listeners showed enhanced tracking at 5 Hz compared to the English and Japanese groups. Second, across groups, there were no differences in responses for speech versus non-speech at 5 Hz (the syllable rate), but there was better tracking for speech than for non-speech at 10 Hz (not the syllable rate). Together, these results provide evidence for both language-general and language-specific influences on cortical tracking.

    In Australia, this work was supported by a Transdisciplinary and Innovation Grant from the Australian Research Council Centre of Excellence in Dynamics of Language. In France, the work was funded by an ANR-DFG grant (ANR-16-FRAL-0007) and funds from LABEX EFL (ANR-10-LABX-0083) to TN, and by a DIM Cerveau et Pensées grant (2013 MOBIBRAIN). In Japan, the work was funded by a JSPS Grant-in-Aid for Scientific Research S (16H06319) and for Specially Promoted Research (20H05617), and a MEXT grant for Innovative Areas #4903 (17H06382) to RM.
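    As a rough illustration of the measure involved (not the authors’ actual pipeline), the sketch below computes spectral coherence between a speech amplitude envelope and an EEG channel and reads it off at the three rhythmic rates. The 250 Hz sampling rate, variable names, and toy signals are assumptions for the example only.

    ```python
    # Sketch: coherence between a speech envelope and EEG at the foot (2.5 Hz),
    # syllable (5 Hz), and mora (10 Hz) rates. Toy data, assumed sampling rate.
    import numpy as np
    from scipy.signal import coherence

    fs = 250.0                                   # assumed sampling rate (Hz)
    rng = np.random.default_rng(0)
    t = np.arange(0, 60, 1 / fs)                 # 60 s of synthetic data
    envelope = np.sin(2 * np.pi * 5 * t)         # toy 5 Hz (syllable-rate) envelope
    eeg = 0.5 * envelope + rng.standard_normal(t.size)  # toy EEG that tracks it

    # 4 s windows give 0.25 Hz resolution, so all three rates fall on exact bins
    f, cxy = coherence(envelope, eeg, fs=fs, nperseg=int(4 * fs))

    for label, target in [("foot", 2.5), ("syllable", 5.0), ("mora", 10.0)]:
        idx = np.argmin(np.abs(f - target))
        print(f"{label:8s} ({target:4.1f} Hz): coherence = {cxy[idx]:.3f}")
    ```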

    Neural response development during distributional learning.

    We investigated online electrophysiological components of distributional learning, specifically of tones by listeners of a non-tonal language. German listeners were presented with a bimodal distribution of syllables with lexical tones from a synthesized continuum based on Cantonese level tones. Tones were presented in sets of four standards (within-category tokens) followed by a deviant (across-category token), and the mismatch negativity (MMN) was measured. Earlier behavioral data showed that exposure to this bimodal distribution improved both categorical perception and perceptual acuity for level tones [1]. Here we analyze the electrophysiological response recorded during this exposure, i.e., the development of the MMN response during distributional learning. This development over time was analyzed using Generalized Additive Mixed Models; the results showed that the MMN amplitude increased for both within- and across-category tokens, reflecting the higher perceptual acuity that accompanies category formation. This is evidence that learners zooming in on phonological categories undergo neural changes associated with more accurate phonetic perception.

    This research was also supported by a Research Networking grant (ESF) NetwordS No. 6609 to NB and a Leiden University AMT Individual Researcher Grant to JSN.
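    As a toy illustration of the dependent measure (not the study’s analysis, which modeled the MMN’s development with GAMMs), the sketch below derives an MMN amplitude as the deviant-minus-standard difference wave averaged over a typical 100–250 ms window. The epoch arrays, sampling rate, and window are assumptions.

    ```python
    # Sketch: MMN amplitude as the mean of the deviant-minus-standard
    # difference wave in an assumed 100-250 ms post-stimulus window.
    import numpy as np

    fs = 500                                     # assumed sampling rate (Hz)
    n_samples = int(0.6 * fs)                    # 600 ms epochs
    rng = np.random.default_rng(1)

    # trials x time arrays of toy ERP epochs; deviants carry a small negativity
    standards = rng.standard_normal((400, n_samples))
    deviants = rng.standard_normal((100, n_samples)) - 0.5

    difference_wave = deviants.mean(axis=0) - standards.mean(axis=0)

    win = slice(int(0.10 * fs), int(0.25 * fs))  # 100-250 ms window
    print(f"mean MMN amplitude: {difference_wave[win].mean():.3f} (a.u.)")
    ```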

    Short-term exposure enhances perception of both between- and within-category acoustic information.

    A critical question in speech research is how listeners use non-discrete acoustic cues to discriminate between discrete alternative messages (e.g., words). Previous studies have shown that distributional learning can improve listeners’ discrimination of non-native speech sounds. Less is known about the effects of training on the perception of within-category acoustic detail. The present research investigates adult listeners’ perception of, and discrimination between, lexical tones without training or after a brief training exposure. Native speakers of German (a language without lexical tone) heard a 13-step pitch continuum of the syllable /li:/. Two different tasks were used to assess sensitivity to acoustic differences on this continuum: (a) pitch height estimation and (b) AX discrimination. Participants performed these tasks either without exposure or after exposure to a bimodal distribution of the pitch continuum. The AX discrimination results show that exposure to a bimodal distribution enhanced discrimination at the category boundary (i.e. categorical perception) of high vs. low tones. Interestingly, the pitch estimation results followed a categorisation (sigmoid) function without exposure, but a linear function after exposure, suggesting that estimates became less categorical in this task. The results suggest that training exposure may enhance not only discrimination between contrastive speech sounds (consistent with previous studies), but also perception of within-category acoustic differences. Different tasks may reveal different skills.
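    The sigmoid-versus-linear contrast can be made concrete with a small curve-fitting sketch. The logistic parameterisation, the toy response data, and the residual-error comparison below are illustrative assumptions, not the paper’s actual model.

    ```python
    # Sketch: fit a sigmoid (categorisation) and a linear function to mean
    # pitch-height estimates over a 13-step continuum and compare fit error.
    import numpy as np
    from scipy.optimize import curve_fit

    def sigmoid(x, lo, hi, x0, k):
        """Logistic categorisation function."""
        return lo + (hi - lo) / (1 + np.exp(-k * (x - x0)))

    def linear(x, a, b):
        return a * x + b

    steps = np.arange(1, 14, dtype=float)             # 13-step continuum
    estimates = sigmoid(steps, 1.0, 7.0, 7.0, 1.5)    # toy "no exposure" data
    estimates += np.random.default_rng(2).normal(0, 0.2, steps.size)

    p_sig, _ = curve_fit(sigmoid, steps, estimates, p0=[1, 7, 7, 1])
    p_lin, _ = curve_fit(linear, steps, estimates)

    sse_sig = np.sum((estimates - sigmoid(steps, *p_sig)) ** 2)
    sse_lin = np.sum((estimates - linear(steps, *p_lin)) ** 2)
    print(f"SSE sigmoid: {sse_sig:.3f}  SSE linear: {sse_lin:.3f}")
    ```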

    Categorical perception of lexical stress by adult listeners

    Human perception is guided by abstract categorical representations established through experience in infancy. There is abundant evidence for categorical perception in speech processing; however, this research has focused on consonants (e.g., Liberman, Harris, Hoffman, & Griffith, 1957) and tones (e.g., Hallé, Chang, & Best, 2004). The present study investigates the possibility that lexical stress perception is also categorical, depending on properties of the native language. We hypothesized that listeners whose native language has words with different lexical stress patterns (e.g., German) should categorically differentiate between trochees and iambs, while listeners whose native language lacks such differences (e.g., French) should not. First, we created an 8-step continuum of the nonword /gaba/ from trochaic (GAba) to iambic (gaBA), produced by a speaker of German. Second, an analogue non-speech continuum was created in order to determine whether effects would extend beyond linguistic stimuli. Results from two experiments showed that native listeners of both French and German are sensitive to fine-grained acoustic differences between tokens from the continuum, but that only native listeners of German show categorical perception. Across tasks and groups, performance did not depend on whether speech or non-speech stimuli were perceived. Together, these results suggest that stress perception is categorical, globally contrasting trochees and iambs, but that this perceptual effect depends on language experience with lexical stress.
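    One way such a continuum can be parameterised is by linearly interpolating the acoustic cues to stress between the two endpoints. The cue set and values in the sketch below are illustrative assumptions, not measurements from the study’s stimuli.

    ```python
    # Sketch: an 8-step trochee-to-iamb continuum built by interpolating
    # per-syllable stress cues (F0 peak in Hz, vowel duration in ms, level in dB).
    trochee = {"ga": (220.0, 180.0, 70.0), "ba": (180.0, 120.0, 64.0)}
    iamb = {"ga": (180.0, 120.0, 64.0), "ba": (220.0, 180.0, 70.0)}

    n_steps = 8
    for i in range(n_steps):
        w = i / (n_steps - 1)                # 0.0 = trochaic, 1.0 = iambic
        step = {
            syl: tuple(round((1 - w) * a + w * b, 1)
                       for a, b in zip(trochee[syl], iamb[syl]))
            for syl in ("ga", "ba")
        }
        print(f"step {i + 1}: {step}")
    ```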

    Measuring socially motivated pronunciation differences

    This paper applies a measure of linguistic distance to differences in pronunciation which have been observed as a consequence of modern speakers orienting themselves to standard languages and larger regions rather than to local towns and villages, resulting in what we shall call regional speech. We examine regional speech, other local "varieties" in the Dutch of the Netherlands and Flanders, and also standard Netherlandic Dutch and Belgian Dutch. Because regional speech is difficult to study, as it may not constitute a linguistic variety in the usual sense of the word, we focus on the speech of professional announcers employed by regional radio stations. We examine their speech in light of Auer and Hinskens' (1996) cone-shaped model of the speech continuum, which includes REGIOLECTS, which they define as a sort of compromise between standard languages and local dialects (more below). In this examination we use a measure of pronunciation difference which has been successful in dialectology (see Nerbonne & Heeringa 2009 for an overview) and which has been demonstrated to be valid both for measuring dialect differences and for measuring speech differences due to limited auditory acuity (cochlear implants). We thereby introduce into sociolinguistics a technique to measure the difference between regional speech and standard Dutch, as well as the difference between regional speech and the local speech of towns and villages, providing a perspective on the issue of whether regional speech functions as a "standard" within more restricted areas or whether it serves rather to mark regional identity.
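    The measure referenced here (Nerbonne & Heeringa 2009) is a Levenshtein-style edit distance over phonetic transcriptions; the sketch below is a minimal version under that assumption, normalising by the length of the longer transcription. The example transcriptions are toy data, not material from the paper.

    ```python
    # Sketch: Levenshtein edit distance over phonetic transcriptions,
    # normalised to [0, 1] by the length of the longer transcription.
    def levenshtein(a: str, b: str) -> int:
        """Classic dynamic-programming edit distance over segment strings."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            curr = [i]
            for j, cb in enumerate(b, start=1):
                curr.append(min(prev[j] + 1,                # deletion
                                curr[j - 1] + 1,            # insertion
                                prev[j - 1] + (ca != cb)))  # (mis)match
            prev = curr
        return prev[-1]

    def pronunciation_distance(a: str, b: str) -> float:
        return levenshtein(a, b) / max(len(a), len(b), 1)

    # toy transcriptions of one word in a regional vs. the standard variety
    print(pronunciation_distance("mɛlək", "mɛlk"))  # -> 0.2
    ```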

    The impact of phonological biases on mispronunciation sensitivity and novel accent adaptation

    Katie Von Holzen, Sandrien van Ommen, Katherine S. White & Thierry Nazzi (2022). The Impact of Phonological Biases on Mispronunciation Sensitivity and Novel Accent Adaptation. Language Learning and Development. DOI: 10.1080/15475441.2022.2071717

    Successful word recognition requires that listeners attend to differences that are phonemic in the language while also remaining flexible to the variation introduced by different voices and accents. Previous work has demonstrated that American-English-learning 19-month-olds are able to balance these demands: although one-off one-feature mispronunciations typically disrupt English-learning toddlers’ lexical access, they no longer do so after toddlers are exposed to a novel accent in which these changes occur systematically (White & Aslin, 2011; White & Daub, 2021). The flexibility to deal with different types of variation may not be the same for toddlers learning different first languages, however, as language structure shapes early phonological biases. We examined French-learning 19-month-olds’ sensitivity and adaptation to a novel accent that shifted either the standard pronunciation of /a/ from [a] to [ɛ] (Experiment 1) or the standard pronunciation of /p/ from [p] to [t] (Experiment 2). In Experiment 1, French-learning toddlers recognized words with /a/ produced as [ɛ], regardless of whether they had previously been exposed to an accent that contained this vowel shift. In Experiment 2, toddlers did not recognize words with /p/ pronounced as [t] at test unless they were first familiarized with an accent that contained this consonant shift. These findings are consistent with evidence that French-learning toddlers privilege consonants over vowels in lexical processing. Together with previous work, these results demonstrate both differences and similarities in how French- and English-learning children treat variation, in line with their language-specific phonological biases.