
    Speech rhythm: a metaphor?

    Is speech rhythmic? In the absence of evidence for a traditional view that languages strive to coordinate either syllables or stress-feet with regular time intervals, we consider the alternative that languages exhibit contrastive rhythm subsisting merely in the alternation of stronger and weaker elements. This is initially plausible, particularly for languages with a steep ‘prominence gradient’, i.e. a large disparity between stronger and weaker elements; but we point out that alternation is poorly achieved even by a ‘stress-timed’ language such as English, and that, historically, languages have conspicuously failed to adopt simple phonological remedies that would ensure alternation. Languages seem more concerned to allow ‘syntagmatic contrast’ between successive units, and to use durational effects to support linguistic functions, than to facilitate rhythm. Furthermore, some languages (e.g. Tamil, Korean) lack the lexical prominence which would most straightforwardly underpin prominence alternation. We conclude that speech is not incontestably rhythmic, and may even be antirhythmic. However, its linguistic structure and patterning allow the metaphorical extension of rhythm, in varying degrees and in different ways depending on the language, and it is this analogical process which allows speech to be matched to external rhythms.

    Language discrimination by newborns: Teasing apart phonotactic, rhythmic, and intonational cues

    Speech rhythm has long been claimed to be a useful bootstrapping cue in the very first steps of language acquisition. Previous studies have suggested that newborn infants do categorize varieties of speech rhythm, as demonstrated by their ability to discriminate between certain languages. However, the existing evidence is not unequivocal: in previous studies, the stimuli discriminated by newborns always contained additional speech cues on top of rhythm. Here, we conducted a series of experiments assessing discrimination between Dutch and Japanese by newborn infants, using a speech resynthesis technique to progressively degrade non-rhythmical properties of the sentences. When the stimuli are resynthesized using identical phonemes and artificial intonation contours for the two languages, thereby preserving only their rhythmic and broad phonotactic structure, newborns still seem to be able to discriminate between the two languages, but the effect is weaker than when intonation is present. This leaves open the possibility that the temporal correlation between intonational and rhythmic cues might actually facilitate the processing of speech rhythm.

    Language identification with suprasegmental cues: A study based on speech resynthesis

    This paper proposes a new experimental paradigm to explore the discriminability of languages, a question which is crucial for a child born in a bilingual environment. This paradigm employs the speech resynthesis technique, enabling the experimenter to preserve or degrade acoustic cues such as phonotactics, syllabic rhythm, or intonation from natural utterances. English and Japanese sentences were resynthesized, preserving broad phonotactics, rhythm and intonation (Condition 1), rhythm and intonation (Condition 2), intonation only (Condition 3), or rhythm only (Condition 4). The findings support the notion that syllabic rhythm is a necessary and sufficient cue for French adult subjects to discriminate English from Japanese sentences. The results are consistent with previous research using low-pass filtered speech, as well as with phonological theories predicting rhythmic differences between languages. Thus, the new methodology proposed appears to be well suited to the study of language discrimination. Applications for other domains of psycholinguistic research and for automatic language identification are considered.
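    A minimal sketch of the cue-degradation logic behind the four conditions, under the assumption that each utterance is already represented as a list of (phoneme, duration, F0) segments handed to a resynthesis back end; the toy inventory, constants, and function names below are illustrative placeholders, not the paper's actual resynthesis pipeline.

        # Illustrative only: neutralise the cues that a given condition does not test.
        # Each segment is a (phoneme, duration_s, f0_hz) triple.

        CONSONANTS = set("ptkbdgfvszmnrlwj")   # toy consonant inventory (assumption)
        FLAT_F0 = 120.0                        # arbitrary monotone pitch in Hz
        CONST_DUR = 0.12                       # constant duration removes rhythm (s)

        def degrade(segments, keep_phonotactics, keep_rhythm, keep_intonation):
            """Return a new segment list with the unwanted cues collapsed away."""
            out = []
            for phon, dur, f0 in segments:
                p = phon if keep_phonotactics else ('s' if phon in CONSONANTS else 'a')
                d = dur if keep_rhythm else CONST_DUR
                f = f0 if keep_intonation else FLAT_F0
                out.append((p, d, f))
            return out

        # Condition 1: phonotactics + rhythm + intonation; Condition 4: rhythm only.
        condition_1 = lambda segs: degrade(segs, True, True, True)
        condition_4 = lambda segs: degrade(segs, False, True, False)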

    Rhythmic unit extraction and modelling for automatic language identification

    This paper deals with an approach to Automatic Language Identification based on rhythmic modelling. Besides phonetics and phonotactics, rhythm is one of the most promising features to consider for language identification, even if its extraction and modelling are not a straightforward issue. Indeed, one of the main problems to address is what to model. In this paper, an algorithm for rhythm extraction is described: using a vowel detection algorithm, rhythmic units related to syllables are segmented. Several parameters are extracted (consonantal and vowel duration, cluster complexity) and modelled with a Gaussian mixture. Experiments are performed on read speech for 7 languages (English, French, German, Italian, Japanese, Mandarin and Spanish). Results reach up to 86 ± 6% correct discrimination between stress-timed, mora-timed and syllable-timed classes of languages, and 67 ± 8% correct language identification on average for the 7 languages with utterances of 21 seconds. These results are discussed and compared with those obtained with a standard acoustic Gaussian mixture modelling approach (88 ± 5% correct identification for the 7-language identification task).
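    A hedged sketch of the modelling step just described, assuming the rhythmic units have already been segmented and reduced to per-unit feature vectors (consonant-cluster duration, vowel duration, cluster complexity); it uses scikit-learn's GaussianMixture rather than the authors' own implementation, and the feature arrays and component count are placeholders.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # Placeholder training data: one row per pseudo-syllabic rhythm unit,
        # columns = [consonant-cluster duration, vowel duration, cluster complexity].
        X_english = np.random.rand(500, 3)    # stand-ins for real measurements
        X_japanese = np.random.rand(500, 3)

        # One mixture per language (the component count here is a guess, not the paper's).
        models = {
            "English": GaussianMixture(n_components=8, covariance_type="full").fit(X_english),
            "Japanese": GaussianMixture(n_components=8, covariance_type="full").fit(X_japanese),
        }

        def identify(X_utterance):
            """Pick the language whose mixture assigns the utterance's units the
            highest average log-likelihood."""
            return max(models, key=lambda lang: models[lang].score(X_utterance))

        print(identify(np.random.rand(40, 3)))   # roughly an utterance's worth of units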

    Perception of linguistic rhythm by newborn infants

    Previous studies have shown that newborn infants are able to discriminate between certain languages, and it has been suggested that they do so by categorizing varieties of speech rhythm. However, in order to confirm this hypothesis, it is necessary to show that language discrimination is still performed by newborns when all speech cues other than rhythm are removed. Here, we conducted a series of experiments assessing discrimination between Dutch and Japanese by newborn infants, using a speech resynthesis technique to progressively degrade non-rhythmical properties of the sentences. When the stimuli are resynthesized using identical phonemes and artificial intonation contours for the two languages, thereby preserving only their rhythmic structure, newborns are still able to discriminate the languages. We conclude that newborns are able to classify languages according to their type of rhythm, and that this ability may help them bootstrap other phonological properties of their native language.

    The Acoustic Correlates of Stress-Shifting Suffixes in Native and Nonnative English

    Although laboratory phonology techniques have been widely employed to discover the interplay between the acoustic correlates of English Lexical Stress (ELS), namely fundamental frequency, duration, and intensity, studies on ELS in polysyllabic words are rare, and cross-linguistic acoustic studies in this area are even rarer. Consequently, the effects of language experience on L2 lexical stress acquisition are not clear. This investigation of adult Arabic (Saudi Arabian) and Mandarin (Mainland Chinese) speakers analyzes their ELS production in tokens with seven different stress-shifting suffixes, i.e., Level 1 [+cyclic] derivations to phonologists. Stress productions are then systematically analyzed and compared with those of speakers of Midwest American English using the acoustic phonetic software Praat. In total, one hundred subjects participated in the study, spread evenly across the three language groups, and 2,125 vowels in 800 spectrograms were analyzed (excluding stress placement and pronunciation errors). Nonnative speakers completed a sociometric survey prior to recording so that statistical sampling techniques could be used to evaluate acquisition of accurate ELS production. The speech samples of native speakers were analyzed to provide norm values for cross-reference and to provide insights into the proposed Salience Hierarchy of the Acoustic Correlates of Stress (SHACS). The results support the notion that a SHACS does exist in the L1 sound system, and that native-like command of this system through accurate ELS production can be acquired by proficient L2 learners via increased L2 input. Other findings raise questions about the accuracy of standard American English dictionary pronunciations as well as the generalizability of claims made about the acoustic properties of tonic accent shift.
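    A short sketch of how the three stress correlates might be measured per vowel from a Praat scripting interface; it assumes the praat-parselmouth Python bindings and hand-supplied vowel boundaries (e.g. read from a TextGrid), and the file name and times are hypothetical, not the study's actual analysis script.

        import numpy as np
        import parselmouth   # praat-parselmouth: Python interface to Praat

        snd = parselmouth.Sound("token.wav")   # hypothetical recording of one token
        pitch = snd.to_pitch()                 # default Praat pitch settings
        intensity = snd.to_intensity()

        def vowel_measures(t_start, t_end):
            """Duration, mean F0, and mean intensity for one labelled vowel interval."""
            f0 = pitch.selected_array["frequency"]
            t_f0 = pitch.xs()
            voiced = (t_f0 >= t_start) & (t_f0 <= t_end) & (f0 > 0)   # 0 Hz = unvoiced frame
            db = intensity.values.flatten()
            t_db = intensity.xs()
            in_vowel = (t_db >= t_start) & (t_db <= t_end)
            return {
                "duration_s": t_end - t_start,
                "mean_f0_hz": float(np.mean(f0[voiced])) if voiced.any() else float("nan"),
                "mean_db": float(np.mean(db[in_vowel])) if in_vowel.any() else float("nan"),
            }

        # Compare a stressed and an unstressed vowel of the same token
        # (interval boundaries here are invented for illustration).
        print(vowel_measures(0.12, 0.24), vowel_measures(0.31, 0.39))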

    An acoustic investigation of the developmental trajectory of lexical stress contrastivity in Italian

    We examined whether typically developing Italian children exhibit adult-like stress contrastivity for word productions elicited via a picture naming task (n=25 children aged 3–5 years and 27 adults). Stimuli were 10 trisyllabic Italian words; half began with a weak–strong (WS) pattern of lexical stress across the initial 2 syllables, as in patata, while the other half began with a strong–weak (SW) pattern, as in gomito. Word productions that were identified as correct via perceptual judgement were analysed acoustically. The initial 2 syllables of each correct word production were analysed in terms of the duration, peak intensity, and peak fundamental frequency of the vowels, using a relative measure of contrast, the normalised pairwise variability index (PVI). Results across the majority of measures showed that children's stress contrastivity was adult-like. However, the data revealed that children's contrastivity for trisyllabic words beginning with a WS pattern was not adult-like regarding the PVI for vowel duration: children showed less contrastivity than adults. This effect appeared to be driven by differences in word-medial gemination between children and adults. Results are compared with data from a recent acoustic study of stress contrastivity in English-speaking children and adults and discussed in relation to language-specific and physiological motor-speech constraints on production.
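    For reference, the normalised pairwise variability index can be computed as in the sketch below; the duration values are invented for illustration and are not the study's data.

        import numpy as np

        def npvi(durations):
            """Normalised Pairwise Variability Index over successive interval durations:
            100 * mean( |d_k - d_{k+1}| / ((d_k + d_{k+1}) / 2) )."""
            d = np.asarray(durations, dtype=float)
            return 100.0 * np.mean(np.abs(np.diff(d)) / ((d[:-1] + d[1:]) / 2.0))

        # Vowel durations (s) of the first two syllables of hypothetical tokens:
        print(npvi([0.06, 0.14]))   # WS-like production: large durational contrast (~80)
        print(npvi([0.09, 0.08]))   # weakly contrastive production (~12)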