
    Language-universal constraints on the segmentation of English

    Two word-spotting experiments are reported that examine whether the Possible-Word Constraint (PWC) [1] is a language-specific or language-universal strategy for the segmentation of continuous speech. The PWC disfavours parses which leave an impossible residue between the end of a candidate word and a known boundary. The experiments examined cases where the residue was either a CV syllable with a lax vowel or a CVC syllable with a schwa. Although neither syllable context is a possible word in English, word-spotting in both contexts was easier than with a context consisting of a single consonant. The PWC appears to be language-universal rather than language-specific.
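
    As a rough illustration only (not the authors' model), the Python sketch below expresses the core idea of the PWC: a segmentation is disfavoured when the residue left between a candidate word and a known boundary contains no vowel and so could not itself be a word. The lexicon, vowel set, and scoring scheme are invented for this example.

        # Hypothetical sketch of the Possible-Word Constraint (PWC), not the
        # authors' implementation: a parse is disfavoured if the residue between
        # a candidate word and a known boundary could not itself be a word
        # (approximated here as "contains no vowel").
        VOWELS = set("aeiou")                # toy orthographic vowel set (assumption)
        LEXICON = {"sea", "apple", "green"}  # toy lexicon (assumption)

        def is_possible_word(residue: str) -> bool:
            """A residue counts as a possible word if it contains a vowel."""
            return any(ch in VOWELS for ch in residue)

        def pwc_score(candidate: str, utterance: str) -> int:
            """Score a candidate word spotted inside an utterance, penalising
            parses whose leftover residues could not themselves be words."""
            if candidate not in LEXICON or candidate not in utterance:
                return 0
            start = utterance.index(candidate)
            residues = [utterance[:start], utterance[start + len(candidate):]]
            penalty = sum(1 for r in residues if r and not is_possible_word(r))
            return 2 - penalty

        # "sea" in "seash" leaves the residue "sh" (no vowel): disfavoured.
        # "sea" in "seashub" leaves "shub" (contains a vowel): allowed.
        print(pwc_score("sea", "seash"))    # -> 1 (penalised)
        print(pwc_score("sea", "seashub"))  # -> 2 (no penalty)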

    Spoken word classification in children and adults

    Purpose: Preschool children often have difficulties in word classification, despite good speech perception and production. Some researchers suggest they represent words using phonetic features rather than phonemes. We examine whether there is a progression from feature-based to phoneme-based processing across age groups, and whether responses are consistent across tasks and stimuli. Method: In Study 1, 120 3- to 5-year-old children completed three tasks assessing use of phonetic features in classification, with an additional 58 older children completing one of the three tasks. In Study 2, all of the children, together with an additional adult sample, completed a nonword learning task. Results: In all four tasks, children classified words sharing phonemes as similar. In addition, children regarded words as similar if they shared manner of articulation, particularly word-finally. Adults also showed this sensitivity to manner, but across the tasks there was a pattern of increasing use of phonemic information with age. Conclusions: Children tend to classify words as similar if they share phonemes or share manner of articulation word-finally. Use of phonemic information becomes more common with age. These findings are in line with the theory that phonological representations become more detailed in the preschool years.

    Epenthetic vowels in Japanese: A perceptual illusion?

    In four cross-linguistic experiments comparing French and Japanese hearers, we found that the phonotactic properties of Japanese (a very reduced set of syllable types) induce Japanese listeners to perceive “illusory” vowels inside consonant clusters in VCCV stimuli. In Experiments 1 and 2, we used a continuum of stimuli ranging from no vowel (e.g. ebzo) to a full vowel between the consonants (e.g. ebuzo). Japanese, but not French, participants reported the presence of a vowel [u] between the consonants, even in stimuli with no vowel. A speeded ABX discrimination paradigm was used in Experiments 3 and 4, and revealed that Japanese participants had trouble discriminating between VCCV and VCuCV stimuli. French participants, in contrast, had problems discriminating items that differ in vowel length (ebuzo vs. ebuuzo), a distinctive contrast in Japanese but not in French. We conclude that models of speech perception have to be revised to account for phonotactically based assimilations.

    Speeded detection of vowels and steady-state consonants

    We report two experiments in which vowels and steady-state consonants served as targets in a speeded detection task. In the first experiment, two vowels were compared with one voiced and one unvoiced fricative. Response times (RTs) to the vowels were longer than to the fricatives. The error rate was higher for the consonants. Consonants in word-final position produced the shortest RTs. For the vowels, RT correlated negatively with target duration. In the second experiment, the same two vowel targets were compared with two nasals. This time there was no significant difference in RTs, but the error rate was still significantly higher for the consonants. Error rate and length correlated negatively for the vowels only. We conclude that RT differences between phonemes are independent of vocalic or consonantal status. Instead, we argue that the process of phoneme detection reflects more finely grained differences in acoustic/articulatory structure within the phonemic repertoire.

    Phonetic content influences voice discriminability

    We present results from an experiment which shows that voice perception is influenced by the phonetic content of speech. Dutch listeners were presented with thirteen speakers pronouncing CVC words with systematically varying segmental content, and they had to discriminate the speakers’ voices. Results show that certain segments help listeners discriminate voices more than other segments do. Voice information can be extracted from every segmental position of a monosyllabic word and is processed rapidly. We also show that although relative discriminability within a closed set of voices appears to be a stable property of a voice, it is also influenced by segmental cues – that is, perceived uniqueness of a voice depends on what that voice says.

    Effects of phoneme repertoire on phoneme decision

    In three experiments, listeners detected vowel or consonant targets in lists of CV syllables constructed from five vowels and five consonants. Responses were faster in a predictable context (e.g., listening for a vowel target in a list of syllables all beginning with the same consonant) than in an unpredictable context (e.g., listening for a vowel target in a list of syllables beginning with different consonants). In Experiment 1, the listeners’ native language was Dutch, in which vowel and consonant repertoires are similar in size. The difference between predictable and unpredictable contexts was comparable for vowel and consonant targets. In Experiments 2 and 3, the listeners’ native language was Spanish, which has four times as many consonants as vowels; here, effects of an unpredictable consonant context on vowel detection were significantly greater than effects of an unpredictable vowel context on consonant detection. This finding suggests that listeners’ processing of phonemes takes into account the constitution of their language’s phonemic repertoire and the implications that this has for contextual variability.

    The different time course of phonotactic constraint learning in children and adults: evidence from speech errors

    Speech errors typically respect the speaker’s implicit knowledge of language-wide phonotactics (e.g., /ŋ/ cannot be a syllable onset in English). Previous work demonstrated that adults can learn novel, experimentally induced phonotactic constraints by producing syllable strings in which the allowable position of a phoneme depends on another phoneme within the sequence (e.g., /t/ can only be an onset if the medial vowel is /i/), but not earlier than the second day of training. Thus far, no work has been done with children. In the current 4-day experiment, a group of Dutch-speaking adults and nine-year-old children were asked to rapidly recite sequences of novel word-forms (e.g., kieng nief siet hiem) that were consistent with the phonotactics of spoken Dutch. Within the procedure of the experiment, some consonants (i.e., /t/ and /k/) were restricted to onset or coda position depending on the medial vowel (i.e., /i/ or “ie” versus /øː/ or “eu”). Speech errors in adults revealed a learning effect for the novel constraints on the second day of learning, consistent with earlier findings. A post-hoc analysis at the trial level showed that learning was statistically reliable after an exposure of 120 sequence-trials (including a consolidation period). Children, in contrast, started learning the constraints already on the first day; more precisely, the effect was significant after an exposure of 24 sequences. These findings indicate that children are rapid implicit learners of novel phonotactics, which has important implications for theorizing about developmental sensitivities in language learning.