25 research outputs found

    The perceptual distance between vowels: the effects of prototypicality and extremity


    Separability of prosodic phrase boundary and phonemic information


    The VOT category boundary in word-initial stops: Counter-evidence against rate normalization in English spontaneous speech

    Some languages, such as many varieties of English, use short-lag and long-lag VOT to distinguish word- and syllable-initial voiced vs. voiceless stop phonemes. According to a popular view, the optimal category boundary location between the two types of stops moves towards larger values as articulation rate becomes slower (and speech segments longer), and listeners accordingly shift the perceptual VOT category boundary. According to an alternative view, listeners need not shift the category boundary with a change in articulation rate, because the same VOT category boundary location remains optimal across articulation rates in normal speech, although a shift in optimal boundary location can be induced in the laboratory by instructing speakers to use artificially extreme articulation rates. In this paper we applied rate-independent VOT category boundaries to word-initial stop phonemes in spontaneous English speech data, and compared their effectiveness against that of Miller, Green and Reeves's (1986) rate-dependent VOT category boundary applied to laboratory speech. The classification accuracies of the two types of category boundaries were comparable when factors other than articulation rate were controlled, suggesting that perceptual VOT category boundaries need not shift with a change in articulation rate under normal circumstances. Optimal VOT category boundary locations for homorganic word-initial stops did, however, differ considerably depending on the following vowel when the boundary location was assumed to be affected by the relative frequency of voiced vs. voiceless categories in each vowel context.
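
    As a rough illustration of the comparison described above, the sketch below (with invented token values and placeholder boundary parameters, not Miller, Green and Reeves's fitted coefficients) classifies word-initial stops either with a fixed, rate-independent VOT boundary or with a boundary that grows with syllable duration, and reports the classification accuracy of each rule.

```python
# Minimal sketch (not the paper's actual analysis): compare a fixed,
# rate-independent VOT boundary against a hypothetical rate-dependent one
# whose location grows with syllable duration. All token values are invented.

from dataclasses import dataclass

@dataclass
class Token:
    vot_ms: float        # voice onset time of the word-initial stop
    syll_dur_ms: float   # syllable duration, as a proxy for articulation rate
    voiced: bool         # true category (voiced = short-lag in English)

def classify_fixed(tok: Token, boundary_ms: float = 30.0) -> bool:
    """Rate-independent rule: voiced iff VOT falls below a fixed boundary."""
    return tok.vot_ms < boundary_ms

def classify_rate_dependent(tok: Token, base_ms: float = 20.0,
                            slope: float = 0.05) -> bool:
    """Illustrative rate-dependent rule: the boundary moves to larger VOT
    values as the syllable gets longer (slower speech). Coefficients are
    placeholders, not fitted values from laboratory speech."""
    boundary_ms = base_ms + slope * tok.syll_dur_ms
    return tok.vot_ms < boundary_ms

def accuracy(tokens, rule) -> float:
    return sum(rule(t) == t.voiced for t in tokens) / len(tokens)

tokens = [
    Token(12, 180, True), Token(18, 320, True), Token(35, 300, False),
    Token(55, 200, False), Token(28, 400, True), Token(70, 380, False),
]
print("fixed boundary :", accuracy(tokens, classify_fixed))
print("rate-dependent :", accuracy(tokens, classify_rate_dependent))
```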

    A prerequisite to L1 homophone effects in L2 spoken-word recognition

    When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one’s L1 (first language), L2 words containing a member of that contrast can spuriously activate other L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation of kettle, as L1 Dutch speakers perceptually map the vowels in the two English words to a single vowel phoneme in their L1. In an auditory word-learning experiment using Greek and Japanese speakers of English, we asked whether such cross-lexical activation in L2 spoken-word recognition necessarily involves inaccurate perception by the L2 listeners, or can also arise from interference from L1 phonology at an abstract level, independent of the listeners’ phonetic processing abilities. Results suggest that spurious activation of L2 words containing L2-specific contrasts in spoken-word recognition is contingent on the L2 listeners’ inadequate phonetic processing abilities.
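
    A minimal sketch of the mechanism at issue, assuming a toy lexicon and a hypothetical L1 mapping that collapses the English /æ/-/ɛ/ contrast (the experiment's actual materials are not reproduced here): once two L2 phonemes map to one L1 category, words distinguished only by that contrast become perceptually homophonous, which is the precondition for spurious activation.

```python
# Illustrative sketch of how an L1 phoneme mapping can collapse an L2 contrast:
# if two L2 vowels map to one L1 category, words distinguished only by that
# contrast become homophonous for the listener. Lexicon and mapping are
# assumptions made purely for illustration.

from collections import defaultdict

# Broad phonemic transcriptions of a few English words (toy lexicon).
l2_lexicon = {
    "cattle": ("k", "ae", "t", "l"),
    "kettle": ("k", "eh", "t", "l"),
    "pan":    ("p", "ae", "n"),
    "pen":    ("p", "eh", "n"),
}

# Hypothetical L1 mapping in which the English /ae/-/eh/ contrast collapses.
l1_map = {"ae": "E", "eh": "E"}

def perceive(phonemes, mapping):
    """Map each L2 phoneme onto the listener's L1 category (identity if unmapped)."""
    return tuple(mapping.get(p, p) for p in phonemes)

percepts = defaultdict(list)
for word, phonemes in l2_lexicon.items():
    percepts[perceive(phonemes, l1_map)].append(word)

# Any percept shared by several words marks a potential spurious activation.
for percept, words in percepts.items():
    if len(words) > 1:
        print("perceptually merged:", words)
```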

    F1/F2 targets for Finnish single vs. double vowels

    This paper explores the reason why Finnish single (short) vowels tend to occupy less peripheral positions in the F1/F2 vowel space compared to their double (long) counterparts. The results of two production studies suggest that the less extreme vowel quality of single vowels is best described as arising from undershoot of articulatory/acoustic targets due to their short durations, assuming single, context-free targets for phonemes.
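
    One simple way to formalize duration-dependent undershoot, in the spirit of exponential target approximation (a modelling assumption for illustration, not necessarily the analysis used in the paper): the formant moves from a context value toward a single, context-free target but runs out of time in a short vowel. All numerical values below are invented.

```python
# Minimal sketch of duration-dependent target undershoot: the articulator moves
# from a context value toward a single, context-free formant target, but short
# vowels end before the target is reached. Numbers are illustrative only.

import math

def formant_at_offset(start_hz: float, target_hz: float,
                      duration_ms: float, tau_ms: float = 60.0) -> float:
    """Exponential approach toward the target; tau controls articulatory sluggishness."""
    return target_hz + (start_hz - target_hz) * math.exp(-duration_ms / tau_ms)

F1_TARGET = 700.0   # hypothetical context-free F1 target for the vowel
F1_CONTEXT = 450.0  # hypothetical F1 carried over from the preceding context

single = formant_at_offset(F1_CONTEXT, F1_TARGET, duration_ms=70)   # short vowel
double = formant_at_offset(F1_CONTEXT, F1_TARGET, duration_ms=160)  # long vowel

print(f"single (short) vowel: F1 ~ {single:.0f} Hz (undershoots the 700 Hz target)")
print(f"double (long) vowel : F1 ~ {double:.0f} Hz (much closer to the target)")
```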

    Helping children learn non-native articulations: The implications for ultrasound-based clinical intervention

    An increasing number of studies are examining the effectiveness of ultrasound as a visual biofeedback device for speech production training or therapy. However, no randomised controlled trials exist. We compared the success of typically-developing children learning new articulations with and without ultrasound biofeedback. Thirty children aged 6-12 were randomly assigned to two groups: Group U were taught novel (non-English) consonants and vowels using ultrasound in addition to imitation, modelling, articulatory descriptions and feedback on performance. Group A were taught the same speech sounds, using the same methods but in the absence of ultrasound visual biofeedback. Results showed that both groups of children improved in their production of the novel sounds, with the exception of the high back vowels [u,]. No advantage for Group U was found, except for the palatal stop [c]. https://www.internationalphoneticassociation.org/icphs/icphs2015

    Viewing speech in action: speech articulation videos in the public domain that demonstrate the sounds of the International Phonetic Alphabet (IPA)

    In this article, we introduce recently released, publicly available resources, which allow users to watch videos of hidden articulators (e.g. the tongue) during the production of various types of sounds found in the world’s languages. The articulation videos on these resources are linked to a clickable International Phonetic Alphabet chart (International Phonetic Association. 1999. Handbook of the International Phonetic Association: A Guide to the Use of the International Phonetic Alphabet. Cambridge: Cambridge University Press), so that the user can study the articulations of different types of speech sounds systematically. We discuss the utility of these resources for teaching the pronunciation of contrastive sounds in a foreign language that are absent in the learner’s native language.

    Onset vs. Coda Asymmetry in the Articulation of English /r/

    We describe an asymmetric categorical pattern of onset-coda allophony for English /r/, the post-alveolar rhotic approximant, drawing on published and unpublished information on over 100 child, teenage and adult speakers from prior studies. Around two thirds of the speakers exhibited allophonic variation that was subtle: onset and coda /r/ were typically both bunched (BB) or both tip-raised (RR), with minor within-speaker differences. The other third had a more radical categorical allophonic pattern, using both R and B types. Such variable speakers had R onsets and B codas (RB), but the opposite pattern of allophony (BR) was extremely rare. This raises questions as to whether the asymmetry is accidental or motivated by the phonetic implementation of syllable structure. https://www.internationalphoneticassociation.org/icphs/icphs2015

    Combined Loss of JMJD1A and JMJD1B Reveals Critical Roles for H3K9 Demethylation in the Maintenance of Embryonic Stem Cells and Early Embryogenesis

    Histone H3 lysine 9 (H3K9) methylation is unevenly distributed in mammalian chromosomes. However, the molecular mechanism controlling the uneven distribution and its biological significance remain to be elucidated. Here, we show that JMJD1A and JMJD1B preferentially target H3K9 demethylation of gene-dense regions of chromosomes, thereby establishing an H3K9 hypomethylation state in euchromatin. JMJD1A/JMJD1B-deficient embryos died soon after implantation, accompanied by epiblast cell death. Furthermore, combined loss of JMJD1A and JMJD1B caused perturbed expression of metabolic genes and rapid cell death in embryonic stem cells (ESCs). These results indicate that JMJD1A/JMJD1B-mediated H3K9 demethylation plays critical roles in early embryogenesis and ESC maintenance. Finally, genetic rescue experiments clarified that H3K9 overmethylation by G9A was the cause of the cell death and perturbed gene expression of JMJD1A/JMJD1B-depleted ESCs. In summary, JMJD1A and JMJD1B, in combination, ensure early embryogenesis and ESC viability by establishing the correct H3K9 methylated epigenome.

    An explanation for phonological word-final vowel shortening: Evidence from Tokyo Japanese

    This paper offers an account of the cross-linguistic prevalence of phonological word-final vowel shortening, in the face of phonetic final lengthening, which is also commonly observed across languages. Two contributing factors are hypothesized: (1) an overlap in the durational distributions of short and long vowel phonemes across positions in the utterance can lead to the misidentification of phonemic vowel length, and (2) the direction of bias in such misidentification is determined by the distributional properties of the short and long vowel phonemes in the region of the durational overlap. Because short vowel phonemes are typically more frequent in occurrence and less variable in duration than long vowel phonemes, long vowel phonemes are more likely to be misidentified than short vowel phonemes. Results of production and perception studies in Tokyo Japanese support these hypotheses.
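
    The distributional argument can be made concrete with a small simulation (all parameter values are invented for illustration): if short vowels are more frequent and less variable in duration than long vowels, a listener who weighs likelihoods by their priors places the category boundary well into the long-vowel distribution, so long vowels are misidentified far more often than short ones.

```python
# Sketch of the hypothesized bias: with short vowels more frequent and less
# variable in duration than long vowels, an optimal classifier favours the
# "short" response in the overlap region, so long vowels are misidentified
# more often. Distribution parameters are invented for illustration.

import math
import random

random.seed(1)

# Hypothetical duration distributions (ms) and prior probabilities.
SHORT = {"mean": 70.0, "sd": 12.0, "prior": 0.75}
LONG  = {"mean": 130.0, "sd": 30.0, "prior": 0.25}

def normal_pdf(x: float, mean: float, sd: float) -> float:
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def classify(duration_ms: float) -> str:
    """Label a duration with the category that maximizes prior * likelihood."""
    p_short = SHORT["prior"] * normal_pdf(duration_ms, SHORT["mean"], SHORT["sd"])
    p_long = LONG["prior"] * normal_pdf(duration_ms, LONG["mean"], LONG["sd"])
    return "short" if p_short >= p_long else "long"

def error_rate(category: dict, label: str, n: int = 20000) -> float:
    """Proportion of sampled durations from `category` that get the wrong label."""
    errors = sum(classify(random.gauss(category["mean"], category["sd"])) != label
                 for _ in range(n))
    return errors / n

print("short vowels misidentified:", error_rate(SHORT, "short"))
print("long vowels misidentified :", error_rate(LONG, "long"))
```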