
    Patterns of variability in voice onset time: a developmental study of motor speech skills in humans

    This study investigated developmental patterns of variability in the speech parameter voice onset time (VOT) in forty-six children. Five groups of children participated in the study as follows: i) Group 1 - aged 5 years 8 months (n=6); ii) Group 2 - 7 years 10 months (n=10); iii) Group 3 - 9 years 10 months (n=10); iv) Group 4 - 11 years 10 months (n=10); and v) Group 5 - 13 years 2 months (n=10). Coefficient of variation (COV) values were examined for the VOT values of both "voiceless" (/p t k/) and "voiced" (/b d g/) plosives to determine patterns of variability. Significant effects of age were revealed for both the voiceless and voiced plosives, and variability leveled off by Group 4. The data suggest that although variability in VOT decreases with age, the presence of residual variability may be a prerequisite for the further refinement of motor speech skills.
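
    The variability measure here is the coefficient of variation, i.e. the standard deviation of a child's VOT measurements divided by their mean. A minimal Python sketch of that computation follows; the VOT values and the two-child comparison are invented purely for illustration, not taken from the study.

```python
import numpy as np

def coefficient_of_variation(vot_ms):
    """COV = sample standard deviation / mean of a set of VOT measurements."""
    vot = np.asarray(vot_ms, dtype=float)
    return vot.std(ddof=1) / vot.mean()

# Hypothetical VOT values (ms) for /p/ from two children of different ages
younger_child = [62, 81, 55, 74, 68]   # more spread around the mean
older_child = [64, 67, 63, 66, 65]     # tighter clustering
print(coefficient_of_variation(younger_child))  # larger COV
print(coefficient_of_variation(older_child))    # smaller COV
```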

    Acoustic characteristics of English fricatives

    This is the publisher's version, also available electronically from http://scitation.aip.org/content/asa/journal/jasa/108/3/10.1121/1.1288413. This study constitutes a large-scale comparative analysis of acoustic cues for classification of place of articulation in fricatives. To date, no single metric has been found to classify fricative place of articulation with a high degree of accuracy. This study presents spectral, amplitudinal, and temporal measurements that involve both static properties (spectral peak location, spectral moments, noise duration, normalized amplitude, and F2 onset frequency) and dynamic properties (relative amplitude and locus equations). While all cues (except locus equations) consistently serve to distinguish sibilant from nonsibilant fricatives, the present results indicate that spectral peak location, spectral moments, and both normalized and relative amplitude serve to distinguish all four places of fricative articulation. These findings suggest that these static and dynamic acoustic properties can provide robust and unique information about all four places of articulation, despite variation in speaker, vowel context, and voicing.
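
    The spectral-moment measurements referred to above follow the standard practice of treating the normalized power spectrum of the frication noise as a probability distribution and computing its mean, variance, skewness, and kurtosis. The sketch below shows that calculation in Python; the windowing and segment length are assumptions for illustration rather than the paper's exact analysis settings.

```python
import numpy as np

def spectral_measures(segment, sr):
    """Spectral peak location plus the first four spectral moments
    (centroid, variance, skewness, excess kurtosis) of a noise segment."""
    x = np.asarray(segment, dtype=float) * np.hamming(len(segment))
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)

    # Normalized power spectrum treated as a probability distribution
    p = spectrum ** 2
    p = p / p.sum()

    centroid = np.sum(freqs * p)
    variance = np.sum((freqs - centroid) ** 2 * p)
    sd = np.sqrt(variance)
    skewness = np.sum((freqs - centroid) ** 3 * p) / sd ** 3
    kurtosis = np.sum((freqs - centroid) ** 4 * p) / sd ** 4 - 3

    peak_hz = freqs[np.argmax(spectrum)]
    return peak_hz, centroid, variance, skewness, kurtosis

# Toy example: white noise standing in for a short fricative window at 22.05 kHz
rng = np.random.default_rng(0)
print(spectral_measures(rng.standard_normal(1024), sr=22050))
```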

    Pronunciation Instruction Can Improve L2 Learners’ Bottom-Up Processing for Listening

    Listening is widely regarded as an important skill that is difficult and necessary to teach in L2 classrooms. Listening requires both top-down and bottom-up processing, yet pedagogical techniques for the latter are often lacking. This study explores the efficacy of pronunciation instruction (PI) for improving learners’ bottom-up processing. The study recruited 116 relatively novice learners of Spanish as a foreign language and provided the experimental groups with brief lessons in PI emphasizing segmental or suprasegmental features followed by production-focused or perception-focused practice. Learners’ bottom-up processing skill was assessed with a sentence-level dictation task. Learners given PI on suprasegmental features followed by perception-focused practice found target language speech to be more intelligible than controls, indicating that they had improved their bottom-up processing. However, learners given PI on segmental features followed by production-focused practice found target language speech to be more comprehensible. The results indicate that PI is a worthwhile intervention for reasons that go beyond pronunciation, even when instructional time is limited, and that a range of features and practice types should be included in PI to improve listening skills.

    Perceptual distinctiveness between dental and palatal sibilants in different vowel contexts and its implications for phonological contrasts

    Mandarin Chinese has dental, palatal, and retroflex sibilants, but their contrasts before [_i] are avoided: The palatals appear before [i] while the dentals and retroflexes appear before homorganic syllabic approximants (a.k.a. apical vowels). An enhancement view regards the apical vowels as a way to avoid the weak contrast /si-ɕi-ȿi/. We focus on the dental vs. palatal contrast in this study and test the enhancement-based hypothesis that the dental and palatal sibilants are perceptually less distinct in the [_i] context than in other vowel contexts. This hypothesis is supported by a typological survey of 155 Chinese dialects, which showed that contrastive [si, tsi, tsʰi] and [ɕi, tɕi, tɕʰi] tend to be avoided even when there are no retroflexes in the sound system. We also conducted a speeded-AX discrimination experiment with 20 English listeners and 10 Chinese listeners to examine the effect of vowels ([_i], [_a], [_ou]) on the perceived distinctiveness of sibilant contrasts ([s-ɕ], [ts-tɕ], [tsʰ-tɕʰ]). The results showed that the [_i] context elicited longer response times, and thus reduced distinctiveness, compared with the other vowels, confirming our hypothesis. Moreover, the general lack of difference between the two groups of listeners indicates that the vowel effect is language-independent.
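
    In a speeded-AX task, longer response times to "different" pairs index lower perceptual distinctiveness, so the vowel effect reported above amounts to comparing mean RTs across vowel contexts. A small Python sketch of that aggregation follows; the trial data, column names, and values are hypothetical, not the experiment's data.

```python
import pandas as pd

# Hypothetical trial-level results from a speeded-AX discrimination task:
# sibilant contrast, vowel context, listener group, and response time (ms).
trials = pd.DataFrame({
    "contrast": ["s-ɕ", "ts-tɕ", "tsʰ-tɕʰ", "s-ɕ", "ts-tɕ", "tsʰ-tɕʰ"],
    "vowel":    ["i",   "i",     "i",       "a",   "a",     "ou"],
    "group":    ["EN",  "ZH",    "EN",      "ZH",  "EN",    "ZH"],
    "rt_ms":    [642,   655,     661,       548,   556,     552],
})

# Longer mean RT in the [_i] context would indicate reduced distinctiveness
print(trials.groupby("vowel")["rt_ms"].mean())
# Comparing listener groups probes whether the vowel effect is language-independent
print(trials.groupby(["group", "vowel"])["rt_ms"].mean())
```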

    The listening talker: A review of human and algorithmic context-induced modifications of speech

    Speech output technology is finding widespread application, including in scenarios where intelligibility might be compromised - at least for some listeners - by adverse conditions. Unlike most current algorithms, talkers continually adapt their speech patterns as a response to the immediate context of spoken communication, where the type of interlocutor and the environment are the dominant situational factors influencing speech production. Observations of talker behaviour can motivate the design of more robust speech output algorithms. Starting with a listener-oriented categorisation of possible goals for speech modification, this review article summarises the extensive set of behavioural findings related to human speech modification, identifies which factors appear to be beneficial, and goes on to examine previous computational attempts to improve intelligibility in noise. The review concludes by tabulating 46 speech modifications, many of which have yet to be perceptually or algorithmically evaluated. Consequently, the review provides a roadmap for future work in improving the robustness of speech output.
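
    As a concrete (and deliberately simplistic) illustration of the kind of speech modification such algorithms apply, the sketch below boosts high frequencies with a first-order pre-emphasis filter while holding overall RMS energy constant, since many intelligibility-enhancing modifications redistribute energy toward consonant-rich regions of the spectrum. This is an assumption-laden toy example, not one of the 46 modifications tabulated in the review.

```python
import numpy as np

def flatten_spectral_tilt(speech, alpha=0.95):
    """First-order pre-emphasis (y[n] = x[n] - alpha * x[n-1]) with the
    output rescaled to the input's RMS, so energy is redistributed toward
    higher frequencies rather than simply added."""
    x = np.asarray(speech, dtype=float)
    y = np.empty_like(x)
    y[0] = x[0]
    y[1:] = x[1:] - alpha * x[:-1]

    rms_in = np.sqrt(np.mean(x ** 2))
    rms_out = np.sqrt(np.mean(y ** 2))
    return y * (rms_in / rms_out) if rms_out > 0 else y
```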

    Prevalence of age-related hearing loss in Europe: a review

    Populations are becoming progressively older, thus presenting symptoms of diminished organ function due to degenerative processes. These may be physiological or caused by additional factors damaging the organ. Presbyacusis refers to the physiological age-related changes of the peripheral and central auditory system leading to hearing impairment and difficulty understanding spoken language. In contrast to epidemiological data from other continents, the prevalence of age-related hearing loss (ARHL) in Europe is not well defined, due in part to the use of different classification systems. We performed a systematic literature review with the aim of gaining a picture of the prevalence of ARHL in Europe. The review included only population and epidemiological studies published in English since 1970 with samples drawn from European countries and subjects aged 60 years and above. Nineteen studies met our selection criteria, and an additional five studies reported self-reported hearing impairment. When these data were crudely averaged and interpolated, roughly 30% of men and 20% of women in Europe were found to have a hearing loss of 30 dB HL or more by age 70 years, and 55% of men and 45% of women by age 80 years. Apparent problems in comparing the available data were the heterogeneity of measures and cut-offs for grades of hearing impairment. Our systematic review of epidemiological data revealed more information gaps than information that would allow a meaningful picture of the prevalence of ARHL to be gained. The need for standardized procedures when collecting and reporting epidemiological data on hearing loss has become evident. The development of hearing loss over time, in conjunction with the increase in life expectancy, is a major factor determining strategies for the detection and correction of ARHL. Thus, we recommend using the WHO classification of hearing loss strictly and including standard audiometric measures in population-based health surveys.
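
    The comparability problem the authors describe comes down to inconsistent audiometric cut-offs. The sketch below computes a better-ear pure-tone average and maps it onto grade boundaries; the thresholds used are the commonly cited older WHO grades (0.5-4 kHz average) together with the review's own 30 dB HL cut-off, included only to illustrate the kind of standardization being recommended, and should be checked against the WHO scheme actually intended.

```python
def pure_tone_average(thresholds_db_hl):
    """Average hearing threshold (dB HL) over 0.5, 1, 2, and 4 kHz for one ear."""
    return sum(thresholds_db_hl) / len(thresholds_db_hl)

def who_grade(better_ear_pta):
    """Map a better-ear PTA to hearing impairment grades (older WHO scheme, assumed)."""
    if better_ear_pta <= 25:
        return "no impairment"
    if better_ear_pta <= 40:
        return "slight"
    if better_ear_pta <= 60:
        return "moderate"
    if better_ear_pta <= 80:
        return "severe"
    return "profound"

# Hypothetical better-ear thresholds in dB HL at 0.5, 1, 2, 4 kHz
pta = pure_tone_average([20, 25, 35, 50])
print(pta, who_grade(pta), pta >= 30)  # 32.5, "slight", meets the 30 dB HL cut-off
```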

    Subliminal Semantic Priming in Speech

    Numerous studies have reported subliminal repetition and semantic priming in the visual modality. We transferred this paradigm to the auditory modality. Prime awareness was manipulated by a reduction of sound intensity level. Uncategorized prime words (according to a post-test) were followed by semantically related, unrelated, or repeated target words (presented without intensity reduction) and participants performed a lexical decision task (LDT). Participants with slower reaction times in the LDT showed semantic priming (faster reaction times for semantically related compared to unrelated targets) and negative repetition priming (slower reaction times for repeated compared to semantically related targets). This is the first report of semantic priming in the auditory modality without conscious categorization of the prime.
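
    The two effects reported above are simple differences between mean reaction times per prime-target relation. The sketch below shows that calculation in Python with made-up trial data; the column names and RT values are assumptions for illustration, not the study's results.

```python
import pandas as pd

# Hypothetical lexical decision trials: prime-target relation and RT (ms)
ldt = pd.DataFrame({
    "relation": ["related", "unrelated", "repeated",
                 "related", "unrelated", "repeated"],
    "rt_ms":    [655, 698, 721, 640, 705, 730],
})

means = ldt.groupby("relation")["rt_ms"].mean()
semantic_priming = means["unrelated"] - means["related"]     # positive = facilitation
negative_repetition = means["repeated"] - means["related"]   # positive = slowing
print(means)
print(semantic_priming, negative_repetition)
```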
