
    Across-talker effects on non-native listeners' vowel perception in noise

    Journal Article
    This study explored how across-talker differences influence non-native vowel perception. American English (AE) and Korean listeners were presented with recordings of 10 AE vowels in /bVd/ context. The stimuli were mixed with noise and presented for identification in a 10-alternative forced-choice task. The two listener groups heard recordings of the vowels produced by 10 talkers at three signal-to-noise ratios. Overall, the AE listeners identified the vowels 22% more accurately than the Korean listeners. There was a wide range of identification accuracy scores across talkers for both AE and Korean listeners. At each signal-to-noise ratio, the across-talker intelligibility scores were highly correlated for AE and Korean listeners. Acoustic analysis was conducted for two vowel pairs that exhibited variable accuracy across talkers for Korean listeners but high identification accuracy for AE listeners. Results demonstrated that Korean listeners' error patterns for these four vowels were strongly influenced by variability in vowel production that was within the normal range for AE talkers. These results suggest that non-native listeners are strongly influenced by across-talker variability, perhaps because of the difficulty they have forming native-like vowel categories.
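    The abstract above notes that the vowel stimuli were mixed with noise and presented at three signal-to-noise ratios. A minimal sketch of that mixing step, assuming an RMS-power definition of SNR (the function name and this definition are assumptions, not details reported by the study):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Return speech plus noise, with the noise scaled so that the
    speech-to-noise power ratio equals snr_db (in decibels)."""
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    # Gain that brings the noise to the power implied by the target SNR.
    gain = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + gain * noise
```

    Lowering snr_db scales the noise up relative to the speech, which is how listening conditions of different difficulty would be constructed.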

    Judgment of emotion in speech: Three-factor bar graphs for emotion categories "fear," "disgust," "sadness," "happiness," and "neutral" (Morgan & Ferguson, 2017)

    No full text
    "Communication partners of individuals with hearing loss are often counseled and trained to 'speak clearly' to help their partners receive this intelligibility benefit and improve communication. However, audiology patients sometimes complain anecdotally of communication partners sounding angry when in fact these communication partners are only trying to speak clearly. In the present study, we investigated the basis for these complaints by examining whether listeners perceive anger or other emotional content when they listen to speech that has been spoken clearly."

    In this study, the authors investigated the emotion perceived by young listeners with normal hearing (YNH listeners) and older adults with hearing impairment (OHI listeners) when listening to speech produced conversationally or in a clear speaking style.

    In the article, the authors focus primarily on speech receiving a judgment of "anger" from the participants. Here, they present the results for the other emotion categories that were included: "fear" (S1), "disgust" (S2), "sadness" (S3), "happiness" (S4), and "neutral" (S5).

    Morgan, S. D., & Ferguson, S. H. (2017). Judgments of emotion in clear and conversational speech by young adults with normal hearing and older adults with hearing impairment. Journal of Speech, Language, and Hearing Research, 60, 2271–2280. https://doi.org/10.1044/2017_JSLHR-H-16-0264

    Perceived Anger in Clear and Conversational Speech: Contributions of Age and Hearing Loss

    No full text
    A previous investigation demonstrated differences between younger adult normal-hearing listeners and older adult hearing-impaired listeners in the perceived emotion of clear and conversational speech. Specifically, clear speech sounded angry more often than conversational speech for both groups, but the effect was smaller for the older listeners. These listener groups differed by two confounding factors, age (younger vs. older adults) and hearing status (normal vs. impaired). The objective of the present study was to evaluate the contributions of aging and hearing loss to the reduced perception of anger in older adults with hearing loss. We investigated perceived anger in clear and conversational speech in younger adults with and without a simulated age-related hearing loss, and in older adults with normal hearing. Younger adults with simulated hearing loss performed similarly to normal-hearing peers, while normal-hearing older adults performed similarly to hearing-impaired peers, suggesting that aging was the primary contributor to the decreased anger perception seen in previous work. These findings confirm reduced anger perception for older adults compared to younger adults, though the significant speaking style effect, which held regardless of age and hearing status, highlights the need to identify methods of producing clear speech that is emotionally neutral or positive.

    Acoustic correlates of vowel intelligibility in clear and conversational speech for young normal-hearing and elderly hearing-impaired listeners

    No full text
    The present investigation carried out acoustic analyses of vowels in clear and conversational speech produced by 41 talkers. Mixed-effects models were then deployed to examine relationships among acoustic and perceptual data for these vowels. Acoustic data include vowel duration, steady-state formant frequencies, and two measures of dynamic formant movement. Perceptual data consist of vowel intelligibility in noise for young normal-hearing and elderly hearing-impaired listeners, as reported by Ferguson in 2004 and 2012 [J. Acoust. Soc. Am. 116, 2365–2373 (2004); J. Speech Lang. Hear. Res. 55, 779–790 (2012)], respectively. Significant clear speech effects were observed for all acoustic metrics, although not all measures changed for all vowels and considerable talker variability was observed. Mixed-effects analyses revealed that the contribution of duration and steady-state formant information to vowel intelligibility differed for the two listener groups. This outcome is consistent with earlier research suggesting that hearing loss, and possibly aging, alters the way acoustic cues are used for identifying vowels.
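    The abstract above relates acoustic measures (duration, steady-state formants) to intelligibility with mixed-effects models that account for talker variability. As a rough, self-contained stand-in for that analysis, the sketch below fits the same kind of relationship on invented data using ordinary least squares, with one dummy column per talker standing in for the mixed model's random talker intercepts (all variable names and numbers here are illustrative assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented data: 10 talkers x 10 vowels, with a per-talker offset.
n_talkers, n_vowels = 10, 10
talker = np.repeat(np.arange(n_talkers), n_vowels)
duration = rng.uniform(0.1, 0.3, n_talkers * n_vowels)   # vowel duration (s)
f1 = rng.uniform(300.0, 800.0, n_talkers * n_vowels)     # steady-state F1 (Hz)
offsets = rng.normal(0.0, 5.0, n_talkers)
intelligibility = 50 + 100 * duration + 0.01 * f1 + offsets[talker]

# Design matrix: the two acoustic predictors plus one dummy column per
# talker (a fixed-effects stand-in for random talker intercepts).
X = np.column_stack([duration, f1, np.eye(n_talkers)[talker]])
coef, *_ = np.linalg.lstsq(X, intelligibility, rcond=None)
# coef[0] and coef[1] recover the duration and F1 slopes, with
# between-talker variation absorbed by the per-talker columns.
```

    A full mixed-effects analysis would instead treat the talker intercepts as draws from a distribution (e.g., statsmodels' MixedLM or R's lme4), which matters when talkers are few or unbalanced.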
