
    Phonological effects on the perceptual weighting of voice cues for voice gender categorization

    Voice perception and speaker identification interact with linguistic processing. This study investigated whether lexicality and/or phonological effects alter the perceptual weighting of voice pitch (F0) and vocal-tract length (VTL) cues for voice gender categorization. F0 and VTL were manipulated in forward words and nonwords (for the lexicality effect) and in time-reversed nonwords (for the phonological effect, through phonetic alterations). Participants provided binary “man”/“woman” judgements for the different voice conditions. Cue weights for time-reversed nonwords were significantly lower than cue weights for both forward words and nonwords, but there was no significant difference between forward words and nonwords. Hence, voice cue utilization for voice gender judgements seems to be affected by phonological rather than lexicality effects.
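
    A common way to derive such cue weights is to regress listeners' binary gender responses on the per-trial F0 and VTL offsets and compare the normalized coefficients. The sketch below illustrates this with synthetic data; the variable names, the simulated observer, and the semitone units are illustrative assumptions, not the study's actual materials or method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic trials: per-trial F0 and VTL offsets (in semitones re: a
# reference voice) and a binary "woman" (True) / "man" (False) response.
rng = np.random.default_rng(0)
n_trials = 400
f0 = rng.uniform(-12, 12, n_trials)    # voice pitch manipulation
vtl = rng.uniform(-4, 4, n_trials)     # vocal-tract length manipulation
# Hypothetical observer that weights F0 more heavily than VTL:
p_woman = 1.0 / (1.0 + np.exp(-(0.4 * f0 + 0.2 * vtl)))
response = rng.random(n_trials) < p_woman

# Fit a logistic psychometric model on standardized cues; the relative
# magnitudes of the coefficients serve as perceptual cue weights.
X = np.column_stack([f0, vtl])
X = (X - X.mean(axis=0)) / X.std(axis=0)
coef = np.abs(LogisticRegression().fit(X, response).coef_[0])
w_f0, w_vtl = coef / coef.sum()
print(f"relative cue weights: F0 = {w_f0:.2f}, VTL = {w_vtl:.2f}")
```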

    Differences in Processing Speech-on-Speech Between Musicians and Non-musicians: The Role of Prosodic Cues.

    In the current study, we investigate the role of prosodic cues in speech-on-speech perception in musicians and non-musicians. Earlier studies have shown that musically experienced listeners may have an advantage in speech-on-speech performance in behavioral tasks (1,2). Previously, we have also shown in an eye-tracking study that musical experience affects the timing of the resolution of lexical competition when processing quiet vs. masked speech (3). In particular, musicians were faster in lexical decision-making when a two-talker masker was added to the target speech. However, the source of the difference observed between the groups remained unclear. In the current study, employing a visual world paradigm, we aim to clarify whether musicians make use of the durational cues that contribute to prosodic boundaries in Dutch when resolving lexical competition while processing quiet vs. two-talker masked speech. If musical training preserves listeners' sensitivity to the acoustic correlates of prosodic boundaries when processing masked speech, we expect to observe more lexical competition and delayed lexical resolution in musicians. We will compare gaze-tracking and pupil data of both groups across conditions.
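
    In a visual world paradigm, lexical competition is typically indexed by the proportion of fixations to the target versus a competitor in time bins relative to word onset. A minimal sketch of that aggregation follows; the sample format (timestamped gaze labels) and the 200-ms bin size are assumptions for illustration, not the study's analysis.

```python
# Hypothetical gaze samples: (ms relative to target-word onset, fixated object).
samples = [(t, "target" if t > 600 and t % 3 else "competitor")
           for t in range(0, 1200, 20)]

# Proportion of target fixations per 200-ms bin; an earlier rise indicates
# faster resolution of lexical competition.
BIN_MS = 200
bins = {}
for t, obj in samples:
    hits, total = bins.get(t // BIN_MS, (0, 0))
    bins[t // BIN_MS] = (hits + (obj == "target"), total + 1)

for b in sorted(bins):
    hits, total = bins[b]
    print(f"{b * BIN_MS:4d}-{(b + 1) * BIN_MS:4d} ms: p(target) = {hits / total:.2f}")
```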

    Measure and model of vocal-tract length discrimination in cochlear implants

    Voice discrimination is crucial for selectively listening to a particular talker in a crowded environment. In normal-hearing listeners, it strongly relies on the perception of two dimensions: fundamental frequency and vocal-tract length. Yet very little is known about the perception of the latter in cochlear implants. The present study reports discrimination thresholds for vocal-tract length in normal-hearing listeners and cochlear-implant users. The behavioral results were then used to determine the effective spectral resolution in a model of electric hearing: the effective resolution in the implant was found to be poorer than previously suggested by psychophysical measurements. Such a model could be used for clinical purposes or to facilitate the development of new strategies.
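
    Discrimination thresholds of this kind are often measured with an adaptive staircase. Below is a minimal sketch of a 2-down 1-up procedure (which tracks roughly 70.7% correct) run against a simulated observer; the observer model, starting level, step size, and stopping rule are hypothetical placeholders for a real listener and test interface.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulated_trial(delta_vtl, true_jnd=1.5):
    """Stand-in for a real discrimination trial: probability correct
    grows with the VTL difference (semitones). Hypothetical observer."""
    p_correct = 0.5 + 0.5 / (1.0 + np.exp(-(delta_vtl - true_jnd)))
    return rng.random() < p_correct

# 2-down 1-up staircase: decrease the difference after two consecutive
# correct responses, increase it after each error.
delta, step = 8.0, 1.0            # starting VTL difference and step size
streak, reversals, last_dir = 0, [], 0
while len(reversals) < 8:
    if simulated_trial(delta):
        streak += 1
        if streak < 2:
            continue              # need two in a row before stepping down
        streak, direction = 0, -1
    else:
        streak, direction = 0, +1
    if last_dir and direction != last_dir:
        reversals.append(delta)   # record level at each direction change
    last_dir = direction
    delta = max(0.1, delta + direction * step)

# Threshold estimate: mean level over the last six reversals.
print(f"estimated threshold: {np.mean(reversals[-6:]):.2f} semitones")
```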

    The relation between speaking-style categorization and speech recognition in adult cochlear implant users

    The current study examined the relation between speaking-style categorization and speech recognition in post-lingually deafened adult cochlear implant users and normal-hearing listeners tested under 4- and 8-channel acoustic noise-vocoder cochlear implant simulations. Across all listeners, better speaking-style categorization of careful read and casual conversation speech was associated with more accurate recognition of speech across those same two speaking styles. Findings suggest that some cochlear implant users and normal-hearing listeners under cochlear implant simulation may benefit from stronger encoding of indexical information in speech, enabling both better categorization and recognition of speech produced in different speaking styles.
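
    For readers unfamiliar with such simulations, a noise vocoder divides speech into frequency channels, extracts each channel's temporal envelope, and uses those envelopes to modulate band-limited noise. The sketch below is a generic n-channel vocoder using SciPy; the corner frequencies, filter orders, and envelope cutoff are common illustrative choices, not the parameters used in the study.

```python
import numpy as np
from scipy.signal import butter, hilbert, sosfiltfilt

def noise_vocode(speech, fs, n_channels=8, f_lo=100.0, f_hi=8000.0):
    """Generic n-channel noise vocoder (parameters are illustrative)."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)     # log-spaced bands
    noise = np.random.default_rng(0).standard_normal(len(speech))
    env_lp = butter(2, 160.0, btype="lowpass", fs=fs, output="sos")
    out = np.zeros(len(speech))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, speech)
        env = np.abs(hilbert(band))                      # temporal envelope
        env = np.maximum(sosfiltfilt(env_lp, env), 0.0)  # smooth, keep >= 0
        out += env * sosfiltfilt(band_sos, noise)        # modulate noise carrier
    return out / (np.max(np.abs(out)) + 1e-12)           # peak-normalize

# Note: the sampling rate must satisfy fs > 2 * f_hi (e.g., fs = 22050).
```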

    Are musicians at an advantage when processing speech on speech?

    Several studies have shown that musicians may have an advantage in a variety of auditory tasks, including speech-in-noise perception. The current study explores whether musical training enhances the understanding of two-talker masked speech. By combining an off-line and an on-line measure of speech perception, we investigated how automatic processes can contribute to the potential perceptual advantage of musicians. Understanding the mechanisms by which musical training may lead to a benefit in speech-in-noise perception could help clinicians develop ways to use music to improve speech processing in hearing-impaired individuals.

    Talker variability in word recognition under cochlear implant simulation: Does talker gender matter?

    Normal-hearing listeners are less accurate and slower to recognize words with trial-to-trial talker changes compared to a repeating talker. Cochlear implant (CI) users demonstrate poor discrimination of same-gender talkers and, to a lesser extent, different-gender talkers, which could affect word recognition. The effects of talker voice differences on word recognition were investigated using acoustic noise-vocoder simulations of CI hearing. Word recognition accuracy was lower for multiple female and male talkers combined than for multiple female talkers or a single talker. Results suggest that talker variability has a detrimental effect on word recognition accuracy under CI simulation, but only with different-gender talkers.

    Individual differences in top-down restoration of interrupted speech: Links to linguistic and cognitive abilities

    Top-down restoration mechanisms can enhance the perception of degraded speech. Even in normal hearing, however, large variability has been observed in how effectively individuals can benefit from these mechanisms. To investigate whether this variability is partially caused by individuals' linguistic and cognitive skills, normal-hearing participants of varying ages were assessed for receptive vocabulary (Peabody Picture Vocabulary Test; PPVT-III-NL), full-scale intelligence (Wechsler Adult Intelligence Scale; WAIS-IV-NL), and top-down restoration of interrupted speech (with silent or noise-filled gaps). Receptive vocabulary was significantly correlated with the other measures, suggesting that linguistic skills are highly involved in the restoration of degraded speech. (C) 2014 Acoustical Society of America.
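
    Interrupted-speech stimuli of this kind are typically created by gating the signal with a periodic square wave and either leaving the gaps silent or filling them with noise. A minimal sketch follows; the interruption rate, duty cycle, and RMS-matched noise fill are common but assumed choices, not necessarily those of this study.

```python
import numpy as np

def interrupt(speech, fs, rate_hz=1.5, duty=0.5, fill_noise=False):
    """Gate speech with a periodic square wave; gaps are silent, or
    filled with speech-RMS-matched noise when fill_noise=True."""
    t = np.arange(len(speech)) / fs
    on = ((t * rate_hz) % 1.0) < duty          # "on" half of each cycle
    if not fill_noise:
        return np.where(on, speech, 0.0)
    noise = np.random.default_rng(0).standard_normal(len(speech))
    noise *= np.sqrt(np.mean(speech ** 2))     # rough RMS match to speech
    return np.where(on, speech, noise)
```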

    Eyes on Emotion: Dynamic Gaze Allocation During Emotion Perception From Speech-Like Stimuli

    The majority of emotional expressions used in daily communication are multimodal and dynamic in nature. Consequently, one would expect human observers to use specific perceptual strategies to process emotions and to handle their multimodal and dynamic nature. However, our present knowledge of these strategies is scarce, primarily because most studies on emotion perception have not fully covered this variation and have instead used static and/or unimodal stimuli with few emotion categories. To resolve this knowledge gap, the present study examined how dynamic emotional auditory and visual information is integrated into a unified percept. Since there is a broad spectrum of possible forms of integration, both eye movements and accuracy of emotion identification were evaluated while observers performed an emotion identification task in one of three conditions: audio-only, video-only, or audiovisual. In terms of adaptations of perceptual strategies, the eye-movement results showed a shift in fixations toward the eyes and away from the nose and mouth when audio was added. Notably, in terms of task performance, audio-only performance was in most cases significantly worse than video-only and audiovisual performance, while performance in the latter two conditions often did not differ. These results suggest that individuals flexibly and momentarily adapt their perceptual strategies to changes in the available information for emotion recognition, and that these changes can be comprehensively quantified with eye tracking.
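
    Gaze shifts of this kind are commonly quantified as the proportion of fixation time spent on each area of interest (AOI) per condition. The sketch below assumes fixations already labeled with a condition and an AOI; the record format and values are hypothetical placeholders, not the study's data.

```python
from collections import defaultdict

# Hypothetical fixation records: (condition, AOI, duration in ms).
fixations = [
    ("audiovisual", "eyes", 420), ("audiovisual", "mouth", 180),
    ("audiovisual", "nose", 120), ("video_only", "eyes", 250),
    ("video_only", "mouth", 300), ("video_only", "nose", 200),
]

aoi_time = defaultdict(float)      # fixation time per (condition, AOI)
cond_time = defaultdict(float)     # total fixation time per condition
for cond, aoi, dur in fixations:
    aoi_time[(cond, aoi)] += dur
    cond_time[cond] += dur

# Proportion of looking time per AOI, normalized within each condition.
for (cond, aoi), dur in sorted(aoi_time.items()):
    print(f"{cond:12s} {aoi:6s} {dur / cond_time[cond]:.2f}")
```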