
    Age-Related Temporal Processing Deficits in Word Segments in Adult Cochlear-Implant Users

    Aging may limit speech understanding outcomes in cochlear-implant (CI) users. Here, we examined age-related declines in auditory temporal processing as a potential mechanism underlying speech understanding deficits associated with aging in CI users. Auditory temporal processing was assessed with a categorization task for the words dish and ditch (i.e., identify each token as the word dish or ditch) on a continuum of speech tokens with varying silence duration (0 to 60 ms) prior to the final fricative. In Experiments 1 and 2, younger CI (YCI), middle-aged CI (MCI), and older CI (OCI) users performed the categorization task across a range of presentation levels (25 to 85 dB). Relative to YCI, OCI users required longer silence durations to identify ditch and showed a reduced ability to distinguish dish from ditch (shallower slopes in the categorization function). Critically, we observed age-related performance differences only at higher presentation levels. This contrasted with findings from normal-hearing listeners in Experiment 3, which showed age-related performance differences independent of presentation level. In summary, aging in CI users appears to degrade the ability to use brief temporal cues in word identification, particularly at high presentation levels. Age-specific CI programming may improve clinical outcomes for speech understanding in older CI listeners.
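    The two outcome measures above (the silence duration at which responses shift to ditch, and the slope of the categorization function) are typically estimated by fitting a sigmoid to the proportion of ditch responses across the continuum. The sketch below shows one way to do that fit; the response data and the use of scipy.optimize.curve_fit are illustrative assumptions, not the authors' analysis code.

```python
# Hedged sketch: fit a logistic categorization function to dish/ditch responses.
# The data values and fitting routine are assumptions for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, boundary, slope):
    """Proportion of 'ditch' responses as a function of silence duration (ms)."""
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

# Hypothetical data: proportion of 'ditch' responses at each silence duration.
silence_ms = np.array([0, 10, 20, 30, 40, 50, 60], dtype=float)
p_ditch = np.array([0.05, 0.10, 0.30, 0.55, 0.80, 0.92, 0.97])

(boundary, slope), _ = curve_fit(logistic, silence_ms, p_ditch, p0=[30.0, 0.2])
print(f"Category boundary: {boundary:.1f} ms, slope: {slope:.3f}")
```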

    ILD Binaural Interference and Aging

    Age-related changes in interaural-level-difference-based across-frequency binaural interference

    Using machine learning to understand the relationships between audiometric data, speech perception, temporal processing, and cognition

    Aging and hearing loss cause communication difficulties, particularly for speech perception in demanding situations; these difficulties have been associated with factors including cognitive processing and extended high-frequency (>8 kHz) hearing. Quantifying such associations, and finding other (possibly unintuitive) associations, is well suited to machine learning. We constructed ensemble models for 443 participants who varied in age and hearing loss. Audiometric, perceptual, electrophysiological, and cognitive data were used to predict speech perception in noise, in reverberation, and with time compression. Speech perception was best predicted by variables associated with audiometric thresholds between 1 and 4 kHz (including new across-frequency composite variables), followed by basic temporal processing ability. Cognitive factors and extended high-frequency thresholds had little to no ability to predict speech perception. Such associations, or the lack thereof, will inform the field as we attempt to better understand the intertwined effects of speech perception, aging, hearing loss, and cognition.
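    As a rough illustration of the modeling approach described above, the sketch below fits an ensemble regressor to simulated predictors and inspects feature importances. The predictor names, simulated data, and choice of a random forest are assumptions for illustration; they do not reproduce the study's actual pipeline.

```python
# Hedged sketch: ensemble regression predicting a speech-perception score from
# audiometric and cognitive predictors. All names and data are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 443  # number of participants in the study
predictors = ["pta_1_4_khz", "ehf_threshold", "gap_detection", "working_memory"]
X = rng.normal(size=(n, len(predictors)))
# Toy outcome dominated by the 1-4 kHz threshold composite plus temporal processing.
y = 0.8 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(scale=0.5, size=n)

model = RandomForestRegressor(n_estimators=500, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
model.fit(X, y)
for name, importance in zip(predictors, model.feature_importances_):
    print(f"{name}: {importance:.2f}")
print(f"Mean cross-validated R^2: {scores.mean():.2f}")
```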

    How (not) to vocode: Using channel vocoders for cochlear-implant research

    The channel vocoder has become a useful tool for understanding the impact of specific forms of auditory degradation, particularly the spectral and temporal degradation that reflects cochlear-implant processing. Vocoders have many parameters that allow researchers to answer questions about cochlear-implant processing in ways that overcome some logistical complications of controlling for factors in individual cochlear-implant users. However, there is so much variety in how vocoders are implemented that the term "vocoder" is not specific enough to describe the signal processing used in these experiments. Misunderstanding vocoder parameters can result in experimental confounds or unexpected stimulus distortions. This paper highlights the signal-processing parameters that should be specified when describing vocoder construction. The paper also provides guidance on how to choose vocoder parameters for perception experiments, given the experimenter's goals and research questions, to avoid common signal-processing mistakes. Throughout, we assume that experimenters are interested in vocoders with the specific goal of better understanding cochlear implants.
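    A minimal example helps make those parameter choices concrete. The sketch below implements a basic noise-excited channel vocoder (analysis filter bank, envelope extraction, noise carriers); the specific values (8 channels, 100-8000 Hz, 300 Hz envelope cutoff, 4th-order Butterworth filters) are assumptions for illustration, not recommendations from the paper.

```python
# Hedged sketch: noise-excited channel vocoder. Parameter values are illustrative
# assumptions; they are exactly the kinds of settings that should be reported.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def channel_vocoder(signal, fs, n_channels=8, lo=100.0, hi=8000.0,
                    env_cutoff=300.0, filter_order=4):
    """Analyze into log-spaced bands, extract envelopes, modulate band-limited
    noise carriers, and sum. Assumes fs is greater than 2 * hi."""
    edges = np.geomspace(lo, hi, n_channels + 1)  # log-spaced band edges (Hz)
    rng = np.random.default_rng(0)
    out = np.zeros(len(signal))
    for low_edge, high_edge in zip(edges[:-1], edges[1:]):
        sos_band = butter(filter_order, [low_edge, high_edge],
                          btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos_band, signal)
        # Envelope: half-wave rectify, then low-pass filter.
        sos_env = butter(filter_order, env_cutoff, btype="lowpass",
                         fs=fs, output="sos")
        envelope = sosfiltfilt(sos_env, np.maximum(band, 0.0))
        # Carrier: white noise limited to the same analysis band.
        carrier = sosfiltfilt(sos_band, rng.standard_normal(len(signal)))
        out += envelope * carrier
    return out
```

    Reporting the settings used here (channel count, frequency range, filter type and order, envelope cutoff, carrier type) is the kind of specification the paper argues should accompany any vocoder study.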

    Recognition of vocoded words and sentences in quiet and multi-talker babble with children and adults.

    A vocoder is used to simulate cochlear-implant sound processing in normal-hearing listeners. Typically, there is rapid improvement in vocoded speech recognition, but it is unclear whether the rate of improvement differs across age groups and speech materials. Children (8-10 years) and young adults (18-26 years) were trained and tested over 2 days (4 hours) on recognition of eight-channel noise-vocoded words and sentences, in quiet and in multi-talker babble at signal-to-noise ratios of 0, +5, and +10 dB. Children performed more poorly than adults in all conditions, for both word and sentence recognition. With training, improvement rates in vocoded speech recognition did not differ significantly between children and adults, suggesting that learning to process speech cues degraded by vocoding shows no developmental differences across these age groups or types of speech materials. Furthermore, this result confirms that the acutely measured age difference in vocoded speech recognition persists after extended training.
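    As a small illustration of the masking conditions described above, the sketch below scales a multi-talker babble masker to a target signal-to-noise ratio before mixing; the function name and RMS-based scaling are assumptions for illustration, not the study's stimulus-generation code.

```python
# Hedged sketch: mix speech with babble at a requested SNR (e.g., 0, +5, +10 dB).
import numpy as np

def mix_at_snr(speech, babble, snr_db):
    """Return speech + babble, with the babble scaled (RMS-based) to the SNR.
    Assumes the babble is at least as long as the speech."""
    babble = babble[:len(speech)]                      # trim masker to target length
    rms_speech = np.sqrt(np.mean(speech ** 2))
    rms_babble = np.sqrt(np.mean(babble ** 2))
    target_rms = rms_speech / (10 ** (snr_db / 20.0))  # masker RMS for desired SNR
    return speech + babble * (target_rms / rms_babble)
```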