
    Suprathreshold auditory processes in listeners with normal audiograms but extended high-frequency hearing loss

    Hearing loss in the extended high-frequency (EHF) range (>8 kHz) is widespread among young normal-hearing adults and could have perceptual consequences such as difficulty understanding speech in noise. However, it is unclear how EHF hearing loss might affect basic psychoacoustic processes. The hypothesis that EHF hearing loss is associated with poorer auditory resolution in the standard frequencies was tested. Temporal resolution was characterized by amplitude modulation detection thresholds (AMDTs), and spectral resolution was characterized by frequency change detection thresholds (FCDTs). AMDTs and FCDTs were measured in adults with or without EHF loss but with normal clinical audiograms. AMDTs were measured with 0.5- and 4-kHz carrier frequencies; similarly, FCDTs were measured for 0.5- and 4-kHz base frequencies. AMDTs were significantly higher with the 4-kHz than the 0.5-kHz carrier, but there was no significant effect of EHF loss. There was no significant effect of EHF loss on FCDTs at 0.5 kHz; however, FCDTs at 4 kHz were significantly higher for listeners with EHF loss than for those without. This suggests that some aspects of auditory resolution in the standard audiometric frequency range may be compromised in listeners with EHF hearing loss despite a normal audiogram.
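    Detection thresholds such as the AMDTs above are typically measured with an adaptive tracking procedure. The sketch below implements a 2-down/1-up staircase (which converges on ~70.7% correct; Levitt, 1971) run against a simulated listener; the listener model, starting level, and step size are illustrative assumptions, not the study's actual parameters.

```python
import random

def two_down_one_up(respond, start_db=-6.0, step_db=2.0, n_reversals=8):
    """Track modulation depth (20*log10(m), in dB) with a 2-down/1-up rule.

    respond(level_db) -> True for a correct trial. The track descends after
    two consecutive correct responses and ascends after each error, so it
    converges near the 70.7%-correct point on the psychometric function.
    """
    level = start_db
    correct_in_row = 0
    direction = None
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):
            correct_in_row += 1
            if correct_in_row == 2:
                correct_in_row = 0
                if direction == 'up':      # direction change: record a reversal
                    reversals.append(level)
                direction = 'down'
                level -= step_db
        else:
            correct_in_row = 0
            if direction == 'down':        # direction change: record a reversal
                reversals.append(level)
            direction = 'up'
            level += step_db
    tail = reversals[-6:]                  # threshold = mean of late reversals
    return sum(tail) / len(tail)

# Hypothetical listener: reliable detection above a -12 dB "true" threshold,
# chance performance below it (a crude step psychometric function).
random.seed(0)
def listener(level_db, true_threshold=-12.0):
    p = 0.95 if level_db > true_threshold else 0.5
    return random.random() < p

threshold = two_down_one_up(listener)
print(f"estimated AMDT: {threshold:.1f} dB")
```

    The estimate hovers near the simulated listener's true threshold; real experiments average several tracks and use a graded psychometric function rather than a step.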

    Effects of Training on Lateralization for Simulations of Cochlear Implants and Single-Sided Deafness

    While cochlear implantation has benefitted many patients with single-sided deafness (SSD), there is great variability in cochlear implant (CI) outcomes and binaural performance remains poorer than that of normal-hearing (NH) listeners. Differences in sound quality across ears—temporal fine structure (TFS) information with acoustic hearing vs. coarse spectro-temporal envelope information with electric hearing—may limit integration of acoustic and electric patterns. Binaural performance may also be limited by inter-aural mismatch between the acoustic input frequency and the place of stimulation in the cochlea. SSD CI patients must learn to accommodate these differences between acoustic and electric stimulation to maximize binaural performance. It is possible that training may increase and/or accelerate accommodation and further improve binaural performance. In this study, we evaluated lateralization training in NH subjects listening to broad simulations of SSD CI signal processing. A 16-channel vocoder was used to simulate the coarse spectro-temporal cues available with electric hearing; the degree of inter-aural mismatch was varied by adjusting the simulated insertion depth (SID) to be 25 mm (SID25), 22 mm (SID22) and 19 mm (SID19) from the base of the cochlea. Lateralization was measured using headphones and head-related transfer functions (HRTFs). Baseline lateralization was measured for unprocessed speech (UN) delivered to the left ear to simulate SSD and for binaural performance with the acoustic ear combined with the 16-channel vocoders (UN+SID25, UN+SID22 and UN+SID19). After completing baseline measurements, subjects completed six lateralization training exercises with the UN+SID22 condition, after which performance was re-measured for all baseline conditions. Post-training performance was significantly better than baseline for all conditions (p < 0.05 in all cases), with no significant difference in training benefits among conditions. 
Given that there was no significant difference between the SSD and the SSD CI conditions before or after training, the results suggest that NH listeners were unable to integrate TFS and coarse spectro-temporal cues across ears for lateralization, and that inter-aural mismatch played a secondary role at best. While lateralization training may benefit SSD CI patients, the training may largely improve spectral analysis with the acoustic ear alone, rather than improve integration of acoustic and electric hearing.
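    The simulated insertion depths (SID25, SID22, SID19) can be related to characteristic frequency with the Greenwood (1990) human frequency-place map. The sketch below assumes a 35-mm cochlear duct and the standard human constants; the study's own frequency allocation may differ.

```python
import math  # not strictly needed here, but typical for related calculations

def greenwood_hz(dist_from_apex_mm, A=165.4, a=0.06, k=0.88):
    """Greenwood (1990) frequency-place function for the human cochlea.

    Maps distance from the apex (mm) to characteristic frequency (Hz):
    f = A * (10**(a*x) - k), with the standard human constants as defaults.
    """
    return A * (10 ** (a * dist_from_apex_mm) - k)

COCHLEA_MM = 35.0  # assumed total cochlear duct length

for sid in (25, 22, 19):  # simulated insertion depths, measured from the base
    f = greenwood_hz(COCHLEA_MM - sid)
    print(f"SID{sid}: most apical place of stimulation ~ {f:.0f} Hz")
```

    Shallower simulated insertions place the most apical channel at a higher characteristic frequency, which is the source of the inter-aural mismatch varied in the study.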

    The effects of short-term training for spectrally mismatched noise-band speech

    The present study examined the effects of short-term perceptual training on normal-hearing listeners' ability to adapt to spectrally altered speech patterns. Using noise-band vocoder processing, acoustic information was spectrally distorted by shifting speech information from one frequency region to another. Six subjects were tested with spectrally shifted sentences after five days of practice with upwardly shifted training sentences. Training with upwardly shifted sentences significantly improved recognition of upwardly shifted speech; recognition of downwardly shifted speech was nearly unchanged. Three subjects were later trained with downwardly shifted speech. Results showed that the mean improvement was comparable to that observed with the upwardly shifted training. In this retrain and retest condition, performance was largely unchanged for upwardly shifted sentence recognition, suggesting that these listeners had retained some of the improved speech perception resulting from the previous training. The results suggest that listeners are able to partially adapt to a spectral shift in acoustic speech patterns over the short term, given sufficient training. However, the improvement was localized to where the spectral shift was trained, as no change in performance was observed for spectrally altered speech outside of the trained regions.
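    The shifted noise-band vocoding described above can be sketched as follows: envelopes are extracted from analysis bands and used to modulate band-limited noise in carrier bands placed at a different (shifted) frequency region. The band edges, filter order, and test signal here are illustrative assumptions, not the study's actual processing parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def shifted_noise_vocoder(x, fs, analysis_edges, carrier_edges):
    """Noise-band vocoder with remapped output bands.

    For each (analysis, carrier) band pair, extract the temporal envelope in
    the analysis band via the Hilbert transform, then use it to modulate
    band-limited noise in the carrier band. When carrier bands differ from
    analysis bands, the speech information is spectrally shifted.
    """
    assert len(analysis_edges) == len(carrier_edges)
    out = np.zeros(len(x), dtype=float)
    rng = np.random.default_rng(0)
    for (alo, ahi), (clo, chi) in zip(analysis_edges, carrier_edges):
        sos_a = butter(4, [alo, ahi], btype='band', fs=fs, output='sos')
        env = np.abs(hilbert(sosfiltfilt(sos_a, x)))   # band envelope
        noise = rng.standard_normal(len(x))
        sos_c = butter(4, [clo, chi], btype='band', fs=fs, output='sos')
        out += sosfiltfilt(sos_c, noise) * env         # modulated noise carrier
    return out

# Upward shift: envelopes from each analysis band drive a carrier one band higher.
fs = 16000
t = np.arange(fs) / fs
# Amplitude-modulated tone standing in for a speech-like input
speechlike = np.sin(2 * np.pi * 500 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
analysis = [(300, 600), (600, 1200)]
carriers = [(600, 1200), (1200, 2400)]  # upwardly shifted output bands
y = shifted_noise_vocoder(speechlike, fs, analysis, carriers)
```

    A downward shift is obtained the same way by placing the carrier bands below their analysis bands.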

    Minimal effects of visual memory training on auditory performance of adult cochlear implant users


    Multichannel MDTs for individual CI subjects.

    From top to bottom, the panels show 10-Hz MDTs at 25 LL, 100-Hz MDTs at 25 LL, 10-Hz MDTs at 50 LL, and 100-Hz MDTs at 50 LL, respectively. The black bars show the MDTs for the 4-channel loudness-balanced stimuli (i.e., equally loud as the single-channel stimuli in Fig. 1) and the gray bars show MDTs for the 4-channel stimuli without loudness balancing (i.e., louder than the single-channel stimuli in Fig. 1 and the 4-channel loudness-balanced stimuli). The error bars show the standard error.

    Single-channel MDTs for individual CI subjects.

    From top to bottom, the panels show 10-Hz MDTs at 25 LL, 100-Hz MDTs at 25 LL, 10-Hz MDTs at 50 LL, and 100-Hz MDTs at 50 LL, respectively. The shaded bars show MDTs for the A, B, C, and D channels, respectively; the electrode-channel assignments are shown for each subject in Table 1. The error bars show the standard error.