22 research outputs found

    The perception of consonants in reverberation and noise by adults fitted with bimodal devices

    Purpose: The purpose of this study was to evaluate the contribution of a contralateral hearing aid (HA) to the perception of consonants, in terms of voicing, manner, and place of articulation cues, in reverberation and noise by adult cochlear implant users with bimodal fittings.
    Method: Eight post-lingually deafened adult cochlear implant listeners with a fully inserted cochlear implant in one ear and low-frequency hearing in the other ear were tested on consonant perception. The subjects were presented with consonant stimuli processed in the following experimental conditions: quiet, two reverberation times (0.3 s and 1.0 s), and each reverberation time combined with noise at a single signal-to-noise ratio (SNR = 5 dB).
    Results: Consonant perception improved significantly when listening with a contralateral hearing aid in addition to the cochlear implant (CI), as opposed to listening with the CI alone, in both 0.3 s and 1.0 s of reverberation. Significantly higher scores were also noted when noise was added to 0.3 s of reverberation.
    Conclusion: A considerable benefit was derived from the additional acoustic information in conditions of reverberation and reverberation plus noise. The bimodal benefit was more pronounced for voicing and manner of articulation than for place of articulation.

    Multi-microphone adaptive noise reduction strategies for coordinated stimulation in bilateral cochlear implant devices

    This is the published version, also available here: http://dx.doi.org/10.1121/1.3372727.
    Bilateral cochlear implant (BI-CI) recipients achieve high word recognition scores in quiet listening conditions. Still, there is a substantial drop in speech recognition performance when reverberation and more than one interferer are present. BI-CI users utilize information from just two directional microphones placed on opposite sides of the head in a so-called independent stimulation mode. To enhance the ability of BI-CI users to communicate in noise, two computationally inexpensive multi-microphone adaptive noise reduction strategies are proposed that exploit information collected simultaneously by the microphones associated with the two behind-the-ear (BTE) processors (one per ear). To this end, as many as four microphones are employed in total (two omni-directional and two directional), two in each BTE processor. In the proposed two-microphone binaural strategies, all four microphones (two behind each ear) are used in a coordinated stimulation mode. The hypothesis is that such strategies combine spatial information from all microphones to form a better representation of the target than that available from a single input. Speech intelligibility was assessed in BI-CI listeners using IEEE sentences corrupted by up to three steady speech-shaped noise sources. Results indicate that the multi-microphone strategies improve speech understanding in both single- and multi-noise-source scenarios.
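    The contrast between independent and coordinated stimulation can be sketched numerically. The equal-weight combiner below is a deliberately simplified stand-in for the adaptive strategies described above; the 440 Hz tone, the sampling rate, and the independent-noise model are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 16000
n = fs  # one second of signal
target = np.sin(2 * np.pi * 440 * np.arange(n) / fs)  # stand-in for speech

# Four microphones (two per BTE processor), each observing the target plus
# independent noise; real arrays also differ by inter-microphone delays,
# which this sketch ignores.
n_mics = 4
mics = target + rng.standard_normal((n_mics, n))

single = mics[0]              # independent-stimulation baseline: one microphone
combined = mics.mean(axis=0)  # simplest coordinated combiner: equal-weight average

def snr_db(sig):
    """Output SNR relative to the known clean target."""
    noise = sig - target
    return 10.0 * np.log10(np.mean(target ** 2) / np.mean(noise ** 2))

# Averaging M microphones with independent noise improves SNR by about
# 10*log10(M), i.e. roughly 6 dB for M = 4.
```

    With real speech and spatially correlated noise the gain is smaller and adaptive weighting is required, which is the motivation for the strategies evaluated in the paper.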

    Effects of early and late reflections on intelligibility of reverberated speech by cochlear implant listeners

    This is the published version, also available here: http://dx.doi.org/10.1121/1.4834455.
    The purpose of this study was to determine the overall impact of early and late reflections on the intelligibility of reverberated speech by cochlear implant listeners. Two specific reverberation times were assessed. For each reverberation time, sentences were presented in three different conditions wherein the target signal was filtered through the early, late, or entire part of the acoustic impulse response. Results obtained with seven cochlear implant listeners indicated that while early reflections neither enhanced nor reduced overall speech perception performance, late reflections severely reduced speech intelligibility in both reverberant conditions tested.
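    The three listening conditions can be reproduced in simulation by splitting a room impulse response at an early/late boundary and convolving speech with each part. The 50 ms boundary, the synthetic exponentially decaying impulse response, and the noise stand-in for speech are assumptions for illustration; the study's measured impulse responses and exact split point are not reproduced here.

```python
import numpy as np

def split_rir(rir, fs, boundary_ms=50.0):
    """Split a room impulse response into early and late parts at a
    fixed boundary (50 ms is a common convention, assumed here)."""
    k = int(fs * boundary_ms / 1000.0)
    early = np.zeros_like(rir)
    late = np.zeros_like(rir)
    early[:k] = rir[:k]
    late[k:] = rir[k:]
    return early, late

fs = 16000
t = np.arange(int(0.3 * fs)) / fs
rng = np.random.default_rng(0)
# Synthetic exponentially decaying RIR as a stand-in for a measured one
rir = rng.standard_normal(t.size) * np.exp(-t / 0.05)

early, late = split_rir(rir, fs)
speech = rng.standard_normal(fs)  # placeholder for a sentence waveform

# The study's three conditions: early-only, late-only, and full RIR
y_early = np.convolve(speech, early)
y_late = np.convolve(speech, late)
y_full = np.convolve(speech, rir)

# By linearity, the early and late renderings sum to the full reverberant signal
```

    Convolution is linear, so the early-only and late-only signals add up exactly to the fully reverberated signal, which is what makes this three-way comparison well defined.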

    A channel-selection criterion for suppressing reverberation in cochlear implants

    This is the published version, also available here: http://dx.doi.org/10.1121/1.3559683.
    Little is known about the extent to which reverberation affects speech intelligibility by cochlear implant (CI) listeners. Experiment 1 assessed CI users' performance using Institute of Electrical and Electronics Engineers (IEEE) sentences corrupted with varying degrees of reverberation. Reverberation times of 0.30, 0.60, 0.80, and 1.0 s were used. Results indicated that, for all subjects tested, speech intelligibility decreased exponentially with an increase in reverberation time; a decaying-exponential model provided an excellent fit to the data. Experiment 2 evaluated (offline) a speech coding strategy for reverberation suppression using a channel-selection criterion based on the signal-to-reverberant ratio (SRR) of individual frequency channels. The SRR implicitly reflects the ratio of the energy of the signal originating from the early (and direct) reflections to that of the signal originating from the late reflections. Channels with an SRR larger than a preset threshold were selected, while channels with an SRR smaller than the threshold were zeroed out. Results in a highly reverberant scenario indicated that the proposed strategy led to substantial gains (over 60 percentage points) in speech intelligibility over the subjects' daily strategy. Further analysis indicated that the proposed channel-selection criterion reduces the temporal envelope smearing introduced by reverberation and also diminishes the self-masking effects responsible for flattened formants.
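    The channel-selection rule can be sketched for a single stimulation frame. The per-channel envelope energies and the -5 dB threshold below are invented for illustration; the paper's actual threshold value and envelope-extraction front end are not reproduced here.

```python
import numpy as np

def select_channels(early_env, late_env, threshold_db=-5.0):
    """SRR-based channel selection (sketch).

    early_env / late_env: per-channel energies of the early-plus-direct
    and late-reverberant components for one frame. Channels whose
    signal-to-reverberant ratio (SRR) exceeds the threshold are kept;
    the rest are zeroed out, as in the proposed strategy. The -5 dB
    threshold is an assumption, not the paper's value."""
    srr_db = 10.0 * np.log10(early_env / late_env)
    mixture = early_env + late_env  # what the processor actually observes
    selected = np.where(srr_db > threshold_db, mixture, 0.0)
    return selected, srr_db

# Toy four-channel frame: channel 2 is dominated by late reverberant energy
early = np.array([4.0, 1.0, 0.5, 8.0])
late = np.array([1.0, 2.0, 4.0, 0.5])
out, srr = select_channels(early, late)
# Channel 2 (SRR of about -9 dB) falls below the threshold and is zeroed
```

    In an actual CI processing chain this decision would be made per frame and per channel before envelope compression and electrode mapping; only the selection logic is shown here.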

    Spatially separating language masker from target results in spatial and linguistic masking release

    Several studies demonstrate that in complex auditory scenes, speech recognition improves when the competing background and the target speech differ linguistically. However, such studies typically use spatially co-located speech sources, which may not fully capture typical listening conditions; co-located presentation may also overestimate the observed benefit of linguistic dissimilarity. The current study examines the effect of spatial separation on linguistic release from masking. Results demonstrate that linguistic release from masking does extend to spatially separated sources, although the magnitude of the effect appears diminished relative to the co-located presentation conditions.

    Binaural advantages in users of bimodal and bilateral cochlear implant devices

    This is the published version, also available here: http://dx.doi.org/10.1121/1.4831955.
    This paper investigates to what extent users of bilateral and bimodal fittings should expect to benefit from the three binaural advantages found in normal-hearing listeners. Head shadow and binaural squelch are advantages that occur when speech and noise are spatially separated, while summation emerges when speech and noise coincide in space. For 14 bilateral or bimodal listeners, speech reception thresholds in the presence of four-talker babble were measured in sound field under various speech and noise configurations. Statistical analysis revealed significant head-shadow and summation advantages for both bilateral and bimodal listeners. Squelch was significant only for bimodal listeners.

    Classification of fricative consonants for speech enhancement in hearing devices

    Objective: To investigate a set of acoustic features and classification methods for the classification of three groups of fricative consonants differing in place of articulation.
    Method: A support vector machine (SVM) algorithm was used to classify the fricatives extracted from the TIMIT database in quiet and also in speech-babble noise at various signal-to-noise ratios (SNRs). Spectral features including four spectral moments, peak, slope, Mel-frequency cepstral coefficients (MFCCs), Gammatone filter outputs, and magnitudes of the fast Fourier transform (FFT) spectrum were used for the classification. The analysis frame was restricted to only 8 ms. In addition, commonly used linear and nonlinear principal component analysis dimensionality-reduction techniques that project a high-dimensional feature vector onto a lower-dimensional space were examined.
    Results: With 13 MFCC coefficients and 14 or 24 Gammatone filter outputs, classification performance was greater than or equal to 85% in quiet and at +10 dB SNR. Using 14 Gammatone filter outputs above 1 kHz, classification accuracy remained high (greater than 80%) over a wide range of SNRs from +20 to +5 dB.
    Conclusion: High levels of classification accuracy for fricative consonants in quiet and in noise could be achieved using only spectral features extracted from a short time window. The results of this work have a direct impact on the development of speech enhancement algorithms for hearing devices.
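    As one concrete piece of the feature set, the four spectral moments can be computed from a single 8 ms frame. The synthetic high-pass and low-pass noise frames standing in for high- and low-frequency fricative spectra, and the exact normalization, are assumptions for illustration; the paper's SVM classifier and remaining features are not reproduced here.

```python
import numpy as np

def spectral_moments(frame, fs):
    """Four spectral moments (centroid, spread, skewness, kurtosis)
    of one short analysis frame, mirroring the 8 ms window used in
    the study. Normalization details are assumptions."""
    power = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(frame.size, d=1.0 / fs)
    p = power / power.sum()  # treat the power spectrum as a distribution
    centroid = np.sum(freqs * p)
    spread = np.sqrt(np.sum((freqs - centroid) ** 2 * p))
    skew = np.sum(((freqs - centroid) / spread) ** 3 * p)
    kurt = np.sum(((freqs - centroid) / spread) ** 4 * p)
    return np.array([centroid, spread, skew, kurt])

fs = 16000
n = int(0.008 * fs)  # 8 ms frame = 128 samples
rng = np.random.default_rng(1)
noise = rng.standard_normal(n + 1)

# Toy stand-ins: first-differenced (high-pass) noise mimics a spectrum with
# energy concentrated high, as in /s/; moving-average (low-pass) noise
# mimics a spectrum concentrated low.
s_like = np.diff(noise)
f_like = np.convolve(noise[:n], np.ones(8) / 8.0, mode="same")

feat_s = spectral_moments(s_like, fs)
feat_f = spectral_moments(f_like, fs)
# The high-pass frame should show the higher spectral centroid
```

    Feature vectors like these, stacked with the MFCC and Gammatone features named above, would then be fed to the SVM; only the moment computation is sketched here.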