25 research outputs found

    Predictors of entering a hearing aid evaluation period: a prospective study in older hearing-help seekers

    This study aimed to determine the predictors of entering a hearing aid evaluation period (HAEP) using a prospective design drawing on the health belief model and the transtheoretical model. In total, 377 older persons who presented with hearing problems to an ENT-specialist (n = 110) or a hearing aid dispenser (n = 267) completed a baseline questionnaire. After four months, it was determined via a telephone interview whether or not participants had decided to enter a HAEP. Multivariable logistic regression analyses were applied to determine which baseline variables predicted HAEP status. A priori, candidate predictors were divided into ‘likely’ and ‘novel’ predictors based on the literature. The following variables emerged as significant predictors: greater expected hearing aid benefit, greater social pressure, and greater self-reported hearing disability. In addition, greater hearing loss severity and stigma were predictors in women but not in men. Of note, the predictive effect of self-reported hearing disability was modified by readiness: the higher the readiness, the stronger the positive predictive effect. None of the ‘novel’ predictors added significant predictive value. The results support the notion that predictors of hearing aid uptake are also predictive of entering a HAEP. This study shows that some of these predictors appear to be gender-specific or depend on a person’s readiness for change. After the external validity of the predictors has been assured, an important next step would be to develop prediction rules for use in clinical practice, so that older persons’ hearing help-seeking journey can be facilitated.
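The clinical prediction rule envisaged above would amount to applying a fitted logistic model to a patient's baseline scores. A minimal sketch, assuming hypothetical coefficient values (the predictor names follow the abstract, but the weights below are placeholders, not the study's estimates):

```python
import math

def haep_probability(coefs, intercept, predictors):
    """Apply a fitted logistic model: p = 1 / (1 + exp(-(b0 + sum(bi * xi)))).

    `coefs` maps predictor names to regression weights; all values
    used here are hypothetical placeholders for illustration only.
    """
    z = intercept + sum(coefs[name] * value for name, value in predictors.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights for the three significant predictors from the study
coefs = {"expected_benefit": 0.8, "social_pressure": 0.5, "hearing_disability": 0.6}
p = haep_probability(coefs, intercept=-2.0,
                     predictors={"expected_benefit": 1.2,
                                 "social_pressure": 0.5,
                                 "hearing_disability": 1.0})
```

In practice the gender-specific predictors and the readiness interaction reported in the abstract would enter the model as additional (interaction) terms.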

    Impact of stimulus-related factors and hearing impairment on listening effort as indicated by pupil dilation

    Previous research has reported effects of masker type and signal-to-noise ratio (SNR) on listening effort, as indicated by the peak pupil dilation (PPD) relative to baseline during speech recognition. At about 50% correct sentence recognition performance, increasing SNRs generally result in declining PPDs, indicating reduced effort. However, the decline in PPD over SNRs has been observed to be less pronounced for hearing-impaired (HI) than for normal-hearing (NH) listeners. The presence of a competing talker during speech recognition generally resulted in larger PPDs than the presence of a fluctuating or stationary background noise. The aim of the present study was to examine the interplay between hearing status, a broad range of SNRs corresponding to sentence recognition performance varying from 0 to 100% correct, and different masker types (stationary noise and a single-talker masker) on the PPD during speech perception. Twenty-five HI and 32 age-matched NH participants listened to sentences across a broad range of SNRs, masked with speech from a single talker (−25 to +15 dB SNR) or with stationary noise (−12 to +16 dB SNR). Correct sentence recognition scores and pupil responses were recorded during stimulus presentation. With the stationary masker, NH listeners showed maximum PPD across a relatively narrow range of low SNRs, while HI listeners showed relatively large PPDs across a wide range of ecological SNRs. With the single-talker masker, maximum PPD was observed in the mid-range of SNRs, around 50% correct sentence recognition performance, with smaller PPDs at lower and higher SNRs. Mixed-model ANOVAs revealed significant interactions between hearing status and SNR on the PPD for both masker types. Our data show a different pattern of PPDs across SNRs between groups, indicating that the allocation of effort during listening in daily-life environments may differ between NH and HI listeners.
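The PPD measure used here is the maximum pupil size during the trial expressed relative to a pre-stimulus baseline. A minimal sketch of that computation, with made-up sample values (real analyses also involve blink interpolation and trial averaging, which are omitted):

```python
def peak_pupil_dilation(trace, baseline_samples):
    """PPD: maximum pupil size after stimulus onset minus the mean
    pupil size over the pre-stimulus baseline window."""
    baseline = sum(trace[:baseline_samples]) / baseline_samples
    return max(trace[baseline_samples:]) - baseline

# toy pupil trace (arbitrary units); first 4 samples are the baseline window
trace = [4.0, 4.1, 3.9, 4.0, 4.3, 4.6, 4.9, 4.7, 4.4]
ppd = peak_pupil_dilation(trace, baseline_samples=4)  # ≈ 0.9
```

A larger PPD is interpreted as greater cognitive processing load during listening.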

    The relationship between the intelligibility of time-compressed speech and speech in noise in young and elderly listeners

    A conventional measure to determine the ability to understand speech in noisy backgrounds is the so-called speech reception threshold (SRT) for sentences. It yields the signal-to-noise ratio (in dB) for which half of the sentences are correctly perceived. The SRT defines to what degree speech must be audible to a listener in order to become just intelligible. There are indications that elderly listeners have greater difficulty in understanding speech in adverse listening conditions than young listeners. This may be partly due to differences in hearing sensitivity (presbycusis), hence audibility, but other factors, such as temporal acuity, may also play a significant role. A potential measure of temporal acuity may be the threshold to which speech can be accelerated, or compressed in time. A new test is introduced where the speech rate is varied adaptively. In analogy to the SRT, the time-compression threshold (TCT) is then defined as the speech rate (expressed in syllables per second) for which half of the sentences are correctly perceived. In experiment I, the TCT test is introduced and normative data are provided. In experiment II, four groups of subjects (young and elderly normal-hearing and hearing-impaired subjects) participated, and the SRTs in stationary and fluctuating speech-shaped noise were determined, as well as the TCT. The results show that the SRT in fluctuating noise and the TCT are highly correlated. All tests indicate that, even after correction for the hearing loss, elderly normal-hearing subjects perform worse than young normal-hearing subjects. The results indicate that the use of the TCT test or the SRT test in fluctuating noise is preferred over the SRT test in stationary noise. (C) 2002 Acoustical Society of America.
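Both the SRT and the TCT are estimated with an adaptive procedure that converges on the 50%-correct point: the tracked variable (SNR for the SRT, speech rate for the TCT) is made harder after each correct response and easier after each error. A simplified sketch of such a 1-up/1-down track, with a deliberately crude threshold estimate (real tests use reversal-based averaging and step-size schedules):

```python
def adaptive_track(start, step, responses, skip=4):
    """1-up/1-down adaptive procedure. After a correct response the
    level is lowered (harder: lower SNR / faster speech); after an
    error it is raised (easier). Converges on 50% correct. The
    threshold is estimated, simplistically, as the mean of the
    levels visited after the first `skip` trials."""
    level, levels = start, []
    for correct in responses:
        levels.append(level)
        level += -step if correct else step
    tail = levels[skip:]
    return levels, sum(tail) / len(tail)

# toy run: levels in dB SNR, 2-dB steps
levels, threshold = adaptive_track(start=0, step=2,
                                   responses=[True, True, False, True,
                                              False, False, True, True])
```

For the TCT the same logic applies with the level expressed in syllables per second and the direction of "harder" reversed (faster speech is harder).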

    Effects of attention on the speech reception threshold and pupil response of people with impaired and normal hearing

    For people with hearing difficulties, following a conversation in a noisy environment requires substantial cognitive processing, which is often perceived as effortful. Recent studies with normal-hearing (NH) listeners showed that the pupil dilation response, a measure of cognitive processing load, is affected by attention-related processes. How these processes affect the pupil dilation response of hearing-impaired (HI) listeners remains unknown. Therefore, the current study investigated the effect of auditory attention on various pupil response parameters for 15 NH adults (median age 51 yrs.) and 15 adults with mild to moderate sensorineural hearing loss (median age 52 yrs.). Both groups listened to two different sentences presented simultaneously, one to each ear and partially masked by stationary noise. Participants had to repeat either both sentences or only one, for which they had to divide or focus attention, respectively. When repeating one sentence, the target sentence location (left or right) was either randomized or blocked across trials, the latter allowing a better spatial focus of attention. The speech-to-noise ratio was adjusted to yield about 50% sentences correct for each task and condition. NH participants had lower (‘better’) speech reception thresholds (SRTs) than HI participants. The pupil measures showed no between-group effects, with the exception of a shorter peak latency for HI participants, which indicated a shorter processing time. Both groups showed higher SRTs and a larger pupil dilation response when two sentences were processed instead of one. Additionally, SRTs were higher and dilation responses were larger for both groups when the target location was randomized instead of fixed. We conclude that although HI participants could cope with less noise than the NH group, their ability to focus attention on a single talker, thereby improving SRTs and lowering cognitive processing load, was preserved. Shorter peak latencies could indicate that HI listeners adapt their listening strategy by not processing some information, which reduces processing time and thereby listening effort.

    Extended speech intelligibility index for the prediction of the speech reception threshold in fluctuating noise

    The extension to the speech intelligibility index (SII; ANSI S3.5-1997 (1997)) proposed by Rhebergen and Versfeld [Rhebergen, K.S., and Versfeld, N.J. (2005). J. Acoust. Soc. Am. 117(4), 2181-2192] is able to predict for normal-hearing listeners the speech intelligibility in both stationary and fluctuating noise maskers with reasonable accuracy. The extended SII model was validated with speech reception threshold (SRT) data from the literature. However, further validation is required, and the present paper describes SRT experiments with nonstationary noise conditions that are critical to the extended model. From these data, it can be concluded that the extended SII model is able to predict the SRTs for the majority of conditions, but that predictions are better when the extended SII model includes a function to account for forward masking.
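The core idea of the extension is to evaluate the SII in short time frames and average the result, so that high-SNR dips in a fluctuating masker raise the predicted intelligibility relative to a stationary masker with the same long-term SNR. A heavily simplified single-band sketch (the real model works per critical band with band-importance weighting, and, per the abstract, benefits from an added forward-masking function that this sketch omits):

```python
def frame_sii(snr_db):
    # Simplified single-band audibility function: speech contributes
    # nothing below -15 dB SNR and is fully audible above +15 dB.
    return min(max((snr_db + 15.0) / 30.0, 0.0), 1.0)

def extended_sii(frame_snrs):
    """Frame-wise SII, averaged over time: masker dips (high-SNR
    frames) raise the index relative to a stationary masker with
    the same long-term SNR."""
    return sum(frame_sii(s) for s in frame_snrs) / len(frame_snrs)

stationary = extended_sii([-5.0, -5.0, -5.0, -5.0])       # constant -5 dB SNR
fluctuating = extended_sii([-20.0, 10.0, -20.0, 10.0])    # same mean SNR, with dips
```

The fluctuating condition yields the higher index, mirroring the well-known masking release for normal-hearing listeners.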

    Quantifying and modeling the acoustic effects of compression on speech in noise

    In this presentation a method is proposed that is able to separate a speech signal from a noise signal after processing of the mixture through wide-dynamic-range compression (WDRC). The technique reconstructs the speech signal and the noise signal separately, sample by sample, using the gain factor of the WDRC, and can be used to quantify the acoustic effects of WDRC in noise. It will be shown that this technique is more accurate than a frequently used inversion technique, because the method is not affected by phase shifts that introduce distortion products in the reconstructed speech signal. As a result, the acoustic effects of WDRC can be measured more accurately. In addition, this reconstruction method allows modeling of speech intelligibility after non-linear signal processing in the Speech Intelligibility Index. With the aid of Speech Reception Threshold data it will be shown that this approach can give a good account of most existing data.
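The sample-by-sample reconstruction can be sketched as follows: the WDRC gain is computed from the mixture, then that same gain is applied to the clean speech and noise samples separately, so the two reconstructed signals sum exactly to the compressed mixture and the post-compression SNR can be measured directly. A minimal sketch with a toy instantaneous gain rule (real WDRC uses level estimation with attack/release time constants, which this omits):

```python
def compress_separately(speech, noise, gain_fn):
    """Apply the WDRC gain computed from the mixture to the speech
    and noise samples separately. The outputs sum exactly to the
    compressed mixture, so no inversion step (and no phase-shift
    distortion) is involved."""
    out_s, out_n = [], []
    for s, n in zip(speech, noise):
        g = gain_fn(s + n)          # gain driven by the mixture level
        out_s.append(g * s)
        out_n.append(g * n)
    return out_s, out_n

# toy gain rule: limit large-amplitude samples (instantaneous compression)
gain = lambda x: 1.0 if abs(x) < 0.5 else 0.5 / abs(x)
speech = [0.3, 0.6, -0.2]
noise = [0.1, 0.3, -0.1]
cs, cn = compress_separately(speech, noise, gain)
```

Because the gain is a shared multiplicative factor per sample, linearity guarantees `cs[i] + cn[i]` equals the compressed mixture sample, which is what makes the separation exact.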

    Effect of Audibility and Suprathreshold Deficits on Speech Recognition for Listeners With Unilateral Hearing Loss

    OBJECTIVES: We examined the influence of impaired processing (audibility and suprathreshold processes) on speech recognition in cases of sensorineural hearing loss. The influence of differences in central, or top-down, processing was reduced by comparing the performance of both ears in participants with a unilateral hearing loss (UHL). We examined the influence of reduced audibility and suprathreshold deficits on speech recognition in quiet and in noise. DESIGN: We measured speech recognition in quiet and in stationary speech-shaped noise with consonant-vowel-consonant words and digit triplets in groups of adults with UHL (n = 19), normal hearing (n = 15), and bilateral hearing loss (n = 9). By comparing the scores of the unaffected ear (UHL+) and the affected ear (UHL-) in the UHL group, we were able to isolate the influence of peripheral hearing loss from individual top-down factors such as cognition, linguistic skills, age, and sex. RESULTS: Audibility is a very strong predictor of speech recognition in quiet; it has a less pronounced influence on speech recognition in noise. We found that, for the current sample of listeners, more speech information is required for UHL- than for UHL+ to achieve the same performance. For digit triplets at 80 dBA, the speech recognition threshold in noise (SRT) for UHL- is on average 5.2 dB signal-to-noise ratio (SNR) poorer than for UHL+. Analysis using the speech intelligibility index (SII) indicates that on average 2.1 dB SNR of this decrease can be attributed to suprathreshold deficits and 3.1 dB SNR to audibility. Furthermore, scores for speech recognition in quiet and in noise for UHL+ are comparable to those of normal-hearing listeners. CONCLUSIONS: Our data showed that suprathreshold deficits, in addition to audibility, play a considerable role in speech recognition in noise even at intensities well above hearing threshold.
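The SII-based attribution amounts to subtracting the SRT shift the SII predicts from the audiogram alone from the total measured between-ear SRT difference; the residual is attributed to suprathreshold deficits. A minimal sketch (the individual SRT values below are hypothetical; only the 5.2 / 3.1 / 2.1 dB split comes from the abstract):

```python
def decompose_srt_deficit(srt_affected, srt_unaffected, sii_predicted_shift):
    """Split the measured SRT difference between the affected (UHL-)
    and unaffected (UHL+) ear into an audibility part (the shift the
    SII predicts from reduced audibility alone) and a residual
    suprathreshold part. All values in dB SNR."""
    total = srt_affected - srt_unaffected
    return {"total": total,
            "audibility": sii_predicted_shift,
            "suprathreshold": total - sii_predicted_shift}

# hypothetical SRTs chosen to reproduce the 5.2 dB average difference
parts = decompose_srt_deficit(srt_affected=-2.8, srt_unaffected=-8.0,
                              sii_predicted_shift=3.1)
```

A nonzero suprathreshold residual at high presentation levels is what supports the paper's conclusion that audibility alone does not explain the deficit.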

    Discrimination of changes in the spectral shape of noise bands

    Discrimination experiments were performed for a change in the spectral shape of noise bands. The subject's task was to discriminate noise bands with a positive spectral slope from those with a negative spectral slope. Thresholds were measured at several bandwidths and center frequencies, as well as for several noise samples. Experiments were performed while roving the overall intensity. At a fixed center frequency of 1 kHz, sensitivity was best for bandwidths of 3–6 semitones (ST). At larger bandwidths, thresholds increased only slowly. At a fixed bandwidth of 1 ST, threshold hardly changed as a function of the center frequency. At a fixed bandwidth of 58 Hz, threshold was lowest near 500–1000 Hz. Model calculations show that the EWAIF model [Feth, Percept. Psychophys. 15, 375–378 (1974)] can account for the present results if the signal's bandwidth does not exceed 1 ST. The IWAIF model [Anantharaman et al., J. Acoust. Soc. Am. 94, 723–729 (1993)] can account for the present results only if the signal's bandwidth is smaller than 1 ST but larger than about 25 Hz. Results obtained with broadband signals could be described only qualitatively with the multichannel model [Durlach et al., J. Acoust. Soc. Am. 80, 63–72 (1986)]. In that case, the model needs the assumption that either the output of the different frequency bands cannot be optimally combined, or that only two bands are used in the discrimination process. The present results are compared with those obtained with two-tone complexes measured under identical conditions [Versfeld and Houtsma, J. Acoust. Soc. Am. 98, 807–816 (1995)].