
    Cost-effectiveness of a vocational enablement protocol for employees with hearing impairment; design of a randomized controlled trial

    Background: Hearing impairment at the workplace and the resulting psychosocial problems are a major health problem with substantial costs for employees, companies, and society. It is therefore important to develop interventions to support hearing-impaired employees. The objective of this article is to describe the design of a randomized controlled trial evaluating the (cost-)effectiveness of a Vocational Enablement Protocol (VEP) compared with usual care. Methods/Design: Participants will be selected with the 'Hearing and Distress Screener'. The study population will consist of 160 hearing-impaired employees. The VEP intervention group will be compared with usual care. The VEP integrated care programme consists of a multidisciplinary assessment of auditory function, work demands, and personal characteristics. The goal of the intervention is to facilitate participation in work. The primary outcome measure of the study is 'need for recovery after work'. Secondary outcome measures are coping with hearing impairment, distress, self-efficacy, psychosocial workload, job control, general health status, sick leave, work productivity, and health care use. Outcome measures will be assessed by questionnaire at baseline and at 3, 6, 9, and 12 months after baseline. The economic evaluation will be performed from both a societal and a company perspective. A process evaluation will also be performed. Discussion: Interventions addressing the occupational difficulties of hearing-impaired employees are rare but much needed. If the VEP integrated care programme proves to be (cost-)effective, the intervention can improve the well-being of hearing-impaired employees and thereby reduce costs for the company as well as for society. Trial registration: Netherlands Trial Register (NTR): NTR2782. © 2012 Gussenhoven et al; BioMed Central Ltd.

    Binaural Recordings in Natural Acoustic Environments: Estimates of Speech-Likeness and Interaural Parameters

    Binaural acoustic recordings were made in multiple natural environments, which were chosen to be similar to those reported to be difficult for listeners with impaired hearing. These environments include natural conversations that take place in the presence of other sound sources as found in restaurants, walking or biking in the city, and so on. Sounds from these environments were recorded binaurally with in-the-ear microphones and were analyzed with respect to speech-likeness measures and interaural difference measures. The speech-likeness measures were based on amplitude-modulation patterns within frequency bands and were estimated for 1-s time-slices. The interaural difference measures included interaural coherence, interaural time difference, and interaural level difference, which were estimated for time-slices of 20-ms duration. These binaural measures were documented for one-fourth-octave frequency bands centered at 500 Hz and for the envelopes of one-fourth-octave bands centered at 2000 Hz. For comparison purposes, the same speech-likeness and interaural difference measures were computed for a set of virtual recordings that mimic typical clinical test configurations. These virtual recordings were created by filtering anechoic waveforms with available head-related transfer functions and combining them to create multiple source combinations. Overall, the speech-likeness results show large variability within and between environments, and they demonstrate the importance of having information from both ears available. Furthermore, the interaural parameter results show that the natural recordings contain a relatively small proportion of time-slices with high coherence compared with the virtual recordings; however, when present, binaural cues might be used for selecting intervals with good speech intelligibility for individual sources.
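The slice-based interaural analysis described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' analysis code: the 20-ms slice length follows the abstract, but the lag range, the circular cross-correlation, and the toy input in the usage note are assumptions.

```python
import numpy as np

def interaural_params(left, right, fs, slice_ms=20.0, max_itd_ms=1.0):
    """Per time-slice interaural coherence, time difference (s), level difference (dB)."""
    n = int(fs * slice_ms / 1000)          # samples per time-slice (20 ms by default)
    max_lag = int(fs * max_itd_ms / 1000)  # plausible ITD search range in samples
    out = []
    for start in range(0, min(len(left), len(right)) - n + 1, n):
        l, r = left[start:start + n], right[start:start + n]
        lags = np.arange(-max_lag, max_lag + 1)
        denom = np.sqrt(np.sum(l**2) * np.sum(r**2)) + 1e-12
        # Circular cross-correlation is a simplification adequate for a sketch
        xcorr = np.array([np.sum(l * np.roll(r, k)) for k in lags]) / denom
        best = int(np.argmax(np.abs(xcorr)))
        coherence = float(np.abs(xcorr[best]))            # peak of normalized xcorr
        itd = float(lags[best] / fs)                      # lag of that peak, seconds
        ild = float(10 * np.log10((np.sum(l**2) + 1e-12)  # level ratio in dB
                                  / (np.sum(r**2) + 1e-12)))
        out.append((coherence, itd, ild))
    return out
```

For example, feeding the function a noise signal and a delayed, attenuated copy recovers the imposed delay and level difference in each slice.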

    Speech Recognition Abilities in Normal-Hearing Children 4 to 12 Years of Age in Stationary and Interrupted Noise

    OBJECTIVES: The main purpose of this study was to examine developmental effects in speech recognition in noise for normal-hearing children across several listening conditions relevant to daily life. Our aim was to study the auditory component of these listening abilities using a test designed to minimize dependency on nonauditory factors, the digits-in-noise (DIN) test. Secondary aims were to examine the feasibility of the DIN test for children and to establish age-dependent normative data for diotic and dichotic listening conditions in both stationary and interrupted noise. DESIGN: In experiment 1, a newly designed pediatric DIN (pDIN) test was compared with the standard DIN test. The main differences from the DIN test are that the pDIN test uses 79% correct instead of 50% correct as a target point, single digits (except 0) instead of triplets, and animations in the test procedure. In this experiment, 43 normal-hearing subjects between 4 and 12 years of age and 10 adult subjects participated. The authors measured the monaural speech reception threshold for both the DIN test and the pDIN test using headphones. Experiment 2 used the standard DIN test to measure speech reception thresholds in noise in 112 normal-hearing children between 4 and 12 years of age and 33 adults. The DIN test was applied using headphones in stationary and interrupted noise, and in diotic and dichotic conditions, to also study binaural unmasking and the benefit of listening in the gaps. RESULTS: Most children could reliably complete both the pDIN and DIN tests, and measurement errors for the pDIN test were comparable between children and adults. There was no significant difference between the scores for the pDIN and DIN tests. Speech recognition scores increased with age for all conditions tested, and performance was adult-like by 10 to 12 years of age in stationary noise but not in interrupted noise. The youngest (4-year-old) children had speech reception thresholds 3 to 7 dB less favorable than adults, depending on test conditions. The authors found significant age effects on binaural unmasking and fluctuating masker benefit, even after correcting for adults' lower baseline speech reception threshold in stationary noise. CONCLUSIONS: Speech recognition in noise develops well into adolescence, and young children need a more favorable signal-to-noise ratio than adults in all listening conditions. Speech recognition in stationary and interrupted noise can be tested accurately and reliably in children using the DIN test; a separate pediatric version proved unnecessary. Normative data were established for the DIN test in stationary and fluctuating maskers, and in diotic and dichotic conditions. The DIN test can thus be used to test speech recognition abilities in normal-hearing children aged 4 years and older.
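The adaptive procedure behind a DIN-style test can be sketched as a one-up/one-down staircase converging on the 50%-correct SNR. The simulated listener (a logistic psychometric function), the 2-dB step size, and the averaging rule below are illustrative assumptions, not the study's exact implementation.

```python
import math
import random

def din_srt(true_srt, n_trials=24, step=2.0, start_snr=0.0, spread=1.5, seed=1):
    """Simulate a 1-up/1-down digits-in-noise track and return the SRT estimate.

    The SNR drops by 'step' dB after a correct triplet and rises by 'step' dB
    after an error, so the track oscillates around the 50%-correct point.
    The SRT estimate is the mean SNR of trials 5 onward (early trials are
    treated as the approach phase).
    """
    rng = random.Random(seed)
    snr, levels = start_snr, []
    for _ in range(n_trials):
        levels.append(snr)
        # Assumed logistic psychometric function for P(triplet correct)
        p_correct = 1.0 / (1.0 + math.exp(-(snr - true_srt) / spread))
        snr += -step if rng.random() < p_correct else step
    return sum(levels[4:]) / len(levels[4:])
```

Running such a simulation shows the track homing in on the listener's true threshold regardless of the starting SNR, which is what makes the procedure robust for young children.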

    The precedence effect for lateralization at low sensation levels

    Using dichotic signals presented over headphones, stimulus onset dominance (the precedence effect) for lateralization at low sensation levels was investigated in five normal-hearing subjects. Stimuli were based on 2400-Hz low-pass filtered 5-ms noise bursts. We used the paradigm described by Aoki and Houtgast (Hear. Res., 59 (1992) 25-30) and Houtgast and Aoki (Hear. Res., 72 (1994) 29-36), in which the stimulus is divided into a leading and a lagging part with opposite lateralization cues (i.e., an interaural time delay of 0.2 ms). The occurrence of onset dominance was investigated by measuring lateral perception of the stimulus, with fixed equal durations of the leading and lagging parts, while decreasing the absolute signal level or adding a filtered white noise with the signal level set at 65 dBA. The dominance of the leading part was quantified by measuring the perceived lateral position of the stimulus as a function of the relative duration of the leading (and thus the lagging) part. This was done at about 45 dB SL without masking noise and also at a signal-to-noise ratio resulting in a sensation level of 10 dB. The occurrence and strength of the precedence effect were found to depend on sensation level, whether decreased by lowering the signal level or by adding noise. With the present paradigm, besides decreased lateralization accuracy, a decrease in the precedence effect was found for sensation levels below about 30-40 dB. In daily-life conditions, with a sensation level in noise of typically 10 dB, onset dominance was still manifest, albeit degraded to some extent. © 2000 Elsevier Science B.V.
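A minimal sketch of how such a lead/lag dichotic stimulus can be constructed follows. The sampling rate and the crude FFT-based low-pass filter are assumptions; the original study's filtering and presentation details are not reproduced here.

```python
import numpy as np

def precedence_stimulus(fs=48000, dur_ms=5.0, cutoff_hz=2400.0,
                        itd_ms=0.2, lead_fraction=0.5, seed=0):
    """Noise burst whose leading part favors the left ear and whose lagging
    part carries the opposite interaural time delay."""
    rng = np.random.default_rng(seed)
    n = int(fs * dur_ms / 1000)
    burst = rng.standard_normal(n)
    # Crude low-pass at cutoff_hz by zeroing FFT bins (a sketch, not a real filter design)
    spec = np.fft.rfft(burst)
    spec[np.fft.rfftfreq(n, 1.0 / fs) > cutoff_hz] = 0.0
    burst = np.fft.irfft(spec, n)
    d = int(round(fs * itd_ms / 1000))      # ITD in samples (0.2 ms -> ~10 at 48 kHz)
    split = int(n * lead_fraction)          # boundary between leading and lagging part
    left = np.zeros(n + d)
    right = np.zeros(n + d)
    left[:split] += burst[:split]           # leading part: left ear first...
    right[d:split + d] += burst[:split]     # ...right ear delayed by the ITD
    right[split:n] += burst[split:]         # lagging part: right ear first...
    left[split + d:n + d] += burst[split:]  # ...left ear delayed (opposite cue)
    return left, right
```

Varying `lead_fraction` while holding the total duration fixed mirrors the manipulation used to quantify how strongly the leading part dominates the perceived lateral position.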

    Effects of reverberation and masker fluctuations on binaural unmasking of speech

    In daily life, listeners use two ears to understand speech in situations which typically include reverberation and non-stationary noise. In headphone experiments, the binaural benefit for speech in noise is often expressed as the difference in speech reception threshold between diotic (N(0)S(0)) and dichotic (N(0)S(π)) conditions. This binaural advantage (BA), arising from the use of interaural phase differences, is about 5-6 dB in stationary noise, but may be lower in everyday conditions. In the current study, the BA was measured in various combinations of noise and artificially created diotic reverberation, for normal-hearing and hearing-impaired listeners. Speech-intelligibility models were applied to quantify the combined effects. Results showed that in stationary noise, diotic reverberation did not affect the BA. The BA was reduced in conditions where the masker fluctuated; with additional reverberation, however, it was restored. Results for both normal-hearing and hearing-impaired listeners were accounted for by assuming that binaural unmasking is only effectively realized at low instantaneous speech-to-noise ratios (SNRs). The observed BA was related to the distribution of SNRs resulting from fluctuations, reverberation, and peripheral processing. It appears that masker fluctuations and reverberation, both relevant for everyday communication, interact in their effects on binaural unmasking and need to be considered together.
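The explanation above hinges on the distribution of short-time SNRs. A minimal sketch of how such a distribution can be computed from separately available speech and masker waveforms (the 20-ms frame length, the unmasking cutoff, and the toy signals in the usage note are assumptions):

```python
import numpy as np

def short_time_snr(speech, noise, fs, win_ms=20.0):
    """Frame-wise SNR (dB) computed from the clean speech and masker signals."""
    n = int(fs * win_ms / 1000)
    frames = min(len(speech), len(noise)) // n
    s = np.reshape(speech[:frames * n], (frames, n))
    m = np.reshape(noise[:frames * n], (frames, n))
    p_s = np.mean(s**2, axis=1) + 1e-12   # per-frame speech power
    p_m = np.mean(m**2, axis=1) + 1e-12   # per-frame masker power
    return 10.0 * np.log10(p_s / p_m)

def fraction_below(snr_db, cutoff_db=0.0):
    """Share of frames below the SNR where unmasking is assumed effective."""
    return float(np.mean(snr_db < cutoff_db))
```

With a fluctuating masker, the frame-wise SNRs spread out around the long-term SNR, so the share of low-SNR frames (where the abstract's account locates the unmasking benefit) changes with both masker fluctuation and reverberation.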

    Effect of Audibility and Suprathreshold Deficits on Speech Recognition for Listeners With Unilateral Hearing Loss

    OBJECTIVES: We examined the influence of impaired processing (audibility and suprathreshold processes) on speech recognition in cases of sensorineural hearing loss. The influence of differences in central, or top-down, processing was reduced by comparing the performance of both ears in participants with a unilateral hearing loss (UHL). We examined the influence of reduced audibility and suprathreshold deficits on speech recognition in quiet and in noise. DESIGN: We measured speech recognition in quiet and in stationary speech-shaped noise with consonant-vowel-consonant words and digit triplets in groups of adults with UHL (n = 19), normal hearing (n = 15), and bilateral hearing loss (n = 9). By comparing the scores of the unaffected ear (UHL+) and the affected ear (UHL-) in the UHL group, we were able to isolate the influence of peripheral hearing loss from individual top-down factors such as cognition, linguistic skills, age, and sex. RESULTS: Audibility is a very strong predictor of speech recognition in quiet and has a less pronounced influence on speech recognition in noise. We found that, for the current sample of listeners, more speech information is required for UHL- than for UHL+ to achieve the same performance. For digit triplets at 80 dBA, the speech recognition threshold in noise (SRT) for UHL- is on average 5.2 dB signal-to-noise ratio (SNR) poorer than for UHL+. Analysis using the speech intelligibility index (SII) indicates that on average 2.1 dB SNR of this decrease can be attributed to suprathreshold deficits and 3.1 dB SNR to audibility. Furthermore, scores for speech recognition in quiet and in noise for UHL+ are comparable to those of normal-hearing listeners. CONCLUSIONS: Our data show that suprathreshold deficits, in addition to audibility, play a considerable role in speech recognition in noise, even at intensities well above hearing threshold.

    Assessment of speech recognition abilities in quiet and in noise: a comparison between self-administered home testing and testing in the clinic for adult cochlear implant users

    Self-administered speech recognition tests in quiet and in noise at home were compared with the standard tests performed in the clinic. Potential effects of the stimulus presentation mode (loudspeaker or audio cable) and the assessment setting (clinician-administered or self-assessment at home) on test results were investigated. Speech recognition in quiet was assessed using the standard Dutch test with monosyllabic words. Speech recognition in noise was assessed with the digits-in-noise test. Sixteen experienced CI users (aged between 44 and 83 years) participated. No significant difference in speech recognition in quiet was observed between the two presentation modes. Speech recognition in noise was significantly better with the audio cable than with the loudspeaker. There was no significant difference in speech recognition in quiet at 65 dB or in speech recognition in noise between self-assessment at home and testing in the clinic. At 55 dB, speech recognition assessed at home was slightly but significantly better than that assessed in the clinic. The results demonstrate that it is feasible for experienced CI users to perform self-administered speech recognition tests at home. Self-assessment by CI users of speech recognition in quiet and in noise within the home environment could serve as an alternative to tests performed in the clinic.

    The relationship between nonverbal cognitive functions and hearing loss

    Purpose: This study investigated the relationship between hearing loss and memory and attention when nonverbal, visually presented cognitive tests are used. Method: Hearing loss (pure-tone audiometry) and IQ were measured in 30 participants with mild to severe hearing loss. Participants performed cognitive tests of pattern recognition memory, sustained visual attention, and spatial working memory. All cognitive tests were selected from the Cambridge Neuropsychological Test Automated Battery (CANTAB expedio; Cambridge Cognition Ltd., 2002). Regression analyses were performed to examine the relationship between hearing loss and these cognitive measures of memory and attention when controlling for age and IQ. Results: The data indicate that hearing loss was not associated with decreased performance on the memory and attention tests. In contrast, participants with more severe hearing loss made more use of an efficient strategy during performance on the spatial working memory subtest. This result might reflect the more extensive use of working memory in daily life to compensate for the loss of speech information. Conclusions: The authors conclude that the use of nonverbal tests is essential when testing cognitive functions of individuals with hearing loss.