Designing Hearing Aid Technology to Support Benefits in Demanding Situations, Part 2

Abstract

In recent years, the main focus of the hearing aid industry has been on optimizing sound in normal, everyday situations. State-of-the-art wireless hearing aids typically offer a vast range of sophisticated features (eg, multichannel processing, adaptive directionality, feedback canceling, noise reduction, frequency shifting, multistage compression strategies, etc) that are very efficient in optimizing sound quality at normal input levels. However, there are still opportunities for improvement in other, less frequent environments, such as those with loud sound levels. This paper reports a recent trial that tested the efficacy of a new wireless hearing aid in a situation characterized by loud inputs.

As discussed in Part 1 of this article in the March 2013 HR,1 until now even highly sophisticated wireless hearing aids have not been very adept at handling loud input levels. As a rule, sound levels exceeding 100 dB SPL are distorted because the analog-to-digital (A/D) converter in the hearing aid has an upper limit of about 100 dB SPL. If the input signal exceeds the A/D converter's input range (ie, its upper limit), the converter is overloaded, resulting in highly perceptible distortion (clipping), typically perceived as a "crackling" or "raspy" sound quality by hearing aid wearers. Once distortion is introduced into the signal, it is impossible to improve the sound quality at a later stage in the signal processing. An input range of approximately 100 dB SPL is sufficient if speech perception at normal levels is the only concern, since the loudest speech components are usually within 85-90 dB SPL, even at a shout. However, other types of input are much more intense. For instance, music played at a medium to loud volume level may easily exceed 100 dB SPL.2,3

As an alternative to allowing clipping distortion, some hearing aid manufacturers employ a technique known as Automatic Gain Control, Input (AGCi), also called input compression. Basically, AGCi constantly compresses the input signal to ensure that it remains below the distortion limit of the A/D converter. A major drawback of this technique is that, while it eliminates clipping artifacts, it can introduce dynamic artifacts of its own, including the smearing of intensity cues, "pumping," and a "dull" sound quality.

While there is a solid body of literature describing the detrimental effects of peak clipping and automatic gain control (AGC) on speech perception and subjectively perceived sound quality,4-8 the authors found no studies that directly compare the artifacts introduced by peak clipping with those introduced by AGCi at the A/D conversion stage. Even though it is difficult to decide which technique is superior based on the available literature, both techniques have been found to have a relatively strong detrimental effect on speech comprehension and perceived sound quality, suggesting that the best strategy is to avoid both, if at all possible.
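To make the trade-off concrete, the sketch below contrasts the two behaviors described above: hard clipping when a loud signal exceeds the A/D converter's input range, versus a simple feed-forward AGCi-style compressor that reduces gain to keep the signal under that limit. This is an illustrative simulation only, not the processing used in the hearing aid under test; the sample rate, normalized A/D limit, and attack/release time constants are assumptions chosen for demonstration.

```python
import numpy as np

FS = 16_000        # sample rate in Hz (assumed for illustration)
AD_LIMIT = 1.0     # normalized full-scale A/D input (stands in for ~100 dB SPL)

def hard_clip(x, limit=AD_LIMIT):
    """Overloaded A/D stage: anything beyond the converter's range is clipped,
    producing the 'crackling' or 'raspy' distortion described in the text."""
    return np.clip(x, -limit, limit)

def simple_agci(x, limit=AD_LIMIT, attack_ms=5.0, release_ms=50.0, fs=FS):
    """Toy feed-forward AGCi: track the signal envelope and apply just enough
    gain reduction to stay under the A/D limit. The slowly varying gain is the
    source of the 'pumping' or 'dull' artifacts mentioned in the text."""
    attack = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    release = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = np.empty_like(x)
    for n, mag in enumerate(np.abs(x)):
        coeff = attack if mag > env else release
        env = coeff * env + (1.0 - coeff) * mag          # envelope follower
        gain = min(1.0, limit / env) if env > 0 else 1.0  # limit-only compression
        out[n] = x[n] * gain
    return out

if __name__ == "__main__":
    # A loud 1 kHz tone that exceeds the converter's input range by 6 dB.
    t = np.arange(FS) / FS
    loud_input = 2.0 * np.sin(2 * np.pi * 1000 * t)
    print("peak after clipping:", np.max(np.abs(hard_clip(loud_input))))
    print("peak after AGCi:    ", np.max(np.abs(simple_agci(loud_input))))
```

Both paths keep the signal within the converter's range, but each introduces its own audible artifact, which is why the article argues for avoiding both approaches where possible.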
