17 research outputs found

    Perceived Sound Quality Dimensions Influencing Frequency-Gain Shaping Preferences for Hearing Aid-Amplified Speech and Music

    Hearing aids are typically fitted using speech-based prescriptive formulae to make speech more intelligible. Individual preferences may deviate from these prescriptions and may also vary with signal type. It is important to consider what motivates listener preferences and how those preferences can inform hearing aid processing so that assistive listening devices can best be tailored for hearing aid users. This study therefore explored preferred frequency-gain shaping relative to prescribed gain for speech and music samples. Preferred gain relative to individually prescribed amplification was determined for 22 listeners with mild sloping to moderately severe hearing loss while they listened to samples of male speech, female speech, pop music, and classical music across low-, mid-, and high-frequency bands. Samples were amplified using a fast-acting compression hearing aid simulator. Preferences were determined using an adaptive paired-comparison procedure. Listeners then rated speech and music samples processed with the prescribed and preferred shaping on several sound quality descriptors. On average, low-frequency gain was increased significantly relative to the prescription for all stimuli, most substantially for pop and classical music. High-frequency gain was decreased significantly for pop music and male speech. Gain adjustments, particularly in the mid- and high-frequency bands, varied considerably between listeners. Music preferences were driven by changes in perceived fullness and sharpness, whereas speech preferences were driven by changes in perceived intelligibility and loudness. The results generally support the use of prescribed amplification to optimize speech intelligibility and of alternative amplification for music listening for most listeners.
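
    The band-level adjustments described above can be illustrated with a short, hedged Python sketch (not the authors' code): it simply adds listener-preferred low-, mid-, and high-band gain offsets to a prescribed frequency-gain curve. The band edges, gain values, and function name are illustrative assumptions.

    # Hedged sketch, not the study's implementation: add per-band preferred
    # gain offsets (dB) to a prescribed gain curve. Band edges are assumptions.
    import numpy as np

    def apply_band_offsets(freqs_hz, prescribed_gain_db, offsets_db,
                           band_edges_hz=(750.0, 3000.0)):
        """Return the prescribed gain curve with low/mid/high offsets applied."""
        adjusted = np.array(prescribed_gain_db, dtype=float)
        low_edge, high_edge = band_edges_hz
        for i, f in enumerate(freqs_hz):
            if f < low_edge:
                adjusted[i] += offsets_db["low"]
            elif f < high_edge:
                adjusted[i] += offsets_db["mid"]
            else:
                adjusted[i] += offsets_db["high"]
        return adjusted

    # Example: a music-style preference with extra low-frequency gain and
    # slightly reduced high-frequency gain relative to the prescription.
    freqs = np.array([250, 500, 1000, 2000, 4000, 6000], dtype=float)
    prescribed = np.array([12, 15, 20, 25, 30, 28], dtype=float)  # dB, illustrative
    preferred = apply_band_offsets(freqs, prescribed,
                                   {"low": 4.0, "mid": 0.0, "high": -3.0})
    print(preferred)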

    On the role of head-related transfer function spectral notches in the judgement of sound source elevation

    Presented at the 2nd International Conference on Auditory Display (ICAD), Santa Fe, New Mexico, November 7-9, 1994. Using a simple model of sound source elevation judgment, an attempt was made to predict two aspects of listeners' localization behavior from measurements of the positions of the primary high-frequency notch in their head-related transfer functions. These aspects were: (1) the scatter in elevation judgments and (2) possible biases in perceived elevation introduced by front-back and back-front reversals. Although significant differences were found among the notch-frequency patterns for individual subjects, the model was not capable of predicting differences in judgment behavior. This suggests that a simple model of elevation perception based on a single spectral notch frequency is inadequate.
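
    As a rough illustration of the kind of single-notch model evaluated in the paper, the hedged Python sketch below maps a primary high-frequency notch frequency onto an elevation judgment with a simple monotonic (linear) rule and compares front versus rear measurements for a possible bias. The notch values, frequency range, and mapping are illustrative assumptions, not the paper's parameters.

    # Hedged sketch, not the paper's model: map the primary high-frequency
    # HRTF notch to an elevation estimate with a linear rule, clamped to range.
    import numpy as np

    def estimate_elevation_deg(notch_hz, notch_range_hz=(6000.0, 11000.0),
                               elev_range_deg=(-30.0, 60.0)):
        """Interpolate elevation (deg) from notch frequency (Hz); values are assumed."""
        f0, f1 = notch_range_hz
        e0, e1 = elev_range_deg
        t = np.clip((np.asarray(notch_hz, dtype=float) - f0) / (f1 - f0), 0.0, 1.0)
        return e0 + t * (e1 - e0)

    # Example: difference between judgments predicted from rear vs. front
    # notch measurements, i.e., a crude front-back bias estimate.
    front_notches = np.array([6500.0, 8000.0, 9500.0])
    rear_notches = np.array([7000.0, 8200.0, 9000.0])
    print(estimate_elevation_deg(rear_notches) - estimate_elevation_deg(front_notches))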

    How and Why Does Spatial-Hearing Ability Differ among Listeners? What Is the Role of Learning and Multisensory Interactions?

    Spatial-hearing ability has been found to vary widely across listeners. A survey of the existing auditory-space perception literature suggests that three main types of factors may account for this variability:
    - physical factors, e.g., acoustical characteristics related to sound-localization cues;
    - perceptual factors, e.g., sensory/cognitive processing, perceptual learning, multisensory interactions;
    - methodological factors, e.g., differences in stimulus presentation methods across studies.
    However, the extent to which these factors, and perhaps other still unidentified ones, actually contribute to the observed variability in spatial hearing across individuals with normal hearing or within special populations (e.g., hearing-impaired listeners) remains largely unknown. Likewise, the role of perceptual learning and multisensory interactions in the emergence of a multimodal but unified representation of “auditory space” is still an active topic of research. A better characterization and understanding of the determinants of inter-individual variability in spatial hearing, and of its relationship with perceptual learning and multisensory interactions, would have numerous benefits. In particular, it would enhance the design of rehabilitative devices and of human-machine interfaces involving auditory or multimodal space perception, such as virtual auditory/multimodal displays in aeronautics, or navigational aids for the visually impaired. For this Research Topic, we have considered manuscripts that:
    - present new methods, or review existing methods, for the study of inter-individual differences;
    - present new data, or review existing data, concerning acoustical features relevant for explaining inter-individual differences in sound-localization performance;
    - present new, or review existing, psychophysical or neurophysiological findings concerning spatial hearing, auditory perceptual learning, and/or multisensory interactions in humans (normal or impaired, young or older listeners) or other species;
    - discuss the influence of inter-individual differences on the design and use of assistive listening devices (rehabilitation) or human-machine interfaces involving spatial hearing or multimodal perception of space (ergonomics).

    Decoding auditory spatial and emotional information encoding using multivariate versus univariate techniques

    Emotion can have diverse effects on behaviour and perception, modulating function in some circumstances and having little effect in others. Recently, it was identified that part of the heterogeneity of emotional effects could be due to a dissociable representation of emotion in dual-pathway models of sensory processing. Our previous fMRI experiment, using traditional univariate analyses, showed that emotion modulated processing in the auditory 'what' but not the 'where' processing pathway. The current study further investigates this dissociation using the more recently emerging multi-voxel pattern analysis (MVPA) searchlight approach. While undergoing fMRI, participants localized sounds of varying emotional content. A searchlight MVPA was conducted to identify activity patterns predictive of sound location and/or emotion. Relative to the prior univariate analysis, MVPA indicated larger overlapping spatial and emotional representations of sound within early secondary regions associated with auditory localization. However, consistent with the univariate analysis, these two dimensions were increasingly segregated in late secondary and tertiary regions of the auditory processing streams. These results, while complementary to our original univariate analyses, highlight the utility of multiple analytic approaches for neuroimaging, particularly for neural processes with known representations dependent on population coding.
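
    For readers unfamiliar with the searchlight technique, the hedged Python sketch below shows the general idea rather than the study's pipeline: for every voxel, the voxels within a small radius form the feature set for a cross-validated linear classifier that decodes a trial label such as sound location or emotion. The array shapes, radius, and toy data are illustrative assumptions.

    # Hedged sketch of a searchlight MVPA loop; not the study's pipeline.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def searchlight_accuracy(data, labels, coords, radius=2.0, cv=5):
        """data: (n_trials, n_voxels); coords: (n_voxels, 3) voxel coordinates."""
        accuracy = np.zeros(coords.shape[0])
        for v in range(coords.shape[0]):
            # Voxels within `radius` of the current centre define the sphere.
            sphere = np.linalg.norm(coords - coords[v], axis=1) <= radius
            scores = cross_val_score(SVC(kernel="linear"), data[:, sphere],
                                     labels, cv=cv)
            accuracy[v] = scores.mean()
        return accuracy  # one decoding accuracy per searchlight centre

    # Toy example: 40 trials, 50 voxels, binary labels (e.g., left vs. right).
    rng = np.random.default_rng(0)
    data = rng.normal(size=(40, 50))
    labels = np.repeat([0, 1], 20)
    coords = rng.integers(0, 10, size=(50, 3)).astype(float)
    print(searchlight_accuracy(data, labels, coords).round(2))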

    Audiological outcome measures with the BONEBRIDGE transcutaneous bone conduction hearing implant: impact of noise, reverberation and signal processing features

    Objective: To assess the performance of an active transcutaneous implantable bone-conduction device (TI-BCD), and to evaluate the benefit of the device's digital signal processing (DSP) features in challenging listening environments. Design: Participants were tested at 1 and 3 months post-activation of the TI-BCD. At each session, aided and unaided phoneme perception was assessed using the Ling-6 test. Speech reception thresholds (SRTs) and quality ratings of speech and music samples were collected in noisy and reverberant environments, with and without the DSP features. Self-assessment of device performance was obtained using the Abbreviated Profile of Hearing Aid Benefit (APHAB) questionnaire. Study sample: Six adults with conductive or mixed hearing loss. Results: Average SRTs were 2.9 and 12.3 dB in the low- and high-reverberation environments, respectively, and improved to −1.7 and 8.7 dB, respectively, with the DSP features enabled. In addition, speech quality ratings improved by 23 points with the DSP features when averaged across all environmental conditions. Improvement scores on the APHAB scales revealed a statistically significant aided benefit. Conclusions: Noise and reverberation significantly impacted speech recognition performance and perceived sound quality. The DSP features (directional microphone processing and adaptive noise reduction) significantly enhanced subjects' performance in these challenging listening environments.
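
    The reported averages can be summarised with a small, hedged Python snippet; the SRT values below mirror the figures quoted in the abstract, while the dictionary layout and variable names are illustrative assumptions.

    # Hedged sketch: summarise the DSP benefit in SRT (lower SRT = better).
    # SRT averages are taken from the abstract; the structure is illustrative.
    srt_db = {
        "low reverberation":  {"dsp_off": 2.9,  "dsp_on": -1.7},
        "high reverberation": {"dsp_off": 12.3, "dsp_on": 8.7},
    }

    for env, srt in srt_db.items():
        benefit = srt["dsp_off"] - srt["dsp_on"]
        print(f"{env}: DSP benefit = {benefit:.1f} dB")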