762 research outputs found

    The minimum monitoring signal-to-noise ratio for off-axis signals and its implications for directional hearing aids

    The signal-to-noise ratio (SNR) benefit of hearing aid directional microphones depends on the angle of the listener relative to the target, something that can change drastically and dynamically in a typical group conversation. When a new target signal is significantly off-axis, directional microphones lead to slower target orientation, more complex movements, and more reversals. This raises the question of whether there is an optimal design for directional microphones. In principle, an ideal microphone would provide the user with sufficient directionality to help with speech understanding, but would not attenuate off-axis signals so strongly that orienting to new signals becomes difficult or impossible. We investigated the latter part of this question. To measure the minimum monitoring SNR for reliable orientation to off-axis signals, we measured head-orienting behaviour towards targets of varying SNRs and locations for listeners with mild to moderate bilateral symmetrical hearing loss. Listeners were required to turn and face a female talker in background noise, and movements were tracked using a head-mounted crown and infrared system that recorded yaw within a ring of loudspeakers. The target appeared randomly at ±45, 90, or 135° from the start point. The results showed that as the target SNR decreased from 0 dB to −18 dB, first movement duration and initial misorientation count increased, then fixation error, and finally reversals increased. Increasing the target angle increased movement duration at all SNRs, decreased reversals (above −12 dB target SNR), and had little to no effect on initial misorientations. These results suggest that listeners experience some difficulty orienting towards sources as the target SNR drops below −6 dB, and that if one intends to make a directional microphone that is usable in a moving conversation, then off-axis attenuation should be no more than 12 dB.
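    As a rough illustration of the design constraint described above, the off-axis attenuation of an idealised first-order cardioid microphone can be computed from its polar equation and compared against the suggested 12 dB ceiling. The cardioid pattern is an assumption for illustration; the abstract does not specify a polar pattern.

```python
import math

def cardioid_gain_db(angle_deg: float) -> float:
    """Gain of an ideal first-order cardioid microphone, in dB re on-axis,
    for a source at the given angle (0 deg = on-axis target)."""
    theta = math.radians(angle_deg)
    gain = (1.0 + math.cos(theta)) / 2.0  # cardioid polar pattern
    return 20.0 * math.log10(gain) if gain > 0 else float("-inf")

# Off-axis attenuation at the target angles used in the study.
for angle in (45, 90, 135):
    print(f"{angle:3d} deg: {cardioid_gain_db(angle):6.1f} dB")
```

    Under this pattern, attenuation is only about 1.4 dB at 45° and 6 dB at 90°, but already exceeds 16 dB at 135°, i.e. a full cardioid would violate the suggested 12 dB ceiling at the largest target angle used in the study.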

    Changes in auditory perceptions and cortex resulting from hearing recovery after extended congenital unilateral hearing loss

    Monaural hearing induces auditory system reorganization. Imbalanced input also degrades the time-intensity cues needed for sound localization and signal segregation for listening in noise. While there have been studies of bilateral auditory deprivation and later hearing restoration (e.g. cochlear implants), less is known about unilateral auditory deprivation and subsequent hearing improvement. We investigated the effects of long-term congenital unilateral hearing loss on localization, speech understanding, and cortical organization following hearing recovery. Hearing in the congenitally affected ear of a 41-year-old female improved significantly after stapedotomy and reconstruction. Pre-operative hearing threshold levels showed unilateral, mixed, moderately-severe to profound hearing loss. The contralateral ear had hearing threshold levels within normal limits. Testing was completed prior to, and three and nine months after, surgery. Measurements included sound localization with intensity-roved stimuli and speech recognition in various noise conditions. We also evoked magnetic resonance signals with monaural stimulation of the unaffected ear. Activation magnitudes were determined in core, belt, and parabelt auditory cortex regions via an interrupted single-event design. Hearing improvement following 40 years of congenital unilateral hearing loss resulted in substantially improved sound localization and speech recognition in noise. Auditory cortex also reorganized: contralateral auditory cortex responses increased after hearing recovery, and the extent of activated cortex was bilateral, including a greater portion of the posterior superior temporal plane. Thus, prolonged predominantly monaural stimulation did not prevent auditory system changes consequent to restored binaural hearing. The results support future research on the effects of unilateral auditory deprivation and plasticity, with consideration of length of deprivation, age at hearing correction, and degree and type of hearing loss.

    EVALUATION OF AUDITORY CORTICAL PLASTICITY FROM FIRST AMPLIFICATION TO ONE YEAR OF HEARING AID USE: THE RELATIONSHIP BETWEEN AIDED CORTICAL AUDITORY EVOKED POTENTIALS (ACAEPs) AND SPEECH PERCEPTION OUTCOMES AMONG HEARING-IMPAIRED ADULT PATIENTS

    Over the last decade, aided cortical auditory evoked potentials (ACAEPs) have continued to be a focus of interest due to the lack of adequate tools for objectively assessing cortical auditory activity in response to amplified stimuli. The majority of authors have investigated the direct relationship between behavioral thresholds and ACAEPs and the evolution of ACAEP waves among children with sensorineural hearing loss (SNHL) undergoing rehabilitation. In contrast, scarce data are available regarding changes in ACAEPs over time in adult hearing aid users, particularly in relation to speech perception outcomes. The main goal of this project was to investigate the relationship between ACAEPs and speech perception capability over time in post-lingual SNHL adult patients who were first-time hearing aid users. We hypothesized that, in patients with better speech understanding, a modification of the P1-N1-P2 complex could be expected as a result of neuroplastic changes due to hearing aid amplification. A longitudinal prospective clinical study was conducted on 72 new hearing aid users suffering from symmetrical, sloping SNHL. Patients were assessed at three time points: baseline (T0), 6 months after the initial assessment (T6), and 12 months after the initial assessment (T12). All participants went through the same evaluation protocol, which included pure-tone audiometry, speech audiometry tests, ACAEPs recorded with two different stimuli (1000 Hz and 2000 Hz), and questionnaires assessing hearing aid benefit. Analysis of amplitude values at the three time points demonstrated an increasing tendency for all waves in both experimental conditions (p<0.01). Latencies became shorter from T0 to T12 for each wave with both the 1 kHz and 2 kHz stimuli (p<0.05). Linear regression analysis found that only P2 amplitude showed a statistically significant increase while matrix sentence test (MST) and speech intelligibility threshold (SIT) scores decreased in both experimental conditions, even when the analysis was adjusted for age and daily hearing aid use (p<0.05). The data collected in this study provide new evidence regarding the relationship between ACAEPs and the speech recognition capability of adults who are new hearing aid users. In both experimental conditions, we observed larger P2 amplitudes in patients with better speech perception outcomes. It should be underlined that, even though P2 may reflect auditory processing beyond sensation, its increase could be an expression of neural activity associated with the acquisition process driven by exposure to sounds and speech. The observation that P2 amplitude tended to grow as SIT and MST scores decreased could be a further object of investigation to assess its reliability as a marker of speech perception improvement; it may serve hearing aid dispensers and audiologists as a source of feedback when evaluating listening benefits in hard-to-test patients.

    Non-Quiet Listening for Children with Hearing Loss: An Evaluation of Amplification Needs and Strategies

    The goals of this project were to identify and evaluate strategies for the non-quiet listening needs of children with hearing loss who wear hearing instruments. Three studies were undertaken: 1) an exploration of the listening environments and situations experienced by children from daycare to high school during the school day; 2) a comparative evaluation of consonant recognition, sentence recognition in noise, and loudness perception with the Desired Sensation Level version 5 (DSL v5) Quiet and Noise prescriptions; and 3) a comparative evaluation of sentence recognition in noise and loudness perception with the DSL v5 Quiet and Noise prescriptions paired with the hearing instrument features of directional microphone and digital noise reduction (DNR) technology. Results of the first study showed that children experience a wide variety of listening environments and situations, most of which can be classified as “non-quiet”. This finding confirms the need for the development of processing strategies for children listening in non-quiet environments and situations. The second study showed that the DSL v5 Noise prescription does not negatively impact consonant recognition except at low levels, with no significant differences in sentence recognition in noise. Improved comfort for loud sounds was afforded by DSL v5 Noise compared to DSL v5 Quiet. The third study showed that the optimal combination of prescription and hearing instrument features tested was DSL v5 Noise with a directional microphone. The results of these three studies offer a starting point for the development of a protocol for providing a non-quiet listening strategy for children who wear hearing instruments. This is a significant contribution given the currently discrepant guidelines across countries and pediatric audiology organizations.

    Coding Strategies for Cochlear Implants Under Adverse Environments

    Cochlear implants are electronic prosthetic devices that restore partial hearing in patients with severe to profound hearing loss. Although most coding strategies have significantly improved the perception of speech in quiet listening conditions, limitations remain on speech perception under adverse environments such as background noise, reverberation, and band-limited channels. We propose strategies that improve the intelligibility of speech transmitted over telephone networks, reverberated speech, and speech in the presence of background noise. For telephone-processed speech, we examine the effects of adding low-frequency and high-frequency information to the band-limited telephone speech. Four listening conditions were designed to simulate the receiving frequency characteristics of telephone handsets. Results indicated improvement in cochlear implant and bimodal listening when telephone speech was augmented with high-frequency information; this study therefore supports the design of algorithms that extend the bandwidth towards higher frequencies. The results also indicated added benefit from hearing aids for bimodal listeners in all four types of listening conditions. Speech understanding in acoustically reverberant environments is always a difficult task for hearing-impaired listeners. Reverberated sound consists of direct sound, early reflections, and late reflections; late reflections are known to be detrimental to speech intelligibility. In this study, we propose a reverberation suppression strategy based on spectral subtraction (SS) to suppress the reverberant energy from late reflections. Results from listening tests for two reverberant conditions (RT60 = 0.3 s and 1.0 s) indicated significant improvement when stimuli were processed with the SS strategy. The proposed strategy operates with little to no prior information on the signal and the room characteristics and can therefore potentially be implemented in real-time CI speech processors. For speech in background noise, we propose a mechanism underlying the contribution of harmonics to the benefit of electroacoustic stimulation in cochlear implants. The proposed strategy is based on harmonic modeling and uses a synthesis-driven approach to synthesize the harmonics in voiced segments of speech. Based on objective measures, results indicated improvement in speech quality. This study warrants further work on the development of algorithms to regenerate the harmonics of voiced segments in the presence of noise.
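    The late-reflection suppression described above can be sketched as a generic magnitude spectral subtraction pass. This is an illustrative reconstruction, not the authors' implementation: the frame size, hop, oversubtraction factor `alpha`, and spectral floor are assumed values.

```python
import numpy as np

def spectral_subtract(x, power_est, frame=256, hop=128, alpha=2.0, floor=0.1):
    """Magnitude spectral subtraction: subtract an estimated interference
    power spectrum (e.g. late-reflection energy) from each frame, keeping
    the noisy phase, then overlap-add the frames back together."""
    win = np.hanning(frame)
    out = np.zeros(len(x))
    norm = np.zeros(len(x))
    for start in range(0, len(x) - frame + 1, hop):
        seg = x[start:start + frame] * win
        spec = np.fft.rfft(seg)
        mag2 = np.abs(spec) ** 2
        # Oversubtract the interference estimate, with a spectral floor
        # to limit musical-noise artifacts.
        clean2 = np.maximum(mag2 - alpha * power_est, floor * mag2)
        spec_clean = np.sqrt(clean2) * np.exp(1j * np.angle(spec))
        out[start:start + frame] += np.fft.irfft(spec_clean) * win
        norm[start:start + frame] += win ** 2
    return out / np.maximum(norm, 1e-8)
```

    In a reverberation-suppression setting, `power_est` would be an estimate of the late-reflection power spectrum (length `frame // 2 + 1`); estimating it blindly from the reverberant signal is the hard part that the proposed strategy addresses.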

    Binary Masking &amp; Speech Intelligibility


    Measuring listening effort using physiological, behavioral and subjective methods in normal hearing subjects: Effect of signal to noise ratio and presentation level

    The main objective of the study was to compare the effectiveness of pupillometry, working memory, and a subjective rating scale (the physiological, behavioral, and subjective measures of listening effort, respectively) at different signal-to-noise ratios (SNRs) and presentation levels when administered together. Eleven young normal-hearing individuals with a mean age of 21.7 years (SD = 1.9 years) participated in the study. HINT sentences were used for the speech-perception-in-noise task. Listening effort was quantified using peak pupil dilation, working memory, working memory difference, and subjective ratings of listening and recall effort. Ratings of perceived performance, frustration level, and disengagement were also obtained. Using a repeated-measures design, we examined how SNR (+6 dB to -10 dB) and presentation level (50 and 65 dB SPL) affect listening effort. Tobii eye-tracker software and custom MATLAB programming were used for stimulus presentation and data analysis. SNR had a significant effect on peak pupil dilation, working memory, working memory difference, and subjective rating of listening effort. Speech intelligibility correlated significantly with all of the listening effort measures except working memory difference. The listening effort measures did not correlate significantly with each other when controlled for speech intelligibility, indicating different underlying constructs. When effect sizes were compared, working memory (η2p = 0.98) was most sensitive to the SNR effect, followed by subjective rating of listening effort (η2p = 0.84), working memory difference (η2p = 0.52), and peak pupil dilation (η2p = 0.40). Only peak pupil dilation showed a significant effect of presentation level. The physiological, behavioral, and subjective measures of listening effort thus have different underlying constructs, and their sensitivity to the effects of SNR and presentation level varies. Individual data trend analysis showed different breakdown points for the physiological, behavioral, and subjective measures. The relationships among listening effort measures across different SNRs, and how these relationships change in persons with hearing loss, need further exploration.
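    The η2p (partial eta squared) values reported above are a standard repeated-measures effect size. For reference, it can be computed either from sums of squares or from an ANOVA F statistic and its degrees of freedom; the numeric inputs below are generic illustrations, not this study's data.

```python
def partial_eta_squared(ss_effect: float, ss_error: float) -> float:
    """Proportion of variance attributable to an effect, excluding
    variance explained by other effects: SS_effect / (SS_effect + SS_error)."""
    return ss_effect / (ss_effect + ss_error)

def eta2p_from_f(f_value: float, df_effect: int, df_error: int) -> float:
    """Equivalent computation from an ANOVA F statistic and its
    numerator/denominator degrees of freedom."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

print(partial_eta_squared(49.0, 1.0))  # 0.98, a very large effect
print(eta2p_from_f(9.0, 1, 9))         # 0.5
```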

    The impact of directional listening on perceived localization ability

    An important purpose of hearing is to aid communication. Because hearing in noise is of primary importance to individuals who seek remediation for hearing impairment, it has been a principal objective of advances in technology, and directional microphone technology is the most promising way to address this problem. Another important role of hearing is localization, allowing one to sense one's environment and feel safe and secure. The properties of the listening environment that are altered by directional microphone technology have the potential to significantly impair localization ability. The purpose of this investigation was to determine the impact of listening with directional microphone technology on individuals' self-perceived level of localization disability and concurrent handicap. Participants included 57 unaided subjects, later randomly assigned to one of three aided groups of 19 individuals each, who used omnidirectional-microphone-only amplification, directional-microphone-only amplification, or toggle-switch-equipped hearing aids that allowed user discretion over the directional microphone properties of the instruments. Comparisons were made between the unaided group responses and those of the subjects after they had worn amplification for three months. Additionally, comparisons were made between the directional-microphone-only group responses and each of the other two aided groups' responses. No significant differences were found. Hearing aids with omnidirectional microphones, directional-only microphones, and those equipped with a toggle switch neither increased nor decreased the self-perceived ability to tell the location of sound or the level of withdrawal from situations where localization ability was a factor. Concurrently, directional-microphone-only technology did not significantly worsen or improve these factors as compared to the other two microphone configurations. Future research should include objective measures of localization ability using the same paradigm employed herein. If the use of directional microphone technology has an objective impact on localization, clinicians might be advised to counsel their patients to be careful moving in their environment even though they do not perceive a problem with localization. If ultimately no significant differences in either objective or subjective measures are found, then concern over decreases in quality of life and safety with directional microphone use need no longer be considered.

    The Effect of Electrode Placement on Cochlear Implant Function and Outcomes

    Cochlear implants have been an effective treatment for profound sensorineural hearing loss in those who do not benefit from traditional hearing aids. Advances in surgical technique and electrode design allow for preservation of residual hearing. This has allowed cochlear implant candidacy criteria to expand to those with good low-frequency hearing and severe high-frequency hearing loss above 1000 Hz with poor speech discrimination. With a less traumatic surgical approach, low-frequency hearing can be preserved, combining low-frequency acoustic perception with mid- to high-frequency electric perception in electro-acoustic stimulation (EAS). Despite the improvements in cochlear implantation, outcomes continue to vary significantly from one user to another. The variance in performance may potentially be due to the placement of the electrode within the cochlea. This study examined patient performance in relation to insertion depth, age, pitch perception, and electrophysiologic measures. Patients with residual hearing were included, and outcomes were assessed via speech perception tests. Radiographic imaging confirmed insertion depth, and the change in pure tone average was compared to this depth. Hearing preservation was further accomplished in two patients who presented with residual mid- and high-frequency hearing: custom atraumatic electrodes were inserted, and hearing was preserved across all frequencies. These cases allowed electric and acoustic pitch-matching experiments to be conducted in the same ear, providing information on where in the cochlea the implant is actually stimulating. Several electric-acoustic pitch comparisons were run at sites along the cochlea at varying rates of stimulation. Place-to-pitch mismatch varied depending on the area within the cochlea. Lastly, objective measures were used in an attempt to account for the variance in outcomes. Two main factors govern implant performance: 1) the ability of the processor to effectively deliver the electrical signal to the ear, and 2) the patient's ability to process the information. Peripheral mechanisms were analyzed with the electrically evoked compound action potential (ECAP) and its amplitude growth function. The slope of the amplitude growth function was measured at the corresponding electrodes and compared to speech discrimination scores. Steeper slopes correlated with better word understanding abilities. For further insight into the health of the cochlea, age effects were compared to hearing preservation. Pure tone averages were calculated before and after surgery. Post-surgical pure tone averages were more elevated with increasing age, suggesting that the elderly may be at greater risk for loss of residual hearing compared to the general population.
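    The amplitude growth function (AGF) slope discussed above is typically obtained by a linear fit of ECAP amplitude against stimulation level. A minimal sketch of that fit, using made-up amplitude values that are not data from this study:

```python
import numpy as np

# Hypothetical ECAP amplitudes (uV) at increasing stimulation levels
# (current-level units); values are illustrative only.
levels = np.array([180.0, 190.0, 200.0, 210.0, 220.0])
amplitudes = np.array([50.0, 120.0, 210.0, 290.0, 380.0])

# Least-squares linear fit: the first coefficient is the AGF slope.
slope, intercept = np.polyfit(levels, amplitudes, 1)
print(f"AGF slope: {slope:.2f} uV per current-level step")
```

    Comparing such slopes across electrodes (steeper = stronger neural response growth) against speech discrimination scores is the analysis the abstract describes.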

    Deep Learning-based Speech Enhancement for Real-life Applications

    Speech enhancement is the process of improving speech quality and intelligibility by suppressing noise. Inspired by the outstanding performance of deep learning approaches to speech enhancement, this thesis aims to add to this research area through the following contributions. The thesis presents an experimental analysis of different deep neural networks for speech enhancement, comparing their performance and investigating factors and approaches that improve it. The outcomes of this analysis facilitate the development of better speech enhancement networks in this work. Moreover, this thesis proposes a new deep convolutional denoising autoencoder-based speech enhancement architecture, in which strided and dilated convolutions are applied to improve performance while keeping network complexity to a minimum. Furthermore, a two-stage speech enhancement approach is proposed that reduces distortion by performing a speech denoising first stage in the frequency domain, followed by a second speech reconstruction stage in the time domain. This approach was shown to reduce speech distortion, leading to better overall quality of the processed speech in comparison to state-of-the-art speech enhancement models. Finally, the work presents two deep neural network speech enhancement architectures for hearing aids and automatic speech recognition, as two real-world speech enhancement applications. A smart speech enhancement architecture was proposed for hearing aids: an integrated hearing aid and alert system that enhances both speech and important emergency sounds, eliminating only undesired noise. The results show that this idea is applicable to improving the performance of hearing aids. The architecture proposed for automatic speech recognition, on the other hand, addresses the mismatch between speech enhancement and automatic speech recognition systems, leading to a significant reduction in the word error rate of a baseline automatic speech recognition system provided by Intelligent Voice for research purposes. In conclusion, the results presented in this thesis show promising performance for the proposed architectures in real-time speech enhancement applications.
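    As an illustration of why dilated convolutions suit low-complexity enhancement networks: stacking layers with exponentially increasing dilation grows the receptive field exponentially in span while adding parameters only linearly. The sketch below (plain NumPy, not the thesis code) shows the operation and the receptive-field arithmetic.

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """'Same'-padded 1-D dilated convolution (cross-correlation form):
    the taps of w are spaced `dilation` samples apart."""
    k = len(w)
    span = dilation * (k - 1)
    xp = np.pad(x, (span // 2, span - span // 2))
    return np.array([sum(w[j] * xp[i + j * dilation] for j in range(k))
                     for i in range(len(x))])

def receptive_field(kernel_size, dilations):
    """Receptive field of a stack of dilated conv layers:
    each layer adds dilation * (kernel_size - 1) samples."""
    rf = 1
    for d in dilations:
        rf += d * (kernel_size - 1)
    return rf

# Four 3-tap layers with dilations 1, 2, 4, 8 already see 31 samples.
print(receptive_field(3, [1, 2, 4, 8]))  # 31
```

    The same four layers with dilation fixed at 1 would see only 9 samples, which is the complexity-versus-context trade-off the architecture above exploits.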