
    Predicting Speech Recognition using the Speech Intelligibility Index (SII) for Cochlear Implant Users and Listeners with Normal Hearing

    Although the AzBio test is well validated, has effective standardization data available, and is highly recommended for cochlear implant (CI) evaluation, no attempt had been made to derive a Frequency Importance Function (FIF) for its stimuli. In the first phase of this dissertation, we derived FIFs for the AzBio sentence lists using listeners with normal hearing, applying the traditional procedures described by Studebaker and Sherbecoe (1991). Fifteen participants with normal hearing listened to a large number of AzBio sentences that were high- and low-pass filtered in speech-spectrum-shaped noise at various signal-to-noise ratios. Frequency weights for the AzBio sentences were greatest in the 1.5 to 2 kHz region, as is the case with other speech materials. A cross-procedure comparison was conducted between the traditional procedure (Studebaker and Sherbecoe, 1991) and the nonlinear optimization procedure (Kates, 2013). Subsequent analyses related speech recognition scores for the AzBio sentences to the Speech Intelligibility Index (SII). Our findings provide empirically derived FIFs for the AzBio test that can be used in future studies. It is anticipated that the accuracy of predicting SIIs for CI patients will improve when these derived FIFs are used for the AzBio test. In the second study, the SII was calculated for CI recipients to investigate whether the SII is an effective tool for predicting speech perception performance in a CI population. A total of fifteen adult CI users participated. The FIFs obtained from the first study were used to compute the SII for these CI listeners. The obtained SIIs were compared with predicted SIIs using a transfer function curve derived from the first study. Due to the considerably poor hearing and large individual variability in performance in the CI population, the SII failed to predict speech perception performance for this group.
Other predictive factors that have been associated with speech perception performance were also examined using a multiple regression analysis. Gap detection thresholds and duration of deafness were found to be significant predictors. These predictors and SIIs are discussed in relation to speech perception performance in CI users.
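The SII calculation at the heart of this abstract can be sketched as a band-importance-weighted sum of audibilities, in the spirit of ANSI S3.5. The band importances, levels, and thresholds below are illustrative placeholders, not the FIF values derived in the dissertation:

```python
def band_audibility(speech_db, noise_db, threshold_db):
    """Audibility of one frequency band, clipped to [0, 1].

    Uses the common (SNR + 15) / 30 mapping, where the effective
    noise floor is whichever is higher: the noise level or the
    listener's hearing threshold.
    """
    snr = speech_db - max(noise_db, threshold_db)
    return min(max((snr + 15.0) / 30.0, 0.0), 1.0)

def speech_intelligibility_index(importances, speech_db, noise_db, thresholds_db):
    """SII = sum over bands of (frequency importance x audibility)."""
    assert abs(sum(importances) - 1.0) < 1e-6, "importances must sum to 1"
    return sum(
        fi * band_audibility(s, n, t)
        for fi, s, n, t in zip(importances, speech_db, noise_db, thresholds_db)
    )

# Illustrative 4-band example (placeholder numbers, not the AzBio FIF):
fif = [0.20, 0.35, 0.30, 0.15]      # importance weights, sum to 1
speech = [60.0, 62.0, 58.0, 50.0]   # band speech levels, dB
noise = [45.0, 40.0, 55.0, 52.0]    # band noise levels, dB
thresh = [20.0, 20.0, 25.0, 30.0]   # hearing thresholds, dB
print(round(speech_intelligibility_index(fif, speech, noise, thresh), 3))  # -> 0.795
```

With an empirically derived FIF in place of the placeholder weights, the same weighted sum yields material-specific SII predictions, which is what makes the derived AzBio FIFs reusable in later studies.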

    Predicting Speech Intelligibility

    Hearing impairment, and specifically sensorineural hearing loss, is an increasingly prevalent condition, especially amongst the ageing population. It occurs primarily as a result of damage to hair cells that act as sound receptors in the inner ear and causes a variety of hearing perception problems, most notably a reduction in speech intelligibility. Accurate diagnosis of hearing impairments is a time-consuming process and is complicated by the reliance on indirect measurements based on patient feedback due to the inaccessible nature of the inner ear. The challenges of designing hearing aids to counteract sensorineural hearing losses are further compounded by the wide range of severities and symptoms experienced by hearing impaired listeners. Computer models of the auditory periphery have been developed, based on phenomenological measurements from auditory-nerve fibres using a range of test sounds and varied conditions. It has been demonstrated that auditory-nerve representations of vowels in normal and noise-damaged ears can be ranked by a subjective visual inspection of how the impaired representations differ from the normal. This thesis seeks to expand on this procedure to use full word tests rather than single vowels, and to replace manual inspection with an automated approach using a quantitative measure. It presents a measure that can predict speech intelligibility in a consistent and reproducible manner. This new approach has practical applications as it could allow speech-processing algorithms for hearing aids to be objectively tested in early-stage development without having to resort to extensive human trials. Simulated hearing tests were carried out by substituting real listeners with the auditory model. A range of signal processing techniques were used to measure the model's auditory-nerve outputs by presenting them spectro-temporally as neurograms.
A neurogram similarity index measure (NSIM) was developed that allowed the impaired outputs to be compared to a reference output from a normal hearing listener simulation. A simulated listener test was developed, using standard listener test material, and was validated for predicting normal hearing speech intelligibility in quiet and noisy conditions. Two types of neurograms were assessed: temporal fine structure (TFS), which retained spike timing information; and average discharge rate or temporal envelope (ENV). Tests were carried out to simulate a wide range of sensorineural hearing losses and the results were compared to real listeners' unaided and aided performance. Simulations to predict the speech intelligibility performance of the NAL-RP and DSL 4.0 hearing aid fitting algorithms were undertaken. The NAL-RP hearing aid fitting algorithm was adapted using a chimaera sound algorithm which aimed to improve the TFS speech cues available to aided hearing impaired listeners. NSIM was shown to quantitatively rank neurograms with better performance than relative mean squared error and other similar metrics. Simulated performance intensity functions predicted speech intelligibility for normal and hearing impaired listeners. The simulated listener tests demonstrated that NAL-RP and DSL 4.0 performed with similar speech intelligibility restoration levels. Using NSIM and a computational model of the auditory periphery, speech intelligibility can be predicted for both normal and hearing impaired listeners, and novel hearing aids can be rapidly prototyped and evaluated prior to real listener tests.
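The NSIM described above is derived from the image-quality measure SSIM, combining an intensity term and a structure term. A minimal sketch follows, computed globally over a whole neurogram rather than in the small weighted windows used in practice; the toy neurogram dimensions and constants `c1`/`c2` are illustrative assumptions:

```python
import numpy as np

def nsim(ref, deg, c1=0.01, c2=0.03):
    """Simplified Neurogram Similarity Index Measure (NSIM).

    Multiplies an SSIM-style intensity term by a structure term.
    Identical neurograms score 1; degraded ones score lower.
    """
    ref = np.asarray(ref, dtype=float)
    deg = np.asarray(deg, dtype=float)
    mu_r, mu_d = ref.mean(), deg.mean()
    sd_r, sd_d = ref.std(), deg.std()
    cov = ((ref - mu_r) * (deg - mu_d)).mean()
    intensity = (2 * mu_r * mu_d + c1) / (mu_r**2 + mu_d**2 + c1)
    structure = (cov + c2) / (sd_r * sd_d + c2)
    return intensity * structure

rng = np.random.default_rng(0)
ref = rng.random((30, 100))                  # toy neurogram: 30 bands x 100 time bins
noisy = ref + 0.2 * rng.random((30, 100))    # a "degraded" version
print(round(nsim(ref, ref), 6))              # -> 1.0
```

Ranking candidate hearing-aid settings then reduces to comparing each aided, impaired neurogram against the normal-hearing reference and preferring the higher NSIM.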

    Aerospace Medicine and Biology: A continuing bibliography with indexes (supplement 141)

    This special bibliography lists 267 reports, articles, and other documents introduced into the NASA scientific and technical information system in April 1975.

    Engineering data compendium. Human perception and performance. User's guide

    The concept underlying the Engineering Data Compendium was the product of a research and development program (the Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability for system designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is the first volume, the User's Guide, containing a description of the program and instructions for its use.

    Word Recognition and Learning: Effects of Hearing Loss and Amplification Feature

    Two amplification features were examined using auditory tasks that varied in stimulus familiarity. It was expected that the benefits of certain amplification features would increase as familiarity with the stimuli decreased. A total of 20 children and 15 adults with normal hearing, as well as 21 children and 17 adults with mild to severe hearing loss, participated. Three models of ear-level devices were selected based on the quality of the high-frequency amplification or the digital noise reduction (DNR) they provided. The devices were fitted to each participant and used during testing only. Participants completed three tasks: (a) word recognition, (b) repetition and lexical decision of real and nonsense words, and (c) novel word learning. Performance improved significantly with amplification for both the children and the adults with hearing loss. Performance improved further with wideband amplification, more for the children than for the adults. In steady-state noise and multitalker babble, performance decreased for both groups with little to no benefit from amplification or from the use of DNR. When compared with the listeners with normal hearing, significantly poorer performance was observed for both the children and adults with hearing loss on all tasks, with few exceptions. Finally, analysis of across-task performance confirmed the hypothesis that benefit increased as the familiarity of the stimuli decreased for wideband amplification but not for DNR. However, users who prefer DNR for listening comfort are not likely to jeopardize their ability to detect and learn new information when using this feature. The final version of this article, as published in Trends in Hearing, can be viewed online at: http://journals.sagepub.com/doi/10.1177/233121651770959

    Stereo hearing with unilateral bone conduction amplification

    Conductive hearing loss results when the neural integrity of the auditory system is healthy, but sound is prevented from reaching the cochlea in its entirety. Unilateral Congenital Aural Atresia (UCAA) is a birth defect in which there is no external ear canal, resulting in a reduction of the sound able to reach the middle ear. Two primary options for correcting this conductive hearing loss are canalplasty or a bone anchored hearing device (BAHD). We sought to compare the benefit of these options in two conditions: sound localization and the ability to detect speech in one ear while competing background noise is presented to the other ear. While canalplasty has been well studied, there is little research available on whether a unilateral bone conduction implant provides any benefit in these binaural tasks. The purpose of this study is to determine the effect BAHD use has on localization and speech-in-noise understanding, so that audiologists and ENTs can advise patients of their treatment options. A stereo computer and semi-circular speaker setup was used to determine the sound-localization accuracy of the participants by having them select the speaker from which they thought the signal was being presented. Performance was quantified through percent correct and root-mean-square (RMS) error in degrees azimuth. Speech-in-noise understanding was assessed through four different test conditions in which the participant chose the color and number spoken by a randomized recording while competing noise played from the opposite hemifield. Data were analyzed in terms of signal-to-noise ratio. Two separate studies were designed for this dissertation. In the single-subject design, one participant had asymmetrical conductive hearing loss and took both tests twice a day, alternating BAHD use daily, for a total of six days.
In the multi-subject design, six patients with UCAA each took both tests while unaided, and then again with their BAHD activated. Results showed that while BAHD use does not produce significant benefits in localization or speech-in-noise comprehension for all users, the unaided thresholds for asymmetry of hearing and air-bone gaps (ABGs) are predictive of whether an individual will benefit from implantation in these tasks. More specifically, if pre-implantation thresholds are poor (roughly >44 dB), then activation of the BAHD improves these two aspects of binaural processing; conversely, with relatively minor asymmetry, BAHD activation makes binaural processing worse.
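The two localization metrics named above, percent correct and RMS error in degrees azimuth, can be sketched directly; the five-trial run and speaker azimuths below are hypothetical, not data from the study:

```python
import math

def localization_scores(presented_deg, selected_deg):
    """Score a localization run.

    presented_deg / selected_deg: loudspeaker azimuths (degrees) of the
    presented signal and the listener's choice, trial by trial.
    Returns (percent correct, RMS error in degrees azimuth).
    """
    assert len(presented_deg) == len(selected_deg) and presented_deg
    n = len(presented_deg)
    correct = sum(p == s for p, s in zip(presented_deg, selected_deg))
    rms = math.sqrt(
        sum((p - s) ** 2 for p, s in zip(presented_deg, selected_deg)) / n
    )
    return 100.0 * correct / n, rms

# Hypothetical 5-trial run over a semicircular array spanning -60..+60 degrees:
presented = [-60, -30, 0, 30, 60]
selected = [-60, 0, 0, 30, 30]
pct, rms = localization_scores(presented, selected)
print(pct, round(rms, 1))  # -> 60.0 19.0
```

Percent correct captures whether the right speaker was chosen at all, while RMS error additionally penalizes how far off the wrong choices were, which is why both are reported.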

    Interactions between Cognition and Hearing Aid Compression Release Time: Effects of Linguistic Context of Speech Test Materials on Speech-In-Noise Performance

    Differences in speech recognition performance with short and long release time processing have been noted in previous research. Recent research has established a connection between hearing aid users' cognitive abilities and release time, and researchers hope to use cognitive ability as a predictor of release time selection. The results from these previous studies have been contradictory. Some researchers hypothesized that the linguistic context of speech recognition test materials was one of the factors that accounted for the inconsistency. The goal of the present study was to examine the relationship between hearing aid users' cognitive abilities and their aided speech recognition performance with short and long release time using speech recognition tests with different amounts of linguistic context. Thirty-four experienced hearing aid users participated in the present study. Their cognitive abilities were quantified using a reading span test. Digital behind-the-ear style hearing aids with adjustable release time settings were bilaterally fitted to the participants. Their aided speech recognition performance was evaluated using three tests with different amounts of linguistic context: the Word-In-Noise (WIN) test, the American Four Alternative Auditory Feature (AFAAF) test, and the Bamford-Kowal-Bench Speech-In-Noise (BKB-SIN) test. The present study replicated the results of an earlier study using an equivalent speech recognition test. The results from the present study also showed that hearing aid users with high cognitive abilities performed better on the AFAAF and the BKB-SIN compared to those with low cognitive abilities when using short release time processing. Results showed that none of the speech recognition tests produced significantly different performance between the short and the long release times for either cognitive group.
This finding did not support the hypothesized effect of linguistic context on aided speech recognition performance with different release time settings. Results from the present study suggest that cognitive ability might not be important in prescribing release time.

    An Electrode Stimulation Strategy for Cochlear Implants Based on a Model of the Human Auditory System

    Cochlear implants (CIs) combined with professional rehabilitation have enabled several hundred thousand hearing-impaired individuals to re-enter the world of verbal communication. Though very successful, current CI systems seem to have reached their peak potential. The fact that most recipients claim not to enjoy listening to music and are not capable of carrying on a conversation in noisy or reverberant environments shows that there is still room for improvement. This dissertation presents a new cochlear implant signal processing strategy called Stimulation based on Auditory Modeling (SAM), which is completely based on a computational model of the human peripheral auditory system. SAM was evaluated in three ways: through simplified models of CI listeners, with five cochlear implant users, and with 27 normal-hearing subjects using an acoustic model of CI perception. Results were always compared to those acquired using the Advanced Combination Encoder (ACE), today's most prevalent CI strategy. First simulations showed that the speech intelligibility of CI users fitted with SAM should be just as good as that of CI listeners fitted with ACE. Furthermore, SAM was shown to provide more accurate binaural cues, which can potentially enhance the sound source localization ability of bilaterally fitted implantees. The simulations also revealed an increased amount of temporal pitch information provided by SAM. The subsequent pilot study with five CI users revealed several benefits of using SAM.
First, there was a significant improvement in pitch discrimination of pure tones and sung vowels. Second, CI users fitted with a contralateral hearing aid reported a more natural sound of both speech and music. Third, all subjects were accustomed to SAM in a very short period of time (on the order of 10 to 30 minutes), which is particularly important given that a successful CI strategy change typically takes weeks to months. An additional test with 27 normal-hearing listeners using an acoustic model of CI perception delivered further evidence of improved pitch discrimination with SAM as compared to ACE. Although SAM is not yet a market-ready alternative, it strives to pave the way for future strategies based on auditory models and is a promising candidate for further research and investigation.
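Acoustic models of CI perception, like the one used with the 27 normal-hearing listeners, are commonly built as channel vocoders: the signal is split into bands, each band's envelope is extracted, and the envelopes modulate band-limited carriers. The sketch below is a crude FFT-masking noise vocoder illustrating that idea; it is an assumption-laden simplification, not the model used in this dissertation, and the channel count, band edges, and envelope cutoff are placeholder choices:

```python
import numpy as np

def noise_vocode(signal, fs, n_channels=8, env_cutoff=50.0, rng_seed=0):
    """Toy n-channel noise vocoder (acoustic model of CI perception).

    For each of n_channels log-spaced bands: band-pass the input by
    zeroing FFT bins, extract the envelope (rectify + FFT low-pass at
    env_cutoff Hz), and modulate band-limited white noise with it.
    """
    rng = np.random.default_rng(rng_seed)
    n = len(signal)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    edges = np.geomspace(100.0, min(8000.0, fs / 2), n_channels + 1)
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_mask = (freqs >= lo) & (freqs < hi)
        # Band-pass the input by keeping only this band's FFT bins.
        band = np.fft.irfft(np.fft.rfft(signal) * band_mask, n)
        # Envelope: rectify, then low-pass by masking bins above env_cutoff.
        env_spec = np.fft.rfft(np.abs(band)) * (freqs <= env_cutoff)
        env = np.maximum(np.fft.irfft(env_spec, n), 0.0)
        # Carrier: white noise restricted to the same band.
        noise = np.fft.irfft(np.fft.rfft(rng.standard_normal(n)) * band_mask, n)
        out += env * noise
    return out

fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440.0 * t)  # 1 s, 440 Hz test tone
vocoded = noise_vocode(tone, fs)
print(vocoded.shape)  # -> (16000,)
```

Because such a vocoder discards temporal fine structure and keeps only channel envelopes, it mimics the degraded pitch cues available to CI users, which is what makes it a useful stand-in when testing strategies like SAM on normal-hearing listeners.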

    Talker Differences and Gender Effects in Audio-Visual Speech Perception

    Listeners integrate auditory and visual cues in the perception of speech when communicating in both normal and compromised listening environments. Three factors affect the success of interaction in communication situations: characteristics of the talker, characteristics of the listener, and characteristics of the speech signal itself. In everyday life, individuals must comprehend speech produced by many different talkers. Little, however, is known about the characteristics of talkers that make them more intelligible and that best facilitate audio-visual integration. In the present study, 10 adult listeners, with normal or corrected-to-normal vision and auditory thresholds at or better than 25 dB HL across all frequencies, were presented with everyday sentences produced by eight different talkers selected from a commercially available software package (HeLPs, Sensimetrics, Inc.). Sentences were presented under audio-only, visual-only, and audio+visual modalities. Talkers varied widely in gender, age, and ethnicity. Auditory input was degraded to approximate a sloping hearing loss (55 dB HL at 1000 Hz). Results showed significant differences across talkers but no overall gender effect: males were more intelligible in auditory-only presentation, whereas females were more intelligible under visual-only and audio+visual presentation. These results provide new insights for the design of oral rehabilitation programs for hearing-impaired persons. This project was supported by an SBS Undergraduate Research grant.