
    On the mechanism of response latencies in auditory nerve fibers

    Despite structural differences in the middle and inner ears, the latency pattern of auditory nerve fibers in response to an identical sound has been found to be similar across numerous species. Studies have shown this similarity even in species with markedly different cochleae, or without a basilar membrane at all. This stimulus-, neuron-, and species-independent similarity of latency cannot be explained simply by the concept of cochlear traveling waves, which is generally accepted as the main cause of the neural latency pattern. An original concept, the Fourier pattern, is defined to characterize a feature of temporal processing, specifically phase encoding, that is not readily apparent in more conventional analyses. The pattern is created by marking the first amplitude maximum of each sinusoidal component of the stimulus, thereby encoding phase information. The hypothesis is that the hearing organ serves as a running analyzer whose output reflects synchronization of auditory neural activity consistent with the Fourier pattern. A combination of experimental, correlational, and meta-analytic approaches is used to test the hypothesis. Phase encoding and stimuli were manipulated to test their effects on the predicted latency pattern, and animal studies in the literature using the same stimuli were then compared to determine the degree of relationship. The results show that each marking accounts for a large percentage of the corresponding peak latency in the peristimulus-time histogram. For each of the stimuli considered, the latency predicted by the Fourier pattern is highly correlated with the observed latency in the auditory nerve fibers of representative species. The results suggest that, in performing Fourier analysis, the hearing organ analyzes not only the amplitude spectrum but also phase information, distributing specific spikes among auditory nerve fibers and within a single unit. This phase-encoding mechanism is proposed as the common mechanism that, despite species differences in peripheral auditory hardware, accounts for the considerable similarity of latency-by-frequency functions across species, in turn ensuring optimal phase encoding across species. The mechanism also has the potential to improve phase encoding in cochlear implants.
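
    As a concrete illustration of the marking described above, the sketch below computes, for each sinusoidal component of a stimulus, the time of its first amplitude maximum. This is not the study's code: the component form sin(2*pi*f*t + phi), the frequencies, and the phases are illustrative assumptions, chosen only to show how such a latency-by-frequency prediction could be derived.

```python
# Minimal sketch (not the study's code): predicted latency from the "Fourier
# pattern", i.e. the time of the first amplitude maximum of each sinusoidal
# component of the stimulus.  Components are assumed to be A*sin(2*pi*f*t + phi);
# the frequencies and phases below are illustrative.
import numpy as np

def first_peak_times(freqs_hz, phases_rad):
    """Time (s) of the first amplitude maximum of sin(2*pi*f*t + phi) for t >= 0."""
    freqs = np.asarray(freqs_hz, dtype=float)
    phases = np.asarray(phases_rad, dtype=float)
    # sin(2*pi*f*t + phi) peaks when 2*pi*f*t + phi = pi/2 (mod 2*pi)
    return np.mod(np.pi / 2.0 - phases, 2.0 * np.pi) / (2.0 * np.pi * freqs)

# Example: components of a hypothetical stimulus, all starting in sine phase.
freqs = np.array([250.0, 500.0, 1000.0, 2000.0, 4000.0])   # Hz
phases = np.zeros_like(freqs)                               # radians
for f, t in zip(freqs, first_peak_times(freqs, phases)):
    print(f"{f:6.0f} Hz -> predicted marking at {t * 1000:.2f} ms")
```

    With zero starting phase, each predicted marking falls a quarter period after stimulus onset, so lower-frequency components are marked later, which qualitatively matches a latency-by-frequency function.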

    AUDITORY TRAINING AT HOME FOR ADULT HEARING AID USERS

    Research has shown that re-learning to understand speech in noise can be a difficult task for adults who use hearing aids (HA). If HA users want to improve their speech understanding, specific training may be needed. Auditory training is one type of intervention that may enhance listening abilities for adult HA users. The purpose of this study was to examine the behavioral effects of an auditory training program, Listening and Communication Enhancement (LACE™) in DVD format, in new and experienced HA users. No research to date had been conducted on the efficacy of this training program. An experimental, repeated-measures group design was used. Twenty-six adults with hearing loss participated and were assigned to one of three groups: New HA + training, Experienced HA + training, or New HA control. Participants in the training groups completed twenty 30-minute training lessons from the LACE™ DVD program at home over a period of 4 weeks. Trained participants were evaluated at baseline, after 2 weeks of training, and again after 4 weeks of training. Participants in the control group were evaluated at baseline and after 4 weeks of HA use. Findings indicate that both new and experienced users improved their understanding of speech in noise and their perceived communication function after training. Effect-size calculations suggested a larger training effect for new HA users than for experienced HA users, and new HA users also reported greater benefit from training than experienced users. Auditory training with the LACE™ DVD format should be encouraged, particularly among new HA users, to improve understanding of speech in noise.

    Improvement of Speech Perception for Hearing-Impaired Listeners

    Hearing impairment is becoming a prevalent health problem, affecting 5% of the world's adult population. Hearing aids and cochlear implants have played an essential role in helping patients for decades, but several open problems still prevent them from providing maximum benefit: for financial and comfort reasons, only one in four patients chooses to use hearing aids, and cochlear implant users often have trouble understanding speech in noisy environments. In this dissertation, we addressed the limitations of hearing aids by proposing a new hearing aid signal processing system named the Open-source Self-fitting Hearing Aids System (OS SF hearing aids). The proposed system adopts state-of-the-art digital signal processing technologies, combined with accurate hearing assessment and a machine-learning-based self-fitting algorithm, to further improve speech perception and comfort for hearing aid users. Informal testing with hearing-impaired listeners showed that results from the proposed system differed by less than 10 dB on average from those obtained with a clinical audiometer. In addition, sixteen-channel filter banks with an adaptive differential microphone array provide up to 6 dB of SNR improvement in noisy environments, and the machine-learning-based self-fitting algorithm provides more suitable hearing aid settings. To maximize cochlear implant users' speech understanding in noise, sequential (S) and parallel (P) coding strategies were proposed by integrating high-rate desynchronized pulse trains (DPT) into the continuous interleaved sampling (CIS) strategy. Ten participants with severe hearing loss took part in two rounds of cochlear implant testing. The results showed that the CIS-DPT-S strategy significantly improved speech perception in background noise (by 11%), while the CIS-DPT-P strategy yielded significant improvements in both quiet (7%) and noisy (9%) environments.
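
    For readers unfamiliar with CIS, the sketch below shows the generic envelope-extraction stage that a CIS-style strategy builds on: a band-pass filterbank followed by rectification and low-pass smoothing in each channel. It is a minimal sketch, not the dissertation's implementation; the channel count, band edges, filter orders, and cutoff frequency are assumed values, and the DPT modifications described above are not modeled.

```python
# Minimal sketch (assumed parameters, not the dissertation's code) of the
# envelope-extraction stage underlying a CIS-style strategy: band-pass
# filterbank, rectification, and low-pass smoothing per channel.
import numpy as np
from scipy.signal import butter, sosfilt

def cis_envelopes(x, fs, n_channels=16, f_lo=200.0, f_hi=7000.0, env_cutoff=200.0):
    """Return an (n_channels, len(x)) array of per-channel envelopes."""
    # Logarithmically spaced band edges, roughly following cochlear tonotopy.
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    env_sos = butter(2, env_cutoff, btype="low", fs=fs, output="sos")
    envelopes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(band_sos, x)
        env = sosfilt(env_sos, np.abs(band))   # rectify, then low-pass smooth
        envelopes.append(env)
    return np.array(envelopes)

# Example: envelopes of 50 ms of noise at a 16 kHz sampling rate.
fs = 16000
x = np.random.default_rng(0).standard_normal(int(0.05 * fs))
print(cis_envelopes(x, fs).shape)   # (16, 800)
```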

    Listening Effort: The hidden costs and benefits of cochlear implants


    COMBINING CEPSTRAL NORMALIZATION AND COCHLEAR IMPLANT-LIKE SPEECH PROCESSING FOR MICROPHONE ARRAY-BASED SPEECH RECOGNITION

    This paper investigates the combination of cepstral normalization and cochlear implant-like speech processing for microphone array-based speech recognition. Test speech signals are recorded by a circular microphone array and subsequently processed with superdirective beamforming and McCowan post-filtering. Training speech signals, from the MultiChannel Overlapping Numbers Corpus (MONC), are clean and non-overlapping. Cochlear implant-like speech processing, inspired by the speech processing strategy used in cochlear implants, is applied to the training and test speech signals. Cepstral normalization, comprising cepstral mean and variance normalization (CMN and CVN), is applied to the training and test cepstra. Experiments show that applying either cepstral normalization or cochlear implant-like speech processing reduces the word error rates (WERs) of microphone array-based speech recognition, and that combining the two reduces the WERs further when there is overlapping speech. Train/test mismatch is measured using the Kullback-Leibler divergence (KLD) between the global probability density functions (PDFs) of the training and test cepstral vectors. This measure reveals a reduction in train/test mismatch when either cepstral normalization or cochlear implant-like speech processing is used, and shows that combining the two further reduces the train/test mismatch as well as the WERs.
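
    Since CMN/CVN and the KLD-based mismatch measure are central to the experiments, the sketch below shows one plausible rendering of both: per-utterance cepstral mean and variance normalization, and a symmetrised KLD between diagonal-Gaussian fits of training and test cepstra. The paper computes the KLD between global PDFs; the diagonal-Gaussian form, the epsilon terms, and the random stand-in data here are simplifying assumptions for illustration only.

```python
# Minimal sketch (assumed details, not the paper's code): cepstral mean and
# variance normalization (CMN + CVN) per utterance, plus a symmetric
# Kullback-Leibler divergence between diagonal-Gaussian fits of two cepstral sets.
import numpy as np

def cmvn(cepstra, eps=1e-8):
    """cepstra: (n_frames, n_coeffs).  Subtract the mean and divide by the
    standard deviation of each cepstral coefficient over the utterance."""
    mean = cepstra.mean(axis=0)
    std = cepstra.std(axis=0)
    return (cepstra - mean) / (std + eps)

def symmetric_kld_diag_gauss(a, b, eps=1e-8):
    """Symmetrised KLD between diagonal-Gaussian fits of cepstral sets a and b."""
    mu_a, var_a = a.mean(axis=0), a.var(axis=0) + eps
    mu_b, var_b = b.mean(axis=0), b.var(axis=0) + eps
    kl_ab = 0.5 * np.sum(np.log(var_b / var_a) + (var_a + (mu_a - mu_b) ** 2) / var_b - 1.0)
    kl_ba = 0.5 * np.sum(np.log(var_a / var_b) + (var_b + (mu_b - mu_a) ** 2) / var_a - 1.0)
    return 0.5 * (kl_ab + kl_ba)

# Example with random stand-ins for training and test cepstral vectors.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(500, 13))
test = rng.normal(0.3, 1.4, size=(400, 13))
print(symmetric_kld_diag_gauss(train, test))
print(symmetric_kld_diag_gauss(cmvn(train), cmvn(test)))   # mismatch shrinks after CMVN
```

    After CMVN both sets have near-zero mean and unit variance per coefficient, so the measured mismatch collapses toward zero, which is the train/test mismatch reduction the paper reports.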

    Assistive Listening Devices in Primary and Secondary Educational Settings: A Systematic Review

    The purpose of this study was to identify the scientific evidence available to support the use of assistive listening devices in primary and secondary educational settings. The American Speech-Language-Hearing Association (ASHA) makes clear that it is the role of the speech-language pathologist (SLP) to modify the classroom environment, as needed, to enhance communicative abilities for this population (ASHA, 2016; Carney, 1998). Each journal article included in this study was published in a peer-reviewed journal between 2000 and 2018, was written in English, and contained scientific information relevant to the research question. Experimental studies included participants who were school-aged children in a primary or secondary educational setting. Results indicated that frequency modulation systems are a highly explored and well-supported mode of sound transmission, while scientific evidence for other modes and configurations remains less conclusive.

    Melodic contour identification and speech recognition by school-aged children

    Using the Sung Speech Corpus (SSC), a single database containing musical pitch, timbre variations, and speech information for identification tasks, the current study explored the development of normal-hearing children's ability to use pitch and timbre cues. Thirteen normal-hearing children, ages 7 to 16 years, were recruited and divided into two groups: Younger (7-9) and Older (10-16). Musical experience was also taken into account. Testing used the Angel Sound™ program, adopted from previous studies, most recently Crew, Galvin, and Fu (2015). In quiet, participants were asked to identify either a pitch contour or a five-word sentence while the dimension not being identified was manipulated. Sentence recognition was also tested at three SNRs (-3, 0, and 3 dB). For sentence recognition in quiet, children with musical training performed better than those without, and a significant interaction between age group and musical experience was observed, such that younger children showed more benefit from musical training than older, musically trained children. A significant effect of pitch contour on sentence recognition in noise was found: for all children, naturally produced speech stimuli were easier to identify in competing background noise than speech stimuli with an unnatural pitch contour. A significant effect of speech timbre on melodic contour identification (MCI) was also found: as timbre complexity increased, MCI performance decreased. The study concluded that pitch and timbre cues interfere with each other in child listeners, depending on the listening demands (SNR, task, etc.), and that music training can improve overall speech and music perception.