577 research outputs found
Neurocognitive factors in sensory restoration of early deafness: a connectome model
Progress in biomedical technology (cochlear, vestibular, and retinal implants) has led to remarkable success in neurosensory restoration, particularly in the auditory system. However, outcomes vary considerably, even after accounting for comorbidity: for example, after cochlear implantation, some deaf children develop spoken language skills approaching those of their hearing peers, whereas others fail to do so. Here, we review evidence that auditory deprivation has widespread effects on brain development, affecting the capacity to process information beyond the auditory system. After sensory loss and deafness, the brain's effective connectivity is altered within the auditory system, between sensory systems, and between the auditory system and centres serving higher-order neurocognitive functions. As a result, congenital sensory loss could be thought of as a connectome disease, with interindividual variability in the brain's adaptation to sensory loss underpinning much of the observed variation in outcomes of cochlear implantation. Various executive functions, sequential processing, and concept formation are at particular risk in deaf children. A battery of clinical tests can allow early identification of neurocognitive risk factors. Intervention strategies that address these impairments with a personalised approach, taking interindividual variations into account, will further improve outcomes
An electrode stimulation strategy for cochlear implants based on a model of human hearing
Cochlear implants (CIs) combined with professional rehabilitation have
enabled several hundreds of thousands of hearing-impaired individuals to
re-enter the world of verbal communication. Though very successful, current
CI systems seem to have reached their peak potential. The fact that most
recipients claim not to enjoy listening to music and are not capable of
carrying on a conversation in noisy or reverberant environments shows
that there is still room for improvement. This dissertation presents a new
cochlear implant signal processing strategy called Stimulation based on
Auditory Modeling (SAM), which is completely based on a computational model
of the human peripheral auditory system. SAM has been evaluated through
simplified models of CI listeners, with five cochlear implant users, and
with 27 normal-hearing subjects using an acoustic model of CI perception.
Results have always been compared to those acquired using Advanced
Combination Encoder (ACE), which is today’s most prevalent CI strategy.
First simulations showed that speech intelligibility of CI users fitted
with SAM should be just as good as that of CI listeners fitted with ACE.
Furthermore, it has been shown that SAM provides more accurate binaural
cues, which can potentially enhance the sound source localization ability
of bilaterally fitted implantees. Simulations have also revealed an
increased amount of temporal pitch information provided by SAM. The
subsequent pilot study with five CI users revealed several benefits of
using SAM. First, there was a significant improvement in pitch
discrimination of pure tones and sung vowels. Second, CI users fitted with
a contralateral hearing aid reported a more natural sound of both speech
and music. Third, all subjects became accustomed to SAM in a very short
period of time (in the order of 10 to 30 minutes), which is particularly
important given that a successful CI strategy change typically takes weeks
to months. An additional test with 27 normal-hearing listeners using an
acoustic model of CI perception delivered further evidence for improved
pitch discrimination ability with SAM as compared to ACE. Although SAM is
not yet a market-ready alternative, it strives to pave the way for future
strategies based on auditory models and is a promising candidate for
further research and investigation
Evaluation of the sparse coding shrinkage noise reduction algorithm for the hearing impaired
Although there are numerous single-channel noise reduction strategies to improve speech perception in noisy environments, most of them improve speech quality without improving speech intelligibility for normal-hearing (NH) or hearing-impaired (HI) listeners. The current exceptions that do improve intelligibility require a priori statistics of the speech or noise. Most noise reduction algorithms in hearing aids are adopted directly from algorithms designed for NH listeners, without taking into account the hearing-loss factors of HI listeners. HI listeners lose more speech intelligibility than NH listeners in the same noisy environment, so further study of monaural noise reduction algorithms for HI listeners is required. The motivation is to adopt a model-based approach in contrast to the conventional Wiener filtering approach. The model-based algorithm, called sparse coding shrinkage (SCS), was proposed to extract key speech information from noisy speech. The SCS algorithm was evaluated against a state-of-the-art Wiener filtering approach through speech intelligibility and quality tests with 9 NH and 9 HI listeners. The SCS algorithm matched the performance of the Wiener filtering algorithm in both speech intelligibility and speech quality. Both algorithms showed some intelligibility improvements for HI listeners but none for NH listeners, and both improved speech quality for HI and NH listeners. Additionally, a physiologically inspired hearing loss simulation (HLS) model was developed to characterize hearing-loss factors and simulate the consequences of hearing loss. A methodology was proposed to evaluate signal processing strategies for HI listeners using the proposed HLS model with NH subjects: NH subjects listened to unprocessed and enhanced speech passed through the HLS model.
Some of the effects of the algorithms seen in HI listeners were reproduced, at least qualitatively, by using the HLS model with NH listeners. Conclusions: The model-based SCS algorithm is promising for improving performance in stationary noise, although no clear difference was seen between the performance of SCS and a competitive Wiener filtering algorithm. Fluctuating noise is more difficult to reduce than stationary noise. Noise reduction algorithms may perform better at higher input signal-to-noise ratios (SNRs), where HI listeners can benefit but NH listeners already reach ceiling performance. The proposed HLS model can save time and cost when evaluating noise reduction algorithms for HI listeners
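The abstract does not reproduce the SCS nonlinearity itself. The core idea, shrinking sparse-domain coefficients toward zero so that small, noise-dominated coefficients vanish while large, speech-carrying ones survive, can be illustrated with plain soft thresholding (a simple stand-in for the shrinkage function used in the dissertation; all names here are hypothetical):

```python
import numpy as np

def soft_shrink(coeffs, sigma, k=1.0):
    """Shrink sparse-domain coefficients toward zero.

    Small coefficients (mostly noise) are zeroed; large ones
    (carrying speech structure) are kept, reduced by the threshold.
    This is plain soft thresholding, a simple stand-in for the
    shrinkage nonlinearity in sparse coding shrinkage.
    """
    thresh = k * sigma  # threshold scaled to the estimated noise level
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thresh, 0.0)
```

In an actual SCS pipeline the speech would first be transformed into a sparse basis, shrunk, and then transformed back; only the shrinkage step is sketched here.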
The effect of a coding strategy that removes temporally masked pulses on speech perception by cochlear implant users.
Speech recognition in noisy environments remains a challenge for cochlear implant (CI) recipients. Unwanted charge interactions between current pulses, both within and between electrode channels, are likely to impair performance. Here we investigate the effect of reducing the number of current pulses on speech perception. This was achieved by implementing a psychoacoustic temporal-masking model where current pulses in each channel were passed through a temporal integrator to identify and remove pulses that were less likely to be perceived by the recipient. The decision criterion of the temporal integrator was varied to control the percentage of pulses removed in each condition. In experiment 1, speech in quiet was processed with a standard Continuous Interleaved Sampling (CIS) strategy and with 25, 50 and 75% of pulses removed. In experiment 2, performance was measured for speech in noise with the CIS reference and with 50 and 75% of pulses removed. Speech intelligibility in quiet revealed no significant difference between reference and test conditions. For speech in noise, results showed a significant improvement of 2.4 dB when removing 50% of pulses, and performance was not significantly different between the reference and when 75% of pulses were removed. Further, by reducing the overall number of current pulses by 25, 50, and 75% but accounting for the increase in charge necessary to compensate for the decrease in loudness, estimated average power savings of 21.15, 40.95, and 63.45%, respectively, could be possible for this set of listeners. In conclusion, removing temporally masked pulses may improve speech perception in noise and result in substantial power savings
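As a rough sketch of the approach described above, a leaky temporal integrator can track the masker level in each channel and drop any pulse that falls below a criterion fraction of it. This is a minimal illustration with hypothetical parameter names and values, not the psychoacoustic model used in the study:

```python
import numpy as np

def remove_masked_pulses(pulses, decay=0.9, criterion=0.5):
    """Drop pulses likely to be temporally masked.

    pulses    : one channel's pulse amplitudes per frame (0 = no pulse)
    decay     : per-frame decay of the leaky temporal integrator
    criterion : fraction of the masker level below which a pulse
                is considered inaudible and removed
    """
    kept = np.zeros_like(pulses, dtype=float)
    masker = 0.0  # running output of the temporal integrator
    for i, amp in enumerate(pulses):
        masker *= decay  # forward masking decays over time
        if amp > criterion * masker:
            kept[i] = amp                  # audible: keep the pulse
            masker = max(masker, amp)      # and refresh the masker level
        # else: pulse falls under the masking curve and is removed
    return kept
```

Raising `criterion` removes more pulses, mirroring how the study varied the integrator's decision criterion to reach 25, 50, and 75% pulse removal.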
A Binaural Cochlear Implant Sound Coding Strategy Inspired by the Contralateral Medial Olivocochlear Reflex
Objectives: In natural hearing, cochlear mechanical compression is dynamically adjusted via the efferent medial olivocochlear reflex (MOCR). These adjustments probably help listeners understand speech in noisy environments and are not available to users of current cochlear implants (CIs). The aims of the present study are to: (1) present a binaural CI sound processing strategy inspired by the control of cochlear compression provided by the contralateral MOCR in natural hearing; and (2) assess the benefits of the new strategy for understanding speech presented in competition with steady noise with a speech-like spectrum in various spatial configurations of the speech and noise sources.
Design: Pairs of CI sound processors (one per ear) were constructed to mimic or not mimic the effects of the contralateral MOCR on compression. For the nonmimicking condition (standard strategy or STD), the two processors in a pair functioned like standard clinical processors (i.e., with fixed back-end compression and independently of each other). When configured to mimic the effects of the MOCR (MOC strategy), the two processors communicated with each other, and the amount of back-end compression in a given frequency channel of each processor in the pair decreased/increased dynamically (so that output levels dropped/increased) with increases/decreases in the output energy from the corresponding
frequency channel in the contralateral processor. Speech reception thresholds in speech-shaped noise were measured for 3 bilateral CI users and 2 single-sided deaf unilateral CI users. Thresholds were compared for the STD and MOC strategies in unilateral and bilateral listening conditions and for three spatial configurations of the speech and noise sources in simulated free-field conditions: speech and noise sources colocated in front of the listener, speech on the left ear with noise in front of the listener, and speech on the left ear with noise on the right ear. In both bilateral and unilateral listening, the electrical stimulus delivered to the test ear(s) was always calculated as if the listeners were wearing bilateral processors.
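The contralateral coupling described above can be sketched as a per-channel compression exponent that moves toward linear as contralateral energy grows, so that output levels drop in the ear opposite a strong masker. This is a minimal illustration under assumed parameter names and values, not the authors' implementation:

```python
import numpy as np

def moc_compress(env_left, env_right, c_base=0.2, alpha=0.5):
    """MOC-inspired, contralaterally coupled back-end compression (sketch).

    env_left, env_right : per-channel envelope levels (linear, in 0..1)
    c_base : fixed compression exponent of the standard (STD) strategy
    alpha  : strength of the contralateral coupling (hypothetical)
    """
    # More contralateral energy -> exponent closer to 1 (less compression),
    # which lowers the output for sub-unity envelope levels.
    c_left = c_base + alpha * np.clip(env_right, 0.0, 1.0) * (1.0 - c_base)
    c_right = c_base + alpha * np.clip(env_left, 0.0, 1.0) * (1.0 - c_base)
    out_left = np.clip(env_left, 1e-6, None) ** c_left
    out_right = np.clip(env_right, 1e-6, None) ** c_right
    return out_left, out_right
```

With `alpha=0`, both ears reduce to the fixed-compression STD behaviour; with coupling enabled, a channel facing a strong contralateral masker is attenuated, which is one plausible account of the spatial release from masking reported below.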
Results: In both unilateral and bilateral listening conditions, mean speech reception thresholds were comparable for the two strategies with colocated speech and noise sources, but were at least 2 dB lower (better) with the MOC than with the STD strategy for spatially separated speech and noise sources. In unilateral listening conditions, mean thresholds improved with increasing spatial separation between the speech and noise sources regardless of the strategy, but the improvement was significantly greater with the MOC strategy. In bilateral listening conditions, thresholds improved significantly with increasing speech-noise spatial separation only with the MOC strategy.
Conclusions: The MOC strategy (1) significantly improved the intelligibility of speech presented in competition with a spatially separated noise source, both in unilateral and bilateral listening conditions; (2) produced significant spatial release from masking in bilateral listening conditions, something that did not occur with fixed compression; and (3) enhanced spatial release from masking in unilateral listening conditions. The MOC strategy as implemented here, or a modified version of it, may be usefully applied in CIs and in hearing aids
On the mechanism of response latencies in auditory nerve fibers
Despite structural differences in the middle and inner ears, the latency pattern of auditory nerve fibers in response to an identical sound has been found to be similar across numerous species. Studies have shown this similarity even in species with distinct cochleae or without a basilar membrane. This stimulus-, neuron-, and species-independent similarity of latency cannot be explained simply by the concept of cochlear traveling waves, which is generally accepted as the main cause of the neural latency pattern.
An original concept of Fourier pattern is defined, intended to characterize a feature of temporal processing—specifically phase encoding—that is not readily apparent in more conventional analyses. The pattern is created by marking the first amplitude maximum for each sinusoid component of the stimulus, to encode phase information. The hypothesis is that the hearing organ serves as a running analyzer whose output reflects synchronization of auditory neural activity consistent with the Fourier pattern.
A combination of experimental, correlational, and meta-analytic approaches is used to test the hypothesis. Phase encoding and stimulus parameters were manipulated to test their effects on the predicted latency pattern. Animal studies in the literature using the same stimuli were then compared to determine the degree of relationship.
The results show that each marking accounts for a large percentage of a corresponding peak latency in the peristimulus-time histogram. For each of the stimuli considered, the latency predicted by the Fourier pattern is highly correlated with the observed latency in the auditory nerve fiber of representative species.
The results suggest that the hearing organ analyzes not only the amplitude spectrum but also phase information, as in Fourier analysis, to distribute specific spikes among auditory nerve fibers and within a single unit.
This phase-encoding mechanism in Fourier analysis is proposed to be the common mechanism that, in the face of species differences in peripheral auditory hardware, accounts for the considerable similarities across species in their latency-by-frequency functions, in turn assuring optimal phase encoding across species. Also, the mechanism has the potential to improve phase encoding of cochlear implants
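The Fourier-pattern prediction described above, marking the time of the first amplitude maximum of each sinusoidal component, can be written down directly. A minimal sketch (the function name and interface are assumptions):

```python
import numpy as np

def fourier_pattern(freqs, phases):
    """Predicted latency marks: time of the first amplitude maximum
    of each sinusoidal component A*sin(2*pi*f*t + phi), for t >= 0.

    freqs  : component frequencies in Hz
    phases : starting phases in radians
    """
    freqs = np.asarray(freqs, dtype=float)
    phases = np.asarray(phases, dtype=float)
    # sin(2*pi*f*t + phi) peaks where 2*pi*f*t + phi = pi/2 (+ 2*pi*k)
    t = (np.pi / 2 - phases) / (2 * np.pi * freqs)
    period = 1.0 / freqs
    return np.mod(t, period)  # wrap into [0, 1/f): first peak after onset
```

For a 100 Hz component starting in sine phase, the first maximum falls 2.5 ms after onset; shifting the starting phase moves that mark within the 10 ms period, which is the phase information the hypothesis says is encoded in fiber latencies.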
Hearing the light: neural and perceptual encoding of optogenetic stimulation in the central auditory pathway
Optogenetics provides a means to dissect the organization and function of neural circuits. Optogenetics also offers the translational promise of restoring sensation, enabling movement or supplanting abnormal activity patterns in pathological brain circuits. However, the inherent sluggishness of evoked photocurrents in conventional channelrhodopsins has hampered the development of optoprostheses that adequately mimic the rate and timing of natural spike patterning. Here, we explore the feasibility and limitations of a central auditory optoprosthesis by photoactivating mouse auditory midbrain neurons that either express channelrhodopsin-2 (ChR2) or Chronos, a channelrhodopsin with ultra-fast channel kinetics. Chronos-mediated spike fidelity surpassed ChR2 and natural acoustic stimulation to support a superior code for the detection and discrimination of rapid pulse trains. Interestingly, this midbrain coding advantage did not translate to a perceptual advantage, as behavioral detection of midbrain activation was equivalent with both opsins. Auditory cortex recordings revealed that the precisely synchronized midbrain responses had been converted to a simplified rate code that was indistinguishable between opsins and less robust overall than acoustic stimulation. These findings demonstrate the temporal coding benefits that can be realized with next-generation channelrhodopsins, but also highlight the challenge of inducing variegated patterns of forebrain spiking activity that support adaptive perception and behavior
The Effect of Retrieval Practice on Vocabulary Learning for Children who are Deaf or Hard of Hearing
The goal of the current study was to determine whether students who are deaf or hard of hearing (d/hh) would learn more new vocabulary words through retrieval practice than through repeated exposure (repeated study). No studies to date have used this cognitive strategy—retrieval practice—with children who are d/hh. Previous studies have shown that children with hearing loss struggle to learn vocabulary words. This deficit can negatively affect language development, reading outcomes, and overall academic success, yet few studies have investigated specific interventions to address poor vocabulary development in children with hearing loss. The current study investigated retrieval practice as a potentially effective strategy to increase word learning for children who are d/hh and who use spoken language. Children with hearing loss recalled a greater number of new vocabulary words when using retrieval practice than repeated exposure after a two-day retention interval. The study also examined factors that influence whether a child remembers or forgets a word after a retention interval. Children without an additional diagnosis recalled more words than children with an additional diagnosis, and children who were more efficient learners—those who took fewer trials to learn a word—recalled more words than less efficient learners. Parent education level and aided speech perception scores were not significant predictors of whether children remembered the new words. In summary, this study was the first to show that retrieval practice led students with hearing loss to learn more new vocabulary words than repeated exposure