82 research outputs found

    Biophysical modeling of a cochlear implant system: progress on closed-loop design using a novel patient-specific evaluation platform

    The modern cochlear implant is one of the most successful neural stimulation devices, partially mimicking the workings of the auditory periphery. Over the last few decades it has created a paradigm shift in hearing restoration for the deaf population, leading to more than 324,000 cochlear implant users today. Despite this success, there is wide disparity in patient outcomes, and the aetiology of this variance in implant performance is not clearly understood. Furthermore, speech recognition in adverse conditions and music appreciation are still not attainable with today's commercial technology. This motivates research into the next generation of cochlear implants, which will take advantage of recent developments in electronics, neuroscience, nanotechnology, micro-mechanics, polymer chemistry and molecular biology to deliver high-fidelity sound. The main difficulties in determining the root of the problem when a cochlear implant performs poorly are twofold: first, there is no clear paradigm for how electrical stimulation is perceived as sound by the brain; second, there is limited understanding of the plasticity effects, or learning, of the brain in response to electrical stimulation. These knowledge gaps impede the design of novel cochlear implant technologies, as the technical specifications that could lead to better-performing implants remain undefined. The work presented in this thesis compares and contrasts cochlear implant neural stimulation with the operation of the healthy physiological auditory periphery up to the level of the auditory nerve. Design of novel cochlear implant systems then becomes feasible by gaining insight into the question `how well does a specific cochlear implant system approximate the healthy auditory periphery?' 
circumventing the need for a complete understanding of how the brain comprehends patterned electrical stimulation delivered by a generic cochlear implant device. A computational model, termed the Digital Cochlea Stimulation and Evaluation Tool (DiCoStET), has been developed to provide an objective estimate of cochlear implant performance based on neuronal activation measures, such as vector strength and average activation. A patient-specific 3D cochlea geometry is generated using a model derived from a single anatomical measurement of the patient, obtained non-invasively by high-resolution computed tomography (HRCT), together with anatomically invariant human metrics and relations. Human measurements of the neural route within the inner ear enable an innervation pattern to be modelled that spans from the organ of Corti to the spiral ganglion and then descends into the auditory nerve bundle. An electrode is inserted into the cochlea at a depth determined by the user of the tool. The geometric relation between the stimulation sites on the electrode and the spiral ganglion is used to estimate an activating function that is unique to the specific patient's cochlear shape and electrode placement. This `transfer function' between electrode and spiral ganglion serves as a `digital patient' for validating novel cochlear implant systems. The tool is intended for use by bioengineers, surgeons, audiologists and neuroscientists alike. In addition to DiCoStET, a second computational model is presented in this thesis, aimed at enhancing understanding of the physiological mechanisms of hearing, specifically the workings of the auditory synapse. Its purpose is to provide insight into the sound-encoding mechanisms of the synapse. A hypothetical mechanism of neurotransmitter vesicle release is suggested that permits the auditory synapse to encode the temporal patterns of sound separately from sound intensity. 
DiCoStET was used to examine the performance of two different types of filter used for spectral analysis in the cochlear implant system, the Gammatone filter and the Butterworth filter. The model outputs suggest that the Gammatone filter performs better than the Butterworth filter. Furthermore, two stimulation strategies, Continuous Interleaved Stimulation (CIS) and Asynchronous Interleaved Stimulation (AIS), were compared. The estimated spatiotemporal patterns of neuronal stimulation for each strategy suggest that the overall stimulation pattern is not greatly affected by the change in temporal sequence. However, the finer detail of neuronal activation differs between the two strategies, and comparison with healthy neuronal activation patterns leads to the conjecture that the sequential stimulation of CIS hinders the transmission of sound fine-structure information to the brain. Together, the two models make collaborative work across disciplines feasible, especially between electrical engineering, auditory physiology and neuroscience, for the development of novel cochlear implant systems. This is achieved through the concept of a `digital patient' whose artificial neuronal activation is compared to a healthy scenario in a computationally efficient manner, allowing practical simulation times.
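Vector strength, one of the neuronal activation measures the abstract mentions, is standard enough to sketch. The following minimal Python illustration (the function name and the 500 Hz example are our own, not part of DiCoStET) computes it as the magnitude of the mean phase vector of the spike times relative to the stimulus period, so 1 means perfect phase locking and 0 means uniformly scattered phases:

```python
import cmath
import math

def vector_strength(spike_times, period):
    """Vector strength of spike times relative to a stimulus period.

    Each spike is mapped to a phase on the unit circle; the result is
    the magnitude of the mean resultant vector (1 = perfect phase
    locking, 0 = phases spread uniformly over the cycle).
    """
    if not spike_times:
        return 0.0
    phasors = [cmath.exp(2j * math.pi * t / period) for t in spike_times]
    return abs(sum(phasors)) / len(phasors)

# Perfectly phase-locked spikes: one spike per cycle at the same phase
# of a 500 Hz stimulus (period 2 ms).
locked = [k * 0.002 for k in range(100)]
print(round(vector_strength(locked, 0.002), 3))  # → 1.0

# Spikes spread evenly across the cycle give a vector strength near 0.
uniform = [k * 0.002 / 100 for k in range(100)]
print(round(vector_strength(uniform, 0.002), 3))  # → 0.0
```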

    Auf einem menschlichen Gehörmodell basierende Elektrodenstimulationsstrategie für Cochleaimplantate (An electrode stimulation strategy for cochlear implants based on a model of human hearing)

    Cochlear implants (CIs) combined with professional rehabilitation have enabled several hundred thousand hearing-impaired individuals to re-enter the world of verbal communication. Though very successful, current CI systems seem to have reached their peak potential. The fact that most recipients claim not to enjoy listening to music and are not capable of carrying on a conversation in noisy or reverberant environments shows that there is still room for improvement. This dissertation presents a new cochlear implant signal processing strategy called Stimulation based on Auditory Modeling (SAM), which is completely based on a computational model of the human peripheral auditory system. SAM has been evaluated in three ways: through simplified models of CI listeners, with five cochlear implant users, and with 27 normal-hearing subjects using an acoustic model of CI perception. Results have always been compared to those acquired using the Advanced Combination Encoder (ACE), which is today's most prevalent CI strategy. First simulations showed that the speech intelligibility of CI users fitted with SAM should be just as good as that of CI listeners fitted with ACE. Furthermore, it has been shown that SAM provides more accurate binaural cues, which can potentially enhance the sound source localization ability of bilaterally fitted implantees. Simulations also revealed an increased amount of temporal pitch information provided by SAM. The subsequent pilot study with five CI users revealed several benefits of using SAM.
First, there was a significant improvement in pitch discrimination of pure tones and sung vowels. Second, CI users fitted with a contralateral hearing aid reported a more natural sound for both speech and music. Third, all subjects became accustomed to SAM in a very short period of time (on the order of 10 to 30 minutes), which is particularly important given that a successful CI strategy change typically takes weeks to months. An additional test with 27 normal-hearing listeners using an acoustic model of CI perception delivered further evidence of improved pitch discrimination with SAM as compared to ACE. Although SAM is not yet a market-ready alternative, it strives to pave the way for future strategies based on auditory models, and it is a promising candidate for further research and investigation.

    Analysis and Implementation of Hybrid FIR Architecture in Speech Processor

    A cochlear implant (CI) is an electronic device surgically placed in the inner ear which restores partial hearing to people with profound hearing loss. The speech processor of the CI splits the audio signal into bands of different frequencies and converts them into appropriate codes for stimulating the electrodes in the cochlea. The electrodes activate auditory nerve fibres to produce a hearing sensation. The cost of the CI alone is around 100,000 US dollars. For less well-off individuals with hearing impairment, this hardware may be too expensive to afford as a means of recovering from hearing loss, so it becomes important to bring the cost down. Cost reduction may be achieved through reduced area, low power consumption and high-speed operation of the CI. This goal has led both analogue and digital CI designers to research techniques that give people cheaper and highly intelligible CIs. The primary objective of this paper is to develop reconfigurable DSP architectures for the filter banks in the speech processor of the CI, with the following features: minimized filter area, reduced power consumption of the speech processor and enhanced filter performance. The paper covers the design and hardware implementation of a narrow band-pass FIR filter for the speech processor of the CI using the Xilinx System Generator (XSG) tool on a Virtex-7 FPGA
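The narrow band-pass FIR filters at the heart of such a filter bank can be prototyped in software before being committed to FPGA hardware. The Python sketch below is a generic windowed-sinc design, not the paper's XSG implementation; all names and parameter values are illustrative. It builds a Hamming-windowed band-pass filter as the difference of two low-pass sinc kernels and applies it in direct form:

```python
import math

def firwin_bandpass(num_taps, f_lo, f_hi, fs):
    """Windowed-sinc band-pass FIR design (Hamming window).

    The difference of two low-pass sinc kernels gives a band-pass
    response. Frequencies are in Hz; num_taps should be odd so the
    filter is symmetric (linear phase).
    """
    m = num_taps - 1
    taps = []
    for n in range(num_taps):
        k = n - m / 2
        if k == 0:
            h = 2 * (f_hi - f_lo) / fs  # limit of the sinc difference at k = 0
        else:
            h = (math.sin(2 * math.pi * f_hi * k / fs)
                 - math.sin(2 * math.pi * f_lo * k / fs)) / (math.pi * k)
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / m)  # Hamming window
        taps.append(h * w)
    return taps

def fir_filter(taps, x):
    """Direct-form FIR: y[n] = sum_k taps[k] * x[n - k]."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, t in enumerate(taps):
            if n - k >= 0:
                acc += t * x[n - k]
        y.append(acc)
    return y
```

A 101-tap filter for an 800–1200 Hz band at a 16 kHz sampling rate passes a 1 kHz tone nearly unchanged while strongly attenuating a 3 kHz tone; in an XSG flow the same coefficients would then be mapped onto the FPGA's multiply-accumulate resources.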

    Optimizing Stimulation Strategies in Cochlear Implants for Music Listening

    Most cochlear implant (CI) strategies are optimized for speech characteristics, while music enjoyment remains significantly below normal-hearing performance. In this thesis, electrical stimulation strategies in CIs are analyzed for music input. A simulation chain consisting of two parallel paths, simulating normal hearing and electrical hearing respectively, is utilized. One thesis objective is to configure and develop the sound processor of the CI chain to analyze different compression and channel-selection strategies to optimally capture the characteristics of music signals. A new set of knee points (KPs) for the compression function is investigated, together with clustering of frequency bands. The N-of-M electrode selection strategy models the effect of a psychoacoustic masking threshold. In order to evaluate the performance of the CI model, the normal-hearing model is considered a true reference. Similarity between the resulting neurograms of the respective models is measured using the image analysis method Neurogram Similarity Index Measure (NSIM). The validation and resolution of NSIM is another objective of the thesis. Results indicate that NSIM is sensitive to no-activity regions in the neurograms and has difficulty capturing small CI changes, i.e. compression settings. Further verification of the model setup is suggested, together with investigation of an alternative optimal electric hearing reference and/or objective similarity measure
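NSIM is an SSIM-derived image similarity measure, so its core can be sketched compactly. The toy Python version below is a single-window simplification under assumed constants (the published measure scans a small window across the time-frequency neurogram and averages the local scores); it combines a luminance term with a structure (correlation) term:

```python
def nsim(a, b, c1=0.01, c2=0.03):
    """Simplified single-window NSIM between two flattened neurograms.

    luminance = (2*mu_a*mu_b + c1) / (mu_a^2 + mu_b^2 + c1)
    structure = (cov + c2) / (sigma_a*sigma_b + c2)
    The small constants c1, c2 stabilize the ratios near zero.
    """
    n = len(a)
    mu_a = sum(a) / n
    mu_b = sum(b) / n
    var_a = sum((x - mu_a) ** 2 for x in a) / n
    var_b = sum((x - mu_b) ** 2 for x in b) / n
    cov = sum((x - mu_a) * (y - mu_b) for x, y in zip(a, b)) / n
    lum = (2 * mu_a * mu_b + c1) / (mu_a ** 2 + mu_b ** 2 + c1)
    struct = (cov + c2) / (var_a ** 0.5 * var_b ** 0.5 + c2)
    return lum * struct

# Identical neurograms score 1.0; an anti-correlated pattern scores
# far lower, since the structure term goes negative.
```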

    Approximation and Optimization of an Auditory Model for Realization in VLSI Hardware

    The Auditory Image Model (AIM) is a software tool set developed to functionally model the role of the ear in the human hearing process. AIM includes detailed filter equations for the major functional portions of the ear. Currently, AIM runs on a workstation and requires 10 to 100 times real time to process audio information and produce an auditory image. An all-digital approximation of AIM suitable for implementation in very large scale integrated circuits is presented. This document details the mathematical models of AIM and the approximations and optimizations used to simplify the filtering and signal processing performed by AIM. Included are the details of an efficient multi-rate architecture designed for sub-micron VLSI technology to carry out the approximated equations. Finally, simulation results are included which indicate that the architecture, when implemented in 0.8 µm CMOS VLSI, will sustain real-time operation on a 32-channel system. The same tests also indicate that the chip will occupy approximately 3.3 mm² and consume approximately 18 mW. The details of a new and efficient method for computing an approximate logarithm (base two) of binary integers are also presented. The approximate logarithm algorithm is used to convert sound energy into millibels quickly and with low power. Additionally, the algorithm is easily extended to compute an approximate logarithm in base ten, which broadens the class of problems to which it may be applied

    A Computational Approach for the Understanding of Stochastic Resonance Phenomena in the Human Auditory System

    Stochastic resonance (SR) is a nonlinear phenomenon by which the introduction of noise into a system causes a counterintuitive increase in detection performance for a signal. SR has been extensively studied in different physical and biological systems, including the human auditory system (HAS), where a positive role for noise has been recognized both at the level of the peripheral auditory system (PAS) and of the central nervous system (CNS). This dualism regarding the mechanistic underpinnings of the SR phenomenon in the HAS is confirmed by discrepancies among different experimental studies, and is reflected in disagreement about how the phenomenon can be exploited to improve prostheses and aids for people with hearing loss. The HAS is one of the human body's most complex sensory systems, and SR involves system nonlinearities, so characterizing SR in the HAS is very challenging and many efforts are being made to characterize the mechanism as a whole. Current computational modelling tools make it possible to investigate the phenomenon separately in the CNS and in the PAS, simplifying the analysis of the mechanisms involved. In this work we present a computational model of the PAS supporting SR, which shows improved detection of sounds when input noise is added. As a preparatory step, we provided the system with a test signal at the edge of the hearing threshold. Next, we repeated the experiment adding background noise at different intensities. We found an increase in relative spike count in the frequency bands of the test signal when input noise was added, confirming that the maximum value is obtained within a specific range of added noise, whereas further increases in noise intensity only degrade signal detection and information content
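The threshold-detector form of stochastic resonance can be reproduced in a few lines. In this illustrative Python sketch (all parameter values are arbitrary choices, not taken from the study), a subthreshold sine is passed through a hard threshold, and the correlation between signal and detector output serves as a crude detection measure: it is zero without noise, peaks at a moderate noise level, and falls again as noise dominates.

```python
import math
import random

def detection_score(noise_sigma, threshold=1.0, amp=0.5, n=20000, seed=1):
    """Correlation between a subthreshold sine (amp < threshold) and
    the binary output of a hard-threshold detector driven by
    signal + Gaussian noise."""
    rng = random.Random(seed)
    sig = [amp * math.sin(2 * math.pi * k / 100) for k in range(n)]
    out = [1.0 if s + rng.gauss(0, noise_sigma) > threshold else 0.0
           for s in sig]
    mu_s = sum(sig) / n
    mu_o = sum(out) / n
    cov = sum((s - mu_s) * (o - mu_o) for s, o in zip(sig, out)) / n
    var_s = sum((s - mu_s) ** 2 for s in sig) / n
    var_o = sum((o - mu_o) ** 2 for o in out) / n
    if var_o == 0:
        return 0.0  # detector never fires: nothing detected
    return cov / math.sqrt(var_s * var_o)

# No noise: the 0.5-amplitude sine never crosses the threshold of 1.0,
# so nothing is detected; moderate noise reveals the signal; heavy
# noise drowns it again -- the stochastic resonance signature.
scores = {sigma: detection_score(sigma) for sigma in (0.0, 0.4, 3.0)}
```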

    Coding Strategies for Cochlear Implants Under Adverse Environments

    Cochlear implants are electronic prosthetic devices that restore partial hearing in patients with severe to profound hearing loss. Although most coding strategies have significantly improved the perception of speech in quiet listening conditions, limitations remain on speech perception under adverse environments such as background noise, reverberation and band-limited channels. We propose strategies that improve the intelligibility of speech transmitted over telephone networks, reverberated speech, and speech in the presence of background noise. For telephone-processed speech, we examine the effects of adding low-frequency and high-frequency information to the band-limited telephone speech. Four listening conditions were designed to simulate the receiving frequency characteristics of telephone handsets. Results indicated improvement in cochlear implant and bimodal listening when telephone speech was augmented with high-frequency information, and this study therefore provides support for the design of algorithms that extend the bandwidth towards higher frequencies. The results also indicated an added benefit from hearing aids for bimodal listeners in all four listening conditions. Speech understanding in acoustically reverberant environments is always a difficult task for hearing-impaired listeners. Reverberated sound consists of the direct sound, early reflections and late reflections; late reflections are known to be detrimental to speech intelligibility. In this study, we propose a reverberation suppression strategy based on spectral subtraction (SS) to suppress the reverberant energy from late reflections. Results from listening tests for two reverberant conditions (RT60 = 0.3 s and 1.0 s) indicated significant improvement when stimuli were processed with the SS strategy. 
The proposed strategy operates with little to no prior information on the signal and the room characteristics and can therefore potentially be implemented in real-time CI speech processors. For speech in background noise, we propose a mechanism underlying the contribution of harmonics to the benefit of electroacoustic stimulation in cochlear implants. The proposed strategy is based on harmonic modeling and uses a synthesis-driven approach to regenerate the harmonics in voiced segments of speech. Based on objective measures, results indicated improvement in speech quality. This study warrants further work on the development of algorithms to regenerate harmonics of voiced segments in the presence of noise
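The core update rule of a magnitude-domain spectral subtraction strategy is one line per frequency bin. The illustrative Python below is a generic textbook form, not the dissertation's exact algorithm (parameter names and values are our own, and the dissertation additionally has to estimate the late-reverberant spectrum blindly): the estimated late-reflection magnitude is subtracted from each bin and the result is floored to a fraction of the input to avoid musical-noise artifacts.

```python
def spectral_subtract(frame_mags, late_est, alpha=1.0, beta=0.05):
    """Magnitude spectral subtraction with a spectral floor.

    frame_mags : magnitude spectrum of the current frame (one value per bin)
    late_est   : estimated late-reverberant magnitude per bin
    alpha      : over-subtraction factor
    beta       : spectral floor, as a fraction of the input magnitude
    """
    return [max(m - alpha * r, beta * m) for m, r in zip(frame_mags, late_est)]

# Bins with little reverberant energy pass almost unchanged; a bin
# dominated by late reflections is clamped to the floor instead of
# going negative.
out = spectral_subtract([1.0, 0.2, 0.5], [0.3, 0.3, 0.1])
print([round(v, 3) for v in out])  # → [0.7, 0.01, 0.4]
```

The cleaned magnitudes would then be recombined with the noisy phase and transformed back to the time domain, which is what keeps the method cheap enough for a real-time CI processor.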

    Optimizing Speech Recognition Using a Computational Model of Human Hearing: Effect of Noise Type and Efferent Time Constants

    Physiological and psychophysical methods allow for an extended investigation of ascending (afferent) neural pathways from the ear to the brain in mammals, and of their role in enhancing signals in noise. However, there is increasing interest in the descending (efferent) neural fibers of the mammalian auditory pathway. This efferent pathway operates via the olivocochlear system, modifying auditory processing through cochlear innervation and enhancing the human ability to detect sounds in noisy backgrounds. Effective speech intelligibility may depend on a complex interaction between efferent time constants and the type of background noise. In this study, an auditory model with efferent-inspired processing provided the front end to an automatic speech recognition (ASR) system, used as a tool to evaluate speech recognition across changes in time constant (50 to 2000 ms) and background noise type (unmodulated and modulated noise). With efferent activation, maximal speech recognition improvement (for both noise types) occurred at signal-to-noise ratios around 10 dB, characteristic of real-world speech-listening situations. Net speech improvement due to efferent activation (NSIEA) was smaller in modulated noise than in unmodulated noise. For unmodulated noise, NSIEA increased with increasing time constant. For modulated noise, NSIEA increased for time constants up to 200 ms but remained similar for longer time constants, consistent with the speech-envelope modulation times important to speech recognition in modulated noise. The model improves our understanding of the complex interactions involved in speech recognition in noise, and could be used to simulate the difficulties of speech perception in noise that result from different types of hearing loss
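The role of the efferent time constant can be illustrated with a first-order feedback loop: a running estimate of the input level, smoothed with time constant tau, attenuates the input, much as olivocochlear feedback reduces cochlear gain. This toy Python sketch is only loosely inspired by the study's auditory model; the function name, gain rule and parameter values are all assumptions for illustration:

```python
import math

def efferent_gain(levels, tau_ms, dt_ms=1.0, strength=0.5):
    """First-order efferent feedback sketch.

    A leaky integrator with time constant tau_ms tracks the input level;
    the tracked level divides down the instantaneous input. A long tau
    settles on the steady noise floor and ignores fast speech-envelope
    dips; a short tau follows the envelope itself.
    """
    alpha = math.exp(-dt_ms / tau_ms)  # per-sample smoothing factor
    est = 0.0
    out = []
    for x in levels:
        est = alpha * est + (1 - alpha) * x  # running level estimate
        out.append(x / (1.0 + strength * est))  # feedback attenuation
    return out

# With a constant unit-level input the attenuation settles at
# 1 / (1 + strength), regardless of tau; tau only sets how fast the
# loop gets there and how it reacts to modulated noise.
steady = efferent_gain([1.0] * 2000, tau_ms=200.0)
```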

    Idealized computational models for auditory receptive fields

    Full text link
    This paper presents a theory by which idealized models of auditory receptive fields can be derived in a principled axiomatic manner, from a set of structural properties that enable invariance of receptive field responses under natural sound transformations and ensure internal consistency between spectro-temporal receptive fields at different temporal and spectral scales. For defining a time-frequency transformation of a purely temporal sound signal, it is shown that the framework allows for a new way of deriving the Gabor and Gammatone filters, as well as a novel family of generalized Gammatone filters with additional degrees of freedom to obtain different trade-offs between the spectral selectivity and the temporal delay of time-causal temporal window functions. When applied to the definition of a second layer of receptive fields from a spectrogram, the framework leads to two canonical families of spectro-temporal receptive fields, in terms of spectro-temporal derivatives of either spectro-temporal Gaussian kernels for non-causal time, or the combination of a time-causal generalized Gammatone filter over the temporal domain and a Gaussian filter over the log-spectral domain. For each filter family, the spectro-temporal receptive fields can be either separable over the time-frequency domain or adapted to local glissando transformations that represent variations in logarithmic frequencies over time. Within each domain of either non-causal or time-causal time, these receptive field families are derived by uniqueness from the assumptions. It is demonstrated how the presented framework allows for computation of basic auditory features for audio processing, and that it leads to predictions about auditory receptive fields with good qualitative similarity to biological receptive fields measured in the inferior colliculus (ICC) and primary auditory cortex (A1) of mammals. Comment: 55 pages, 22 figures, 3 tables
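The classical Gammatone filter that the paper generalizes has the impulse response g(t) = t^(n-1) e^(-2πbt) cos(2πf_c t). A minimal Python sketch of that standard form (the bandwidth parameter b is tied here to the ERB scale via the conventional 1.019 factor, an assumption of this illustration; the paper's generalized family adds further degrees of freedom):

```python
import math

def gammatone(t, fc, order=4, erb=None):
    """Gammatone impulse response g(t) = t^(n-1) e^(-2*pi*b*t) cos(2*pi*fc*t).

    fc is the centre frequency in Hz and t the time in seconds. If no
    ERB is supplied, it is approximated by the Glasberg-Moore fit
    ERB(fc) ≈ 24.7 + 0.108 * fc, and b = 1.019 * ERB as is customary
    for 4th-order gammatone auditory filters.
    """
    bandwidth = (24.7 + 0.108 * fc) if erb is None else erb
    b = 1.019 * bandwidth
    return (t ** (order - 1)) * math.exp(-2 * math.pi * b * t) \
        * math.cos(2 * math.pi * fc * t)

# Sampling g(t) on a time grid gives the FIR taps of one auditory
# channel; the t^(n-1) envelope makes the response start at zero and
# rise smoothly, i.e. the filter is time-causal.
taps = [gammatone(n / 16000, 1000.0) for n in range(512)]
```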

    Analogue CMOS Cochlea Systems: A Historic Retrospective
