
    Modeling auditory coding: from sound to spikes


    Development of a flexible modeling environment for evaluating subcortical auditory systems

    Cochlear synaptopathy (CS) is an emerging topic of hearing research that focuses on peripheral pathologies which leave pure-tone audiometric thresholds (PTA) unchanged but significantly impair threshold-independent hearing performance. Primary among the proposed mechanisms of CS is selective damage to low spontaneous rate (low-SR) fibers of the auditory nerve (AN), yet no noninvasive quantitative measure of CS exists in humans. Recent work has established a relationship between Wave V latencies and a psychophysical measure of CS in humans, but current biophysical models do not fully account for the observed results. To begin to address the discrepancies between these experiments and biophysical models of hearing, a new comprehensive modeling tool was developed which allows parametric exploration of the modeling space and direct comparison between major models of the auditory nerve and brainstem. More sophisticated models of the midbrain and brainstem were incorporated into the new tool. Incorporating recent anatomical and electrophysiological results, which suggest a varying contribution of low-SR fibers at different audible frequencies, further improves modeling fidelity.
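The kind of parametric exploration described above can be illustrated with a toy rate-level sketch of a mixed low-SR/high-SR fiber population. All function names, thresholds, and rates below are illustrative assumptions, not the tool's actual models:

```python
import numpy as np

def fiber_rate(level_db, threshold_db, saturation):
    """Toy rate-level function: sigmoid from near-zero rate up to saturation."""
    return saturation / (1.0 + np.exp(-(level_db - threshold_db) / 5.0))

def population_rate(level_db, w_low, w_high):
    """Weighted sum of low-SR (high-threshold) and high-SR (low-threshold) fibers."""
    low = fiber_rate(level_db, threshold_db=45.0, saturation=150.0)
    high = fiber_rate(level_db, threshold_db=20.0, saturation=250.0)
    return w_low * low + w_high * high

levels = np.arange(0, 101, 10)
healthy = population_rate(levels, w_low=0.3, w_high=0.7)
cs = population_rate(levels, w_low=0.0, w_high=0.7)  # low-SR fibers silenced

# Near threshold (low levels) the two responses are almost identical, because
# high-SR fibers dominate there; at high levels the synaptopathic population
# falls clearly below the healthy one -- the audiogram-sparing pattern CS predicts.
```

This reproduces, in caricature, why PTA thresholds can stay normal while suprathreshold coding degrades.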

    On the mechanism of response latencies in auditory nerve fibers

    Despite structural differences of the middle and inner ears, the latency pattern of auditory nerve fibers in response to an identical sound has been found to be similar across numerous species. Studies have shown this similarity even in species with distinct cochleae or without a basilar membrane. This stimulus-, neuron-, and species-independent similarity of latency cannot be explained simply by the concept of cochlear traveling waves, which is generally accepted as the main cause of the neural latency pattern. An original concept, the Fourier pattern, is defined to characterize a feature of temporal processing, specifically phase encoding, that is not readily apparent in more conventional analyses. The pattern is created by marking the first amplitude maximum of each sinusoid component of the stimulus, thereby encoding phase information. The hypothesis is that the hearing organ serves as a running analyzer whose output reflects synchronization of auditory neural activity consistent with the Fourier pattern. A combination of experimental, correlational, and meta-analytic approaches is used to test the hypothesis. Manipulations included phase encoding and stimuli chosen to test their effects on the predicted latency pattern. Animal studies in the literature using the same stimuli were then compared to determine the degree of relationship. The results show that each marking accounts for a large percentage of the corresponding peak latency in the peristimulus-time histogram. For each of the stimuli considered, the latency predicted by the Fourier pattern is highly correlated with the observed latency in the auditory nerve fibers of representative species. The results suggest that the hearing organ analyzes not only the amplitude spectrum but also phase information in its Fourier analysis, distributing specific spikes among auditory nerve fibers and within a single unit.
    This phase-encoding mechanism is proposed to be the common mechanism that, despite species differences in peripheral auditory hardware, accounts for the considerable similarity of latency-by-frequency functions across species, in turn assuring optimal phase encoding. The mechanism also has the potential to improve phase encoding in cochlear implants.
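The marking rule described above can be written down directly: a component sin(2πft + φ) reaches its amplitude maxima where the instantaneous phase equals π/2 (mod 2π). A minimal sketch of that computation (the function name is an assumption for illustration, not the author's code):

```python
import numpy as np

def first_maximum_time(freq_hz, phase_rad=0.0):
    """Time of the first peak of sin(2*pi*f*t + phase) for t >= 0.

    Maxima occur where 2*pi*f*t + phase = pi/2 (mod 2*pi), so the first
    one after t = 0 is ((pi/2 - phase) mod 2*pi) / (2*pi*f).
    """
    phase_to_peak = (np.pi / 2.0 - phase_rad) % (2.0 * np.pi)
    return phase_to_peak / (2.0 * np.pi * freq_hz)

# A sine-phase 1 kHz component peaks a quarter cycle (0.25 ms) after onset,
# while a cosine-phase component (phase = pi/2) peaks immediately at t = 0.
t_sine = first_maximum_time(1000.0)
t_cos = first_maximum_time(1000.0, np.pi / 2.0)
```

Marking one such time per component yields a latency-versus-frequency pattern that depends on the stimulus phase spectrum, which is the comparison the abstract makes against observed fiber latencies.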

    A Comparative Study of Computational Models of Auditory Peripheral System

    An in-depth study of computational models of the auditory periphery from three research groups (Carney, Meddis, and Hemmert) is presented here. The aim is to find out which model fits the data best and which properties of the models are relevant for speech recognition. As a first approximation, different tests with tones were performed with seven models. We then evaluated the models in the presence of speech: two models were studied in depth through an automatic speech recognition (ASR) system, in clean and noisy backgrounds and over a range of sound levels. The post-stimulus time histogram shows how the models with improved offset adaptation exhibit the "dead time". The synchronization evaluation for tones and modulated signals, in turn, highlighted the better results of the models with offset adaptation. Finally, tuning curves and Q10dB values (added to the ASR results) indicated, on the contrary, that sharp frequency selectivity is not a property needed for speech recognition. The evaluation of the models with ASR also demonstrated that models with offset adaptation outperform the others, and that the choice of cat or human tuning is immaterial for speech recognition. From these results, we conclude that the model that best fits the data is the one described by Zilany et al. (2009), and that the indispensable property for speech recognition is good offset adaptation, which yields better synchronization and better ASR results. For the ASR system it makes little difference whether offset adaptation comes from a shift of the auditory nerve response or from power-law adaptation in the synapse.
    Vendrell Llopis, N. (2010). A Comparative Study of Computational Models of Auditory Peripheral System. http://hdl.handle.net/10251/20433
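The synchronization evaluations mentioned here are commonly quantified with vector strength, the classic phase-locking metric of Goldberg and Brown. A generic sketch of that metric (not code from this study):

```python
import numpy as np

def vector_strength(spike_times_s, freq_hz):
    """Vector strength: 1.0 = perfect phase locking to the stimulus frequency,
    values near 0.0 = spikes spread uniformly over the stimulus cycle.

    Each spike time is mapped to a phase of the cycle and the resulting
    unit vectors on the circle are averaged; the result is the magnitude.
    """
    phases = 2.0 * np.pi * freq_hz * np.asarray(spike_times_s)
    return np.abs(np.mean(np.exp(1j * phases)))

f = 500.0                                 # stimulus frequency (Hz)
locked = np.arange(100) / f               # one spike per cycle, identical phase
uniform = np.random.default_rng(0).uniform(0.0, 1.0, 10000)  # unlocked spikes

vs_locked = vector_strength(locked, f)    # near 1.0
vs_uniform = vector_strength(uniform, f)  # near 0.0
```

Comparing this value across models for tones and modulated signals is one way the "better synchronization" of offset-adapting models can be made concrete.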

    A multimodal neuroimaging study of somatosensory system

    This thesis is the result of training at the Magnetoencephalography (MEG) lab of the Center for Mind/Brain Sciences (CIMeC) at the University of Trento. The final goal of the analysis was to answer whether MEG can capture activity from subcortical brain areas and follow the flow of neural information along the fibers up to the cortex. The first aim of the thesis is to describe the design and execution of an experiment on the somatosensory system that I carried out at CIMeC. The somatosensory system was activated by applying electrical stimulation to the median nerve, and the MEG signal was recorded during stimulation. MRI and diffusion MRI data of the subject were also collected. A further aim of the thesis is to describe the analysis of the collected data: MEG source localization was performed, together with a Monte Carlo simulation, and the results were integrated with information obtained from diffusion MRI. The results were satisfactory, although not conclusive.

    Techniques of EMG signal analysis: detection, processing, classification and applications

    Electromyography (EMG) signals can be used for clinical/biomedical applications, Evolvable Hardware Chip (EHW) development, and modern human-computer interaction. EMG signals acquired from muscles require advanced methods for detection, decomposition, processing, and classification. The purpose of this paper is to illustrate the various methodologies and algorithms for EMG signal analysis and to provide efficient and effective ways of understanding the signal and its nature. We further point out some hardware implementations using EMG, focusing on applications related to prosthetic hand control, grasp recognition, and human-computer interaction. A comparison study is also given to show the performance of various EMG signal analysis methods. This paper gives researchers a good understanding of the EMG signal and its analysis procedures; this knowledge will help them develop more powerful, flexible, and efficient applications.
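A standard step in the processing stage surveyed here is envelope extraction by full-wave rectification followed by a moving RMS window. A minimal sketch with illustrative parameter values (not taken from the paper):

```python
import numpy as np

def emg_envelope(signal, fs_hz, window_ms=100.0):
    """Moving-RMS envelope of a raw EMG signal.

    Squares the signal (full-wave rectification is implicit in squaring),
    averages over a sliding window, and takes the square root.
    """
    n = max(1, int(fs_hz * window_ms / 1000.0))
    squared = np.asarray(signal, dtype=float) ** 2
    kernel = np.ones(n) / n
    return np.sqrt(np.convolve(squared, kernel, mode="same"))

# Synthetic demo: zero-mean noise whose amplitude jumps when the "muscle"
# activates in the second half of the recording.
fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
burst = (t > 1.0).astype(float)
raw = np.random.default_rng(1).normal(0.0, 0.05 + 0.5 * burst)
env = emg_envelope(raw, fs)
# The envelope is markedly higher during the burst than at rest, which is
# what downstream detection and classification stages operate on.
```

Real pipelines typically add band-pass filtering and notch filtering before this step; the sketch shows only the envelope stage.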

    An Electrode Stimulation Strategy for Cochlear Implants Based on a Model of Human Hearing

    Cochlear implants (CIs) combined with professional rehabilitation have enabled several hundred thousand hearing-impaired individuals to re-enter the world of verbal communication. Though very successful, current CI systems seem to have reached their peak potential. The fact that most recipients claim not to enjoy listening to music and cannot carry on a conversation in noisy or reverberant environments shows that there is still room for improvement. This dissertation presents a new cochlear implant signal processing strategy called Stimulation based on Auditory Modeling (SAM), which is based entirely on a computational model of the human peripheral auditory system.
    SAM was evaluated in three ways: through simplified perception models of CI listeners, with five cochlear implant users, and with 27 normal-hearing subjects using an acoustic model of CI perception. Results were always compared to those obtained with the Advanced Combination Encoder (ACE), today's most prevalent CI strategy. First simulations showed that speech intelligibility of CI users fitted with SAM should be just as good as that of listeners fitted with ACE. Furthermore, SAM provides more accurate binaural cues, which can potentially enhance the sound source localization ability of bilaterally fitted implantees. Simulations also revealed an increased amount of temporal pitch information provided by SAM. The subsequent pilot study with five CI users revealed several benefits of using SAM.
    First, there was a significant improvement in pitch discrimination of pure tones and sung vowels. Second, CI users fitted with a contralateral hearing aid reported a more natural sound of both speech and music. Third, all subjects became accustomed to SAM in a very short time (on the order of 10 to 30 minutes), which is particularly important given that a successful change of CI strategy typically takes weeks to months. The test with 27 normal-hearing listeners using an acoustic model of CI perception delivered further evidence of improved pitch discrimination with SAM compared to ACE. Although SAM is not yet a market-ready alternative, it strives to pave the way for future strategies based on auditory models and is a promising candidate for further research.
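For context, the ACE strategy that SAM is compared against is an "n-of-m" scheme: of m filterbank channel envelopes, only the n largest are stimulated in each cycle. A sketch of just that selection step (real ACE also includes filterbank analysis, loudness mapping, and pulse sequencing):

```python
import numpy as np

def select_maxima(envelopes, n):
    """Return a copy of the channel envelopes with all but the n largest zeroed,
    mimicking the maxima-selection step of an n-of-m strategy such as ACE."""
    env = np.asarray(envelopes, dtype=float)
    keep = np.argsort(env)[-n:]        # indices of the n largest channels
    out = np.zeros_like(env)
    out[keep] = env[keep]
    return out

# One analysis frame of 8 hypothetical channel envelopes; 4 maxima are kept.
frame = np.array([0.1, 0.8, 0.05, 0.6, 0.3, 0.02, 0.9, 0.4])
stimulated = select_maxima(frame, n=4)
# Channels with envelopes 0.9, 0.8, 0.6, and 0.4 survive; the rest are zeroed.
```

SAM differs by deriving its stimulation pattern from an auditory model rather than from raw envelope maxima, which is where the improved temporal pitch cues described above come from.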