2,489 research outputs found

    Microelectronic circuits for noninvasive ear type assistive devices

    An ear-type system and its circuit realization are investigated for application as a new class of assistive devices. Auditory brainstem responses obtained from clinical hearing measurements are used to develop ear-type systems that mimic the physical and behavioral characteristics of an individual's auditory system. When effects of hearing loss or disorder can be detected in the measured responses, normal and impaired characteristics of the human auditory system can be differentiated, from which a new noninvasive way of correcting these undesired effects is proposed. The ear-type model of the auditory brainstem response is built on an adaptation of a nonlinear neural network architecture, and the correction system is realized using the derived inverse of that network. Microelectronic circuits implementing both systems are designed and simulated, showing the potential for development into a hearing-aid-type device that could help hearing-impaired patients in an alternative, noninvasive way.
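
    To make the forward/inverse idea concrete, here is a minimal sketch, not the authors' circuit-level implementation: a small forward network stands in for the ear-type ABR model, and a second network is trained to approximate its inverse for correction. The feature dimension, layer sizes and training data below are illustrative assumptions.

```python
# Hypothetical sketch of the forward/inverse neural-network idea, assuming
# paired stimulus/response feature vectors; all shapes are illustrative.
import torch
import torch.nn as nn

class EarModel(nn.Module):
    """Forward model: maps stimulus features to simulated ABR features."""
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))
    def forward(self, x):
        return self.net(x)

class InverseModel(nn.Module):
    """Inverse model: maps ABR features back toward the stimulus domain."""
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))
    def forward(self, y):
        return self.net(y)

ear, inv = EarModel(), InverseModel()
opt = torch.optim.Adam(inv.parameters(), lr=1e-3)
x = torch.randn(256, 16)            # stand-in stimulus features
with torch.no_grad():
    y = ear(x)                      # simulated ABR responses (fixed targets)
for _ in range(200):                # train the inverse to undo the forward map
    opt.zero_grad()
    loss = nn.functional.mse_loss(inv(y), x)
    loss.backward()
    opt.step()
```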

    Plastic Effect of Tetanic Stimulation on Auditory Evoked Potentials

    The goal of this thesis was to investigate tetanic acoustic stimulation (TS) and its effects on the human auditory system. Two experiments were completed to study the effects of a 2-minute, 1 kHz TS on the auditory brainstem and cortex using auditory evoked potentials. At the cortical level, the auditory long latency response (ALLR) was recorded and its P1, N1, and P2 components were measured; in the brainstem, the amplitude of the 80 Hz auditory steady-state response (ASSR) was measured. TS induced significant changes in ALLR component latencies and a significant reduction in ASSR amplitude, but these changes were not specific to the 1 kHz TS frequency.
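
    For illustration only, the following sketch shows how an 80 Hz ASSR amplitude can be read off an averaged EEG epoch with an FFT; the sampling rate and the synthetic signal are assumptions, not the study's recordings.

```python
# Illustrative sketch (not the author's pipeline): estimating 80 Hz ASSR
# amplitude from a 1 s averaged EEG epoch via an FFT.
import numpy as np

fs = 1000.0                          # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
# Synthetic averaged response: 80 Hz steady-state component plus noise
avg = 0.5 * np.sin(2 * np.pi * 80 * t) + 0.2 * np.random.randn(t.size)

spec = np.fft.rfft(avg) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
bin80 = np.argmin(np.abs(freqs - 80.0))
amplitude = 2 * np.abs(spec[bin80])  # single-sided amplitude at 80 Hz
print(f"ASSR amplitude at {freqs[bin80]:.0f} Hz: {amplitude:.3f} (a.u.)")
```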

    Objective auditory brainstem response classification using machine learning

    The objective of this study was to use machine learning, in the form of a deep neural network, to objectively classify paired auditory brainstem response (ABR) waveforms as 'clear response', 'inconclusive' or 'response absent'. A deep convolutional neural network was constructed and fine-tuned using stratified 10-fold cross-validation on 190 paired ABR waveforms, and the final model was evaluated on a test set of 42 paired waveforms. The full dataset comprised 232 paired ABR waveforms recorded from eight normal-hearing individuals, obtained from the PhysioBank database. The paired waveforms were independently labelled by two audiological scientists to train the network and evaluate its performance. The trained network classified paired ABR waveforms with 92.9% accuracy; sensitivity and specificity were 92.9% and 96.4%, respectively. This neural network may have clinical utility in assisting clinicians with waveform classification for hearing threshold estimation. Further evaluation on a large clinically obtained dataset would provide additional validation of its clinical potential in diagnostic adult testing, newborn testing and automated newborn hearing screening.
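
    As a hedged sketch of the general approach, not the published architecture, a small 1-D convolutional network can be trained and scored under stratified 10-fold cross-validation; the input length, channel layout and labels below are stand-ins.

```python
# Minimal sketch: 1-D CNN over paired ABR waveforms with stratified 10-fold
# cross-validation. Data are random stand-ins; shapes are assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import StratifiedKFold

class AbrCnn(nn.Module):
    """Small 1-D CNN; the two input channels hold a paired waveform."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, 9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, 9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classify = nn.Linear(32, n_classes)

    def forward(self, x):                  # x: (batch, 2, time)
        return self.classify(self.features(x).squeeze(-1))

# Stand-in data: 190 paired waveforms, 250 samples each, 3 classes
X = np.random.randn(190, 2, 250).astype(np.float32)
y = np.random.randint(0, 3, size=190)

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for fold, (tr, va) in enumerate(skf.split(X, y)):
    model = AbrCnn()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(20):                    # brief full-batch training per fold
        opt.zero_grad()
        loss = nn.functional.cross_entropy(
            model(torch.from_numpy(X[tr])), torch.from_numpy(y[tr]))
        loss.backward()
        opt.step()
    with torch.no_grad():
        pred = model(torch.from_numpy(X[va])).argmax(1).numpy()
    print(f"fold {fold}: accuracy {(pred == y[va]).mean():.2f}")
```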

    A convolutional neural-network model of human cochlear mechanics and filter tuning for real-time applications

    Auditory models are commonly used as feature extractors for automatic speech-recognition systems or as front-ends for robotics, machine-hearing and hearing-aid applications. Although auditory models can capture the biophysical and nonlinear properties of human hearing in great detail, such biophysical models are computationally expensive and cannot be used in real-time applications. We present a hybrid approach in which convolutional neural networks are combined with computational neuroscience to yield a real-time end-to-end model of human cochlear mechanics, including level-dependent filter tuning (CoNNear). The CoNNear model was trained on acoustic speech material, and its performance and applicability were evaluated using (unseen) sound stimuli commonly employed in cochlear mechanics research. CoNNear accurately simulates human cochlear frequency selectivity and its dependence on sound intensity, an essential quality for robust speech intelligibility at negative speech-to-background-noise ratios. The CoNNear architecture is based on parallel and differentiable computations and can achieve real-time human performance. These features will enable the next generation of human-like machine-hearing applications.
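
    The following is a schematic sketch of a CoNNear-style encoder-decoder, with layer counts, kernel sizes and channel numbers chosen for illustration rather than taken from the paper: strided convolutions compress the audio waveform, and transposed convolutions expand it into multi-channel cochlear (basilar-membrane) outputs, keeping the whole model differentiable and parallelizable.

```python
# Schematic CoNNear-style encoder-decoder; all hyperparameters are
# illustrative assumptions, not the published architecture.
import torch
import torch.nn as nn

class CochleaNet(nn.Module):
    def __init__(self, n_channels=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 32, 16, stride=2, padding=7), nn.Tanh(),
            nn.Conv1d(32, 64, 16, stride=2, padding=7), nn.Tanh(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(64, 64, 16, stride=2, padding=7), nn.Tanh(),
            nn.ConvTranspose1d(64, n_channels, 16, stride=2, padding=7),
        )
    def forward(self, audio):              # audio: (batch, 1, samples)
        return self.decoder(self.encoder(audio))  # (batch, channels, samples)

model = CochleaNet()
wave = torch.randn(1, 1, 2048)             # one stand-in audio frame
bm = model(wave)                           # simulated basilar-membrane outputs
print(bm.shape)                            # torch.Size([1, 64, 2048])
```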

    Brainstem Auditory Evoked Potentials And Network Dysfunction In Mild Traumatic Brain Injury

    Brainstem Auditory Evoked Potentials and Network Dysfunction in Mild Traumatic Brain Injury. Theresa L. Williamson BS (1,2), Amanda R. Rabinowitz PhD (1), Victoria E. Johnson MD (1), John A. Wolf PhD (1), Michael L. McGarvey MD (3), Douglas H. Smith MD (1). (1) University of Pennsylvania Department of Neurosurgery, Philadelphia, PA 19104; (2) Yale School of Medicine, New Haven, CT 06511; (3) University of Pennsylvania Department of Neurology, Philadelphia, PA 19104. Introduction: Mild traumatic brain injury (mTBI) challenges clinicians because symptoms do not map in a lesion-specific manner and there is no objective diagnostic measure. Diffuse axonal injury is a main mechanism of injury in mTBI [1, 2]. Injury to axons is proposed to alter the brain's networks and to underlie common symptoms such as slow processing speed and poor concentration and memory. Clinical studies show that the auditory network is also commonly disrupted in mTBI; the auditory pathway is therefore a useful surrogate for studying network dysfunction as it relates to axonal pathology and signal processing speed. Methods: Decades of research using a rotational acceleration injury model in pigs, scaled to the known mechanical loading conditions in humans, have demonstrated multifocal swelling of axons [1]. This study uses this established model of mTBI to relate diffuse axonal injury to the physiological functioning of a network. The technique is to record the latency, amplitude and morphology of the auditory evoked potential response before, immediately after, and three days after injury, and to conduct a histopathological investigation of the brainstem auditory pathway for evidence of axonal injury. Results: We identified increased latency and morphological changes of the brainstem auditory evoked potential waveforms in swine, immediately after and at three days post-injury compared with pre-injury control measurements, corresponding to pathology in the upper brainstem. Additionally, we identified axonal pathology, indicated by amyloid-precursor-protein-positive axonal swellings, in the region of the lateral lemniscus and inferior colliculus. Conclusions: These data show that, in a clinically relevant model of mild traumatic brain injury, damage to axons in a pathway corresponds to a functional delay in that pathway's processing. Identifying a link between axonal pathology and function in the auditory pathway helps represent network injury throughout the brain, shedding light on the diffuse nature of mTBI that underlies a group of symptoms that are difficult both to diagnose and to treat.
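
    For readers unfamiliar with the measurement, here is an illustrative sketch of how the latency of an evoked-potential wave can be estimated as the time of a peak within a physiologically plausible window; the waveform, window and threshold are synthetic assumptions, not the study's data.

```python
# Illustrative only: wave latency as the time of the largest peak inside an
# assumed search window, applied to a synthetic averaged BAEP.
import numpy as np
from scipy.signal import find_peaks

fs = 20000.0                                 # sampling rate in Hz (assumed)
t = np.arange(0, 0.010, 1 / fs) * 1000       # 10 ms epoch, in milliseconds
# Synthetic averaged BAEP with a "wave V"-like peak near 5.6 ms
ep = 0.3 * np.exp(-0.5 * ((t - 5.6) / 0.25) ** 2) + 0.02 * np.random.randn(t.size)

lo, hi = np.searchsorted(t, [5.0, 7.0])      # assumed wave V window (ms)
peaks, props = find_peaks(ep[lo:hi], height=0.1)
if peaks.size:
    latency = t[lo + peaks[np.argmax(props["peak_heights"])]]
    print(f"wave V latency: {latency:.2f} ms")
```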

    Computational modelling of neural mechanisms underlying natural speech perception

    Humans are highly skilled at the analysis of complex auditory scenes. In particular, the human auditory system is characterized by remarkable robustness to noise and can nearly effortlessly isolate the voice of a specific talker from even the busiest of mixtures. However, the neural mechanisms underlying these properties remain poorly understood, mainly because of the inherent complexity of speech signals and the multi-stage, intricate processing performed in the human auditory system. Understanding these mechanisms is of interest for clinical practice, brain-computer interfacing and automatic speech-processing systems. In this thesis, we developed computational models characterizing neural speech processing across different stages of the human auditory pathways. In particular, we studied the active role of slow cortical oscillations in speech-in-noise comprehension through a spiking neural network model for encoding spoken sentences. The neural dynamics of the model during noisy speech encoding reflected the speech comprehension of young, normal-hearing adults. The proposed theoretical model was validated by predicting the effects of non-invasive brain stimulation on speech comprehension in an experimental study involving a cohort of volunteers. Moreover, we developed a modelling framework for detecting the early, high-frequency neural response to uninterrupted speech in non-invasive neural recordings. We applied the method to investigate top-down modulation of this response by the listener's selective attention and by linguistic properties of different words in a spoken narrative. In both cases, the detected responses, of predominantly subcortical origin, were significantly modulated, supporting the functional role of feedback between higher and lower stages of the auditory pathways in speech perception. The proposed computational models shed light on some of the poorly understood neural mechanisms underlying speech perception, and the developed methods can be readily employed in future studies involving a range of experimental paradigms beyond those considered in this thesis.
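
    One common technique in this space, offered here as a hedged stand-in for the thesis's detection framework rather than its exact method, is a linear temporal response function (TRF) fit by ridge regression between a speech feature and the neural recording; all signals and lags below are synthetic assumptions.

```python
# Hedged sketch: ridge-regression TRF relating a speech envelope to a
# neural signal. Data, lag range and penalty are illustrative.
import numpy as np

fs = 1000                                   # sampling rate in Hz (assumed)
n = 60 * fs                                 # one minute of signal
env = np.abs(np.random.randn(n))            # stand-in speech envelope
true_trf = np.exp(-np.arange(50) / 10.0)    # ground-truth response, 50 ms
eeg = np.convolve(env, true_trf)[:n] + np.random.randn(n)

lags = 50                                   # model 0-49 ms of lag
X = np.stack([np.roll(env, k) for k in range(lags)], axis=1)
X[:lags] = 0                                # discard wrap-around samples
lam = 1e2                                   # ridge penalty (assumed)
trf = np.linalg.solve(X.T @ X + lam * np.eye(lags), X.T @ eeg)
print("correlation with true TRF:",
      np.corrcoef(trf, true_trf)[0, 1].round(3))
```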

    Signal validation in electroencephalography research


    Effects of age and stimulation strategies on cochlear implantation and a clinically feasible method for sound localization latency

    Treating prelingual deafness with cochlear implants paves the way for spoken language development. Previous studies have shown that intervention at six to 11 months of age is better than at 12 to 17 months, but even earlier interventions, for example at five to eight months compared with nine to 11 months, have not been researched to the same extent. We therefore retrospectively assessed the surgical risks and analyzed the longitudinal spoken language tests of 103 children who received their first cochlear implant between five and 30 months of age, with a particular focus on surgery before 12 months of age (Paper I). Apart from language development, we expected that early implants would provide access to the interaural time differences that are crucial for localizing low-frequency sounds. We examined this in combination with novel sound-processing strategies whose stimulation patterns convey the fine structure of sounds, studying the relationships between stimulation strategies, lateralization of interaural time differences and horizontal sound localization in 30 children (Paper II). We then developed a method to objectively assess sound localization latency, to complement localization accuracy. The method needed to be validated in adults with normal hearing, and under hampered conditions, so that the relationship between accuracy and latency could be clarified; gaze patterns from the localization recordings were modelled by optimizing a sigmoid function (Paper III). Furthermore, we addressed the lack of studies on the normal development of the sound localization latency of gaze responses in infancy and early childhood (Paper IV). Our study of spoken language development showed the benefit of cochlear implantation before nine months of age, compared with nine to 11 months, without increased surgical risks; this finding was strongest for the age at which the child's language could be understood (Paper I). When our group of 30 subjects was tested for interaural time differences, 10 were able to discriminate within the range of naturally occurring differences. Interestingly, the choice of stimulation strategy was a prerequisite for lateralizing natural interaural time differences; however, no relationship was found between this ability to lateralize and the ability to localize low-frequency sounds (Paper II). The localization setup allowed detailed investigations of gaze behavior. Eight normal-hearing adults showed a mean sound localization latency of 280 ± 40 milliseconds (ms), with distinct prolongation under unilateral earplugging; the similarity in latency, dynamic behavior and overlap of anatomical structures between the acoustic middle-ear reflex and sound localization latency is noteworthy (Paper III). In addition, normal-hearing infants showed sound localization latency decreasing from 1000 ms at six months of age to 500 ms at three years of age (Paper IV). Latency in children with early cochlear implants still needs to be studied. The findings in this thesis have important clinical implications for counseling parents, and they provide valuable data to guide clinical decisions about the age at which cochlear implants are provided and processor programming takes place. The fast, objective and noninvasive method for assessing sound localization latency may further enhance the clinical processes of diagnosing and monitoring interventions in children with hearing impairment.
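
    A minimal sketch of the sigmoid-fitting idea behind the latency measure (Paper III): fit a smooth transition to the gaze azimuth after sound onset and read the latency from the fitted midpoint. The gaze trace, parameterization and latency definition here are illustrative assumptions, not the thesis's exact method.

```python
# Hedged sketch: estimating sound localization latency by fitting a sigmoid
# to a synthetic gaze-azimuth trace after sound onset.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, t0, slope, start, end):
    """Gaze azimuth model: smooth transition from start to end angle."""
    return start + (end - start) / (1 + np.exp(-slope * (t - t0)))

t = np.linspace(0, 2.0, 400)                       # seconds after sound onset
gaze = sigmoid(t, 0.28, 25.0, 0.0, 40.0) + np.random.randn(t.size)  # degrees

p0 = [0.5, 10.0, 0.0, 30.0]                        # initial parameter guess
(t0, slope, start, end), _ = curve_fit(sigmoid, t, gaze, p0=p0)
print(f"estimated sound localization latency: {t0 * 1000:.0f} ms")
```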