    Information Loss in the Human Auditory System

    From the eardrum to the auditory cortex, where acoustic stimuli are decoded, there are several stages of auditory processing and transmission where information may be lost. In this paper, we aim to quantify the information loss in the human auditory system using information-theoretic tools. To do so, we consider a speech communication model in which words are uttered, sent through a noisy channel, and then received and processed by a human listener. We define a notion of information loss that is related to the human word recognition rate. To assess the word recognition rate of humans, we conduct a closed-vocabulary intelligibility test. We derive upper and lower bounds on the information loss. Simulations reveal that the bounds are tight, and we observe that the information loss in the human auditory system increases as the signal-to-noise ratio (SNR) decreases. Our framework also allows us to study whether humans are optimal at speech perception in a noisy environment. To that end, we derive optimal classifiers and compare human and machine performance in terms of information loss and word recognition rate. We observe a higher information loss and a lower word recognition rate for humans than for the optimal classifiers; depending on the SNR, the machine classifier may outperform humans by as much as 8 dB. This implies that, for the speech-in-stationary-noise setup considered here, the human auditory system is sub-optimal at recognizing noisy words.
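
    The paper's own bounds are not reproduced in the abstract, but the standard information-theoretic link between an error rate and a conditional entropy is Fano's inequality. Below is a minimal Python sketch, assuming a closed vocabulary of M equiprobable words; the vocabulary size and recognition rate in the example are arbitrary placeholders, not the paper's.

        import math

        def binary_entropy(p: float) -> float:
            """h(p) in bits, with h(0) = h(1) = 0 by convention."""
            if p <= 0.0 or p >= 1.0:
                return 0.0
            return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

        def fano_loss_upper_bound(recognition_rate: float, vocab_size: int) -> float:
            """Fano's inequality: H(W | W_hat) <= h(P_e) + P_e * log2(M - 1),
            with error probability P_e = 1 - recognition_rate."""
            p_e = 1.0 - recognition_rate
            return binary_entropy(p_e) + p_e * math.log2(vocab_size - 1)

        # Example: 80% word recognition over a 50-word vocabulary caps the loss
        # at about 1.84 of the log2(50) = 5.64 bits carried by each word.
        print(round(fano_loss_upper_bound(0.80, 50), 2))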

    Discovery of a lipid synthesising organ in the auditory system of an insect

    Weta possess typical Ensifera ears. Each ear comprises three functional parts: two equally sized tympanal membranes, an underlying system of modified tracheal chambers, and the auditory sensory organ, the crista acustica. This organ sits within an enclosed channel filled with fluid previously presumed to be hemolymph. The role this channel plays in insect hearing is unknown. We discovered that the fluid within the channel is not in fact hemolymph, but a medium composed principally of a new class of lipid. Three-dimensional imaging of this lipid channel revealed a previously undescribed tissue structure within it, which we refer to as the olivarius organ. Investigations into the function of the olivarius reveal de novo lipid synthesis, indicating that it produces these lipids in situ from acetate. The auditory role of the lipid channel was investigated using laser Doppler vibrometry of the tympanal membrane, which shows that the displacement of the membrane increases significantly when the lipid is removed from the auditory system. Neural sensitivity of the system, however, decreased upon removal of the lipid, a surprising result given that in a typical auditory system mechanical and neural sensitivity are positively correlated. These two results, coupled with 3D modelling of the auditory system, lead us to hypothesize a model for weta audition that relies strongly on the presence of the lipid channel. This is the first instance of lipids being associated with an auditory system outside of the Odontocete cetaceans, demonstrating convergence in the use of lipids for hearing.

    Musical notes classification with Neuromorphic Auditory System using FPGA and a Convolutional Spiking Network

    In this paper, we explore the capabilities of a sound classification system that combines a novel FPGA cochlear-model implementation with a bio-inspired technique based on a trained convolutional spiking network. The neuromorphic auditory system used in this work produces a representation analogous to the spike outputs of the biological cochlea. It was developed from a set of spike-based processing building blocks in the frequency domain, which form a bank of spike-domain band-pass filters that split the audio information into 128 frequency channels, 64 for each of two audio sources. Address Event Representation (AER) is used to communicate between the auditory system and the convolutional spiking network. A convolutional spiking network layer is developed and trained on a computer to detect two kinds of sound: artificial pure tones in the presence of white noise, and electronic musical notes. After the training process, the presented system is able to distinguish the different sounds in real time, even in the presence of white noise. Ministerio de Economía y Competitividad TEC2012-37868-C04-02
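
    The spike-domain filter cascade runs in custom FPGA hardware; as a rough software analogue only, the Python sketch below assumes a conventional Butterworth band-pass bank followed by a simple integrate-and-fire stage that emits AER-style (timestamp, channel) events. The sample rate, channel count (reduced from the paper's 64 per source), band edges and firing threshold are all placeholder assumptions.

        import numpy as np
        from scipy.signal import butter, sosfilt

        FS = 48_000       # sample rate (Hz); placeholder
        N_CHANNELS = 16   # the paper's sensor uses 64 channels per audio source

        def make_bank(n_channels=N_CHANNELS, f_lo=100.0, f_hi=8_000.0, fs=FS):
            """Log-spaced Butterworth band-pass bank (second-order sections)."""
            edges = np.geomspace(f_lo, f_hi, n_channels + 1)
            return [butter(2, (edges[i], edges[i + 1]), btype="bandpass",
                           fs=fs, output="sos") for i in range(n_channels)]

        def to_aer_events(audio, bank, threshold=0.05, fs=FS):
            """Integrate rectified band output; emit (t_seconds, channel) on threshold."""
            events = []
            for ch, sos in enumerate(bank):
                acc = 0.0
                for n, x in enumerate(sosfilt(sos, audio)):
                    acc += abs(x) / fs
                    if acc >= threshold:   # fire and reset, like a crude IF neuron
                        events.append((n / fs, ch))
                        acc = 0.0
            return sorted(events)          # AER streams are delivered time-ordered

        # Example: a 1 kHz tone mostly drives the channel whose band contains 1 kHz.
        t = np.arange(FS) / FS
        events = to_aer_events(np.sin(2 * np.pi * 1_000.0 * t), make_bank())
        print(len(events), "events; busiest channel:",
              np.bincount([ch for _, ch in events]).argmax())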

    Auditory pathways: are 'what' and 'where' appropriate?

    New evidence confirms that the auditory system encompasses temporal, parietal and frontal brain regions, some of which partly overlap with the visual system. But common assumptions about the functional homologies between sensory systems may be misleading.

    The Generation of Direction Selectivity in the Auditory System

    Both human speech and animal vocal signals contain frequency-modulated (FM) sounds. Although central auditory neurons that respond selectively to the direction of frequency modulation are known, the synaptic mechanisms underlying the generation of direction selectivity (DS) remain elusive. Here we show the emergence of DS neurons in the inferior colliculus by mapping the three major subcortical auditory nuclei. Cell-attached recordings reveal highly reliable and temporally precise firing of DS neurons in response to FM sweeps in the preferred direction. Using in vivo whole-cell current-clamp and voltage-clamp recordings, we found that the synaptic inputs to DS neurons are not themselves direction selective; rather, temporally reversed excitatory and inhibitory synaptic inputs are evoked by opposing directions of FM sweeps. The construction of this temporal asymmetry, the resulting DS, and its topography can be attributed to the spectral disparity between the excitatory and inhibitory synaptic tonal receptive fields.
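
    A toy model of the mechanism described above, not the paper's circuit: excitatory and inhibitory receptive fields offset in frequency are crossed in opposite orders by upward and downward sweeps, and a leaky integrator produces a large peak only when excitation leads inhibition. All timings, weights and time constants below are invented for illustration.

        import numpy as np

        DT = 1e-4                      # integration step (s)
        T = np.arange(0.0, 0.1, DT)    # 100 ms window

        def alpha(t, t0, tau=0.005):
            """Alpha-function synaptic drive switched on at t0 (peak 1 at t0 + tau)."""
            s = np.clip(t - t0, 0.0, None)
            return (s / tau) * np.exp(1.0 - s / tau)

        def membrane_peak(sweep_up, tau_m=0.01):
            """Peak depolarisation of a leaky integrator driven by E minus I."""
            # The excitatory band sits below the inhibitory band, so an upward
            # sweep reaches excitation 3 ms before inhibition, and vice versa.
            t_exc, t_inh = (0.030, 0.033) if sweep_up else (0.033, 0.030)
            drive = alpha(T, t_exc) - 1.5 * alpha(T, t_inh)
            v, peak = 0.0, 0.0
            for g in drive:
                v += DT * (-v / tau_m + g)
                peak = max(peak, v)
            return peak

        print("preferred (up) peak:", round(membrane_peak(True), 5))
        print("null (down) peak:   ", round(membrane_peak(False), 5))
        # The preferred-direction peak is markedly larger; a spike threshold set
        # between the two values makes the model cell fire only for up-sweeps.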

    Live Demonstration: Real-time neuro-inspired sound source localization and tracking architecture applied to a robotic platform

    This live demonstration presents a sound source localization and tracking system implemented with Spike Signal Processing (SSP) building blocks on FPGA devices. The architecture is based on the ability of the mammalian auditory system to locate the direction of a sound in the horizontal plane using the interaural intensity difference. We used a binaural Neuromorphic Auditory Sensor to obtain spike rates similar to those generated by the inner hair cells of the human auditory system, and the component that extracts the interaural intensity difference is inspired by the lateral superior olive. The spike stream representing the interaural intensity difference is used to turn a robotic platform towards the sound source. The system was tested with pure tones (1-kHz, 2.5-kHz and 5-kHz sounds), with an average error of 2.32 degrees. Ministerio de Economía y Competitividad TEC2016-77785-
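
    The SSP blocks themselves are FPGA hardware; the Python sketch below only illustrates the steering principle, assuming the interaural intensity difference is approximated from the left and right spike rates. The dB mapping, gain and dead-band are placeholder assumptions.

        import math

        def iid_db(rate_left: float, rate_right: float) -> float:
            """Interaural intensity difference proxy (dB) from the two spike rates."""
            return 10.0 * math.log10(rate_left / rate_right)

        def steering_deg(rate_left, rate_right, gain_deg_per_db=4.0, dead_band_db=0.5):
            """Map the IID to a turn command; positive turns the platform left."""
            iid = iid_db(rate_left, rate_right)
            return 0.0 if abs(iid) < dead_band_db else gain_deg_per_db * iid

        # Example: the right channel fires 1.6x faster -> turn right ~8 degrees.
        print(round(steering_deg(1000.0, 1600.0), 1))  # -8.2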

    Loudness (annoyance) prediction procedure for steady sounds

    A method has been devised to predict the loudness level of any steady sound solely from its measured power spectrum level. The method is based on the assumption that, with respect to loudness sensation, the human auditory system acts as an open-loop transmission system whose transmittance function is determined from measured tone curves.
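
    The report's actual tables and curves are not given in the abstract; the sketch below only illustrates the general recipe it describes: pass each band of the measured power spectrum through a transmittance function, sum the weighted power, and read the result as a loudness level. The flat placeholder transmittance and the phon reading are assumptions.

        import math

        def loudness_level_phon(band_levels_db, transmittance_db):
            """Sum band powers after applying a per-band transmittance (both in dB)."""
            total = sum(10.0 ** ((level + gain) / 10.0)
                        for level, gain in zip(band_levels_db, transmittance_db))
            return 10.0 * math.log10(total)  # read as phon at the 1 kHz reference

        # Example: three equal 60 dB bands through a flat (0 dB) transmittance.
        print(round(loudness_level_phon([60.0, 60.0, 60.0], [0.0, 0.0, 0.0]), 1))  # 64.8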

    Sound Recognition System Using Spiking and MLP Neural Networks

    In this paper, we explore the capabilities of a sound classification system that combines a Neuromorphic Auditory System for feature extraction with an artificial neural network for classification. Two neural network models have been used: a Multilayer Perceptron and a Spiking Neural Network. To compare their accuracies, both networks were developed and trained to recognize pure tones in the presence of white noise. The spiking neural network was implemented on an FPGA device. The neuromorphic auditory system used in this work produces a representation analogous to the spike outputs of the biological cochlea. Both systems are able to distinguish the different sounds even in the presence of white noise. The recognition system based on the spiking neural network achieves the better accuracy, above 91%, even when the white noise has the same power as the sound. Ministerio de Economía y Competitividad TEC2012-37868-C04-02. Junta de Andalucía P12-TIC-130
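
    As a minimal illustration of the MLP half of such a comparison (the spiking network ran on an FPGA and is not sketched here), the Python example below assumes per-channel spike counts as the feature vector; the synthetic Poisson data stand in for real cochlear recordings, and the network size is arbitrary.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        N_CHANNELS, N_PER_CLASS = 32, 200

        def synth_features(peak_channel):
            """Spike-count vectors: white noise drives every channel a little,
            a pure tone adds a large count at one channel."""
            counts = rng.poisson(5.0, size=(N_PER_CLASS, N_CHANNELS)).astype(float)
            counts[:, peak_channel] += rng.poisson(40.0, size=N_PER_CLASS)
            return counts

        X = np.vstack([synth_features(c) for c in (4, 12, 20)])  # three "tones"
        y = np.repeat([0, 1, 2], N_PER_CLASS)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                            random_state=0).fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))  # near 1.0 on this easy task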

    The mechanisms of tinnitus: perspectives from human functional neuroimaging

    In this review, we highlight the contribution of advances in human neuroimaging to the current understanding of the central mechanisms underpinning tinnitus, and we explain how interpretations of neuroimaging data have been guided by animal models. The primary motivation for studying the neural substrates of tinnitus in humans has been to demonstrate objectively its representation in the central auditory system and to develop a better understanding of its diverse pathophysiology and of the functional interplay between sensory, cognitive and affective systems. The ultimate goal of neuroimaging is to identify subtypes of tinnitus in order to better inform treatment strategies. The three neural mechanisms considered in this review may provide a basis for classifying tinnitus. While human neuroimaging evidence strongly implicates the central auditory system and emotional centres in tinnitus, evidence for the precise contribution of the three mechanisms is unclear because the data are somewhat inconsistent. We consider a number of methodological issues limiting the field of human neuroimaging and recommend approaches to overcome potential inconsistency in results arising from poorly matched participants, lack of appropriate controls and low statistical power.