
    Information Loss in the Human Auditory System

    From the eardrum to the auditory cortex, where acoustic stimuli are decoded, there are several stages of auditory processing and transmission where information may be lost. In this paper, we aim to quantify the information loss in the human auditory system using information-theoretic tools. To do so, we consider a speech communication model in which words are uttered, sent through a noisy channel, and then received and processed by a human listener. We define a notion of information loss that is related to the human word recognition rate. To assess the word recognition rate of humans, we conduct a closed-vocabulary intelligibility test. We derive upper and lower bounds on the information loss. Simulations reveal that the bounds are tight, and we observe that the information loss in the human auditory system increases as the signal-to-noise ratio (SNR) decreases. Our framework also allows us to study whether humans are optimal in terms of speech perception in a noisy environment. To that end, we derive optimal classifiers and compare human and machine performance in terms of information loss and word recognition rate. We observe a higher information loss and a lower word recognition rate for humans than for the optimal classifiers. In fact, depending on the SNR, the machine classifier may outperform humans by as much as 8 dB. This implies that for the speech-in-stationary-noise setup considered here, the human auditory system is sub-optimal for recognizing noisy words.
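The paper ties its notion of information loss to the word recognition rate. A minimal sketch of one such quantity is the conditional entropy of the uttered word given the recognized word, estimated from a confusion matrix under uniform word priors; the paper's exact definition and its upper/lower bounds are not reproduced here:

```python
import numpy as np

def information_loss(confusion, priors=None):
    """Estimate information loss as the conditional entropy H(W | W_hat)
    from a word confusion matrix (rows: uttered word, cols: recognized word).
    Illustrative only; not the paper's exact formulation."""
    confusion = np.asarray(confusion, dtype=float)
    if priors is None:
        priors = np.full(confusion.shape[0], 1.0 / confusion.shape[0])
    # Joint distribution p(w, w_hat) = p(w) * p(w_hat | w)
    joint = priors[:, None] * (confusion / confusion.sum(axis=1, keepdims=True))
    p_what = joint.sum(axis=0)                      # marginal p(w_hat)
    with np.errstate(divide="ignore", invalid="ignore"):
        cond = np.where(joint > 0, joint * np.log2(joint / p_what), 0.0)
    return -cond.sum()                              # H(W | W_hat) in bits

# Two-word vocabulary: perfect recognition loses no information (0 bits),
# while chance-level recognition loses the full 1 bit of word identity.
print(information_loss([[50, 0], [0, 50]]))
print(information_loss([[25, 25], [25, 25]]))
```

At high SNR the confusion matrix approaches the identity and the loss tends to zero, matching the trend the abstract reports.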

    Sensing Super-Position: Human Sensing Beyond the Visual Spectrum

    The coming decade of fast, cheap, and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This paper addresses the technical feasibility of augmenting human vision through Sensing Super-position, which mixes artificial sensing with natural human sensing. The current implementation of the device translates visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, low-power device, taking into account the limited capabilities of the human user as well as the typical characteristics of their dynamic environment. The system operates in real time, providing the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution via an auditory representation that complements the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g., histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system.
The human brain is superior to most existing computer systems at rapidly extracting relevant information from blurred, noisy, and redundant images. From a theoretical viewpoint, this means that the available bandwidth is not exploited in an optimal way. While image-processing techniques can manipulate, condense, and focus the information (e.g., Fourier transforms), keeping the mapping as direct and simple as possible might also reduce the risk of accidentally filtering out important clues. After all, a perfectly non-redundant sound representation is especially prone to losing relevant information in the imperfect human hearing system. Also, a complicated non-redundant image-to-sound mapping may well be far more difficult to learn and comprehend than a straightforward mapping, while the mapping system would increase in complexity and cost. This work will demonstrate some basic information processing for optimal information capture in head-mounted systems.
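A direct image-to-sound mapping of the kind described above can be sketched in a few lines: scan the image column by column over time, assign each row a sine oscillator (row position maps to pitch), and use pixel brightness as the oscillator amplitude. The scan order, frequency range, and sample rate below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def image_to_sound(image, duration=1.0, sr=8000, f_lo=200.0, f_hi=4000.0):
    """Sketch of a direct image-to-sound mapping: columns are played in
    sequence; each row drives one sine oscillator, brighter pixels louder."""
    img = np.asarray(image, dtype=float)
    n_rows, n_cols = img.shape
    # Exponentially spaced frequencies; the top row gets the highest pitch
    freqs = np.geomspace(f_hi, f_lo, n_rows)
    samples_per_col = int(duration * sr / n_cols)
    t = np.arange(samples_per_col) / sr
    chunks = []
    for c in range(n_cols):
        tones = img[:, c:c + 1] * np.sin(2 * np.pi * freqs[:, None] * t)
        chunks.append(tones.sum(axis=0))
    audio = np.concatenate(chunks)
    peak = np.abs(audio).max()
    return audio / peak if peak > 0 else audio   # normalize to [-1, 1]

# A small image with a single bright diagonal produces a descending sweep.
demo = np.eye(4)
signal = image_to_sound(demo, duration=0.5)
print(signal.shape)   # (4000,)
```

Keeping the mapping this simple reflects the paragraph's argument: a direct, somewhat redundant encoding is easier to learn and less likely to filter out important clues than an aggressively compressed one.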

    Measurement and 3D Finite Element Modeling of Blast Wave Transmission through Chinchilla Ear

    Hearing loss caused by blast exposure is an inherent risk that active Service members face due to the operational activities they engage in. With auditory system dysfunction dominating service-connected disabilities among Veterans, there is an urgent need to better understand the effects of blast exposure on the auditory system, particularly the effects of repeated low-intensity blast exposure on progressive hearing loss. Furthermore, an analysis of blast wave transmission through the ear is needed. This thesis focuses on an experimental study using a chinchilla animal model. Chinchillas with and without earplugs were exposed to repeated low-intensity blasts. Hearing function tests reflecting the state of the auditory system were measured before and after blast exposure and were then monitored over 14 days. This thesis also reports the creation of the first finite element (FE) model of the entire chinchilla ear, including the spiral cochlea. An FE model of the chinchilla cochlea was integrated with our lab’s previously published FE model of the chinchilla middle ear. The model was first evaluated for simulating acoustic sound transmission: a uniform acoustic pressure was applied as the input, and a harmonic response analysis was conducted. The model was then validated by comparing model-predicted movements of ear structures with experimental measurements. The FE model of the entire chinchilla ear was then adapted for blast wave analysis. Pressure waveforms measured during the chinchilla blast exposure studies were applied to the model as input, and the model-predicted waveforms at locations within the ear were compared with experimental waveforms recorded at the same locations. Movements of structures within the ear were also predicted. The work presented in this thesis improves our understanding of the effects of blast exposure on the auditory system.
Experimental data collected from the chinchilla animal model provide insight into the effect of low-intensity blasts on hearing damage, which is not well studied. Moreover, this study provides information on the central auditory system, which is lacking in the literature. Furthermore, this thesis reports the first FE model of the entire chinchilla ear. This model provides a computational tool to simulate sound or blast wave transmission through the chinchilla ear, explain experimental observations in the chinchilla animal model, and help translate animal experimental data to human responses to blast exposure. Future work includes further investigation of the effects of different blast conditions (e.g., number of blasts, blast intensity, recovery time) on hearing loss and improvement of the FE model for blast wave analysis.
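The harmonic response analysis used to evaluate the FE model can be illustrated on a much smaller system: for a harmonic input at each frequency, solve for the steady-state displacement amplitude. The sketch below does this for a single-degree-of-freedom mass-spring-damper standing in for the full FE system; all parameter values are arbitrary illustrations, not taken from the thesis:

```python
import numpy as np

m, c, k = 1e-6, 2e-4, 1e2       # mass [kg], damping [N*s/m], stiffness [N/m]
force = 1e-3                    # harmonic force amplitude [N]

freqs = np.linspace(100, 10000, 500)   # excitation frequencies [Hz]
omega = 2 * np.pi * freqs
# Steady-state amplitude of m*x'' + c*x' + k*x = F*e^{i*w*t}:
# |X| = F / |k - m*w^2 + i*c*w|
amplitude = force / np.abs(k - m * omega**2 + 1j * c * omega)

resonance = freqs[np.argmax(amplitude)]
print(f"peak response near {resonance:.0f} Hz")
```

A full FE harmonic analysis solves the same kind of equation with large mass, damping, and stiffness matrices assembled over the meshed ear geometry, yielding one such response curve per node of interest.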

    A Novel Software Solution to Diagnose the Hearing Disabilities In Human Beings

    Ears are among the most important sensory organs in human beings. They contribute to acquiring information from the surrounding world and help maintain a sense of balance, in addition to providing the ability to hear. A large share of the population is overexposed to sound, which significantly contributes to hearing loss. A person is said to suffer from hearing loss if he or she loses the ability to perceive sound within the normal audible range. In such a scenario, an affordable and reliable testing protocol is required [1]. There is therefore a need to develop a low-cost solution to assess hearing disabilities that eliminates the requirement for the sound-proof environment used in traditional hearing test procedures such as audiometry. Auditory function in human beings can be assessed using various parameters of sound, such as pitch, intensity, and frequency [2]. The current work focuses on developing a simple and affordable software system to assess the threshold of hearing in human beings. The MATLAB platform is used to design the software system. The software has also been standardised by acquiring the threshold of hearing from 44 healthy individuals, both male and female, aged 20-30 years. The results indicate slightly better hearing perception in males than in females.
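A hearing-threshold search of the kind such software performs can be sketched as an adaptive staircase, shown here in Python rather than the MATLAB the paper uses. The stopping rule below (a level heard twice, in a "down 10 dB, up 5 dB" pattern) is a simplified stand-in for the full clinical protocol, and the simulated listener is deterministic for clarity; real responses are probabilistic near threshold:

```python
def threshold_staircase(hears, start_db=40, floor=-10, ceiling=120):
    """Hughson-Westlake style threshold search: drop 10 dB after each
    heard tone, raise 5 dB after each miss, and stop at the first level
    heard twice. `hears(level_db)` models the listener's response."""
    level = start_db
    heard_counts = {}
    while floor <= level <= ceiling:
        if hears(level):
            heard_counts[level] = heard_counts.get(level, 0) + 1
            if heard_counts[level] >= 2:       # heard twice -> threshold
                return level
            level -= 10                        # heard -> drop 10 dB
        else:
            level += 5                         # missed -> raise 5 dB
    return None

# Simulated listener with a true threshold of 22 dB HL; the staircase
# converges on the nearest 5 dB step above it.
listener = lambda db: db >= 22
print(threshold_staircase(listener))   # 25
```

In the actual system, `hears` would be replaced by tone playback and a button press from the subject; the staircase logic itself is independent of how the response is collected.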

    Impact of aging on the auditory system and related cognitive functions: A narrative review

    Age-related hearing loss (ARHL), or presbycusis, is a chronic health condition that affects approximately one-third of the world’s population. The peripheral and central hearing alterations associated with ARHL have a profound impact on the perception of verbal and non-verbal auditory stimuli. The high prevalence of hearing loss in older adults corresponds to the increased frequency of dementia in this population. Therefore, researchers have focused their attention on age-related central effects that occur independently of peripheral hearing loss, as well as on central effects of peripheral hearing loss and its association with cognitive decline and dementia. Here we review the current evidence for age-related changes of the peripheral and central auditory system and the relationship between hearing loss and pathological cognitive decline and dementia. There is, however, a paucity of evidence on the relationship between ARHL and established biomarkers of Alzheimer’s disease, the most common cause of dementia; such studies are critical for assessing any causal relationship between dementia and ARHL. While this narrative review examines the pathophysiological alterations in both the peripheral and central auditory system and their clinical implications, the question of whether hearing loss causes cognitive impairment or vice versa remains unanswered.

    The mechanisms of tinnitus: perspectives from human functional neuroimaging

    In this review, we highlight the contribution of advances in human neuroimaging to the current understanding of central mechanisms underpinning tinnitus and explain how interpretations of neuroimaging data have been guided by animal models. The primary motivation for studying the neural substrates of tinnitus in humans has been to demonstrate objectively its representation in the central auditory system and to develop a better understanding of its diverse pathophysiology and of the functional interplay between sensory, cognitive, and affective systems. The ultimate goal of neuroimaging is to identify subtypes of tinnitus in order to better inform treatment strategies. The three neural mechanisms considered in this review may provide a basis for tinnitus classification. While human neuroimaging evidence strongly implicates the central auditory system and emotional centres in tinnitus, evidence for the precise contribution of the three mechanisms is unclear because the data are somewhat inconsistent. We consider a number of methodological issues limiting the field of human neuroimaging and recommend approaches to overcome potential inconsistency in results arising from poorly matched participants, lack of appropriate controls, and low statistical power.

    The Use of Audio in Minimal Access Surgery

    In minimal access surgery (MAS), also known as minimally invasive surgery, operations are carried out by making small incisions in the skin and inserting special apparatus into potential body cavities through those incisions. Laparoscopic MAS procedures are conducted in the patient’s abdomen. The aim of MAS is faster recovery, shorter hospitalisation, and fewer major post-operative complications, all resulting in lower societal cost and better patient acceptability. The technique is markedly dependent on supporting technologies for vision, instrumentation, energy delivery, anaesthesia, and monitoring. In practice, however, many MAS procedures continue to take longer and to be associated with an undesirable frequency of minor (or occasionally major) mishaps. Many of these difficulties result precisely from the complexity and mal-adaptation of the additional technology and from lack of familiarity with it. A survey of South East England surgeons identified the two main stress factors on surgeons as the technical difficulty of the procedure and the time pressures placed on the surgeon by third parties. Many of the problems associated with MAS operations are linked to the control and monitoring of the equipment. This paper describes work begun to explore ergonomic enhancements to laparoscopic operating technology that could result in faster and safer laparoscopic operations, less surgeon stress, and reduced dependence on ancillary staff. Auditory displays have been used to communicate complex information to users in a modality that is complementary to the visual channel. This paper proposes the development of a control and feedback system that will use auditory displays to increase the amount of information that can be communicated to the surgeon and their assistant without overloading the visual channel. Control of the system would be enhanced by the addition of voice input to allow the surgeon direct control.
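One simple form of auditory display maps a monitored parameter onto pitch, so that drift toward either limit of its normal range is heard without looking at a screen. In the sketch below, the parameter (insufflation pressure) and its ranges are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def sonify(value, lo, hi, f_lo=220.0, f_hi=880.0):
    """Map a monitored value's position within its normal range [lo, hi]
    onto a pitch spanning two octaves (f_lo to f_hi), clamped at the
    limits. Exponential interpolation keeps equal value steps equal in
    perceived pitch."""
    frac = np.clip((value - lo) / (hi - lo), 0.0, 1.0)
    return f_lo * (f_hi / f_lo) ** frac

# A mid-range insufflation pressure sits one octave up, at 440 Hz.
print(round(sonify(12.5, lo=10.0, hi=15.0), 1))   # 440.0
```

A practical system would synthesize a periodic tone at the returned frequency, and could layer several parameters using distinct timbres so the surgeon can monitor them concurrently over the same audio channel.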