
    Evaluation of room acoustic qualities and defects by use of auralization


    The influence of hearing aids on auditory perception and immediate verbal memory in children with bimodal stimulation

    Introduction: The combination of electric stimulation from a cochlear implant (CI) with acoustic stimulation from a hearing aid (HA), known as bimodal hearing, may provide several binaural benefits, including binaural summation, binaural squelch, reduction of the head shadow effect, and improved localization. Purpose: This study investigated the influence of preoperative rehabilitation and bilateral HA use, of bimodal stimulation after implantation (a CI on one ear and an HA on the non-implanted ear), and of hearing thresholds in the implanted and non-implanted ear on auditory perception and verbal short-term memory. Method: The Immediate Verbal Memory Test for the Serbian language, consisting of four subtests, was used to test auditory perception in 21 prelingually deaf children. Results: The duration of bimodal hearing proved significant for auditory perception and verbal short-term memory. Elevated mid- and high-frequency aided thresholds on the non-implanted ear were correlated with poorer perception and reproduction of monosyllables and nonsense words. Conclusion: The duration of bimodal hearing proved significant for auditory perception, speech reproduction, and semantic ability. Patients with a unilateral cochlear implant who have measurable residual hearing in the non-implanted ear should be individually fitted with a hearing aid in that ear to improve speech perception and maximize binaural sensitivity.
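
    A minimal sketch of the kind of threshold-score correlation the Results describe, with entirely made-up numbers: the thresholds, frequencies, and scores below are hypothetical, and the abstract does not specify which statistic the study used.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical aided thresholds (dB HL) at 1, 2, and 4 kHz on the
# non-implanted ear, one row per child
aided_thresholds = np.array([
    [45, 50, 60],
    [30, 35, 40],
    [55, 65, 70],
    [40, 45, 55],
])
# Hypothetical monosyllable repetition scores (% correct), same children
monosyllable_scores = np.array([62, 85, 48, 71])

# Mid-/high-frequency aided pure-tone average per child
pta_mid_high = aided_thresholds.mean(axis=1)

# Higher (poorer) aided thresholds should correlate with lower scores
rho, p = spearmanr(pta_mid_high, monosyllable_scores)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```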

    Physiology-based model of multi-source auditory processing

    Our auditory systems have evolved to process a myriad of acoustic environments. In complex listening scenarios, we can tune our attention to one sound source (e.g., a conversation partner) while monitoring the entire acoustic space for cues of interest (e.g., our name being called, or a fire alarm going off). While normal-hearing listeners handle complex listening scenarios remarkably well, hearing-impaired listeners experience difficulty even when wearing hearing-assist devices. This thesis presents both theoretical work toward understanding the neural mechanisms behind this process and the application of neural models to segregate mixed sources, with the potential to help the hearing-impaired population. On the theoretical side, auditory spatial processing has been studied primarily up to the midbrain, and studies have shown how individual neurons can localize sounds using spatial cues. Yet how higher brain regions such as the cortex use this information to process multiple competing sounds remains unclear. This thesis demonstrates a physiology-based spiking neural network model that illustrates how the auditory cortex may organize upstream spatial information when multiple sound sources compete in space. Based on this model, an engineering solution to help hearing-impaired listeners segregate mixed auditory inputs is proposed: the neural model performs sound segregation in the neural domain, and the neural outputs (representing the source of interest) are reconstructed back into the acoustic domain using a novel stimulus reconstruction method.
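
    As a rough illustration of the building block such a spiking model rests on (not the thesis's actual network), here is a minimal leaky integrate-and-fire neuron driven by two competing inputs; every parameter below is an assumption chosen for demonstration only.

```python
import numpy as np

def lif_spike_times(current, dt=1e-4, tau=0.02,
                    v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: integrate input, spike at threshold."""
    v, spikes = v_rest, []
    for i, drive in enumerate(current):
        v += (dt / tau) * (-(v - v_rest) + drive)  # leaky integration step
        if v >= v_thresh:                          # threshold crossing -> spike
            spikes.append(i * dt)
            v = v_reset                            # reset after spiking
    return spikes

t = np.arange(0.0, 0.5, 1e-4)
attended = 1.5 * (np.sin(2 * np.pi * 4 * t) > 0)   # 4 Hz amplitude-modulated source
competing = 0.4 * np.random.rand(t.size)           # weaker competing input
spikes = lif_spike_times(attended + competing)
print(f"{len(spikes)} spikes in 0.5 s, locked to the 4 Hz modulation")
```

    In the thesis's framework, populations of such units with spatially tuned inputs would compete, and the spike trains representing the selected source would then feed the stimulus-reconstruction stage.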

    Sound processing in the mouse auditory cortex: organization, modulation, and transformation

    The auditory system begins with the cochlea, a frequency analyzer and signal amplifier of exquisite precision. As neural information travels towards higher brain regions, the encoding becomes less faithful to the sound waveform itself and more influenced by non-sensory factors such as top-down attentional modulation, local feedback modulation, and long-term changes caused by experience. At the level of the auditory cortex (ACtx), such influences appear at multiple scales, from single neurons to cortical columns to topographic maps, and are linked with critical processes such as auditory perception, learning, and memory. How the ACtx integrates a wealth of diverse inputs while supporting adaptive and reliable sound representations is an important unsolved question in auditory neuroscience. This dissertation tackles the question using the mouse as an animal model. We begin by describing a detailed functional map of receptive fields within the mouse ACtx. Focusing on frequency tuning, we demonstrate a robust tonotopic organization in the core ACtx fields (A1 and AAF) across cortical layers, neural signal types, and anesthetic states, confirming the columnar organization of basic sound processing in ACtx. We then studied the bottom-up input to ACtx columns by optogenetically activating the inferior colliculus (IC) and observed feedforward neuronal activity in the frequency-matched column, which also induced clear auditory percepts in behaving mice. Next, we used optogenetics to study layer 6 corticothalamic (L6CT) neurons, which project heavily to the thalamus and upper layers of ACtx. We found that L6CT activation biases sound perception towards either enhanced detection or enhanced discrimination depending on its timing relative to the sound, a process that may support dynamic filtering of auditory information. Finally, we optogenetically isolated cholinergic neurons in the basal forebrain (BF) that project to ACtx and studied their involvement in columnar ACtx plasticity during associative learning. In contrast to previous notions that the BF merely encodes reward and punishment, we observed clear auditory responses from the cholinergic neurons, which exhibited rapid learning-induced plasticity, suggesting that the BF may provide a key instructive signal driving adaptive plasticity in ACtx.
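
    A minimal sketch of how a tonotopic map like the one described here is typically summarized: each recording site's best frequency (BF) is the tone frequency that evokes its largest response. The response matrix below is synthetic, and the dissertation's actual receptive-field analysis is far more detailed.

```python
import numpy as np

freqs_khz = np.array([4, 8, 16, 32, 64])        # test tone frequencies
# Synthetic tuning: rows = recording sites, columns = mean response per tone
responses = np.array([
    [0.9, 0.4, 0.1, 0.0, 0.0],
    [0.2, 1.0, 0.5, 0.1, 0.0],
    [0.0, 0.3, 0.8, 0.9, 0.2],
])
best_freq = freqs_khz[responses.argmax(axis=1)]  # BF per site
print(best_freq)  # [ 4  8 32]: an orderly BF progression suggests tonotopy
```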

    Psychophysical and electrophysiological investigations into the mechanisms supporting everyday communication

    Thesis (Ph.D.)--Boston University
    Humans solve the so-called "cocktail party problem" with relative ease, and are generally able to selectively direct their attention to process and recall acoustic information from one sound source in the presence of other irrelevant stimuli competing for cognitive resources. This ability depends on a variety of factors, including volitional control of selective attention, the ability to store information in memory for later recall, and the ability to integrate information across multiple sensory modalities. Here, psychophysical and electroencephalography (EEG) experiments were conducted to study these three factors. The effects of selective attention on cortical and subcortical structures were examined using EEG recorded during a dichotic listening task. Cortical potentials showed robust effects of attention (demonstrated by the ability to classify responses to attended and ignored speech from short segments of EEG); however, potentials originating in the brainstem did not, even though the stimuli were engineered to maximize the separability of the competing sources' neural representations in the auditory periphery, and thus the chance of seeing attention-specific modulation of subcortical responses. In another study, the relationship between object formation and memory processing was explored in a psychophysical experiment examining how sequences of nonverbal auditory stimuli are stored in and recalled from short-term memory. The results support the notion that auditory short-term memory, like visual short-term memory, can be explained in terms of object formation. In particular, short-term memory performance is affected by stream formation and the perceptual costs of switching attention between multiple streams. Finally, effects of audiovisual integration were studied in a psychophysical experiment using complex speech-like stimuli (zebra finch songs). The results show that visual cues improve performance differently depending on whether target identification is limited by energetic masking or by object-formation difficulties and uncertainty about when a target occurs. Together, these studies support the idea that everyday communication depends on an interplay of many mechanisms, including attention, memory, and multisensory integration, each of which is influenced by perceptual organization.
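
    A minimal sketch of one common way to classify attended vs. ignored speech from EEG segments: correlate a decoded envelope estimate with each talker's speech envelope and pick the better match. Whether the thesis used this particular classifier is not stated in the abstract, and all signals below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, seconds = 64, 30                     # 64 Hz envelopes, 30 s segment
n = fs * seconds
env_a = rng.random(n)                    # talker A's speech envelope
env_b = rng.random(n)                    # talker B's speech envelope
# Toy "EEG-decoded" envelope: noisy tracking of the attended talker (A)
eeg_estimate = env_a + 0.8 * rng.standard_normal(n)

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

decoded = "A" if corr(eeg_estimate, env_a) > corr(eeg_estimate, env_b) else "B"
print("decoded attended talker:", decoded)
```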

    Robust Neuronal Discrimination in Primary Auditory Cortex Despite Degradations of Spectro-temporal Acoustic Details: Comparison Between Guinea Pigs with Normal Hearing and Mild Age-Related Hearing Loss

    This study investigated to what extent the primary auditory cortex of young normal-hearing animals and aged, mildly hearing-impaired animals can maintain an invariant representation of critical temporal-modulation features when sounds are subjected to degradations of fine spectro-temporal acoustic details. This was achieved by recording ensembles of cortical responses to conspecific vocalizations in guinea pigs with either normal hearing or mild age-related sensorineural hearing loss. The vocalizations were degraded using a tone vocoder. The neuronal responses and their discrimination capacities (estimated by mutual information) were analyzed at the single-recording and population levels. For normal-hearing animals, the neuronal responses decreased as a function of the number of vocoder frequency bands, as did their discriminative capacities at the single-recording level. However, small neuronal populations were found to be robust to the degradations induced by the vocoder. Similar robustness was obtained when broadband noise was added to further exacerbate the spectro-temporal distortions produced by the vocoder. A comparable pattern of robustness to degradations in fine spectro-temporal details was found for hearing-impaired animals. However, the latter showed an overall decrease in neuronal discrimination capacity between vocalizations in noisy conditions. Consistent with previous studies, these results demonstrate that the primary auditory cortex maintains a robust neural representation of the temporal-envelope features of communication sounds under a large range of spectro-temporal degradations.
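
    The abstract names mutual information as the discrimination measure; below is a minimal sketch of computing the mutual information (in bits) between presented and decoded vocalizations from a confusion matrix. The confusion counts are illustrative, not data from the study.

```python
import numpy as np

# Rows = presented vocalization, columns = vocalization decoded from the
# neuronal responses (illustrative counts)
confusion = np.array([
    [18.0, 1.0, 1.0],
    [2.0, 16.0, 2.0],
    [1.0, 3.0, 16.0],
])

p_xy = confusion / confusion.sum()       # joint distribution p(stimulus, decoded)
p_x = p_xy.sum(axis=1, keepdims=True)    # marginal over decoded labels
p_y = p_xy.sum(axis=0, keepdims=True)    # marginal over stimuli
outer = p_x * p_y                        # product of marginals
nz = p_xy > 0                            # skip empty cells to avoid log(0)
mi_bits = np.sum(p_xy[nz] * np.log2(p_xy[nz] / outer[nz]))
print(f"MI = {mi_bits:.2f} bits (ceiling for 3 stimuli: log2(3) = 1.58)")
```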

    Temporal coding of speech in human auditory cortex

    Human listeners can reliably recognize speech in complex listening environments. The underlying neural mechanisms, however, remain unclear and cannot yet be emulated by any artificial system. In this dissertation, we study how speech is represented in the human auditory cortex and how the neural representation contributes to reliable speech recognition. Cortical activity from normal-hearing human subjects is recorded noninvasively using magnetoencephalography during natural speech listening. It is first demonstrated that neural activity from the auditory cortex is precisely synchronized to the slow temporal modulations of speech when the speech signal is presented in a quiet listening environment. How this neural representation is affected by acoustic interference is then investigated. Acoustic interference degrades speech perception via two mechanisms, informational masking and energetic masking, which are addressed here by using, respectively, a competing speech stream and stationary noise as the interfering sound. When two speech streams are presented simultaneously, cortical activity is predominantly synchronized to the speech stream the listener attends to, even if the unattended, competing speech stream is 8 dB more intense. When speech is presented together with spectrally matched stationary noise, cortical activity remains precisely synchronized to the temporal modulations of speech until the noise is 9 dB more intense. Critically, the accuracy of neural synchronization to speech predicts how well individual listeners can understand speech in noise. Further analysis reveals that two neural sources contribute to speech-synchronized cortical activity, one with a shorter response latency of about 50 ms and the other with a longer response latency of about 100 ms. The longer-latency component, but not the shorter-latency component, shows selectivity to the attended speech and invariance to background noise, indicating a transition in the auditory cortex from encoding the acoustic scene to encoding the behaviorally important auditory object. Taken together, we have demonstrated that during natural speech comprehension, neural activity in the human auditory cortex is precisely synchronized to the slow temporal modulations of speech. This neural synchronization is robust to acoustic interference, whether speech or noise, and therefore provides a strong candidate for the neural basis of background-invariant speech recognition.
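
    A minimal sketch of the kind of envelope-synchronization measurement described here: extract the slow temporal envelope of a speech-like signal via the Hilbert transform, then correlate a simulated cortical trace with it at an assumed latency. The signals, the 100 ms latency, and the filter settings below are all illustrative, not the dissertation's actual pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000                                      # sample rate (Hz)
t = np.arange(0, 10, 1 / fs)
# Speech-like carrier with a 4 Hz amplitude modulation
speech = (1 + np.sin(2 * np.pi * 4 * t)) * np.random.randn(t.size)

envelope = np.abs(hilbert(speech))             # broadband temporal envelope
b, a = butter(2, 10 / (fs / 2))                # keep only slow (<10 Hz) modulations
envelope = filtfilt(b, a, envelope)

lag = int(0.1 * fs)                            # assume ~100 ms cortical latency
cortical = np.roll(envelope, lag) + 0.5 * np.random.randn(t.size)  # toy cortical trace

r = np.corrcoef(cortical[lag:], envelope[:-lag])[0, 1]
print(f"envelope-tracking correlation r = {r:.2f}")
```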
