
    Sensitivity to interaural time differences in the medial superior olive of a small mammal, the Mexican free-tailed bat

    Neurons in the medial superior olive (MSO) are thought to encode interaural time differences (ITDs), the main binaural cues used for localizing low-frequency sounds in the horizontal plane. The underlying mechanism is supposed to rely on a coincidence of excitatory inputs from the two ears that are phase-locked to either the stimulus frequency or the stimulus envelope. Extracellular recordings from MSO neurons in several mammals conform to this theory. However, two aspects remain puzzling. The first concerns the role of the MSO in small mammals that have relatively poor low-frequency hearing and whose heads generate only very small ITDs. The second concerns the role of the prominent binaural inhibitory inputs to MSO neurons. We examined these two unresolved issues by recording from MSO cells in the Mexican free-tailed bat. Using sinusoidally amplitude-modulated tones, we found that the ITD sensitivities of many MSO cells in the bat were remarkably similar to those reported for larger mammals. Our data also indicate an important role for inhibition in sharpening ITD sensitivity and increasing the dynamic range of ITD functions. A simple model of ITD coding based on the timing of multiple inputs is proposed. Additionally, our data suggest that ITD coding is a by-product of a neuronal circuit that processes the temporal structure of sounds. Because of the free-tailed bat's small head size, ITD coding is most likely not the major function of the MSO in this small mammal, and probably not in other small mammals either.
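    As an illustration of the coincidence-detection account summarized above, the following sketch simulates an idealized ITD tuning curve: two spike trains phase-locked to the envelope of a sinusoidally amplitude-modulated tone, one shifted by the ITD, drive a detector that counts near-coincident arrivals. This is a minimal stand-in, not the proposed multi-input model; it omits the inhibitory inputs discussed above, and the modulation frequency, jitter, and coincidence window are assumed values.

```python
# Minimal coincidence-detection sketch for ITD tuning (illustrative only;
# parameters are assumed, and the inhibitory inputs discussed in the
# abstract are not modeled).
import numpy as np

def phase_locked_spikes(mod_freq_hz, dur_s, delay_s, jitter_s, rng):
    """One spike per modulation cycle at the envelope peak, plus timing jitter."""
    peaks = np.arange(0.0, dur_s, 1.0 / mod_freq_hz) + delay_s
    return peaks + rng.normal(0.0, jitter_s, size=peaks.size)

def coincidences(left, right, window_s):
    """Count left spikes whose nearest right spike falls within the window."""
    nearest = np.min(np.abs(left[:, None] - right[None, :]), axis=1)
    return int(np.sum(nearest < window_s))

rng = np.random.default_rng(0)
mod_freq = 100.0                        # SAM modulation frequency (Hz)
for itd_us in range(-500, 501, 100):    # ITD sweep (microseconds)
    left = phase_locked_spikes(mod_freq, 1.0, 0.0, 100e-6, rng)
    right = phase_locked_spikes(mod_freq, 1.0, itd_us * 1e-6, 100e-6, rng)
    print(f"ITD {itd_us:+4d} us -> {coincidences(left, right, 150e-6)} coincidences")
```

    The count peaks near zero delay and falls off as the ITD exceeds the coincidence window, yielding an idealized tuning curve of the kind measured extracellularly.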

    Speaker Normalization Using Cortical Strip Maps: A Neural Model for Steady State Vowel Categorization

    Auditory signals of speech are speaker-dependent, but representations of language meaning are speaker-independent. The transformation from speaker-dependent to speaker-independent language representations enables speech to be learned and understood from different speakers. A neural model is presented that performs speaker normalization to generate a pitch-independent representation of speech sounds, while also preserving information about speaker identity. This speaker-invariant representation is categorized into unitized speech items, which input to sequential working memories whose distributed patterns can be categorized, or chunked, into syllable and word representations. The proposed model fits into an emerging model of auditory streaming and speech categorization. The auditory streaming and speaker normalization parts of the model both use multiple strip representations and asymmetric competitive circuits, thereby suggesting that these two circuits arose from similar neural designs. The normalized speech items are rapidly categorized and stably remembered by Adaptive Resonance Theory circuits. Simulations use synthesized steady-state vowels from the Peterson and Barney [J. Acoust. Soc. Am. 24, 175-184 (1952)] vowel database and achieve accuracy rates similar to those achieved by human listeners. These results are compared to behavioral data and other speaker normalization models. National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624).
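    As a rough illustration of the normalization idea (not of the strip-map or Adaptive Resonance Theory circuitry itself), the sketch below removes uniform vocal-tract scaling by working with log formant ratios and then assigns a vowel by nearest centroid. The vowel labels and formant values are placeholders invented for the example, not entries from the Peterson and Barney database.

```python
# Minimal sketch of scale-independent vowel categorization via log formant
# ratios and nearest-centroid matching (placeholder values; not the
# strip-map/ART model described in the abstract).
import numpy as np

def normalize(formants_hz):
    """Map (F1, F2, F3) to log ratios, removing uniform vocal-tract scaling."""
    f1, f2, f3 = formants_hz
    return np.array([np.log(f2 / f1), np.log(f3 / f2)])

# Hypothetical category centroids (formant values in Hz are placeholders).
centroids = {
    "iy": normalize((270.0, 2290.0, 3010.0)),
    "aa": normalize((730.0, 1090.0, 2440.0)),
}

def categorize(formants_hz):
    feat = normalize(formants_hz)
    return min(centroids, key=lambda v: np.linalg.norm(feat - centroids[v]))

# A uniformly scaled vocal tract (here, all formants shifted up by 20%)
# maps to the same category, since the ratios are unchanged.
print(categorize((270.0, 2290.0, 3010.0)))  # -> iy
print(categorize((324.0, 2748.0, 3612.0)))  # -> iy
```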

    Signal processing methodologies for an acoustic fetal heart rate monitor

    The research and development of real-time signal processing methodologies for detecting fetal heart tones within a noise-contaminated signal from a passive acoustic sensor is presented. A linear predictor algorithm is used to detect the heart-tone events, and additional processing derives the heart rate. The linear predictor is adaptively 'trained', in a least-mean-square-error sense, on generic fetal heart tones recorded from patients. A real-time monitor system is described that outputs to a strip-chart recorder to plot the time history of the fetal heart rate. The system is validated in the context of the fetal nonstress test, with comparisons made against ultrasonic nonstress tests on a series of patients. The comparative data favorably indicate the feasibility of the acoustic monitor for clinical use.
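    A minimal sketch of the adaptive linear prediction at the core of this approach is given below. It is a simplified variant rather than the published monitor: here the LMS predictor adapts to the background, and candidate heart-tone events are flagged where the prediction error spikes, with the signal, rates, and threshold all invented for the example.

```python
# Simplified LMS linear-predictor sketch for acoustic event detection
# (illustrative variant; signal parameters and threshold are assumed).
import numpy as np

def lms_prediction_error(x, order=8, mu=0.01):
    """Per-sample error of a linear predictor adapted by the LMS rule."""
    w = np.zeros(order)
    err = np.zeros(x.size)
    for n in range(order, x.size):
        past = x[n - order:n][::-1]       # most recent samples first
        err[n] = x[n] - w @ past          # prediction error
        w += 2.0 * mu * err[n] * past     # LMS weight update
    return err

rng = np.random.default_rng(1)
fs = 1000                                  # sample rate (Hz)
t = np.arange(0, 5.0, 1.0 / fs)
x = 0.1 * rng.normal(size=t.size)          # background noise
for beat_s in np.arange(0.5, 4.9, 0.45):   # ~133 bpm placeholder heart rate
    i = int(beat_s * fs)
    x[i:i + 30] += np.sin(2 * np.pi * 60.0 * t[:30])  # short 60 Hz "tone burst"

e = lms_prediction_error(x)
threshold = 4.0 * np.std(e[: int(0.4 * fs)])          # noise-only segment
onsets = np.flatnonzero(np.abs(e) > threshold)
print(f"{onsets.size} above-threshold samples; first at t = {t[onsets[0]]:.2f} s")
```

    In the described system, the heart rate would then be derived from the intervals between detected events; grouping the above-threshold samples into events would be the corresponding step here.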

    Functional roles of synaptic inhibition in auditory temporal processing

    Nonlinear mechanisms in passive microwave devices

    Extraordinary doctorate award, academic year 2010-2011, field of ICT Engineering. The telecommunications industry follows a tendency towards smaller devices, higher power, and higher frequency, which implies an increase in the complexity of the electronics involved. Moreover, there is a need for extended capabilities, such as frequency-tunable devices, ultra-low losses, or high power handling, which call for advanced materials. In addition, increasingly demanding communication standards and regulations push the limits of the acceptable performance-degradation indicators. This is the case for nonlinearities, whose effects (increased adjacent channel power ratio (ACPR), harmonics, and intermodulation distortion, among others) are being included in the performance requirements as maximum tolerable levels. In this context, proper modeling of the devices at the design stage is of crucial importance for predicting not only the device performance but also the global system indicators, and for making sure that the requirements are fulfilled. Accordingly, this work proposes the necessary steps for implementing circuit models of different passive microwave devices, from linear and nonlinear measurements to the simulations that validate them. Bulk acoustic wave resonators and transmission lines made of high-temperature superconductors, ferroelectrics, or regular metals and dielectrics are the subject of this work. Both phenomenological and physical approaches are considered, and circuit models are proposed and compared with measurements. The nonlinear observables (harmonics, intermodulation distortion, and saturation or detuning) are related to the material properties that originate them. The obtained models can be used in circuit simulators to predict the performance of these microwave devices under complex modulated signals, or to predict their performance when integrated into more complex systems. A key step towards this goal is an accurate characterization of materials and devices, which is addressed with advanced measurement techniques; considerations on special measurement setups are therefore made throughout this thesis.
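    To make the nonlinear observables concrete, the sketch below drives a memoryless third-order polynomial nonlinearity (a textbook stand-in for the device behaviors described, not a fitted model from the thesis) with a two-tone test and reads the harmonic and third-order intermodulation (IM3) levels off the spectrum. The frequencies and coefficients are placeholders.

```python
# Two-tone test on a memoryless cubic nonlinearity: harmonics and IM3
# (placeholder coefficients; not a fitted model of any device in the thesis).
import numpy as np

fs = 100_000                      # sample rate (Hz)
t = np.arange(0, 0.1, 1.0 / fs)   # 0.1 s -> 10 Hz bin spacing
f1, f2 = 10_000.0, 11_000.0       # two-tone frequencies (Hz)
x = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)

a1, a3 = 1.0, 0.05                # linear and cubic coefficients (assumed)
y = a1 * x + a3 * x**3            # memoryless polynomial model

spec = np.abs(np.fft.rfft(y)) / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

def level_db(f_hz):
    """Spectral magnitude (dB) at the bin nearest f_hz."""
    return 20.0 * np.log10(spec[np.abs(freqs - f_hz).argmin()] + 1e-12)

# IM3 products fall at 2*f1 - f2 and 2*f2 - f1, right next to the carriers,
# which is why they (and ACPR) dominate in-band distortion budgets.
for name, f in [("tone f1", f1), ("3rd harmonic", 3 * f1),
                ("IM3 low", 2 * f1 - f2), ("IM3 high", 2 * f2 - f1)]:
    print(f"{name:12s} @ {f / 1000:5.1f} kHz: {level_db(f):7.1f} dB")
```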

    On the mechanism of response latencies in auditory nerve fibers

    Despite structural differences of the middle and inner ears, the latency pattern of auditory nerve fibers in response to an identical sound has been found to be similar across numerous species. Studies have shown this similarity even in species with markedly distinct cochleae, or without a basilar membrane altogether. This stimulus-, neuron-, and species-independent similarity of latency cannot be explained simply by the concept of cochlear traveling waves, which is generally accepted as the main cause of the neural latency pattern. An original concept, the Fourier pattern, is defined to characterize a feature of temporal processing (specifically, phase encoding) that is not readily apparent in more conventional analyses. The pattern is created by marking the first amplitude maximum of each sinusoidal component of the stimulus, thereby encoding its phase information. The hypothesis is that the hearing organ serves as a running analyzer whose output reflects synchronization of auditory neural activity consistent with the Fourier pattern. A combination of experimental, correlational, and meta-analytic approaches is used to test the hypothesis. Phase encoding and stimuli were manipulated to test their effects on the predicted latency pattern, and animal studies in the literature using the same stimuli were compared to determine the degree of relationship. The results show that each marking accounts for a large percentage of a corresponding peak latency in the peristimulus-time histogram. For each of the stimuli considered, the latency predicted by the Fourier pattern is highly correlated with the observed latency in the auditory nerve fibers of representative species. The results suggest that the hearing organ analyzes not only the amplitude spectrum but also the phase information of a Fourier analysis, distributing the specific spikes among auditory nerve fibers and within single units. This phase-encoding mechanism is proposed as the common mechanism that, in the face of species differences in peripheral auditory hardware, accounts for the considerable cross-species similarity of latency-by-frequency functions, in turn assuring optimal phase encoding across species. The mechanism also has the potential to improve the phase encoding of cochlear implants.
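    One minimal reading of the Fourier-pattern construction can be sketched directly: for each sinusoidal component of a stimulus, mark the time of its first amplitude maximum, which jointly reflects the component's frequency and phase. The cosine phase convention, the onset at t = 0, and the example components below are assumptions made for illustration, not the paper's stimuli.

```python
# Sketch of the "Fourier pattern" marking as interpreted here: the first
# amplitude maximum of each component cos(2*pi*f*t + phi) for t >= 0.
# (Phase convention and example components are assumptions.)
import numpy as np

def first_maximum_time(freq_hz, phase_rad):
    """Time of the first peak of cos(2*pi*f*t + phase) at or after t = 0."""
    return ((-phase_rad) % (2.0 * np.pi)) / (2.0 * np.pi * freq_hz)

# Hypothetical stimulus components: (frequency in Hz, starting phase in rad).
components = [(250.0, 0.0), (1000.0, np.pi / 2), (4000.0, np.pi)]
for f, phi in components:
    print(f"{f:6.0f} Hz, phase {phi:4.2f} rad -> first max at "
          f"{first_maximum_time(f, phi) * 1e3:6.3f} ms")

# With zero starting phase the predicted marking time falls off as 1/f,
# qualitatively matching latency-by-frequency functions across fibers.
```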