62 research outputs found

    Pitch representations in the auditory nerve: two concurrent complex tones

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 39-43).
    Pitch differences between concurrent sounds are important cues used in auditory scene analysis and also play a major role in music perception. To investigate the neural codes underlying these perceptual abilities, we recorded from single fibers in the cat auditory nerve in response to two concurrent harmonic complex tones with missing fundamentals and equal-amplitude harmonics. We investigated the efficacy of rate-place and interspike-interval codes to represent both pitches of the two tones, which had fundamental frequency (F0) ratios of 15/14 or 11/9. We relied on the principle of scaling invariance in cochlear mechanics to infer the spatiotemporal response patterns to a given stimulus from a series of measurements made in a single fiber as a function of F0. Templates created by a peripheral auditory model were used to estimate the F0s of double complex tones from the inferred distribution of firing rate along the tonotopic axis. This rate-place representation was accurate for F0s above about 900 Hz. Surprisingly, rate-based F0 estimates were accurate even when the two-tone mixture contained no resolved harmonics, so long as some harmonics were resolved prior to mixing. We also extended methods used previously for single complex tones to estimate the F0s of concurrent complex tones from interspike-interval distributions pooled over the tonotopic axis. The interval-based representation was accurate for F0s below about 900 Hz, where the two-tone mixture contained no resolved harmonics. Together, the rate-place and interval-based representations allow accurate pitch perception for concurrent sounds over the entire range of human voice and cat vocalizations.
    by Erik Larsen. S.M.
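    The interval-based pitch analysis described above can be illustrated with a minimal sketch: pool interspike intervals (here simulated, not real auditory-nerve data), then score each candidate F0 by how much of the interval mass falls near integer multiples of its period. The function name, tolerance, and scoring rule are illustrative assumptions, not the thesis's actual template method.

```python
import numpy as np

def estimate_f0_from_isis(isis, f0_grid, tol=0.05):
    """Score each candidate F0 by the fraction of pooled interspike
    intervals (ISIs) lying within `tol` periods of an integer multiple
    of the candidate period. Illustrative sketch only."""
    isis = np.asarray(isis, dtype=float)
    scores = []
    for f0 in f0_grid:
        mult = isis * f0                      # interval in units of the candidate period
        frac = np.abs(mult - np.round(mult))  # distance to nearest integer multiple
        scores.append(np.mean(frac < tol))
    return f0_grid[int(np.argmax(scores))]

# Toy data: intervals at multiples of a 5 ms period (200 Hz F0) plus jitter.
rng = np.random.default_rng(0)
isis = (1.0 / 200.0) * rng.integers(1, 6, size=500) + rng.normal(0, 1e-4, size=500)
f0_grid = np.linspace(100.0, 300.0, 301)
f0_hat = estimate_f0_from_isis(isis, f0_grid)
print(round(f0_hat))  # close to 200 Hz
```

    Restricting the search grid sidesteps the octave ambiguity inherent in interval statistics (a candidate at twice the true F0 fits the same intervals); model-based templates, as used in the thesis, address this more carefully than this simple score.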

    Neural Coding of Sound Envelope in Reverberant Environments

    Speech reception depends critically on temporal modulations in the amplitude envelope of the speech signal. Reverberation encountered in everyday environments can substantially attenuate these modulations. To assess the effect of reverberation on the neural coding of amplitude envelope, we recorded from single units in the inferior colliculus (IC) of unanesthetized rabbit using sinusoidally amplitude modulated (AM) broadband noise stimuli presented in simulated anechoic and reverberant environments. Although reverberation degraded both rate and temporal coding of AM in IC neurons, in most neurons, the degradation in temporal coding was smaller than the AM attenuation in the stimulus. This compensation could largely be accounted for by the compressive shape of the modulation input–output function (MIOF), which describes the nonlinear transformation of modulation depth from acoustic stimuli into neural responses. Additionally, in a subset of neurons, the temporal coding of AM was better for reverberant stimuli than for anechoic stimuli having the same modulation depth at the ear. Using hybrid anechoic stimuli that selectively possess certain properties of reverberant sounds, we show that this reverberant advantage is not caused by envelope distortion, static interaural decorrelation, or spectral coloration. Overall, our results suggest that the auditory system may possess dual mechanisms that make the coding of amplitude envelope relatively robust in reverberation: one general mechanism operating for all stimuli with small modulation depths, and another mechanism dependent on very specific properties of reverberant stimuli, possibly the periodic fluctuations in interaural correlation at the modulation frequency.
    National Institutes of Health (U.S.) (Grant R01DC002258)
    National Institutes of Health (U.S.) (Grant P30DC0005209)
    Paul and Daisy Soros Fellowships for New Americans
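    The compensation by a compressive MIOF can be seen with a toy example: if neural modulation depth grows as a compressive function of stimulus modulation depth, then halving the stimulus modulation attenuates the neural response by much less than half. The power-law form and exponent below are illustrative assumptions, not fitted values from this study.

```python
import numpy as np

def miof(m_stim, exponent=0.4):
    """Toy compressive MIOF: neural modulation depth as a power law of
    stimulus modulation depth (exponent < 1 makes it compressive).
    Illustrative only."""
    return np.asarray(m_stim, dtype=float) ** exponent

m_anechoic = 0.8   # envelope modulation depth of the dry stimulus
m_reverb = 0.4     # reverberation attenuates the envelope modulation

stim_atten = m_reverb / m_anechoic                       # 0.5 at the ear
neural_atten = float(miof(m_reverb) / miof(m_anechoic))  # ~0.76 in the response
print(stim_atten, round(neural_atten, 2))
```

    Because the MIOF is compressive, the neural attenuation (about 0.76 here) is much milder than the acoustic attenuation (0.5), which is the shape-based compensation the abstract describes.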

    A Point Process Model for Auditory Neurons Considering Both Their Intrinsic Dynamics and the Spectrotemporal Properties of an Extrinsic Signal

    We propose a point process model of spiking activity from auditory neurons. The model takes account of the neuron's intrinsic dynamics as well as the spectrotemporal properties of an input stimulus. A discrete Volterra expansion is used to derive the form of the conditional intensity function. The Volterra expansion models the neuron's baseline spike rate, its intrinsic dynamics (spiking history), and the stimulus effect, which in this case is the analog of the spectrotemporal receptive field (STRF). We performed the model fitting efficiently in a generalized linear model framework, using ridge regression to properly address this ill-posed maximum likelihood estimation problem. The model provides an excellent fit to spiking activity from 55 auditory nerve neurons. The STRF-like representation estimated jointly with the neuron's intrinsic dynamics may offer more accurate characterizations of neural activity in the auditory system than current ones based solely on the STRF.
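    A minimal version of such a model can be sketched as a discrete-time Poisson GLM whose log-intensity is linear in stimulus lags (a 1-D stand-in for the STRF) and spike-history lags, fit by Newton's method with an L2 (ridge) penalty. All sizes, filter values, and the penalty weight below are illustrative assumptions, and the simulated spike train stands in for real auditory-nerve recordings.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- simulate a toy spike train driven by a 1-D stimulus, with history effects ---
T = 2000
stim = rng.normal(size=T)
true_k = np.array([0.8, 0.4, 0.2])   # stimulus filter (1-D analog of an STRF)
true_h = np.array([-2.0, -1.0])      # spike-history weights (refractoriness)
b0 = -2.5                            # baseline log-rate

spikes = np.zeros(T)
for t in range(T):
    drive = sum(true_k[j] * stim[t - j] for j in range(3) if t - j >= 0)
    hist = sum(true_h[j] * spikes[t - 1 - j] for j in range(2) if t - 1 - j >= 0)
    lam = np.exp(b0 + drive + hist)
    spikes[t] = 1.0 if rng.poisson(min(lam, 1.0)) > 0 else 0.0

def lagged(v, n_lags, shift):
    """Columns v[t - shift], v[t - shift - 1], ..., zero-padded at the start."""
    cols = []
    for j in range(n_lags):
        c = np.roll(v, j + shift).astype(float)
        c[: j + shift] = 0.0
        cols.append(c)
    return np.column_stack(cols)

# Design matrix: intercept + stimulus lags + spike-history lags.
X = np.column_stack([np.ones(T), lagged(stim, 3, 0), lagged(spikes, 2, 1)])

# Ridge-penalized Poisson log-likelihood, maximized by Newton's method.
alpha = 1.0
P = alpha * np.eye(X.shape[1])
P[0, 0] = 0.0                        # do not penalize the intercept
beta = np.zeros(X.shape[1])
for _ in range(50):
    lam = np.exp(X @ beta)
    grad = X.T @ (spikes - lam) - P @ beta        # penalized score
    hess = -(X.T * lam) @ X - P                   # penalized observed information
    beta -= np.linalg.solve(hess, grad)

print(np.round(beta[1:4], 2))   # recovered stimulus filter, roughly true_k
```

    Estimating the stimulus and history terms jointly, as the paper advocates, keeps refractory gaps from being misattributed to the stimulus filter; the ridge penalty stabilizes the otherwise ill-conditioned fit.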

    Accurate Sound Localization in Reverberant Environments Is Mediated by Robust Encoding of Spatial Cues in the Auditory Midbrain

    In reverberant environments, acoustic reflections interfere with the direct sound arriving at a listener's ears, distorting the spatial cues for sound localization. Yet, human listeners have little difficulty localizing sounds in most settings. Because reverberant energy builds up over time, the source location is represented relatively faithfully during the early portion of a sound, but this representation becomes increasingly degraded later in the stimulus. We show that the directional sensitivity of single neurons in the auditory midbrain of anesthetized cats follows a similar time course, although onset dominance in temporal response patterns results in more robust directional sensitivity than expected, suggesting a simple mechanism for improving directional sensitivity in reverberation. In parallel behavioral experiments, we demonstrate that human lateralization judgments are consistent with predictions from a population rate model decoding the observed midbrain responses, suggesting a subcortical origin for robust sound localization in reverberant environments.
    National Institutes of Health (U.S.) (Grant R01 DC002258)
    National Institutes of Health (U.S.) (Grant R01 DC05778-02)
    National Institutes of Health (U.S.) (Eaton-Peabody Laboratory Core Grant P30 DC005209)
    National Institutes of Health (U.S.) (Grant T32 DC0003)

    Neural encoding of sound source location in the presence of a concurrent, spatially separated source

    Day ML, Koka K, Delgutte B. Neural encoding of sound source location in the presence of a concurrent, spatially separated source

    Speech Communication

    Contains research objectives and summary of research on four research projects.
    National Institutes of Health (Grant 5 RO1 NS04332-14)
    National Institutes of Health (Grant 5 T32 NS07040-02)
    National Institutes of Health (Fellowship 1 F22 NS00796-01)
    National Institutes of Health (Grant 1 RO1 NS13028-01)
    National Institutes of Health (Fellowship 1 F22 MH58258-02)
    U. S. Army, Maryland Procurement Office (Contract MDA904-76-C-0331)

    Speech Communication

    Contains reports on four research projects.
    National Institutes of Health (Grant 5 RO1 NS04332-15)
    National Institutes of Health (Grant 5 T32 NS07040-03)
    National Institutes of Health (Grant 5 RO1 NS13028-02)
    National Science Foundation (Grant BNS76-80278)

    Speech Communication

    Contains research objectives and summary of research on six research projects and reports on three research projects.
    National Institutes of Health (Grant 5 RO1 NS04332-13)
    National Institutes of Health (Fellowship 1 F22 MH5825-01)
    National Institutes of Health (Grant 1 T32 NS07040-01)
    National Institutes of Health (Fellowship 1 F22 NS007960)
    National Institutes of Health (Fellowship 1 F22 HD019120)
    National Institutes of Health (Fellowship 1 F22 HD01919-01)
    U. S. Army (Contract DAAB03-75-C-0489)
    National Institutes of Health (Grant 5 RO1 NS04332-12)

    Signal Transmission in the Auditory System

    Contains table of contents for Section 3, an introduction, and reports on seven research projects.
    National Institutes of Health Grant P01-DC-00119
    National Institutes of Health Grant R01-DC-00194
    National Institutes of Health Grant R01 DC00238
    National Institutes of Health Grant R01-DC02258
    National Institutes of Health Grant T32-DC00038
    National Institutes of Health Grant P01-DC00361
    National Institutes of Health Grant 2RO1 DC00235
    National Institutes of Health Contract N01-DC2240

    Speech Communication

    Contains reports on three research projects.
    National Institutes of Health (Grant 2 RO1 NS04332)
    National Institutes of Health (Training Grant 5 T32 NS07040)
    C. J. LeBel Fellowships
    National Institutes of Health (Grant 5 RO1 NS13028)
    National Science Foundation (Grant BNS76-80278)
    National Science Foundation (Grant BNS77-26871)