
    Auditory cortical responses in the cat to sounds that produce spatial illusions

    Humans and cats can localize a sound source accurately if its spectrum is fairly broad and flat(1-3), as is typical of most natural sounds. However, if sounds are filtered to reduce the width of the spectrum, they result in illusions of sources that are very different from the actual locations, particularly in the up/down and front/back dimensions(4-6). Such illusions reveal that the auditory system relies on specific characteristics of sound spectra to obtain cues for localization(7). In the auditory cortex of cats, temporal firing patterns of neurons can signal the locations of broad-band sounds(8-9). Here we show that such spike patterns systematically mislocalize sounds that have been passed through a narrow-band filter. Both correct and incorrect locations signalled by neurons can be predicted quantitatively by a model of spectral processing that also predicts correct and incorrect localization judgements by human listeners(6). Similar cortical mechanisms, if present in humans, could underlie human auditory spatial perception.
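
    As a minimal sketch, not the authors' stimulus code, the following shows the kind of narrow-band filtering the study describes: broadband noise, whose flat spectrum supports accurate localization, is band-pass filtered so that most spectral localization cues are removed. The sample rate, stimulus duration, center frequency, bandwidth, and filter order are illustrative assumptions.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        fs = 44100                                       # sample rate (Hz), assumed
        rng = np.random.default_rng(0)
        broadband = rng.standard_normal(int(fs * 0.15))  # flat-spectrum noise burst

        center, bw = 8000.0, 1000.0                      # narrow band (Hz), assumed
        sos = butter(4, [center - bw / 2, center + bw / 2],
                     btype="bandpass", fs=fs, output="sos")
        # Narrow-band version: spectral cues for elevation and front/back
        # judgements are largely destroyed, producing localization illusions.
        narrowband = sosfiltfilt(sos, broadband)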

    Unanesthetized Auditory Cortex Exhibits Multiple Codes for Gaps in Cochlear Implant Pulse Trains

    Cochlear implant listeners receive auditory stimulation through amplitude-modulated electric pulse trains. Auditory nerve studies in animals demonstrate qualitatively different patterns of firing elicited by low versus high pulse rates, suggesting that stimulus pulse rate might influence the transmission of temporal information through the auditory pathway. We tested in awake guinea pigs the temporal acuity of auditory cortical neurons for gaps in cochlear implant pulse trains. Consistent with results obtained under anesthesia, temporal acuity improved with increasing pulse rates. Unlike the anesthetized condition, however, cortical neurons responded in the awake state to multiple distinct features of the gap-containing pulse trains, with the dominant features varying with stimulus pulse rate. Responses to the onset of the trailing pulse train (Trail-ON) provided the most sensitive gap detection at 1,017 and 4,069 pulses-per-second (pps) rates, particularly for short (25 ms) leading pulse trains. In contrast, at the 254 pps rate with long (200 ms) leading pulse trains, a sizeable fraction of units demonstrated greater temporal acuity in the form of robust responses to the offsets of the leading pulse train (Lead-OFF). Finally, TONIC responses exhibited decrements in firing rate during gaps, but were rarely the most sensitive feature. Unlike results from anesthetized conditions, temporal acuity of the most sensitive units was nearly as sharp for brief as for long leading bursts. The differences in stimulus coding across pulse rates likely originate from pulse rate-dependent variations in adaptation in the auditory nerve. Two marked differences from responses to acoustic stimulation were: first, Trail-ON responses to 4,069 pps trains encoded substantially shorter gaps than have been observed with acoustic stimuli; and second, the Lead-OFF gap coding seen for <15 ms gaps in 254 pps stimuli is not seen in responses to sounds. The current results may help to explain why moderate pulse rates around 1,000 pps are favored by many cochlear implant listeners.
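
    The stimulus structure described above can be sketched as follows. This is an illustrative reconstruction, not the study's code, and the single-sample pulses stand in for the biphasic current pulses an implant actually delivers.

        import numpy as np

        def gap_pulse_train(pps, lead_ms, gap_ms, trail_ms, fs=100_000):
            """Leading burst, silent gap, trailing burst, all at `pps` pulses/s."""
            def burst(dur_ms):
                x = np.zeros(int(fs * dur_ms / 1000))
                x[::int(fs / pps)] = 1.0      # one pulse per inter-pulse period
                return x
            gap = np.zeros(int(fs * gap_ms / 1000))
            return np.concatenate([burst(lead_ms), gap, burst(trail_ms)])

        # e.g., the 254 pps condition with a 200 ms leading burst and a 10 ms gap:
        stim = gap_pulse_train(pps=254, lead_ms=200, gap_ms=10, trail_ms=200)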

    Acoustic Cues for Sound Source Distance and Azimuth in Rabbits, a Racquetball and a Rigid Spherical Model

    There are numerous studies measuring the transfer functions representing signal transformation between a source and each ear canal, i.e., the head-related transfer functions (HRTFs), for various species. However, only a handful of these address the effects of sound source distance on HRTFs. This is the first study of HRTFs in the rabbit in which the emphasis is on the effects of sound source distance and azimuth. With the rabbit placed in an anechoic chamber, we made acoustic measurements with miniature microphones placed deep in each ear canal in response to a sound source at different positions (10–160 cm distance, ±150° azimuth). The sound was a logarithmically swept broadband chirp. For comparison, we also obtained HRTFs from a racquetball and from a computational model of a rigid sphere. We found that (1) the spectral shape of the HRTF in each ear changed with sound source location; (2) interaural level difference (ILD) increased with decreasing distance and with increasing frequency, and ILDs can be substantial even at low frequencies when the source is close; and (3) interaural time difference (ITD) decreased with decreasing distance and generally increased with decreasing frequency. The observations in the rabbit were reproduced, in general, by those in the racquetball, albeit greater in magnitude in the rabbit. In the sphere model, the results were partly similar to and partly different from those in the racquetball and the rabbit. These findings refute the common notions that ILD is negligible at low frequencies and that ITD is constant across frequency; these misconceptions became evident only when distance-dependent changes were examined.
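
    For context, the classic far-field approximation of ITD around a rigid sphere (the Woodworth formula) is sketched below. This is not the study's computational model: it is frequency-independent and assumes a distant source, so it captures exactly the conventional baseline that the distance- and frequency-dependent measurements above refute. The head radius is an illustrative assumption.

        import numpy as np

        def woodworth_itd(azimuth_deg, radius_m=0.025, c=343.0):
            """Far-field ITD (s) for a rigid sphere; azimuth in [-90, 90] deg."""
            theta = np.deg2rad(azimuth_deg)
            return (radius_m / c) * (theta + np.sin(theta))

        print(woodworth_itd(90.0))   # ~187 microseconds for a 2.5 cm radius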

    Spectrotemporal Processing in Spectral Tuning Modules of Cat Primary Auditory Cortex

    Spectral integration properties show topographical order in cat primary auditory cortex (AI). Along the iso-frequency domain, regions with predominantly narrowly tuned (NT) neurons are segregated from regions with more broadly tuned (BT) neurons, forming distinct processing modules. Despite their prominent spatial segregation, these regions have not been compared with respect to spectrotemporal processing. We identified the NT and BT regions with broad-band ripple stimuli and characterized processing differences between them using both spectrotemporal receptive fields (STRFs) and nonlinear stimulus/firing-rate transformations. The durations of STRF excitatory and inhibitory subfields were shorter, and the best temporal modulation frequencies were higher, for BT neurons than for NT neurons. For NT neurons, the bandwidths of excitatory and inhibitory subfields were matched, whereas for BT neurons they were not. Phase locking and feature selectivity were higher for NT neurons. Properties of the nonlinearities showed only slight differences across the bandwidth modules. These results indicate fundamental differences in spectrotemporal preferences, and thus distinct physiological functions, for neurons in BT and NT spectral integration modules. However, some global processing aspects, such as spectrotemporal interactions and nonlinear input/output behavior, appear to be similar for both neuronal subgroups. The findings suggest that spectral integration modules in AI differ in which specific stimulus aspects are processed, but are similar in the manner in which stimulus information is processed.
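
    As an illustration of the general class of method behind the receptive fields described above, the following is a minimal sketch of STRF estimation by spike-triggered averaging of a stimulus spectrogram; the study's actual estimation procedure and nonlinearity analysis may differ.

        import numpy as np

        def strf_sta(spec, spikes, n_lags):
            """spec: (n_freq, n_time) stimulus spectrogram; spikes: spike counts
            per time bin. Returns the (n_freq, n_lags) spike-triggered average."""
            n_freq, _ = spec.shape
            sta = np.zeros((n_freq, n_lags))
            total = 0.0
            for t in np.flatnonzero(spikes):
                if t >= n_lags:               # need a full stimulus window
                    sta += spikes[t] * spec[:, t - n_lags:t]
                    total += spikes[t]
            return sta / max(total, 1.0)      # average stimulus preceding a spike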

    Foraging Behavior under Starvation Conditions Is Altered via Photosynthesis by the Marine Gastropod, Elysia clarki

    It has been well documented that nutritional state can influence the foraging behavior of animals. However, photosynthetic animals, those capable of both heterotrophy and symbiotic photosynthesis, may show a delayed behavioral response because of their ability to photosynthesize. To test this hypothesis, we subjected groups of the kleptoplastic sea slug, Elysia clarki, to a gradient of starvation treatments of 4, 8, and 12 weeks, plus a satiated control. Compared to the control group, slugs starved for 8 and 12 weeks displayed a significant increase in the proportion of slugs feeding and a significant decrease in photosynthetic capability, as measured by maximum quantum yield and chlorophyll a concentration ([chl a]). The 4-week group, however, showed no significant difference in feeding behavior or in the metrics of photosynthesis compared to the control. This suggests that photosynthesis in E. clarki, thought to be linked to horizontally transferred algal genes, delays the behavioral response to starvation. This is the first demonstration of a link between photosynthetic capability in an animal and a modification of foraging behavior under starvation conditions.

    Low is large: spatial location and pitch interact in voice-based body size estimation

    The binding of incongruent cues poses a challenge for multimodal perception. Indeed, although taller objects emit sounds from higher elevations, low-pitched sounds are perceptually mapped both to large size and to low elevation. In the present study, we examined how these incongruent vertical spatial cues (up is more) and pitch cues (low is large) to size interact, and whether similar biases influence size perception along the horizontal axis. In Experiment 1, we measured listeners’ voice-based judgments of human body size using pitch-manipulated voices projected from a high versus a low, and a right versus a left, spatial location. Listeners associated low spatial locations with largeness for lowered-pitch but not for raised-pitch voices, demonstrating that pitch overrode vertical-elevation cues. Listeners associated rightward spatial locations with largeness, regardless of voice pitch. In Experiment 2, listeners performed the task while sitting or standing, allowing us to examine self-referential cues to elevation in size estimation. Listeners associated vertically low and rightward spatial cues with largeness more for lowered- than for raised-pitch voices. These correspondences were robust to sex (of both the voice and the listener) and head elevation (standing or sitting); however, horizontal correspondences were amplified when participants stood. Moreover, when participants were standing, their judgments of how much larger men’s voices sounded than women’s increased when the voices were projected from the low speaker. Our results provide novel evidence for a multidimensional spatial mapping of pitch that is generalizable to human voices and that affects performance in an indirect, ecologically relevant spatial task (body size estimation). These findings suggest that crossmodal pitch correspondences evoke both low-level and higher-level cognitive processes.
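
    One way to produce lowered- and raised-pitch versions of a voice recording is sketched below, assuming librosa's pitch-shifting utility; the file name and the ±2-semitone shift are hypothetical, and the study's actual manipulation may have differed (for example, in how formants and duration were handled).

        import librosa
        import soundfile as sf

        y, sr = librosa.load("voice.wav", sr=None)  # hypothetical recording
        lowered = librosa.effects.pitch_shift(y, sr=sr, n_steps=-2.0)
        raised = librosa.effects.pitch_shift(y, sr=sr, n_steps=+2.0)
        sf.write("voice_lowered.wav", lowered, sr)  # "low is large" stimulus
        sf.write("voice_raised.wav", raised, sr)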

    Effect of Audiovisual Training on Monaural Spatial Hearing in Horizontal Plane

    This article tests the hypothesis that audiovisual integration can improve spatial hearing in monaural conditions, when interaural difference cues are not available. We trained one group of subjects on an audiovisual task, in which a flash was presented in parallel with the sound, and another group on an auditory task, in which only sound from different spatial locations was presented. To check whether the observed audiovisual effect was similar to feedback, a third group was trained using a visual feedback paradigm. Training sessions were administered once per day for 5 days. The performance level in each group was compared for auditory-only stimulation on the first and the last day of practice. Improvement after audiovisual training was several times greater than after auditory practice. The group trained with visual feedback showed a different effect of training, with smaller improvement than in the audiovisual group. We conclude that cross-modal facilitation is highly important for improving spatial hearing in monaural conditions and may be applied to the rehabilitation of patients with unilateral deafness and after unilateral cochlear implantation.

    Egocentric and allocentric representations in auditory cortex

    A key function of the brain is to provide a stable representation of an object’s location in the world. In hearing, sound azimuth and elevation are encoded by neurons throughout the auditory system, and auditory cortex is necessary for sound localization. However, the coordinate frame in which neurons represent sound space remains undefined: classical spatial receptive fields in head-fixed subjects can be explained either by sensitivity to sound source location relative to the head (egocentric encoding) or relative to the world (allocentric encoding). This coordinate-frame ambiguity can be resolved by studying freely moving subjects; here we recorded spatial receptive fields in the auditory cortex of freely moving ferrets. We found that most spatially tuned neurons represented sound source location relative to the head across changes in head position and direction. In addition, we recorded a small number of neurons in which sound location was represented in a world-centered coordinate frame. We used measurements of spatial tuning across changes in head position and direction to explore the influence of sound source distance and speed of head movement on auditory cortical activity and spatial tuning. Modulation depth of spatial tuning increased with distance for egocentric but not allocentric units, whereas, for both populations, modulation was stronger at faster movement speeds. Our findings suggest that early auditory cortex primarily represents sound source location relative to ourselves, but that a minority of cells can represent sound location in the world independent of our own position.
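
    The egocentric/allocentric distinction above reduces to a coordinate transform: a world-fixed (allocentric) source maps to different head-centered (egocentric) coordinates whenever the head moves. The 2-D sketch below, not the study's analysis code, illustrates this; the geometry and names are assumptions.

        import numpy as np

        def world_to_head(source_xy, head_xy, head_dir_deg):
            """Express a world-frame source position in a frame centered on the
            head and aligned with its facing direction."""
            phi = np.deg2rad(head_dir_deg)
            rot = np.array([[np.cos(phi), np.sin(phi)],
                            [-np.sin(phi), np.cos(phi)]])   # world -> head
            return rot @ (np.asarray(source_xy) - np.asarray(head_xy))

        source = (1.0, 0.0)                       # fixed allocentric location
        for head_pos, head_dir in [((0.0, 0.0), 0.0), ((0.5, 0.0), 45.0)]:
            # an egocentric unit's tuning follows this head-centered position;
            # an allocentric unit's tuning stays fixed to `source` itself
            print(world_to_head(source, head_pos, head_dir))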