
    Auditory cortical responses in the cat to sounds that produce spatial illusions

    Humans and cats can localize a sound source accurately if its spectrum is fairly broad and flat(1-3), as is typical of most natural sounds. However, if sounds are filtered to reduce the width of the spectrum, the result is illusions of sources at locations very different from the actual ones, particularly in the up/down and front/back dimensions(4-6). Such illusions reveal that the auditory system relies on specific characteristics of sound spectra to obtain cues for localization(7). In the auditory cortex of cats, temporal firing patterns of neurons can signal the locations of broad-band sounds(8-9). Here we show that such spike patterns systematically mislocalize sounds that have been passed through a narrow-band filter. Both correct and incorrect locations signalled by neurons can be predicted quantitatively by a model of spectral processing that also predicts correct and incorrect localization judgements by human listeners(6). Similar cortical mechanisms, if present in humans, could underlie human auditory spatial perception.
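    To make the spectral-cue account above concrete, the sketch below illustrates one simple form of spectral template matching: the spectrum reaching the ear is correlated against elevation-dependent directional transfer function (DTF) templates, and the best-matching template gives the reported location. The Gaussian-notch templates, frequency band, and helper names (make_dtf, localize) are illustrative assumptions, not the spectral-processing model actually fitted in the paper.

```python
# Minimal, hypothetical sketch of spectral template matching for elevation.
# The DTF shapes and parameters below are illustrative assumptions, not data
# or the model from the paper.
import numpy as np

freqs = np.linspace(4e3, 16e3, 256)                 # analysis band, Hz

def make_dtf(notch_hz, depth_db=15.0, bw_hz=1.5e3):
    """Toy DTF: a spectral notch whose centre frequency varies with elevation."""
    return -depth_db * np.exp(-0.5 * ((freqs - notch_hz) / bw_hz) ** 2)

# Hypothetical elevation-dependent templates (notch moves up with elevation).
elevations = np.arange(-30, 61, 10)                 # degrees
templates = {el: make_dtf(6e3 + 60.0 * (el + 30)) for el in elevations}

def localize(spectrum_db):
    """Report the elevation whose template correlates best with the ear spectrum."""
    scores = {el: np.corrcoef(spectrum_db, t)[0, 1] for el, t in templates.items()}
    return max(scores, key=scores.get)

# Broadband, flat source at 0 deg elevation: the ear spectrum mirrors the 0-deg
# DTF, so the best match is the true location.
broadband = templates[0] + np.random.default_rng(0).normal(0, 0.5, freqs.size)
print("broadband   ->", localize(broadband), "deg")

# Narrow-band source (1-kHz band at 8 kHz): the spectral shape is dictated by
# the filter, so the best match is set by the pass-band, not the source.
narrow = np.where(np.abs(freqs - 8e3) < 500, 0.0, -40.0)
print("narrow-band ->", localize(narrow), "deg (location set by the band, not the source)")
```

    In this toy version, a broadband flat spectrum recovers the true template, while a narrow-band stimulus is assigned whichever elevation its pass-band happens to resemble, i.e. an illusory location determined by the filter rather than the source, which is the qualitative pattern described in the abstract.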

    Learning the Optimal Control of Coordinated Eye and Head Movements

    Various optimality principles have been proposed to explain the characteristics of coordinated eye and head movements during visual orienting behavior. At the same time, researchers have suggested several neural models to underlie the generation of saccades, but these do not include online learning as a mechanism of optimization. Here, we suggest an open-loop neural controller with a local adaptation mechanism that minimizes a proposed cost function. Simulations show that the characteristics of coordinated eye and head movements generated by this model match the experimental data in many respects, including the relationship between amplitude, duration and peak velocity in head-restrained conditions, and the relative contribution of eye and head to the total gaze shift in head-free conditions. Our model is a first step towards bringing together an optimality principle and an incremental local learning mechanism into a unified control scheme for coordinated eye and head movements.
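    The pairing of an optimality principle with an incremental local learning rule can be illustrated with a deliberately simplified sketch: a scalar cost (gaze-accuracy error plus separate eye and head effort terms) is minimized by finite-difference gradient steps on the two command amplitudes. The cost terms, weights, learning rate, and target amplitude are illustrative assumptions, not the cost function or neural controller proposed in the paper.

```python
# Minimal sketch (not the paper's controller) of optimizing the eye/head split
# of a gaze shift by incremental, local updates. All constants are assumptions.
import numpy as np

TARGET = 40.0                # desired gaze shift, degrees
W_EYE, W_HEAD = 1.0, 0.3     # hypothetical effort weights (head assumed "cheaper")

def cost(eye_amp, head_amp):
    accuracy = (eye_amp + head_amp - TARGET) ** 2             # reach the target
    effort = W_EYE * eye_amp ** 2 + W_HEAD * head_amp ** 2    # penalize large movements
    return accuracy + effort

# Local, gradient-style adaptation of the two command amplitudes.
eye, head = 20.0, 20.0
lr, eps = 0.01, 1e-3
for step in range(2000):
    g_eye = (cost(eye + eps, head) - cost(eye - eps, head)) / (2 * eps)
    g_head = (cost(eye, head + eps) - cost(eye, head - eps)) / (2 * eps)
    eye -= lr * g_eye
    head -= lr * g_head

print(f"eye {eye:.1f} deg + head {head:.1f} deg = gaze {eye + head:.1f} deg")
```

    With the head weighted as the cheaper effector, the adapted solution lets the head carry most of a large gaze shift, loosely echoing the head-free results mentioned above; the point of the sketch is only that a local, incremental update rule can settle on the optimum of a fixed cost.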

    Monkeys and Humans Share a Common Computation for Face/Voice Integration

    Speech production involves the movement of the mouth and other regions of the face, resulting in visual motion cues. These visual cues enhance intelligibility and detection of auditory speech. As such, face-to-face speech is fundamentally a multisensory phenomenon. If speech is fundamentally multisensory, this should be reflected in the evolution of vocal communication: similar behavioral effects should be observed in other primates. Old World monkeys share with humans vocal production biomechanics and communicate face-to-face with vocalizations. It is unknown, however, whether they, too, combine faces and voices to enhance their perception of vocalizations. We show that they do: monkeys combine faces and voices in noisy environments to enhance their detection of vocalizations. Their behavior parallels that of humans performing an identical task. We explored what common computational mechanism(s) could explain the pattern of results we observed across species. Standard explanations and models, such as the principle of inverse effectiveness and a “race” model, failed to account for the behavior patterns. Conversely, a “superposition model”, positing the linear summation of activity patterns in response to the visual and auditory components of vocalizations, served as a straightforward but powerful explanatory mechanism for the observed behaviors in both species. As such, it represents a putative homologous mechanism for integrating faces and voices across primates.
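    The contrast between the “race” and “superposition” accounts can be sketched with simple counting processes (an illustrative assumption, not the models fitted in the paper): under the race account each modality accumulates to its own threshold and the faster one triggers the response, whereas under superposition the visual and auditory activity streams are summed and drive a single accumulator.

```python
# Minimal sketch of "race" vs. "superposition" detection mechanisms using
# Poisson counting processes. Rates and threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
RATE_A, RATE_V = 40.0, 30.0     # hypothetical events/sec driven by voice and face
THRESHOLD = 20                  # events needed to trigger a detection response
N = 10_000

def detect_time(rate, n=N):
    """Time to accumulate THRESHOLD Poisson events at the given rate (seconds)."""
    return rng.gamma(shape=THRESHOLD, scale=1.0 / rate, size=n)

rt_a = detect_time(RATE_A)                  # auditory alone
rt_v = detect_time(RATE_V)                  # visual alone
rt_race = np.minimum(rt_a, rt_v)            # race: whichever channel finishes first
rt_super = detect_time(RATE_A + RATE_V)     # superposition: summed activity, one accumulator

print(f"auditory alone  {rt_a.mean()*1e3:6.1f} ms")
print(f"visual alone    {rt_v.mean()*1e3:6.1f} ms")
print(f"race model      {rt_race.mean()*1e3:6.1f} ms")
print(f"superposition   {rt_super.mean()*1e3:6.1f} ms")
```

    Because summed inputs reach the criterion sooner than either stream alone, summation predicts larger multisensory speed-ups than a race between independent channels, which is one way a race model can fall short where a superposition model succeeds.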