9 research outputs found

    Neural representation of spectral and temporal information in speech

    Speech is the most interesting and one of the most complex sounds dealt with by the auditory system. The neural representation of speech needs to capture those features of the signal on which the brain depends in language communication. Here we describe the representation of speech in the auditory nerve and at a few sites in the central nervous system, from the perspective of the neural coding of important aspects of the signal. The representation is tonotopic, meaning that the speech signal is decomposed by frequency and different frequency components are represented in different populations of neurons. Essential to the representation are the properties of frequency tuning and nonlinear suppression. Tuning creates the decomposition of the signal by frequency, and nonlinear suppression is essential for maintaining the representation across sound levels. The representation changes in central auditory neurons, becoming more robust against changes in stimulus intensity and more transient. However, it is probable that the form of the representation at the auditory cortex is fundamentally different from that at lower levels, in that stimulus features other than the distribution of energy across frequency are analysed.
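    The tonotopic decomposition described above can be sketched computationally. This is an illustrative toy model, not taken from the paper: a signal's energy is split into coarse frequency bands, loosely analogous to different frequency components driving different neural populations. The sample rate, band edges, and test tones are all assumptions chosen for the example.

```python
import numpy as np

# Toy tonotopic sketch (illustrative assumption, not from the paper):
# measure how a signal's energy distributes across frequency bands.
fs = 16000                            # sample rate in Hz (assumed)
t = np.arange(0, 0.1, 1 / fs)         # 100 ms of signal
# "Speech-like" toy input: two formant-like tones at 500 and 1500 Hz.
signal = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1500 * t)

# Power spectrum via the real FFT.
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

# A coarse, hypothetical bank of frequency "channels".
edges = [100, 1000, 2000, 4000]
band_energy = [spectrum[(freqs >= lo) & (freqs < hi)].sum()
               for lo, hi in zip(edges[:-1], edges[1:])]

for (lo, hi), e in zip(zip(edges[:-1], edges[1:]), band_energy):
    print(f"{lo}-{hi} Hz band energy: {e:.1f}")
```

    As expected for this input, most energy falls in the band containing the 500 Hz component, a smaller share in the band containing 1500 Hz, and almost none above 2 kHz; a real auditory model would replace the rectangular FFT bands with overlapping cochlear-style filters.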

    Learning to discriminate interaural time differences at low and high frequencies

    This study investigated learning, in normal-hearing adults, associated with training (i.e. repeated practice) on the discrimination of ongoing interaural time difference (ITD). Specifically, the study addressed an apparent disparity in the conclusions of previous studies, which reported training-induced learning at high frequencies but not at low frequencies. Twenty normal-hearing adults were trained with either low- or high-frequency stimuli, associated with comparable asymptotic thresholds, or served as untrained controls. Overall, trained listeners learnt more than controls, and learning continued over multiple sessions. The magnitudes and time-courses of learning with the low- and high-frequency stimuli were similar. While this is inconsistent with the conclusion of a previous study with low-frequency ITD, that previous conclusion may not be justified by the results reported. Generalization of learning across frequency was found, although more detailed investigations of stimulus-specific learning are warranted. Overall, the results are consistent with the notion that ongoing ITD processing is functionally uniform across frequency. These results may have implications for clinical populations, such as users of bilateral cochlear implants.
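    The ongoing ITD cue the listeners were trained on can be illustrated with a standard lag-based model: cross-correlating the two ear signals and taking the lag of the peak as the ITD estimate. This sketch is an assumption for illustration, not the study's method or stimuli; the sample rate, noise carrier, and 500 µs delay are all invented for the example.

```python
import numpy as np

# Illustrative ITD estimation by cross-correlation (assumed model,
# not the study's procedure).
fs = 48000                            # sample rate in Hz (assumed)
itd_s = 500e-6                        # true ITD: 500 microseconds
lag_true = int(round(itd_s * fs))     # ITD expressed in samples

rng = np.random.default_rng(0)
left = rng.standard_normal(4800)      # 100 ms of noise at the left ear
right = np.roll(left, lag_true)       # right ear hears a delayed copy

# Search lags within a physiologically plausible +/- 1 ms range.
max_lag = int(1e-3 * fs)
lags = np.arange(-max_lag, max_lag + 1)
xcorr = [np.dot(left, np.roll(right, -lag)) for lag in lags]

est_lag = lags[int(np.argmax(xcorr))]
print(f"estimated ITD: {est_lag / fs * 1e6:.0f} microseconds")
```

    The peak of the cross-correlation recovers the imposed delay; in this coincidence-detection view, "functionally uniform across frequency" means the same lag computation serves both low-frequency fine structure and high-frequency envelope cues.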