79 research outputs found

    Ventral and dorsal streams in the evolution of speech and language

    The brains of humans and Old World monkeys show a great deal of anatomical similarity. The auditory cortical system, for instance, is organized into a ventral and a dorsal pathway in both species. A fundamental question with regard to the evolution of speech and language (as well as music) is whether human and monkey brains show principal differences in their organization (e.g., new pathways appearing as a result of a single mutation), or whether species differences are of a more subtle, quantitative nature. There is little doubt about a similar role of the ventral auditory pathway in both humans and monkeys in the decoding of spectrally complex sounds, which some authors have referred to as auditory object recognition. This includes the decoding of speech sounds (“speech perception”) and their ultimate linking to meaning in humans. The originally presumed role of the auditory dorsal pathway in spatial processing, by analogy to the visual dorsal pathway, has recently been conceptualized into a more general role in sensorimotor integration and control. Specifically for speech, the dorsal processing stream plays a role in speech production as well as categorization of phonemes during on-line processing of speech

    Is there a tape recorder in your head? How the brain stores and retrieves musical melodies

    Music consists of strings of sound that vary over time. Technical devices, such as tape recorders, store musical melodies by transcribing event times of temporal sequences into consecutive locations on the storage medium. Playback occurs by reading out the stored information in the same sequence. However, it is unclear how the brain stores and retrieves auditory sequences. Neurons in the anterior lateral belt of auditory cortex are sensitive to the combination of sound features in time, but the integration time of these neurons is not sufficient to store longer sequences that stretch over several seconds, minutes or more. Functional imaging studies in humans provide evidence that music is stored instead within the auditory dorsal stream, including premotor and prefrontal areas. In monkeys, these areas are the substrate for learning of motor sequences. It appears, therefore, that the auditory dorsal stream transforms musical into motor sequence information and vice versa, realizing what are known as forward and inverse models. The basal ganglia and the cerebellum are involved in setting up the sensorimotor associations, translating timing information into spatial codes and back again

    MRC Cognition and Brain Sciences Unit

    Acoustic sequences such as speech and music are generally perceived as coherent auditory “streams,” which can be individually attended to and followed over time. Although the psychophysical stimulus parameters governing this “auditory streaming” are well established, the brain mechanisms underlying the formation of auditory streams remain largely unknown. In particular, an essential feature of the phenomenon, namely that the segregation of sounds into streams typically takes several seconds to build up, remains unexplained. Here, we show that this and other major features of auditory stream formation measured in humans using alternating-tone sequences can be quantitatively accounted for based on single-unit responses recorded in the primary auditory cortex (A1) of awake rhesus monkeys listening to the same sound sequences

    Does tinnitus depend on time-of-day? An ecological momentary assessment study with the “TrackYourTinnitus” application

    Only a few previous studies have used ecological momentary assessments to explore the time-of-day dependence of tinnitus. The present study used data from the mobile application “TrackYourTinnitus” to explore whether tinnitus loudness and tinnitus distress fluctuate within a 24-h interval. Multilevel models were fitted to account for the nested structure of the data: 17,209 daily-life assessments (level 1) nested within 3,570 days with at least three completed assessments (level 2), in turn nested within 350 participants (level 3). Results revealed a time-of-day dependence of tinnitus. In particular, tinnitus was perceived as louder and more distressing during the night and early morning hours (from 12 A.M. to 8 A.M.) than during the rest of the day. Since previous studies suggested that stress (and stress-associated hormones) follows a circadian rhythm, which might influence the time-of-day dependence of tinnitus, we evaluated whether the described results change when statistically controlling for subjectively reported stress levels. Correcting for subjective stress levels, however, did not change the finding that tinnitus (loudness and distress) was most severe at night and in the early morning. These results show that time of day contributes to the level of both tinnitus loudness and tinnitus distress. A possible implication for the clinical management of tinnitus is that tailoring the timing of therapeutic interventions to the circadian rhythm of individual patients (chronotherapy) may be promising
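    The nested structure described above (assessments within days within participants) is the key analytic point: pooling all 17,209 assessments naively would let heavily sampled participants dominate a time-of-day profile. A minimal sketch of that idea, with invented data and field names (the study itself used multilevel regression models, which this two-stage averaging does not reproduce):

    ```python
    # Hypothetical sketch: estimating an hour-of-day profile of tinnitus
    # loudness from nested EMA records, averaging within each participant
    # first so that participants with many assessments do not dominate.
    from collections import defaultdict
    from statistics import mean

    def hourly_profile(assessments):
        """Return {hour: mean loudness}, aggregated per participant first."""
        # participant -> hour -> list of loudness ratings
        per_participant = defaultdict(lambda: defaultdict(list))
        for a in assessments:
            per_participant[a["participant"]][a["hour"]].append(a["loudness"])
        # hour -> list of per-participant mean ratings
        per_hour = defaultdict(list)
        for hours in per_participant.values():
            for hour, ratings in hours.items():
                per_hour[hour].append(mean(ratings))
        return {hour: mean(ms) for hour, ms in per_hour.items()}

    data = [  # invented toy records, not the TrackYourTinnitus data
        {"participant": 1, "hour": 2, "loudness": 8},
        {"participant": 1, "hour": 2, "loudness": 6},
        {"participant": 1, "hour": 14, "loudness": 3},
        {"participant": 2, "hour": 2, "loudness": 5},
        {"participant": 2, "hour": 14, "loudness": 4},
    ]
    profile = hourly_profile(data)
    # Night hour (2 A.M.): per-participant means 7 and 5, so (7 + 5) / 2 = 6
    # Afternoon (2 P.M.): (3 + 4) / 2 = 3.5
    print(profile[2], profile[14])
    ```

    A full multilevel model additionally estimates day-level and participant-level variance components; this sketch only illustrates why the nesting must be respected at all.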

    Segregation of Vowels and Consonants in Human Auditory Cortex: Evidence for Distributed Hierarchical Organization

    The speech signal consists of a continuous stream of consonants and vowels, which must be decoded and encoded in human auditory cortex to ensure the robust recognition and categorization of speech sounds. We used small-voxel functional magnetic resonance imaging to study information encoded in local brain activation patterns elicited by consonant-vowel syllables, and by a control set of noise bursts. First, activation of anterior–lateral superior temporal cortex was seen when controlling for unspecific acoustic processing (syllables versus band-passed noises, in a “classic” subtraction-based design). Second, a classifier algorithm, which was trained and tested iteratively on data from all subjects to discriminate local brain activation patterns, yielded separations of cortical patches discriminative of vowel category versus patches discriminative of stop-consonant category across the entire superior temporal cortex, yet with regional differences in average classification accuracy. Overlap (voxels correctly classifying both speech sound categories) was surprisingly sparse. Third, lending further plausibility to the results, classification of speech–noise differences was generally superior to speech–speech classifications, with the notable exception of a left anterior region, where speech–speech classification accuracies were significantly better. These data demonstrate that acoustic–phonetic features are encoded in complex yet sparsely overlapping local patterns of neural activity distributed hierarchically across different regions of the auditory cortex. The redundancy apparent in these multiple patterns may partly explain the robustness of phonemic representations
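    The cross-subject scheme above (“trained and tested iteratively on data from all subjects”) follows the leave-one-subject-out pattern: train on every subject but one, test on the held-out subject, and rotate. A minimal sketch of that logic; the nearest-centroid classifier and the toy “activation patterns” are assumptions for illustration, not the authors’ algorithm:

    ```python
    # Hypothetical sketch of leave-one-subject-out pattern classification.
    def nearest_centroid_predict(train, test_pattern):
        """train: list of (pattern, label); return the label whose mean
        pattern (centroid) is closest in squared Euclidean distance."""
        by_label = {}
        for pattern, label in train:
            by_label.setdefault(label, []).append(pattern)
        best_label, best_dist = None, float("inf")
        for label, patterns in by_label.items():
            centroid = [sum(dim) / len(patterns) for dim in zip(*patterns)]
            dist = sum((c - t) ** 2 for c, t in zip(centroid, test_pattern))
            if dist < best_dist:
                best_label, best_dist = label, dist
        return best_label

    def loso_accuracy(data_by_subject):
        """Leave-one-subject-out: hold out each subject's trials in turn."""
        correct = total = 0
        for held_out, test_trials in data_by_subject.items():
            train = [trial for subj, trials in data_by_subject.items()
                     if subj != held_out for trial in trials]
            for pattern, label in test_trials:
                correct += nearest_centroid_predict(train, pattern) == label
                total += 1
        return correct / total

    # Toy 2-voxel "patterns": vowels near (1, 0), stop consonants near (0, 1)
    data = {
        "s1": [([1.0, 0.1], "vowel"), ([0.1, 1.0], "consonant")],
        "s2": [([0.9, 0.0], "vowel"), ([0.0, 0.9], "consonant")],
        "s3": [([1.1, 0.2], "vowel"), ([0.2, 1.1], "consonant")],
    }
    print(loso_accuracy(data))  # → 1.0 on this cleanly separable toy data
    ```

    Because every test subject is excluded from training, above-chance accuracy reflects pattern structure shared across subjects rather than subject-specific idiosyncrasies.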

    Cortico-limbic morphology separates tinnitus from tinnitus distress

    Tinnitus is a common auditory disorder characterized by a chronic ringing or buzzing “in the ear.” Despite the auditory-perceptual nature of this disorder, a growing number of studies have reported neuroanatomical differences in tinnitus patients outside the auditory-perceptual system. Some have used this evidence to characterize chronic tinnitus as dysregulation of the auditory system, either resulting from inefficient inhibitory control or through the formation of aversive associations with tinnitus. It remains unclear, however, whether these “non-auditory” anatomical markers of tinnitus are related to the tinnitus signal itself, or merely to negative emotional reactions to tinnitus (i.e., tinnitus distress). In the current study, we used anatomical MRI to identify neural markers of tinnitus, and measured their relationship to a variety of tinnitus characteristics and other factors often linked to tinnitus, such as hearing loss, depression, anxiety, and noise sensitivity. In a new cohort of participants, we confirmed that people with chronic tinnitus exhibit reduced gray matter in ventromedial prefrontal cortex (vmPFC) compared to controls matched for age and hearing loss. This effect was driven by reduced cortical surface area, and was not related to tinnitus distress, symptoms of depression or anxiety, noise sensitivity, or other factors. Instead, tinnitus distress was positively correlated with cortical thickness in the anterior insula in tinnitus patients, while symptoms of anxiety and depression were negatively correlated with cortical thickness in subcallosal anterior cingulate cortex (scACC) across all groups. Tinnitus patients also exhibited increased gyrification of dorsomedial prefrontal cortex (dmPFC), which was more severe in those patients with constant (vs. intermittent) tinnitus awareness. Our data suggest that the neural systems associated with chronic tinnitus are different from those involved in aversive or distressed reactions to tinnitus

    Vocal gestures and auditory objects


    The Development of Intersensory Perception: Comparative Perspectives
