40 research outputs found

    Homology and Specificity of Natural Sound-Encoding in Human and Monkey Auditory Cortex

    Understanding homologies and differences in auditory cortical processing between human and nonhuman primates is an essential step in elucidating the neurobiology of speech and language. Using fMRI responses to natural sounds, we investigated the representation of multiple acoustic features in the auditory cortex of awake macaques and humans. Comparative analyses revealed homologous large-scale topographies not only for frequency but also for temporal and spectral modulations. In both species, posterior regions preferentially encoded relatively fast temporal and coarse spectral information, whereas anterior regions encoded slow temporal and fine spectral modulations. Conversely, we observed a striking interspecies difference in cortical sensitivity to temporal modulations: while decoding from macaque auditory cortex was most accurate at fast rates (> 30 Hz), humans were most sensitive to ~3 Hz, a rate relevant for speech analysis. These findings suggest that the characteristic tuning of human auditory cortex to slow temporal modulations is unique and may have emerged as a critical step in the evolution of speech and language.
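    The key comparison here concerns the rate of temporal modulations (slow ~3 Hz vs. fast > 30 Hz). As a minimal illustration of what "temporal modulation content" means, the sketch below estimates a sound's temporal modulation spectrum from its broadband envelope; the function name and parameter choices are illustrative assumptions, not the study's actual analysis pipeline.

```python
# Illustrative sketch: estimate a sound's temporal modulation spectrum
# by Fourier-analyzing its broadband amplitude envelope. This approximates
# the kind of feature (slow ~3 Hz vs. fast > 30 Hz modulation energy)
# discussed above; it is NOT the authors' analysis pipeline.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def temporal_modulation_spectrum(sound, fs, env_cutoff=64.0):
    """Return (modulation frequencies, modulation power) for a 1-D sound."""
    envelope = np.abs(hilbert(sound))            # broadband envelope
    # Low-pass the envelope to keep only modulation rates of interest.
    b, a = butter(4, env_cutoff / (fs / 2), btype="low")
    envelope = filtfilt(b, a, envelope)
    envelope -= envelope.mean()                  # remove DC before the FFT
    power = np.abs(np.fft.rfft(envelope)) ** 2
    freqs = np.fft.rfftfreq(envelope.size, d=1.0 / fs)
    return freqs, power

# Example: a 4 Hz amplitude-modulated tone shows a peak near 4 Hz.
fs = 16000
t = np.arange(0, 2.0, 1.0 / fs)
sound = (1.0 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)
freqs, power = temporal_modulation_spectrum(sound, fs)
print(freqs[np.argmax(power[1:]) + 1])           # ~4.0 Hz
```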

    The Spectrotemporal Filter Mechanism of Auditory Selective Attention

    Although we have convincing evidence that attention to auditory stimuli modulates neuronal responses at or before the level of primary auditory cortex (A1), the underlying physiological mechanisms are unknown. We found that attending to rhythmic auditory streams resulted in the entrainment of ongoing oscillatory activity reflecting rhythmic excitability fluctuations in A1. Strikingly, although the rhythm of the entrained oscillations in A1 neuronal ensembles reflected the temporal structure of the attended stream, their phase depended on the attended frequency content. Counter-phase entrainment across differently tuned A1 regions resulted in both the amplification and the sharpening of responses at attended time points, in essence acting as a spectrotemporal filter. Our data suggest that selective attention generates a dynamically evolving model of attended auditory stimulus streams, in the form of modulatory subthreshold oscillations across tonotopically organized neuronal ensembles in A1, that enhances the representation of attended stimuli.
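    For readers unfamiliar with entrainment measures, the sketch below shows one common way phase entrainment to a rhythmic stream can be quantified: bandpass the signal around the stream's presentation rate, extract instantaneous phase with the Hilbert transform, and compute inter-trial phase coherence. The filter settings, array shapes, and synthetic data are assumptions for illustration, not the study's recording pipeline.

```python
# Minimal sketch: quantify entrainment of ongoing activity to a rhythmic
# stream by estimating oscillatory phase at the stimulus rate and its
# consistency across trials (inter-trial phase coherence, ITC).
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def phase_at_rate(trials, fs, rate_hz, half_bw=0.5):
    """trials: (n_trials, n_samples). Instantaneous phase around rate_hz."""
    lo = (rate_hz - half_bw) / (fs / 2)
    hi = (rate_hz + half_bw) / (fs / 2)
    b, a = butter(2, [lo, hi], btype="band")
    filtered = filtfilt(b, a, trials, axis=1)
    return np.angle(hilbert(filtered, axis=1))

def itc(phases):
    """Inter-trial phase coherence per time point: |mean of unit phasors|."""
    return np.abs(np.exp(1j * phases).mean(axis=0))

# Synthetic demo: 1.5 Hz "entrained" trials with a common phase plus noise.
fs, rate = 100, 1.5
t = np.arange(0, 4.0, 1.0 / fs)
rng = np.random.default_rng(0)
trials = np.sin(2 * np.pi * rate * t) + rng.normal(0, 1.0, (50, t.size))
coherence = itc(phase_at_rate(trials, fs, rate))
print(coherence.mean())   # high ITC indicates a consistent entrained phase
```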

    The Neural and Behavioral Correlates of Auditory Streaming

    Perceptual representations of auditory stimuli, called auditory streams or objects, are derived from the auditory system's ability to segregate and group stimuli based on spectral, temporal, and spatial features. However, it remains unclear how the auditory system encodes these streams at the level of the single neuron. To address this question directly, we first validated an animal model of auditory streaming: we trained rhesus macaques to report their streaming percept using methodologies and controls similar to those of previous human studies, and found that the monkeys' behavioral reports were qualitatively consistent with those of human listeners. Next, we recorded from neurons in the primary auditory cortex (A1) while monkeys simultaneously reported their streaming percepts. A1 neurons had frequency-tuned responses that habituated, independent of frequency content, as the auditory sequence unfolded over time; and we report for the first time that the firing rate of A1 neurons was modulated by the monkeys' choices. This modulation increased with listening time and was independent of the frequency difference between consecutive tone bursts. Overall, our results suggest that A1 activity contributes to the sensory evidence underlying the segregation and grouping of acoustic stimuli into distinct auditory streams. However, because we observed choice-related activity based on firing rate alone, our data are partially at odds with Micheyl et al.'s (2005) prominent hypothesis that frequency-dependent habituation may be a coding mechanism for the streaming percept.
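    Streaming experiments of this kind typically present alternating tone sequences (e.g., ABA_ triplets) in which the A-B frequency separation biases listeners toward hearing one integrated stream or two segregated streams. The sketch below synthesizes such a stimulus; all parameter values are illustrative assumptions rather than the exact values used in this study.

```python
# Illustrative stimulus sketch: an ABA_ tone-triplet sequence of the kind
# used in auditory streaming experiments. A larger A-B separation
# (delta_semitones) biases perception toward two segregated streams.
import numpy as np

def aba_sequence(f_a=1000.0, delta_semitones=6, tone_ms=100, gap_ms=20,
                 n_triplets=10, fs=44100):
    f_b = f_a * 2 ** (delta_semitones / 12.0)       # B tone, delta above A
    n_tone = int(fs * tone_ms / 1000)
    n_gap = int(fs * gap_ms / 1000)
    t = np.arange(n_tone) / fs
    ramp = np.minimum(1.0, np.arange(n_tone) / (0.005 * fs))  # 5 ms onset
    ramp = ramp * ramp[::-1]                         # symmetric on/off ramps
    tone = lambda f: np.sin(2 * np.pi * f * t) * ramp
    silence = np.zeros(n_gap)
    triplet = np.concatenate([tone(f_a), silence, tone(f_b), silence,
                              tone(f_a), silence,
                              np.zeros(n_tone + n_gap)])  # "_" = silent slot
    return np.tile(triplet, n_triplets)

seq = aba_sequence()     # larger delta_semitones -> stronger segregation
print(seq.shape)
```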

    The effect of listening tasks and motor responding on activation in the auditory cortex

    Previous human functional magnetic resonance imaging (fMRI) research has shown that activation in the auditory cortex (AC) is strongly modulated by motor influences, and other fMRI studies have indicated that the AC is also modulated by attention-engaging listening tasks. How these motor- and task-related modulations relate to each other has, however, not been studied previously. The current understanding of the functional organization of the human AC is strongly based on primate models, yet some authors have recently questioned the correspondence between monkey and human cognitive systems and whether the monkey AC can be used as a model for the human AC. Further, it is unknown whether active listening modulates activation similarly in the human and nonhuman primate AC. fMRI studies in nonhuman primates are therefore important, but such studies have previously been impeded by the difficulty of teaching tasks to nonhuman primates. The present thesis consists of three studies in which fMRI was used both to investigate the relationship between the effects of active listening and of motor responding in the human AC, and to investigate task-related activation modulations in the monkey AC. Study I investigated the effect of manual responding on activation in the human AC during auditory and visual tasks; Study II focused on whether auditory-motor effects interact with those related to active listening tasks in the AC and adjacent regions; and in Study III a novel paradigm was developed and used during fMRI to investigate auditory task-dependent modulations in the monkey AC.
    The results of Study I showed that activation in the human AC is strongly suppressed when subjects respond to targets using precision or power grips during both visual and auditory tasks. AC activation was also modulated by grip type during the auditory task but not during the visual task (with identical stimuli and motor responses). These manual-motor effects were distinct from the general attention-related modulations revealed by comparing activation during auditory and visual tasks. Study II showed that activation in widespread regions of the AC and the inferior parietal lobule (IPL) depends on whether subjects respond to target vowel pairs with vocal or manual responses. Furthermore, activation in the posterior AC and the IPL depends on whether subjects respond by overtly repeating the last vowel of a target pair or by producing a given response vowel. Discrimination tasks activated superior temporal gyrus (STG) regions more strongly than 2-back tasks, whereas the IPL was activated more strongly by 2-back tasks. These task-related (discrimination vs. 2-back) modulations were distinct from the response-type effects in the AC, although the task and motor-response-type effects interacted in the IPL. Together, the results of Studies I and II support the view that operations in the AC are shaped by its connections with motor cortical regions and that regions in the posterior AC are important for auditory-motor integration; they further suggest that the task, motor-response-type, and vocal-response-type effects are produced by independent mechanisms in the AC.
    In Study III, a novel reward-cue paradigm was developed to teach macaque monkeys to perform an auditory task. Using this paradigm, monkeys learned to perform the task in a few weeks, whereas auditory task training in previous studies has required months or years.
    This new paradigm was then used during fMRI to measure activation in the monkey AC during active auditory task performance. The results showed that activation in the monkey AC is modulated during this task in a manner similar to that previously observed in human auditory attention studies. The findings of Study III are an important step in bridging the gap between human and animal studies of the AC.

    Speech-brain synchronization: a possible cause for developmental dyslexia

    Dyslexia is a neurological learning disability characterized by difficulty in learning to read despite adequate intelligence and normal educational opportunities. The majority of dyslexic readers present phonological difficulties, most often a deficit in phonological awareness, that is, the ability to hear and manipulate the sound structure of language. Some appealing theories of dyslexia attribute a causal role to atypical oscillatory neural activity in auditory cortex, suggesting that it generates some of the phonological problems in dyslexia. These theories propose that the auditory cortical oscillations of dyslexic individuals entrain less accurately to the spectral properties of auditory stimuli in distinct frequency bands (delta, theta, and gamma) that are important for speech processing. Nevertheless, hypotheses diverge concerning which specific bands are disrupted in dyslexia and what the consequences of such disruptions are for speech processing. The goal of the present PhD thesis was to characterize the oscillatory neural basis of the phonological difficulties in developmental dyslexia. We evaluated whether the phonological deficits in developmental dyslexia are associated with impaired auditory entrainment in a specific frequency band. To that end, we measured auditory neural synchronization to linguistic and non-linguistic auditory signals at frequencies corresponding to key phonological units of speech (prosodic, syllabic, and phonemic information). We found that dyslexic readers presented atypical neural entrainment in the delta, theta, and gamma frequency bands. Importantly, we showed that atypical entrainment to theta and gamma modulations in dyslexia could compromise perceptual computations during speech processing, while reduced delta entrainment could affect perceptual and attentional operations. In addition, we characterized the links between the anatomy of the auditory cortex and its oscillatory responses, taking into account previous studies that have observed structural alterations in dyslexia. We observed that cortical pruning in auditory regions was linked to stronger sensitivity to gamma oscillations in skilled readers but to stronger theta-band sensitivity in dyslexic readers. We therefore concluded that the left auditory regions might be specialized for processing phonological information at different time scales (phoneme vs. syllable) in skilled and dyslexic readers. Lastly, by assessing both children and adults on similar tasks, we provided the first evaluation of developmental modulations of typical and atypical auditory sampling (and of their structural underpinnings). We found that atypical neural entrainment to delta, theta, and gamma is present in dyslexia throughout the lifespan and is not modulated by reading experience.
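    As a rough illustration of how "neural entrainment to speech" in the delta, theta, and gamma bands can be operationalized, the sketch below computes spectral coherence between a speech amplitude envelope and a neural signal and averages it within each band. The band edges, sampling rate, and synthetic signals are assumptions, not the thesis's methods.

```python
# Minimal sketch: quantify speech-brain synchronization as spectral
# coherence between the speech amplitude envelope and a neural channel,
# then average coherence within delta/theta/gamma bands.
import numpy as np
from scipy.signal import coherence

fs = 200                                     # common sampling rate (Hz)
rng = np.random.default_rng(1)
t = np.arange(0, 60, 1.0 / fs)
# Stand-ins for real recordings: a speech envelope and a neural trace
# that partially follows it at syllabic (~4 Hz) rates.
speech_env = 1 + 0.5 * np.sin(2 * np.pi * 4 * t) + rng.normal(0, 0.3, t.size)
neural = 0.4 * np.sin(2 * np.pi * 4 * t + 0.6) + rng.normal(0, 1.0, t.size)

f, coh = coherence(speech_env, neural, fs=fs, nperseg=fs * 4)

bands = {"delta": (0.5, 4), "theta": (4, 8), "gamma": (25, 45)}
for name, (lo, hi) in bands.items():
    mask = (f >= lo) & (f < hi)
    print(name, coh[mask].mean())            # higher = stronger tracking
```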

    Physical mechanisms may be as important as brain mechanisms in the evolution of speech [Commentary on Ackermann, Hage, & Ziegler, Brain mechanisms of acoustic communication in humans and nonhuman primates: An evolutionary perspective]

    We present two arguments for why physical adaptations for vocalization may be as important as neural adaptations. First, fine control over vocalization is not easy for physical reasons, and modern humans may be exceptional in this respect. Second, we present the example of a gorilla that shows rudimentary voluntary control over vocalization, indicating that some neural control is already shared with great apes.

    Brain mechanisms of acoustic communication in humans and nonhuman primates: An evolutionary perspective

    Any account of “what is special about the human brain” (Passingham 2008) must specify the neural basis of our unique ability to produce speech and delineate how these remarkable motor capabilities could have emerged in our hominin ancestors. Clinical data suggest that the basal ganglia provide a platform for the integration of primate-general mechanisms of acoustic communication with the faculty of articulate speech in humans. Furthermore, neurobiological and paleoanthropological data point to a two-stage model of the phylogenetic evolution of this crucial prerequisite of spoken language: (i) monosynaptic refinement of the projections of motor cortex to the brainstem nuclei that steer laryngeal muscles, presumably as part of a “phylogenetic trend” associated with increasing brain size during hominin evolution; (ii) subsequent vocal-laryngeal elaboration of cortico-basal ganglia circuitries, driven by human-specific FOXP2 mutations.
    This concept implies vocal continuity of spoken language evolution at the motor level, elucidating the deep entrenchment of articulate speech in a “nonverbal matrix” (Ingold 1994), which is not accounted for by gestural-origin theories. Moreover, it provides a solution to the question of the adaptive value of the “first word” (Bickerton 2009), since even the earliest and simplest verbal utterances must have increased the versatility of the vocal displays afforded by the preceding elaboration of monosynaptic corticobulbar tracts, giving rise to enhanced social cooperation and prestige. At the ontogenetic level, the proposed model assumes age-dependent interactions between the basal ganglia and their cortical targets, similar to vocal learning in some songbirds. In this view, the emergence of articulate speech builds on the “renaissance” of an ancient organizational principle and, hence, may represent an example of “evolutionary tinkering” (Jacob 1977).

    Pitch discrimination in optimal and suboptimal acoustic environments: electroencephalographic, magnetoencephalographic, and behavioral evidence

    Pitch discrimination is a fundamental property of the human auditory system, and understanding its mechanisms is important from both theoretical and clinical perspectives. The discrimination of spectrally complex sounds is crucial in the processing of music and speech. Current methods of cognitive neuroscience can track the brain processes underlying sound processing with either precise temporal resolution (EEG and MEG) or precise spatial resolution (PET and fMRI), so a combination of techniques is required in contemporary auditory research. One problem in combining the EEG/MEG and fMRI methods, however, is the acoustic noise of the fMRI scanner. In the present thesis, EEG and MEG were used in combination with behavioral techniques, first, to define the event-related potential (ERP) correlates of automatic pitch discrimination across a wide frequency range in adults and neonates and, second, to determine the effect of recorded fMRI acoustic noise on those adult ERP and event-related field (ERF) correlates during passive and active pitch discrimination. Pure tones and complex 3-harmonic sounds served as stimuli in oddball and matching-to-sample paradigms. The results suggest that pitch discrimination in adults, as reflected by MMN latency, is most accurate in the 1000-2000 Hz frequency range and is further facilitated by adding harmonics to the fundamental frequency. Newborn infants are able to discriminate a 20% frequency change in the 250-4000 Hz frequency range, whereas their discrimination of a 5% frequency change could not be confirmed. Furthermore, the effect of fMRI gradient noise on the automatic processing of pitch change was more prominent for tones with frequencies exceeding 500 Hz, which overlap the spectral maximum of the noise. When the fundamental frequency of the tones was lower than the spectral maximum of the noise, the noise had no effect on the MMN and P3a, whereas it delayed and suppressed the N1 and the exogenous N2. Noise also suppressed the N1 amplitude in a matching-to-sample working-memory task. However, the task-related difference observed in the N1 component, suggesting a functional dissociation between the processing of spatial and non-spatial auditory information, was partially preserved in the noise condition. Noise hampered feature-coding mechanisms more than it hampered the mechanisms of change detection, involuntary attention, and the segregation of the spatial and non-spatial domains of working memory. The data presented in the thesis can be used to develop clinical ERP-based frequency-discrimination protocols and combined EEG and fMRI experimental paradigms.
    The ability to distinguish high sounds from low ones is one of the brain's basic functions; without it we could neither understand speech nor enjoy music. Some patients and very young children cannot themselves report whether they hear a difference, but their brain responses can reveal it. Even in healthy adults, however, not enough is known about the brain functions underlying pitch discrimination, so further research using modern brain-research methods, such as event-related potentials (ERP) and functional magnetic resonance imaging (fMRI), is needed. The ERP method reveals when the brain discriminates a pitch difference, whereas fMRI reveals which brain areas are activated in this process, and combining the two methods can give a more comprehensive picture of the brain functions involved in pitch discrimination. The fMRI method, however, has one drawback: the loud noise generated by the scanner, which can interfere with auditory research. This thesis examines how pitch discrimination can be detected in the brains of adults and newborn infants and how fMRI scanner noise affects the ERP responses elicited by sounds. The results show that the adult brain can discriminate frequency differences as small as 2.5%, and that discrimination is faster at around 1000-2000 Hz than at lower or higher frequencies. The newborn brain discriminated only frequency changes larger than 20%. When fMRI scanner noise was played in the background, it attenuated brain responses to 500-2000 Hz sounds more than to other sounds, but it had no effect on responses elicited by sounds below 500 Hz. Regardless of whether noise was presented, the ERP response elicited by a change in sound-source location was larger than that elicited by a change in pitch. This thesis has shown that pitch discrimination can be studied effectively with the ERP method in both adults and infants. The results indicate that combining the ERP and fMRI methods can be made more effective by taking the effects of scanner noise on ERP responses into account when designing experiments. The data can also be used in designing more complex pitch-discrimination experiments, for example with patients and children.
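    For context, the mismatch negativity (MMN) used throughout the thesis is obtained from oddball designs as the deviant-minus-standard difference wave, with amplitude and latency read from its negative peak. The sketch below illustrates this computation on synthetic epochs; the epoch shapes and the 100-250 ms window are assumptions for illustration, not the thesis's pipeline.

```python
# Illustrative sketch of the standard MMN computation for oddball
# paradigms: average standard and deviant epochs, subtract to get the
# difference wave, and read peak latency/amplitude in a typical window.
import numpy as np

def mmn(standard_epochs, deviant_epochs, fs, window=(0.100, 0.250)):
    """Epochs: (n_trials, n_samples), time-locked to stimulus onset."""
    diff = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
    i0, i1 = int(window[0] * fs), int(window[1] * fs)
    peak = i0 + np.argmin(diff[i0:i1])           # MMN is a negativity
    return diff, peak / fs, diff[peak]

# Synthetic demo: deviants carry an extra negativity peaking near 150 ms.
fs, n = 500, 300                                 # 600 ms epochs at 500 Hz
t = np.arange(n) / fs
rng = np.random.default_rng(2)
std = rng.normal(0, 0.5, (200, n))
dev = rng.normal(0, 0.5, (60, n)) - 2.0 * np.exp(-((t - 0.15) / 0.03) ** 2)
diff, latency, amplitude = mmn(std, dev, fs)
print(latency, amplitude)                        # ~0.15 s, negative peak
```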