11 research outputs found

    Auditory temporal processing in healthy aging: a magnetoencephalographic study

    Background: Impaired speech perception is one of the major sequelae of aging. In addition to peripheral hearing loss, central deficits of auditory processing are thought to contribute to the deterioration of speech perception in older individuals. To test the hypothesis that auditory temporal processing is compromised in aging, auditory evoked magnetic fields were recorded during stimulation with sequences of 4 rapidly recurring speech sounds in 28 healthy individuals aged 20-78 years. Results: The decrement of the N1m amplitude during rapid auditory stimulation was not significantly different between older and younger adults. The amplitudes of the middle-latency P1m wave and of the long-latency N1m, however, were significantly larger in older than in younger participants. Conclusion: The results of the present study do not provide evidence for the hypothesis that auditory temporal processing, as measured by the decrement (short-term habituation) of the major auditory evoked component, the N1m wave, is impaired in aging. The differences between these magnetoencephalographic findings and previously published behavioral data might be explained by differences in the experimental setting between the present study and previous behavioral studies, in terms of speech rate, attention, and masking noise. The significantly larger amplitudes of the P1m and N1m waves suggest that the cortical processing of individual sounds differs between younger and older individuals. This result adds to the growing evidence that brain functions, such as sensory processing, motor control and cognitive processing, can change during healthy aging, presumably due to experience-dependent neuroplastic mechanisms.

    The neurochemical basis of human cortical auditory processing: combining proton magnetic resonance spectroscopy and magnetoencephalography

    BACKGROUND: A combination of magnetoencephalography and proton magnetic resonance spectroscopy was used to correlate the electrophysiology of rapid auditory processing and the neurochemistry of the auditory cortex in 15 healthy adults. To assess rapid auditory processing in the left auditory cortex, the amplitude and decrement of the N1m peak, the major component of the late auditory evoked response, were measured during rapidly successive presentation of acoustic stimuli. We tested the hypothesis that: (i) the amplitude of the N1m response and (ii) its decrement during rapid stimulation are associated with the cortical neurochemistry as determined by proton magnetic resonance spectroscopy. RESULTS: Our results demonstrated a significant association between the concentrations of N-acetylaspartate, a marker of neuronal integrity, and the amplitudes of individual N1m responses. In addition, the concentrations of choline-containing compounds, representing the functional integrity of membranes, were significantly associated with N1m amplitudes. No significant association was found between the concentrations of the glutamate/glutamine pool and the amplitudes of the first N1m. No significant associations were seen between the decrement of the N1m (the relative amplitude of the second N1m peak) and the concentrations of N-acetylaspartate, choline-containing compounds, or the glutamate/glutamine pool. However, there was a trend for higher glutamate/glutamine concentrations in individuals with higher relative N1m amplitude. CONCLUSION: These results suggest that neuronal and membrane functions are important for rapid auditory processing. This investigation provides a first link between the electrophysiology, as recorded by magnetoencephalography, and the neurochemistry, as assessed by proton magnetic resonance spectroscopy, of the auditory cortex
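    The decrement measure used in the two studies above (the relative amplitude of the second N1m peak) reduces to a simple ratio. A minimal sketch of that computation, assuming peak amplitudes have already been extracted from the averaged evoked fields (the numbers below are illustrative, not data from the studies):

```python
import numpy as np

def n1m_decrement(peak_amplitudes):
    """Relative amplitude of the second response (amp2 / amp1).

    Values well below 1.0 indicate short-term habituation of the
    N1m during rapid stimulation; values near 1.0 indicate little
    or no decrement.
    """
    amps = np.asarray(peak_amplitudes, dtype=float)
    if amps.size < 2 or amps[0] == 0:
        raise ValueError("need at least two peaks and a non-zero first peak")
    return float(amps[1] / amps[0])

# Illustrative N1m peak amplitudes (fT) for a train of four stimuli.
train = [60.0, 36.0, 30.0, 28.5]
print(n1m_decrement(train))  # prints 0.6
```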

    Sensitivity of the human auditory cortex to acoustic degradation of speech and non-speech sounds

    The perception of speech is usually an effortless and reliable process even in highly adverse listening conditions. In addition to external sound sources, the intelligibility of speech can be reduced by degradation of the structure of speech signal itself, for example by digital compression of sound. This kind of distortion may be even more detrimental to speech intelligibility than external distortion, given that the auditory system will not be able to utilize sound source-specific acoustic features, such as spatial location, to separate the distortion from the speech signal. The perceptual consequences of acoustic distortions on speech intelligibility have been extensively studied. However, the cortical mechanisms of speech perception in adverse listening conditions are not well known at present, particularly in situations where the speech signal itself is distorted. The aim of this thesis was to investigate the cortical mechanisms underlying speech perception in conditions where speech is less intelligible due to external distortion or as a result of digital compression. In the studies of this thesis, the intelligibility of speech was varied either by digital compression or addition of stochastic noise. Cortical activity related to the speech stimuli was measured using magnetoencephalography (MEG). The results indicated that degradation of speech sounds by digital compression enhanced the evoked responses originating from the auditory cortex, whereas addition of stochastic noise did not modulate the cortical responses. Furthermore, it was shown that if the distortion was presented continuously in the background, the transient activity of auditory cortex was delayed. On the perceptual level, digital compression reduced the comprehensibility of speech more than additive stochastic noise. 
    In addition, it was demonstrated that prior knowledge of speech content enhanced the intelligibility of distorted speech substantially, and this perceptual change was associated with an increase in cortical activity within several regions adjacent to the auditory cortex. In conclusion, the results of this thesis show that the auditory cortex is very sensitive to the acoustic features of the distortion, while at later processing stages several cortical areas reflect the intelligibility of speech. These findings suggest that the auditory system rapidly adapts to the variability of the auditory environment, and can efficiently utilize previous knowledge of speech content in deciphering acoustically degraded speech signals.
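    Of the two degradations above, the addition of stochastic noise is conventionally implemented by scaling white noise to a target signal-to-noise ratio before mixing. A minimal sketch under that assumption (the sine tone stands in for a real speech waveform; the function and parameter names are illustrative):

```python
import numpy as np

def add_noise_at_snr(signal, snr_db, rng=None):
    """Return signal + white noise, with the noise scaled so that
    the signal-to-noise ratio equals snr_db (in decibels)."""
    rng = np.random.default_rng(rng)
    signal = np.asarray(signal, dtype=float)
    noise = rng.standard_normal(signal.shape)
    p_signal = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    # Scale noise power so that 10*log10(p_signal / p_noise_scaled) == snr_db.
    target_p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    noise *= np.sqrt(target_p_noise / p_noise)
    return signal + noise

fs = 16000
t = np.arange(fs) / fs                       # 1 s of "speech"
clean = np.sin(2 * np.pi * 440.0 * t)
noisy = add_noise_at_snr(clean, snr_db=0.0, rng=0)   # 0 dB SNR mixture
```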

    Investigations of components of acoustically evoked potentials in hearing-impaired industrial workers

    Hearing-impaired musicians as well as hearing-impaired industrial workers are able to recognize profession-specific "wrong" sounds, i.e. mistuned tones or faulty machine noises. Starting from this observation, I compared the EEG responses of the two occupational groups, analyzing the brain activity with respect to acoustically evoked potentials (AEP) and the frequency spectrum of the EEG. The participants were 20 industrial workers (37-65 years) and 16 professional musicians (28-68 years). The two groups showed comparable hearing loss of about 40 dB in the high-frequency range. I examined the responses to the above-mentioned interfering sounds. The stimuli were in-tune high-frequency C major chords (based on cÂł) presented in an anechoic room at a level of 65 dB SPL, with the same chords with a mistuned middle tone serving as deviants. Presentation was randomized in an oddball design to examine the mismatch negativity (MMN). The second type of stimulus was a three-minute audio track of a bottle-washing machine with brief superimposed interfering signals, likewise presented at 65 dB SPL in the free sound field. The 31-channel EEG was recorded and analyzed with the Brain Vision system (Brain Products GmbH, Munich). I analyzed the AEPs including the MMN and the frequency content of the EEG. The evaluation shows that, despite their hearing loss, the musicians succeeded in identifying mistuned chords: they recognized the tones subjectively and showed significant changes in the AEPs. The industrial workers showed no significant AEP changes, although faulty tones within the machine noise significantly affected the frequency analysis. I conclude that training and learning play an important role in hearing.
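    The mismatch negativity analyzed above is conventionally computed as the difference between the averaged deviant and standard responses. A minimal sketch with synthetic epochs (the shapes and amplitudes are illustrative, not the study's data):

```python
import numpy as np

def mismatch_negativity(standard_epochs, deviant_epochs):
    """Difference wave: average(deviant) - average(standard).

    Each input has shape (n_trials, n_samples); the MMN appears as
    a negative deflection in the returned waveform.
    """
    return np.mean(deviant_epochs, axis=0) - np.mean(standard_epochs, axis=0)

rng = np.random.default_rng(7)
n_samples = 256                          # e.g. 512 ms at 500 Hz
base = np.zeros(n_samples)
std = base + rng.standard_normal((150, n_samples)) * 0.5   # 150 standards
dev_wave = base.copy()
dev_wave[75:110] = -2.0                  # extra negativity in the MMN window
dev = dev_wave + rng.standard_normal((50, n_samples)) * 0.5  # 50 deviants

mmn = mismatch_negativity(std, dev)
print(mmn[75:110].mean() < -1.0)         # prints True: window is clearly negative
```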

    화성적 êž°ëŒ€ê°êłŒ ì „ëŹžì„±ìŽ ìČ­ê°í”Œì§ˆì˜ 반응에 믞ìč˜ëŠ” 영햄: 뇌자도 ì—°ê”Ź

    Thesis (Ph.D.), Seoul National University Graduate School, Interdisciplinary Program in Musicology, College of Music, February 2018. Advisor: Suk Won Yi. The present study investigated the effects of harmonic expectations and musical expertise on auditory cortical processing using magnetoencephalography (MEG). Numerous studies have demonstrated that musical experiences enhance auditory cortical processing; however, few studies have examined the effect of harmonic expectations on auditory cortical processing. Most studies regarding auditory cortical response enhancement have investigated acoustical sound without harmonic contexts as stimuli. Studies have demonstrated that harmonic expectations are processed in the inferior frontal gyri and elicit an early right anterior negativity (ERAN); however, the effect on the auditory cortex has rarely been examined. The processing of auditory stimuli depends on both afferent and efferent auditory pathways. Behavioral studies have indicated that chords harmonically related to the preceding context are processed more rapidly than unrelated chords. P2 (the positive auditory-evoked potential at approximately 200 ms) is principally affected by musical experience, and the source of P2 lies in the associative auditory temporal regions, with additional contributions from the frontal area. Based on anatomical evidence of interconnections between the frontal cortex and the belt and parabelt regions of the auditory cortex, we hypothesized that musical expectations would affect neural activities in the auditory cortex via an efferent pathway. To test this hypothesis, we created five-chord progressions with the third chord manipulated (highly expected, less expected, and unexpected) and measured the auditory-evoked fields (AEFs) of seven musicians and seven non-musicians while they listened to the musical stimuli. The results indicated that the highly expected chords elicited shorter N1m (negative AEF at approximately 100 ms) and P2m (the magnetic counterpart of P2) latencies and larger P2m amplitudes in the auditory cortex than the less expected and unexpected chords. The relations between P2m amplitudes/latencies and harmonic expectations were similar between the groups; however, the effects were more pronounced for the musicians than for the non-musicians. These findings suggest that auditory cortical processing is enhanced by musical knowledge and long-term training via an efferent pathway, reflected in shortened N1m and P2m latencies and enhanced P2m amplitudes in the auditory cortex.
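    The N1m and P2m measures reported above amount to locating extrema of the averaged evoked field within conventional latency windows (around 100 ms and 200 ms after stimulus onset). A minimal sketch of such a windowed peak search, on a purely synthetic waveform:

```python
import numpy as np

def peak_in_window(waveform, times, t_min, t_max, polarity):
    """Return (latency, amplitude) of the extremum inside [t_min, t_max].

    polarity=-1 finds the most negative point (N1m-like),
    polarity=+1 the most positive (P2m-like).
    """
    mask = (times >= t_min) & (times <= t_max)
    seg = waveform[mask]
    idx = np.argmin(seg) if polarity < 0 else np.argmax(seg)
    return times[mask][idx], seg[idx]

fs = 1000.0
times = np.arange(0, 0.400, 1 / fs)      # 0-400 ms sampled at 1 kHz
# Synthetic evoked field: negative deflection at 100 ms, positive at 200 ms.
wave = (-3.0 * np.exp(-((times - 0.100) / 0.015) ** 2)
        + 2.0 * np.exp(-((times - 0.200) / 0.020) ** 2))

n1m_lat, n1m_amp = peak_in_window(wave, times, 0.070, 0.130, polarity=-1)
p2m_lat, p2m_amp = peak_in_window(wave, times, 0.150, 0.250, polarity=+1)
```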

    Differentiation of in-tune and mistuned chords in hearing-impaired professional musicians: an analysis of acoustically evoked potentials

    Musicians working in classical orchestras not infrequently show an audiogram with a C5 dip, i.e. a hearing loss in the region between 3 and 6 kHz. Recognizing faulty tones or chords is an indispensable prerequisite for their profession. To investigate whether reduced hearing can affect processes in the central auditory system, such as pattern recognition or temporal analysis, we examined which EEG correlates indicate a differentiation between correct and faulty profession-specific acoustic signals in hearing-impaired professional musicians. Acoustically evoked potentials (AEP) were recorded, with particular attention to the late components of these responses, since they reflect the cortical processing of acoustic stimuli. Specifically, the mismatch negativity (MMN) was considered, which represents the preattentive discrimination between physically different auditory stimuli (standard versus deviant). To examine the MMN in musicians for the discrimination of profession-specific acoustic signals of everyday working life, in-tune C major chords of various pitches, and the same chords with a mistuned middle tone, were developed and recorded. The present work examines AEPs in 10 male professional musicians (28-68 years), all of whom show hearing loss at frequencies ≄ 4 kHz in the audiogram and report noticeable hearing impairment. The test stimuli were C major triads in the low-frequency range (root position from cÂč) and in the high-frequency range (root position from cÂł). Two different stimulus-sequence paradigms were used. In paradigm 1 the standard stimuli were in-tune piano C major triads and the deviants were mistuned piano C major triads (middle tone mistuned by 20 cents), at a ratio of 150 in-tune to 50 mistuned chords. Paradigm 2 consisted of 150 mistuned C major triads as standard stimuli and 50 in-tune chords as deviants; it too was presented in the low- and high-frequency ranges. Stimulation in the high-frequency range was intended to test whether a differentiation between in-tune and mistuned chords also takes place in the frequency range of the audiologically confirmed hearing loss. Three questions were analyzed: (i) Can differences in the AEP between low- and high-frequency stimulation be detected in musicians with high-frequency hearing loss? (ii) Does the MMN indicate discrimination of the acoustic stimuli (mistuned vs. in-tune)? (iii) Does the AEP reveal deficits in pattern recognition in hearing-impaired musicians? The EEG was recorded from 31 active electrodes. Participants were presented with 4 series of 200 stimuli each (standard-to-deviant ratio 4:1, intensity 65 dB SPL, inter-stimulus interval 2 to 6 s, recording time 512 ms). The order of in-tune and mistuned chords was randomized; one series of 200 stimuli took about 12 minutes. All data were automatically filtered and averaged. For the evaluation of the AEP and MMN we confined ourselves to electrode Cz. It became clear that the late AEP components measured for low-frequency stimulation (N1 latency 121.55±12.13 ms, P2 amplitude 5.03±2.17 ”V) differed significantly from those for high-frequency stimulation (N1 latency 109.36±9.81 ms, P2 amplitude 7.74±2.78 ”V), whereas both the N1 amplitude and the P2 latency remained unaffected. While cerebral tonotopy offers an explanation for the N1 latency differences, we searched the literature in vain for a P2 amplitude increase in response to high-frequency stimuli. Analyzing the AEPs of the hearing-impaired musicians with respect to the MMN, a clear MMN was detectable in every musician, regardless of stimulus frequency (low or high) and paradigm (1 or 2). It follows that the musicians' discrimination ability is intact despite their high-frequency hearing loss and remains clearly demonstrable even under high-frequency stimulation. However, the data differ markedly in the area of the MMN from those of normal-hearing musicians and non-musicians (measurements from the dissertation of M. Rohmann at the Institute of Physiology Jena, in which the same stimuli were used). It was further noticeable that the MMN measured in paradigm 2 (standard = mistuned, deviant = in-tune) was clearly larger than the MMN in paradigm 1 (standard = in-tune, deviant = mistuned). We explain this observation by the training effect and memory imprinting for profession-specific, familiar stimuli (in-tune chords), which can influence discrimination ability within a series of familiar stimuli. Our results demonstrate that the MMN is readily detectable even in musicians with high-frequency hearing loss, but that the overall AEP of these musicians differs from that of control groups. To what extent hearing-impaired musicians have accordingly developed different processing strategies (e.g. redistribution of processing areas or a change of localization) requires further and more detailed investigation.
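    The stimulus sequences described above follow a standard oddball design: a randomized series with a 4:1 standard-to-deviant ratio and a 2-6 s inter-stimulus interval. A minimal sketch of generating one such block (the 160/40 split follows the stated 4:1 ratio over 200 stimuli; the abstract elsewhere mentions 150/50 counts, so the exact split is an assumption):

```python
import random

def oddball_sequence(n_standard=160, n_deviant=40, isi_range=(2.0, 6.0), seed=None):
    """Randomized oddball block: returns a list of (label, isi_seconds)."""
    rng = random.Random(seed)
    labels = ["standard"] * n_standard + ["deviant"] * n_deviant
    rng.shuffle(labels)                      # randomized order of presentation
    return [(lab, rng.uniform(*isi_range)) for lab in labels]

block = oddball_sequence(seed=1)
print(len(block))                                       # prints 200
print(sum(1 for lab, _ in block if lab == "deviant"))   # prints 40
```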

    Neural correlates of auditory expertise

    The human voice is the most meaningful sound category of our auditory environment. Not only is the human voice the carrier of speech, but it is also used to extract a wealth of relevant information on the speaker. Voice-sensitive areas have been identified along the superior temporal sulci of normal adult listeners. Yet little data is available on the nature and development of this selective response to voice. In the visual domain, a vast literature focuses on a similar problem regarding face perception. Several studies have identified processes and regions involved in visual expertise, demonstrating a strong resemblance to those used for faces. In the auditory domain, very few studies have compared voice expertise to expertise for other sound categories. Such comparisons could contribute to a better understanding of voice perception and hearing. This thesis aims to clarify the nature of the processes and regions involved in voice perception. Different types of experts and different experimental methods were used in three separate studies. The first study assessed the influence of musical expertise on voice timbre processing, using behavioral voice and musical instrument discrimination tasks. The results showed that amateur musicians performed better than non-musicians in both tasks, suggesting a generalization of auditory abilities associated with musical practice. The second study compared event-related potentials evoked by birdsongs in bird experts and non-expert participants. Because a different topographical distribution was observed among bird experts in all sound categories, a definitive interpretation was difficult to make. In the third study, we asked whether the voice-sensitive areas would be recruited by different categories of sounds of expertise in guitar makers, bird experts and non-experts.
    The behavioral data showed an interaction between the two groups of experts and their respective categories of expertise in the memory and discrimination tasks. The functional magnetic resonance imaging results showed an interaction of the same type in the left superior temporal sulcus and the left posterior cingulate gyrus. These results show that the voice-selective areas do not exclusively process voice stimuli but can also contribute to expert-level processing of other sound categories. Cortical selectivity to the human voice could therefore be due to prolonged exposure to voices. The data presented demonstrate several behavioral and anatomo-functional similarities between cerebral voice processing and other types of auditory expertise. These common aspects can be explained by a brain organization that is both functional and economical. Consequently, voice and other sound categories would be processed by shared neural networks, except when deeper, expertise-specific processing is required. This interpretation is particularly important for proposing an integrative approach to the specificity of voice processing.

    Cortical processing of musical pitch as reflected by behavioural and electrophysiological evidence

    In a musical context, the pitch of sounds is encoded according to domain-general principles not confined to music or even to audition overall but common to other perceptual and cognitive processes (such as multiple pattern encoding and feature integration), and to domain-specific and culture-specific properties related to a particular musical system only (such as the pitch steps of the Western tonal system). The studies included in this thesis shed light on the processing stages during which pitch encoding occurs on the basis of both domain-general and music-specific properties, and elucidate the putative brain mechanisms underlying pitch-related music perception. Study I showed, in subjects without formal musical education, that the pitch and timbre of multiple sounds are integrated as unified object representations in sensory memory before attentional intervention. Similarly, multiple pattern pitches are simultaneously maintained in non-musicians' sensory memory (Study II). These findings demonstrate the degree of sophistication of pitch processing at the sensory memory stage, requiring neither attention nor any special expertise of the subjects. Furthermore, music- and culture-specific properties, such as the pitch steps of the equal-tempered musical scale, are automatically discriminated in sensory memory even by subjects without formal musical education (Studies III and IV). The cognitive processing of pitch according to culture-specific musical-scale schemata hence occurs as early as at the sensory-memory stage of pitch analysis. Exposure and cortical plasticity seem to be involved in musical pitch encoding. For instance, after only one hour of laboratory training, the neural representations of pitch in the auditory cortex are altered (Study V). However, faulty brain mechanisms for attentive processing of fine-grained pitch steps lead to inborn deficits in music perception and recognition such as those encountered in congenital amusia (Study VI). 
    These findings suggest that predispositions for exact pitch-step discrimination together with long-term exposure to music govern the acquisition of the automatized schematic knowledge of the music of a particular culture that even non-musicians possess.

    Auditory perceptual learning in musicians and non-musicians: Event-related potential studies on rapid plasticity

    Music training shapes functional and structural properties of the brain, particularly in areas related to sound processing. The enhanced brain responses to sounds in musicians compared to non-musicians might be explained by the intensive auditory perceptual learning that occurs during music training. Yet the relationship between musical expertise and rapid plastic changes in brain potentials during auditory perceptual learning has not been systematically studied. This was the topic of the current thesis, which examined conditions in which participants either actively attended to the sounds or did not. The electroencephalography (EEG) and behavioral sound discrimination task results showed that the perceptual learning of complex sound patterns required active attention to the sounds even from musicians, and that the different practice styles of musicians modulated the perceptual learning of sound features. When simple sounds were used, musical expertise was found to enhance the rapid plastic changes (i.e., neural learning) even when attention was directed away from listening. The rapid plasticity in musicians was found particularly in temporal-lobe areas specialized in processing sounds. However, right frontal lobe activation, which is related to involuntary attention shifts to sound changes, did not differ between musicians and non-musicians. Behavioral discrimination accuracy for sounds was at its maximum level from the outset in musicians, while non-musicians improved their discrimination accuracy across the active conditions. Yet performance in standardized attention and memory tests did not differ between musicians and non-musicians. Taken together, musical expertise seems to enhance preattentive brain responses during auditory perceptual learning.
Brain research on musicians has demonstrated numerous structural and functional changes in the brain associated with long-term music training and exposure to music. Previous studies have largely focused on the consequences of long-term training in the brain, whereas musicians' ability to learn within a short period of time has scarcely been studied. This thesis investigated the learning of sound discrimination by recording changes in electrical brain responses (EEG) in musicians and non-musicians. The study revealed that musicians do not need to concentrate on listening to the sounds in order to learn to discriminate them. Changes in musicians' and non-musicians' brain responses during perceptual learning were examined by presenting them with repeating standard sounds interspersed with occasional deviant sounds. In some conditions the participants concentrated on discriminating the deviant sounds, while in others their attention was directed to a task other than listening. Within as little as 15–30 minutes, the musicians' cortical responses to the sounds began to decrease, even though they were not concentrating on listening. Activation decreased particularly in temporal-lobe areas specialized in auditory processing. This decrease in cortical activation reflects one of the mechanisms of perceptual learning, namely neural habituation, whereby less neural capacity is needed than when learning something entirely new. The musicians' cortex thus habituated to the sounds faster than that of the non-musicians. When the task was to concentrate on discriminating the deviant sounds, the musicians' performance was better than the non-musicians' from the outset, and only the non-musicians' discrimination improved. Non-musicians, too, therefore learn sound discrimination within a single session, but this may require attending to the sounds. In standardized attention and memory tests, the musicians did not differ from the non-musicians. Musicians' ability to learn to discriminate sounds rapidly without attention is thus likely related in particular to preattentive sound processing in the brain.
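Behavioral discrimination accuracy of the kind reported above is conventionally quantified with signal-detection measures such as d′ (z-transformed hit rate minus z-transformed false-alarm rate). A minimal sketch of that standard computation; the function and example rates are illustrative, not taken from the thesis:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Example: a listener detecting deviant sounds among repeating standards
print(round(d_prime(0.90, 0.10), 2))  # → 2.56; higher d' = better discrimination
```

A ceiling effect, as described for the musicians, would appear as near-constant d′ across sessions, while learning in non-musicians would appear as rising d′.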

    MEG, PSYCHOPHYSICAL AND COMPUTATIONAL STUDIES OF LOUDNESS, TIMBRE, AND AUDIOVISUAL INTEGRATION

    Get PDF
    Natural scenes and ecological signals are inherently complex, and our understanding of their perception and processing is incomplete. For example, a speech signal not only contains information at various frequencies, it is also not static: the signal is concurrently modulated in time. In addition, an auditory signal may be paired with additional sensory information, as in the case of audiovisual speech. In order to make sense of the signal, a human observer must process the information provided by low-level sensory systems and integrate it across sensory modalities and with cognitive information (e.g., object identification information, phonetic information). The observer must then create functional relationships between the signals encountered to form a coherent percept. The neuronal and cognitive mechanisms underlying this integration can be quantified in several ways: by taking physiological measurements, assessing behavioral output for a given task, and modeling signal relationships. While ecological tokens are complex in a way that exceeds our current understanding, progress can be made by utilizing synthetic signals that encompass specific essential features of ecological signals. The experiments presented here cover five aspects of complex signal processing using approximations of ecological signals: (i) auditory integration of complex tones comprised of different frequencies and component power levels; (ii) audiovisual integration approximating that of human speech; (iii) behavioral measurement of signal discrimination; (iv) signal classification via simple computational analyses; and (v) neuronal processing of synthesized auditory signals approximating speech tokens. To investigate neuronal processing, magnetoencephalography (MEG) is employed to assess cortical processing non-invasively. Behavioral measures are employed to evaluate observer acuity in signal discrimination and to test the limits of perceptual resolution.
Computational methods are used to examine the relationships in perceptual space and physiological processing between synthetic auditory signals, using features of the signals themselves as well as biologically motivated models of auditory representation. Together, the various methodologies and experimental paradigms advance our understanding of ecological signal processing and of the complex interactions in ecological signal structure.
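A complex tone of the kind described in aspect (i), comprising components at different frequencies and power levels, can be sketched as a sum of scaled sinusoids. The fundamental frequency, amplitudes, and sample rate below are illustrative assumptions, not the study's actual stimulus parameters:

```python
import math

def complex_tone(f0: float, amps: list[float], dur: float, sr: int = 16000) -> list[float]:
    """Sum of harmonics of f0; amps[k] scales the (k+1)-th harmonic."""
    n = int(dur * sr)
    return [
        sum(a * math.sin(2 * math.pi * f0 * (k + 1) * t / sr)
            for k, a in enumerate(amps))
        for t in range(n)
    ]

# 200 Hz tone with three harmonics at decreasing component power levels
tone = complex_tone(200.0, [1.0, 0.5, 0.25], dur=0.05)
print(len(tone))  # → 800 samples at 16 kHz
```

Varying the `amps` vector changes the tone's spectral envelope (and hence timbre) while the fundamental, and thus the pitch, stays fixed, which is what makes such synthetic signals useful proxies for ecological tokens.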