8 research outputs found

    It's not what you say but the way that you say it: an fMRI study of differential lexical and non-lexical prosodic pitch processing

    Get PDF
    Abstract. Background: This study aimed to identify the neural substrate involved in prosodic pitch processing. Functional magnetic resonance imaging (fMRI) was used to test the premise that prosodic pitch processing is primarily subserved by the right cortical hemisphere. Two experimental paradigms were used: first, pairs of spoken sentences in which the only variation was a single internal phrase pitch change, and second, a matched condition using pitch changes within analogous tone-sequence phrases, which removed the potential confounder of lexical evaluation. fMRI images were obtained using these paradigms. Results: Activation was significantly greater within the right frontal and temporal cortices during the tone-sequence stimuli relative to the sentence stimuli. Conclusion: This study showed that pitch changes, stripped of lexical information, are processed mainly by the right cerebral hemisphere, whereas the processing of analogous, matched, lexical pitch change is preferentially left-sided. These findings, showing hemispheric differentiation of processing based on stimulus complexity, are in accord with a 'task-dependent' hypothesis of pitch processing.
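
    The reported contrast (tone-sequence > sentence) is, in general terms, a voxel-wise GLM condition contrast. The following is a minimal illustrative sketch only, not the paper's pipeline: synthetic data, invented block lengths, and boxcar regressors without hemodynamic convolution.

```python
# Sketch of a voxel-wise GLM condition contrast on synthetic data.
# All numbers (block lengths, effect size) are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_scans, n_voxels = 200, 500

# Boxcar regressors with rest periods (sentence, rest, tones, rest).
sentence = np.tile(np.r_[np.ones(10), np.zeros(30)], 5)
tones = np.tile(np.r_[np.zeros(20), np.ones(10), np.zeros(10)], 5)
X = np.column_stack([sentence, tones, np.ones(n_scans)])  # + intercept

# Synthetic data: the first 50 voxels respond more to tones.
Y = rng.normal(size=(n_scans, n_voxels))
Y[:, :50] += 0.8 * tones[:, None]

# Ordinary least squares and the "tones > sentences" t-map.
beta, ss_res, *_ = np.linalg.lstsq(X, Y, rcond=None)
c = np.array([-1.0, 1.0, 0.0])
sigma2 = ss_res / (n_scans - X.shape[1])
c_var = c @ np.linalg.inv(X.T @ X) @ c
t_map = (c @ beta) / np.sqrt(sigma2 * c_var)
print("max t over tone-responsive voxels:", t_map[:50].max())
```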

    The linguistics-neuroscience of language interface

    Get PDF
    This study reviews the interface between Linguistics and the Neuroscience of Language from philosophical, theoretical, and technical perspectives, taking into account two major inherent obstacles: the Granularity Mismatch Problem and the Ontological Incommensurability Problem (Poeppel; Embick, 2005). Some possibilities for lessening the effect of these interface problems are examined, and examples of felicitous attempts to work at this interface are provided: two related to speech perception, two to lexical access, and one to syntactic processing.

    Functional organization of the auditory cortex during active listening tasks

    Get PDF
    Previous imaging studies have shown that activation in human auditory cortex (AC) is strongly modulated during active listening tasks. However, the prevalent models of AC mainly focus on the processing of stimulus-specific information and speech and do not predict such task-dependent modulation. In the present thesis, functional magnetic resonance imaging was used to measure regional activation in AC during discrimination and n-back memory tasks, in order to investigate the relationship between stimulus-specific and task-dependent processing (Study I) and inter-regional connectivity during rest and active tasks (Study III). In addition, source analysis of scalp-recorded event-related potentials was carried out to study the temporal dynamics of task-dependent activation in AC (Study II). In Study I, distinct stimulus-specific activation patterns to pitch-varying and location-varying sounds were observed similarly during visual (no directed auditory attention) and auditory tasks. This is consistent with the prevalent models, which presume parallel and independent "what" (e.g. pitch) and "where" processing streams. As expected, discrimination and n-back memory tasks were associated with distinct task-dependent activation patterns. These activation patterns were independent of whether subjects performed the pitch or location versions of the tasks. Thus, AC activation during discrimination and n-back memory tasks cannot be explained by enhanced stimulus-specific processing (of pitch and location). Consistently, Study II showed that the task-dependent effects in AC occur relatively late (200–700 ms from stimulus onset) compared with the latency of stimulus-specific pitch processing (0–200 ms). In Study III, the organization of human AC was investigated on the basis of functional connectivity. Connectivity-based parcellation revealed a network structure consisting of six modules in the supratemporal plane, temporal lobe, and inferior parietal lobule of both hemispheres. Multivariate pattern analysis showed that connectivity within this network structure was significantly modulated during the presentation of sounds (visual task) and during auditory task performance. Together, the results of this thesis show that (1) activation in human AC strongly depends on the requirements of the listening task, and this task-dependent modulation is not due to enhanced stimulus-specific processing, (2) regions in the inferior parietal lobule play an important role in the processing of both task-irrelevant and task-relevant auditory information in human AC, and (3) the activation patterns in human AC during the presentation of task-irrelevant and task-relevant sounds cannot be fully explained by a hierarchical model in which information is processed in two parallel processing streams.
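
    The connectivity-based parcellation described above can be illustrated, in spirit, by clustering regions according to their functional connectivity profiles. Below is a minimal sketch on synthetic data; the six-module count comes from the abstract, but all other parameters and the clustering method (hierarchical Ward clustering) are assumptions, not the thesis's actual procedure.

```python
# Sketch of connectivity-based parcellation on synthetic data: cluster
# regions by the rows of their correlation matrix. Six modules are taken
# from the abstract; all other choices here are hypothetical.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
n_regions, n_timepoints, n_modules = 60, 300, 6

# Planted modules: regions within a module share a common signal.
shared = rng.normal(size=(n_modules, n_timepoints))
labels_true = np.repeat(np.arange(n_modules), n_regions // n_modules)
ts = shared[labels_true] + rng.normal(size=(n_regions, n_timepoints))

# Functional connectivity matrix; cluster regions by connectivity profile.
fc = np.corrcoef(ts)
z = linkage(fc, method="ward")
labels = fcluster(z, t=n_modules, criterion="maxclust")
print("recovered module sizes:", np.bincount(labels)[1:])
```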

    Cortical representation of speech in complex auditory environments and applications

    Get PDF
    Being able to attend to and recognize speech or a particular sound in complex listening environments is a feat that humans perform effortlessly. The underlying neural mechanisms, however, remain unclear and cannot yet be emulated by artificial systems. Understanding the internal (cortical) representation of the external acoustic world is a key step in deciphering the mechanisms of human auditory processing. Furthermore, understanding the neural representation of sound has numerous applications in clinical research on psychiatric disorders with auditory processing deficits, such as schizophrenia. In the first part of this dissertation, cortical activity from normal-hearing human subjects was recorded non-invasively, using magnetoencephalography, in two different real-life listening scenarios: first, when natural speech is distorted by reverberation as well as by stationary additive noise; and second, when the attended speech is degraded by the presence of multiple additional talkers in the background, simulating a cocktail party. Using natural speech affected by reverberation and noise, it was demonstrated that the auditory cortex maintains both distorted and distortion-free representations of speech. Additionally, we show that, while the neural representation of speech remained robust to additive noise in the absence of reverberation, noise had a detrimental effect in the presence of reverberation, suggesting differential mechanisms of speech processing for additive and reverberant distortions. In the cocktail-party paradigm, we demonstrated that primary-like areas represent the external auditory world in terms of acoustics, whereas higher-order areas maintain an object-based representation. Furthermore, it was demonstrated that background speech streams were represented as a single, unsegregated auditory object. The results suggest that object-based representations of the auditory scene emerge in higher-order auditory cortices. In the second part of this dissertation, using electroencephalographic recordings from normal human subjects and patients suffering from schizophrenia, it was demonstrated, for the first time, that delta-band steady-state responses are more affected in schizophrenia patients than in healthy individuals, contrary to the prevailing dominance of gamma-band studies in the literature. Furthermore, the results of this study suggest that an inadequate ability to sustain neural responses in this low-frequency range may play a vital role in the mechanisms of auditory perceptual and cognitive deficits in schizophrenia. Overall, this dissertation furthers the current understanding of the cortical representation of speech in complex listening environments and of how the auditory representation of sounds is affected in psychiatric disorders involving aberrant auditory processing.
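
    The delta-band steady-state finding rests on quantifying spectral power at the stimulation rate. Here is a minimal sketch, with made-up parameters (a hypothetical 2.5 Hz rate and a simple bin-neighbor SNR), of how such a response can be quantified; it is not the dissertation's actual analysis.

```python
# Sketch of steady-state response quantification: power at a hypothetical
# 2.5 Hz (delta) stimulation rate relative to neighboring frequency bins.
import numpy as np

rng = np.random.default_rng(2)
fs, dur, f_stim = 250.0, 40.0, 2.5          # all values invented
t = np.arange(0, dur, 1 / fs)

# Synthetic EEG: weak entrained oscillation buried in noise.
eeg = 0.3 * np.sin(2 * np.pi * f_stim * t) + rng.normal(size=t.size)

spec = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)
k = int(np.argmin(np.abs(freqs - f_stim)))

# SNR: power in the stimulation bin over the mean of flanking bins.
neighbors = np.r_[spec[k - 5:k - 1], spec[k + 2:k + 6]]
print(f"SSR SNR at {freqs[k]:.2f} Hz: {spec[k] / neighbors.mean():.1f}")
```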

    Memory-related cognitive modulation of human auditory cortex: Magnetoencephalography-based validation of a computational model

    Get PDF
    It is well known that cognitive functions exert task-specific modulation of the response properties of human auditory cortex. However, the underlying neuronal mechanisms are not yet well understood. In this dissertation I present a novel approach for integrating 'bottom-up' (neural network modeling) and 'top-down' (experimental) methods to study the dynamics of cortical circuits correlated with short-term memory (STM) processing that underlie the task-specific modulation of human auditory perception during performance of the delayed-match-to-sample (DMS) task. The experimental approach measures high-density magnetoencephalography (MEG) signals from human participants to investigate the modulation of human auditory evoked responses (AER) induced by the overt processing of auditory STM during task performance. To accomplish this goal, a new signal processing method based on independent component analysis (ICA) was developed for removing artifact contamination from the MEG recordings and investigating the functional neural circuits underlying the task-specific modulation of the human AER. The computational approach uses a large-scale neural network model, based on electrophysiological knowledge of the brain regions involved, to simulate system-level neural dynamics related to auditory object processing and performance of the corresponding tasks. Moreover, synthetic MEG and functional magnetic resonance imaging (fMRI) signals were simulated with forward models and compared with current and previous experimental findings. Consistently, both simulation and experimental results demonstrate a DMS-specific suppressive modulation of the AER and a corresponding increase in connectivity between the temporal auditory and frontal cognitive regions. Overall, the integrated approach illustrates how biologically plausible neural network models of the brain can increase our understanding of brain mechanisms and their computations at multiple levels, from sensory input to behavioral output, with the intermediate steps defined.
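
    The dissertation's specific ICA method is not spelled out here, but the generic ICA artifact-removal scheme it builds on can be sketched: unmix the sensor signals, identify the artifact component, and reconstruct without it. All signals and parameters below are synthetic assumptions.

```python
# Sketch of generic ICA-based artifact removal on synthetic data (not the
# dissertation's actual method): unmix, drop the component that matches a
# blink-like reference, and project back to sensor space.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
n_samples = 5000
t = np.linspace(0, 20, n_samples)

# Synthetic sources: two "neural" signals plus a blink-like artifact.
neural1 = np.sin(2 * np.pi * 10 * t)
neural2 = np.sign(np.sin(2 * np.pi * 3 * t))
blinks = (rng.random(n_samples) < 0.002).astype(float)
blinks = np.convolve(blinks, np.hanning(100), mode="same") * 20
S = np.c_[neural1, neural2, blinks]
X = S @ rng.normal(size=(3, 8))              # mix into 8 "sensors"

ica = FastICA(n_components=3, random_state=0)
sources = ica.fit_transform(X)               # (n_samples, n_components)

# Identify the artifact component by correlation with the reference,
# zero it, and reconstruct the cleaned sensor signals.
corr = [abs(np.corrcoef(sources[:, i], blinks)[0, 1]) for i in range(3)]
sources[:, int(np.argmax(corr))] = 0.0
X_clean = ica.inverse_transform(sources)
print("removed component:", int(np.argmax(corr)))
```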

    Tracking Sound Dynamics in Human Auditory Cortex: New macroscopic perspectives from MEG

    Get PDF
    Both the external world and our internal world are full of changing activity, and the question of how these two dynamic systems are linked constitutes one of the most intriguing and fundamental questions in neuroscience and cognitive science. This study investigates the processing and representation of sound dynamics in human auditory cortex using magnetoencephalography (MEG), a non-invasive brain imaging technique whose high temporal resolution (on the order of 1 ms) makes it an appropriate tool for studying the neural correlates of dynamic auditory information. The other goal of this study is to understand the nature of the macroscopic activity reflected in non-invasive brain imaging experiments, focusing specifically on MEG. Invasive single-cell recordings in animals have yielded a large amount of information about how the brain works at the microscopic level. However, large gaps remain in our understanding of the relationship between activity recorded at the microscopic level in animals and at the macroscopic level in humans; the two have yet to be reconciled in terms of their different spatial scales and activity formats, and a unified knowledge framework therefore remains out of reach. In this study, natural speech sentences and sounds containing speech-like temporal dynamics are employed to probe the human auditory system. The recorded MEG signal is found to be well correlated with the stimulus dynamics via amplitude modulation (AM) and/or phase modulation (PM) mechanisms. Specifically, oscillations in various frequency bands are found to be the main information-carrying elements of the MEG signal, and the two major parameters of these endogenous brain rhythms, amplitude and phase, are modulated by the dynamics of the incoming sensory stimulus, corresponding to the AM and PM mechanisms, to track sound dynamics. Crucially, this modulation tracking is found to be correlated with human perception and behavior. This study suggests that these two dynamic and complex systems, the external and internal worlds, systematically communicate and are coupled via modulation mechanisms, leading to a reverberating flow of information embedded in oscillating waves in the human cortex. The results also have implications for brain imaging studies, suggesting that the recorded macroscopic activity reflects brain state, the closer neural correlate of high-level cognitive behavior.
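
    The phase-modulation (PM) tracking described above is commonly quantified by a phase-locking value between the stimulus envelope and a band-limited cortical signal. A minimal sketch with invented signals follows (a hypothetical 4 Hz envelope; not the study's actual analysis).

```python
# Sketch of envelope-phase tracking with an invented 4 Hz envelope:
# band-pass a noisy "cortical" signal around the envelope rate and
# compute the phase-locking value via the Hilbert transform.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(4)
fs = 200.0
t = np.arange(0, 30, 1 / fs)

# Hypothetical slow envelope and a neural signal that partly follows it.
env = np.sin(2 * np.pi * 4 * t)
neural = np.sin(2 * np.pi * 4 * t - 0.6) + rng.normal(size=t.size)

# Band-pass the neural signal around the envelope rate (theta band).
b, a = butter(4, [3, 7], btype="bandpass", fs=fs)
neural_band = filtfilt(b, a, neural)

# Phase-locking value between envelope phase and neural phase.
dphi = np.angle(hilbert(env)) - np.angle(hilbert(neural_band))
plv = np.abs(np.mean(np.exp(1j * dphi)))
print(f"phase-locking value: {plv:.2f}")
```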

    The working memory of argument-verb dependencies: Spatiotemporal brain dynamics during sentence processing

    No full text

    Modulation of neuronal oscillations by transcranial alternating current stimulation and its influence on the somatosensory system

    Get PDF
    Can functions of the somatosensory system be modulated by transcranial alternating current stimulation (tACS) in the alpha band, and what conclusions can be drawn from this about the role of neuronal mu-alpha oscillations in somatosensory information processing? To answer these questions, a series of experiments examined the influence of an identical tACS protocol on differently operationalized levels of somatosensory function. In a first step, it was tested whether tACS applied over somatosensory areas can influence the amplitude of somatosensory mu-alpha oscillations measured with the electroencephalogram (EEG). tACS applied at the individual mu-alpha frequency (mu-tACS) modulated the amplitude of these oscillations beyond the end of stimulation, with the direction of the effect depending on the context of the specific stimulation. In a next step, it was examined whether modulated mu-alpha waves can modulate somatosensory perception, in line with the mechanistic inhibitory view of alpha oscillations. In a continuous detection task, mu-tACS produced no tonic but rather a phasic modulation of the perceptual threshold. Mu-alpha oscillations synchronized by tACS thus appear to generate phases of improved and of reduced perception. Resting-state measurements in the functional magnetic resonance scanner were then used to examine whether the flow of information at the network level can be modulated by mu-tACS. A reduction in the functional connectivity of the stimulated left primary somatosensory cortex was found during tACS application. The results demonstrate the potential utility of tACS for the active modulation of somatosensory functions, e.g. as a methodological tool in basic research or, potentially, for therapeutic and rehabilitative purposes. Furthermore, evidence was found for an inhibitory role of neuronal mu-alpha oscillations in somatosensory information processing.
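
    The first step described above, tACS at the individual mu-alpha frequency, presupposes estimating that frequency from resting EEG. Below is a minimal sketch under invented parameters (peak frequency, recording length, and amplitude are all hypothetical, not taken from the thesis).

```python
# Sketch of estimating an individual mu-alpha frequency from resting EEG
# and building a tACS waveform at that frequency. All parameters invented.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(5)
fs = 500.0
t = np.arange(0, 120, 1 / fs)

# Synthetic resting EEG with a mu-alpha peak near 10.4 Hz.
eeg = 2.0 * np.sin(2 * np.pi * 10.4 * t) + rng.normal(scale=3.0, size=t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))   # 0.25 Hz resolution
band = (freqs >= 8) & (freqs <= 13)
iaf = freqs[band][np.argmax(psd[band])]

# Illustrative mu-tACS waveform at the individual frequency.
tacs = 1.0 * np.sin(2 * np.pi * iaf * np.arange(0, 600, 1 / fs))
print(f"estimated individual mu-alpha frequency: {iaf:.2f} Hz")
```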