
    Pitch discrimination in optimal and suboptimal acoustic environments : electroencephalographic, magnetoencephalographic, and behavioral evidence

    Pitch discrimination is a fundamental property of the human auditory system. Our understanding of pitch-discrimination mechanisms is important from both theoretical and clinical perspectives. The discrimination of spectrally complex sounds is crucial in the processing of music and speech. Current methods of cognitive neuroscience can track the brain processes underlying sound processing either with precise temporal resolution (EEG and MEG) or precise spatial resolution (PET and fMRI). A combination of different techniques is therefore required in contemporary auditory research. One of the problems in comparing the EEG/MEG and fMRI methods, however, is the fMRI acoustic noise. In the present thesis, EEG and MEG in combination with behavioral techniques were used, first, to define the ERP correlates of automatic pitch discrimination across a wide frequency range in adults and neonates and, second, to determine the effect of recorded acoustic fMRI noise on those adult ERP and ERF correlates during passive and active pitch discrimination. Pure tones and complex 3-harmonic sounds served as stimuli in the oddball and matching-to-sample paradigms. The results suggest that pitch discrimination in adults, as reflected by MMN latency, is most accurate in the 1000-2000 Hz frequency range, and that pitch discrimination is facilitated further by adding harmonics to the fundamental frequency. Newborn infants are able to discriminate a 20% frequency change in the 250-4000 Hz frequency range, whereas the discrimination of a 5% frequency change was unconfirmed. Furthermore, the effect of the fMRI gradient noise on the automatic processing of pitch change was more prominent for tones with frequencies exceeding 500 Hz, overlapping with the spectral maximum of the noise. When the fundamental frequency of the tones was lower than the spectral maximum of the noise, fMRI noise had no effect on MMN and P3a, whereas the noise delayed and suppressed N1 and the exogenous N2.
Noise also suppressed the N1 amplitude in a matching-to-sample working memory task. However, the task-related difference observed in the N1 component, suggesting a functional dissociation between the processing of spatial and non-spatial auditory information, was partially preserved in the noise condition. Noise hampered feature-coding mechanisms more than it hampered the mechanisms of change detection, involuntary attention, and the segregation of the spatial and non-spatial domains of working memory. The data presented in the thesis can be used to develop clinical ERP-based frequency-discrimination protocols and combined EEG and fMRI experimental paradigms.

The ability to tell high and low sounds apart is one of the brain's basic functions. Without it, we could not understand speech or enjoy music. Some patients and very young children cannot report whether they hear a difference, but their brain responses can reveal it. However, not enough is known about the brain processes underlying pitch discrimination even in healthy adults. More research on this topic is therefore needed, using modern brain research methods such as event-related potentials (ERP) and functional magnetic resonance imaging (fMRI). The ERP method reveals when the brain discriminates a pitch difference, whereas fMRI reveals which brain areas are activated in this function. Combining the two methods can give a more comprehensive picture of the brain processes underlying pitch discrimination. The fMRI method, however, involves a particular problem: the loud noise generated by the fMRI scanner, which can hamper auditory research. This thesis investigates how pitch discrimination can be demonstrated in the brains of adults and newborn infants, and how fMRI scanner noise affects the ERP responses elicited by auditory stimuli.

The results show that the adult brain can discriminate frequency differences as small as 2.5%, but discrimination is faster at around 1000-2000 Hz than at lower or higher frequencies. The newborn brain discriminated only frequency changes larger than 20%. When fMRI scanner noise was played in the background, it attenuated brain responses to 500-2000 Hz sounds more than to other sounds; the noise did not, however, affect responses elicited by sounds below 500 Hz. Regardless of whether background noise was present, the ERP response elicited by a change in sound-source location was larger than that elicited by a change in pitch. This thesis has shown that pitch discrimination can be studied effectively with the ERP method in both adults and infants. According to the results, the combination of ERP and fMRI methods can be made more effective by taking the effects of fMRI scanner noise on ERP responses into account when designing experiments. The data from this research can be used in designing complex pitch-discrimination experiments, for example with patients and children.
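The stimulus design described above — frequent standard tones interspersed with occasional frequency deviants, using pure tones or 3-harmonic complexes — can be sketched in Python. The sample rate, tone duration, deviant probability, and random seed below are illustrative assumptions, not values from the thesis:

```python
import math
import random

SR = 22050  # sample rate in Hz (an assumption for illustration)

def tone(freq_hz, dur_s, harmonics=1):
    """Synthesise a tone with `harmonics` equal-amplitude harmonic partials."""
    n_samples = int(SR * dur_s)
    return [
        sum(math.sin(2 * math.pi * freq_hz * (h + 1) * t / SR)
            for h in range(harmonics)) / harmonics
        for t in range(n_samples)
    ]

def oddball_sequence(f_standard, deviant_ratio, n_trials, p_deviant=0.1, seed=0):
    """Trial list for an oddball paradigm: mostly standards, rare deviants."""
    rng = random.Random(seed)
    f_deviant = f_standard * (1 + deviant_ratio)
    return [f_deviant if rng.random() < p_deviant else f_standard
            for _ in range(n_trials)]

# A 1000 Hz standard with 2.5 % frequency deviants, as in the adult studies.
trials = oddball_sequence(1000.0, 0.025, 200)
standard = tone(1000.0, 0.1)                 # pure tone
complex3 = tone(1000.0, 0.1, harmonics=3)    # 3-harmonic complex sound
```

Adding harmonics leaves the fundamental, and hence the perceived pitch, unchanged while enriching the spectrum — the manipulation the thesis reports as facilitating discrimination.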

    Hierarchical Organization in Auditory Cortex of the Cat Using High-Field Functional Magnetic Resonance Imaging

    Sensory localization within cortex is a widely accepted and documented principle. Within cortices dedicated to specific sensory information there is further organization. In visual cortices, for example, a detailed functional division and hierarchical organization have been documented: the hierarchy starts with areas dedicated to the analysis of simple visual stimuli, and areas higher in the organization are specialized for processing progressively more complex stimuli. A similar hierarchical organization has been proposed within auditory cortex, and a wealth of evidence supports this hypothesis. In the cat, the initial processing of simple auditory stimuli, such as pure tones, has been well documented in primary auditory cortex (A1), which is also the recipient of the largest projection from the thalamus. This indicates that at least the initial stages of a hierarchy exist within auditory cortex. Until now it has been difficult to investigate the remaining hierarchy in its entirety because of methodological limitations. In the present set of investigations, functional magnetic resonance imaging (fMRI) made it possible to examine the auditory cortex of the cat in its entirety. Results from these investigations support the proposed hierarchy in auditory cortex in the cat, with lower cortical areas selectively responding to simpler stimuli while higher areas are progressively more responsive to complex stimuli.

    Neuroimaging paradigms for tonotopic mapping (II): the influence of acquisition protocol.

    Numerous studies on the tonotopic organisation of auditory cortex in humans have employed a wide range of neuroimaging protocols to assess cortical frequency tuning. In the present functional magnetic resonance imaging (fMRI) study, we made a systematic comparison between acquisition protocols with variable levels of interference from acoustic scanner noise. Using sweep stimuli to evoke travelling waves of activation, we measured sound-evoked response signals using sparse, clustered, and continuous imaging protocols that were characterised by inter-scan intervals of 8.8, 2.2, or 0.0 s, respectively. With regard to sensitivity to sound-evoked activation, the sparse and clustered protocols performed similarly, and both detected more activation than the continuous method. Qualitatively, tonotopic maps in activated areas proved highly similar, in the sense that the overall pattern of tonotopic gradients was reproducible across all three protocols. However, quantitatively, we observed substantial reductions in response amplitudes to moderately low stimulus frequencies that coincided with regions of strong energy in the scanner noise spectrum for the clustered and continuous protocols compared to the sparse protocol. At the same time, extreme frequencies became over-represented for these two protocols, and high best frequencies became relatively more abundant. Our results indicate that although all three scanning protocols are suitable to determine the layout of tonotopic fields, an exact quantitative assessment of the representation of various sound frequencies is substantially confounded by the presence of scanner noise. In addition, we noticed anomalous signal dynamics in response to our travelling wave paradigm, suggesting that the assessment of frequency-dependent tuning is non-trivially influenced by time-dependent (hemo)dynamics when sweep stimuli are used.
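The three acquisition protocols differ only in the silent gap inserted between scans; the timing consequences can be sketched as below, where the per-volume acquisition time `TA` is an assumed value for illustration (the abstract specifies only the inter-scan intervals):

```python
TA = 2.0  # assumed gradient-noise (acquisition) duration per volume, in seconds

# Inter-scan silent intervals from the study, in seconds.
PROTOCOLS = {"sparse": 8.8, "clustered": 2.2, "continuous": 0.0}

def schedule(gap_s, n_volumes, ta_s=TA):
    """Volume onset times and the fraction of each repetition free of noise."""
    tr = ta_s + gap_s                    # effective repetition time
    onsets = [i * tr for i in range(n_volumes)]
    silent_fraction = gap_s / tr if tr else 0.0
    return onsets, silent_fraction

for name, gap in PROTOCOLS.items():
    onsets, silent = schedule(gap, 3)
    print(f"{name:10s} TR = {TA + gap:4.1f} s, noise-free = {silent:.0%}")
```

With a sparse gap that is long relative to `TA`, stimuli (and the haemodynamic response they evoke) can be positioned in the silent window, which is one reason the sparse protocol suffers least from noise-driven amplitude reductions.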

    On-line plasticity in spoken sentence comprehension: Adapting to time-compressed speech

    Listeners show remarkable flexibility in processing variation in the speech signal. One striking example is the ease with which they adapt to novel speech distortions, such as listening to someone with a foreign accent. Behavioural studies suggest that significant improvements in comprehension occur rapidly - often within 10-20 sentences. In the present experiment, we investigate the neural changes underlying on-line adaptation to distorted speech using time-compressed speech. Listeners performed a sentence verification task on normal-speed and time-compressed sentences while their neural responses were recorded using fMRI. The results showed that rapid learning of the time-compressed speech occurred during presentation of the first block of 16 sentences and was associated with increased activation in left and right auditory association cortices and in left ventral premotor cortex. These findings suggest that the ability to adapt to a distorted speech signal may, in part, rely on mapping novel acoustic patterns onto existing articulatory motor plans, consistent with the idea that speech perception involves integrating multi-modal information, including auditory and motoric cues. (C) 2009 Elsevier Inc. All rights reserved.
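Time compression shortens a signal while roughly preserving its spectral content. A naive overlap-add sketch is shown below; the frame and hop sizes are arbitrary illustrative choices, not taken from the study:

```python
import math

def time_compress(x, rate, frame=256, hop=64):
    """Shorten a signal by `rate` (rate=2 halves the duration) using naive
    windowed overlap-add: frames are read at a faster analysis hop but
    written at the normal synthesis hop, roughly preserving pitch."""
    win = [0.5 - 0.5 * math.cos(2 * math.pi * n / frame) for n in range(frame)]
    ana_hop = int(hop * rate)          # read position advances faster...
    out_len = int(len(x) / rate) + frame
    out = [0.0] * out_len
    norm = [0.0] * out_len             # window-sum for amplitude normalisation
    pos_in = pos_out = 0
    while pos_in + frame <= len(x) and pos_out + frame <= out_len:
        for n in range(frame):
            out[pos_out + n] += x[pos_in + n] * win[n]
            norm[pos_out + n] += win[n]
        pos_in += ana_hop              # ...than the write position
        pos_out += hop
    return [o / m if m > 1e-8 else 0.0 for o, m in zip(out, norm)]

# A 440 Hz test tone, compressed to half its duration.
x = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]
y = time_compress(x, 2.0)
```

A production implementation would use WSOLA or a phase vocoder to align overlapping frames and avoid the phase discontinuities this naive version introduces.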

    Force Amplitude Modulation of Tongue and Hand Movements

    Rapid, precise movements of the hand and tongue are necessary to complete a wide range of tasks in everyday life. However, the understanding of normal neural control of force production is limited, particularly for the tongue. Functional neuroimaging studies of incremental hand pressure production in healthy adults revealed scaled activations in the basal ganglia, but no imaging studies of tongue force regulation have been reported. The purposes of this study were (1) to identify the neural substrates controlling tongue force for speech and nonspeech tasks, (2) to determine which activations scaled to the magnitude of force produced, and (3) to assess whether positional modifications influenced maximum pressures and accuracy of pressure target matching for hand and tongue movements. Healthy older adults compressed small plastic bulbs in the oral cavity (for speech and nonspeech tasks) and in the hand at specified fractions of maximum voluntary contraction while magnetic resonance images were acquired. Volume of interest analysis at individual and group levels outlined a network of neural substrates controlling tongue speech and nonspeech movements. Repeated measures analysis revealed differences in percentage signal change and activation volume across task and effort level in some brain regions. Actual pressures and the accuracy of pressure matching were influenced by effort level in all tasks and by body position in the hand squeeze task. The current results can serve as a basis of comparison for tongue movement control in individuals with neurological disease. Group differences in motor control mechanisms may help explain the differential response of limb and tongue movements to medical interventions (as occurs in Parkinson disease, PD) and ultimately may lead to more focused intervention for dysarthria in several conditions such as PD.
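The pressure-target design — efforts at specified fractions of maximum voluntary contraction (MVC), scored by how closely the produced pressure matches the target — can be sketched as follows; the MVC value, target fractions, and percentage-error metric are illustrative assumptions, not values from the study:

```python
def pressure_targets(mvc_kpa, fractions=(0.1, 0.25, 0.5)):
    """Target pressures at specified fractions of MVC, in the MVC's units."""
    return [mvc_kpa * f for f in fractions]

def matching_error_pct(produced, target):
    """Signed accuracy of one matching attempt, as a percentage of target."""
    return 100.0 * (produced - target) / target

targets = pressure_targets(40.0)             # e.g. a 40 kPa tongue-press MVC
err = matching_error_pct(11.0, targets[1])   # 11 kPa produced vs 10 kPa target
```

Scoring each attempt against its fraction-of-MVC target is what allows accuracy to be compared across effort levels, tasks, and body positions.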

    Comprehending auditory speech: previous and potential contributions of functional MRI

    Functional neuroimaging revolutionised the study of human language in the late twentieth century, allowing researchers to investigate its underlying cognitive processes in the intact brain. Here, we review how functional MRI (fMRI) in particular has contributed to our understanding of speech comprehension, with a focus on studies of intelligibility. We highlight the use of carefully controlled acoustic stimuli to reveal the underlying hierarchical organisation of speech processing systems and cortical (a)symmetries, and discuss the contributions of novel design and analysis techniques to the contextualisation of perisylvian regions within wider speech processing networks. Within this, we outline the methodological challenges of fMRI as a technique for investigating speech and describe the innovations that have overcome or mitigated these difficulties. Focussing on multivariate approaches to fMRI, we highlight how these techniques have allowed both local neural representations and broader scale brain systems to be described

    You took the words right out of my mouth: Dual-fMRI reveals intra- and inter-personal neural processes supporting verbal interaction.

    Verbal communication relies heavily upon mutual understanding, or common ground. Inferring the intentional states of our interaction partners is crucial in achieving this, and social neuroscience has begun elucidating the intra- and inter-personal neural processes supporting such inferences. Typically, however, neuroscientific paradigms lack the reciprocal to-and-fro characteristic of social communication, offering little insight into the way these processes operate online during real-world interaction. In the present study, we overcame this by developing a “hyperscanning” paradigm in which pairs of interactants could communicate verbally with one another in a joint-action task whilst both underwent functional magnetic resonance imaging simultaneously. Successful performance on this task required both interlocutors to predict their partner's upcoming utterance in order to converge on the same word as each other over recursive exchanges, based only on one another's prior verbal expressions. By applying various levels of analysis to behavioural and neuroimaging data acquired from 20 dyads, three principal findings emerged: First, interlocutors converged frequently within the same semantic space, suggesting that mutual understanding had been established. Second, assessing the brain responses of each interlocutor as they planned their upcoming utterances on the basis of their co-player's previous word revealed the engagement of the temporo-parietal junction (TPJ), precuneus and dorso-lateral pre-frontal cortex; moreover, responses in the precuneus were modulated positively by the degree of semantic convergence achieved on each round, and effective connectivity among these regions indicates the crucial role of the right TPJ in this process, consistent with the Nexus model. Third, neural signals within certain nodes of this network became aligned between interacting interlocutors. 
We suggest this reflects an interpersonal neural process through which interactants infer and align to one another's intentional states whilst they establish a common ground.
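The "degree of semantic convergence" scored per round could, in principle, be operationalised as similarity between word vectors in a semantic space. The sketch below uses cosine similarity with toy, hand-made vectors; the study's actual semantic-space construction is not detailed here:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy three-dimensional "semantic space"; the values are purely illustrative.
vec = {
    "dog": [0.9, 0.1, 0.0],
    "cat": [0.8, 0.2, 0.1],
    "car": [0.0, 0.1, 0.9],
}
convergence = cosine(vec["dog"], vec["cat"])  # close in meaning: high score
```

A per-round convergence score of this kind is the sort of quantity that could be entered as a parametric modulator of the precuneus response described above.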

    Evaluation of acoustic noise in magnetic resonance imaging

    Magnetic resonance imaging (MRI) is a technique in which strong static and dynamic magnetic fields are used to create virtual slices of the human body. The process of MR imaging is associated with several health and safety issues which may negatively affect patients and radiological health workers. Potential hazards include the biological effects of both the static and dynamic magnetic fields, the torques that the magnetic fields exert on ferromagnetic objects, thermal effects, and high acoustic sound pressures. The subject of this dissertation is the evaluation and modification of the acoustic noise generated during MRI.
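Evaluating acoustic noise in practice means expressing measured sound pressures as levels in decibels relative to the standard 20 µPa reference; a minimal sketch (frequency weighting, e.g. A-weighting, is omitted):

```python
import math

P_REF = 20e-6  # reference sound pressure in air: 20 µPa

def spl_db(p_rms_pa):
    """Sound pressure level in dB SPL for an RMS pressure in pascals."""
    return 20.0 * math.log10(p_rms_pa / P_REF)

# An RMS pressure of 2 Pa corresponds to 100 dB SPL, a level that MRI
# gradient noise can reach or exceed.
level = spl_db(2.0)
```

Because the scale is logarithmic, each factor of 10 in pressure adds 20 dB, which is why even modest reductions in gradient-coil vibration can yield audibly meaningful improvements.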