125 research outputs found

    ROI-Based Analysis of Functional Imaging Data

    In this technical report, we present fMRI analysis techniques that test functional hypotheses at the region-of-interest (ROI) level. An SPM-compatible Matlab toolbox has been developed that allows the creation of subject-specific ROI masks based on anatomical markers and the testing of functional hypotheses on the regional response using multivariate time-series analysis techniques. The combined application of subject-specific ROI definition and region-level functional analysis is shown to compensate appropriately for inter-subject anatomical variability, offering finer localization and greater sensitivity to task-related effects than standard techniques based on whole-brain normalization and voxel- or cluster-level functional analysis, while providing a more direct link between discrete brain-region hypotheses and the statistical analyses used to test them. National Institutes of Health (R29 DC02852, R01 DC02852)
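    The core of a region-level test like the one described is simple: average the voxel time-series inside the subject-specific mask, then fit the design matrix to that single regional signal. The sketch below is a minimal illustration of that idea, not the toolbox's actual implementation; the function name and interface are assumptions.

```python
import numpy as np

def roi_glm(bold, roi_mask, design):
    """Test a task effect on the mean time-series of one ROI.

    bold     : (n_timepoints, n_voxels) array of voxel time-series
    roi_mask : (n_voxels,) boolean mask defining the subject-specific ROI
    design   : (n_timepoints, n_regressors) design matrix (task + confounds)

    Returns least-squares betas and t-statistics for the regional response.
    """
    y = bold[:, roi_mask].mean(axis=1)            # regional mean time-series
    betas, res, rank, _ = np.linalg.lstsq(design, y, rcond=None)
    dof = len(y) - rank                           # residual degrees of freedom
    sigma2 = res[0] / dof                         # residual variance
    cov = sigma2 * np.linalg.pinv(design.T @ design)
    t = betas / np.sqrt(np.diag(cov))
    return betas, t
```

    Averaging before fitting is what buys the sensitivity gain over voxel-wise tests: noise that is independent across voxels shrinks by roughly the square root of the ROI size.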

    Neural phase locking predicts BOLD response in human auditory cortex

    Natural environments elicit both phase-locked and non-phase-locked neural responses to the stimulus in the brain. To date, interpretation of the BOLD signal has been based on its association with the non-phase-locked power of high-frequency local field potentials (LFPs), or with the related spiking activity of single neurons or groups of neurons; previous studies have not examined whether phase-locked responses predict the BOLD signal. We examined the relationship between the BOLD response and LFPs at multiple corresponding points in the auditory cortex of the same nine human subjects, using amplitude-modulated pure-tone stimuli long enough to allow analysis of phase locking in the sustained period without contamination from the onset response. The results demonstrate that both phase locking at the modulation frequency and its harmonics and the oscillatory power in the gamma/high-gamma bands are required to predict the BOLD response. Biophysical models of BOLD signal generation in auditory cortex therefore require revision to incorporate both phase locking to rhythmic sensory stimuli and power changes in ensemble neural activity.
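    The phase-locked/non-phase-locked distinction rests on a standard trick: averaging LFP segments across trials cancels activity whose phase varies from trial to trial, so the spectrum of the trial average isolates the phase-locked component at the modulation frequency and its harmonics. A minimal sketch of that measurement (the function and its parameters are illustrative, not the paper's pipeline):

```python
import numpy as np

def phase_locked_amplitude(trials, fs, f_mod, n_harmonics=3):
    """Amplitude of the phase-locked response at the modulation
    frequency and its first harmonics.

    trials : (n_trials, n_samples) LFP segments, time-locked to the stimulus
    fs     : sampling rate in Hz
    f_mod  : amplitude-modulation frequency in Hz
    """
    evoked = trials.mean(axis=0)                      # phase-locked part survives averaging
    spec = np.abs(np.fft.rfft(evoked)) / len(evoked)  # one-sided amplitude spectrum
    freqs = np.fft.rfftfreq(len(evoked), d=1.0 / fs)
    idx = [np.argmin(np.abs(freqs - k * f_mod)) for k in range(1, n_harmonics + 1)]
    return spec[idx].sum()
```

    A non-phase-locked 70 Hz component with random phase per trial largely vanishes from this measure, while a consistently phased 40 Hz response survives, which is the operational difference the abstract relies on.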

    Beyond production: Brain responses during speech perception in adults who stutter

    Developmental stuttering is a speech disorder that disrupts the ability to produce speech fluently. While stuttering is typically diagnosed based on behavior during speech production, some models suggest that it involves more central representations of language and may thus affect language perception as well. Here we tested the hypothesis that developmental stuttering implicates neural systems involved in language perception, using a task that manipulates comprehensibility without an overt speech-production component. We used functional magnetic resonance imaging to measure blood oxygenation level dependent (BOLD) signals in adults who do and do not stutter while they were engaged in an incidental speech perception task. We found that speech perception evokes stronger activation in adults who stutter (AWS) than in controls, specifically in the right inferior frontal gyrus (RIFG) and in left Heschl's gyrus (LHG). Significant differences were additionally found in the lateralization of response in the inferior frontal cortex: AWS showed bilateral inferior frontal activity, while controls showed a left-lateralized pattern of activation. These findings suggest that developmental stuttering is associated with an imbalanced neural network for speech processing that is not limited to speech production but also affects cortical responses during speech perception.
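    Lateralization findings of this kind are conventionally quantified with a laterality index over homologous left/right regions. The formula below is the standard one in the fMRI literature; the abstract does not state which variant was used, so treat this as a generic illustration.

```python
def laterality_index(left, right):
    """Standard laterality index: +1 = fully left-lateralized,
    -1 = fully right-lateralized, 0 = perfectly bilateral.

    left, right : activation measures (e.g. supra-threshold voxel
    counts or mean betas) in homologous left and right ROIs.
    """
    return (left - right) / (left + right)
```

    A control-like pattern such as `laterality_index(80, 20)` gives 0.6 (clearly left-lateralized), while an AWS-like pattern such as `laterality_index(50, 48)` gives roughly 0.02 (bilateral).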

    Distinct Cortical Pathways for Music and Speech Revealed by Hypothesis-Free Voxel Decomposition

    The organization of human auditory cortex remains unresolved, due in part to the small stimulus sets common to fMRI studies and the overlap of neural populations within voxels. To address these challenges, we measured fMRI responses to 165 natural sounds and inferred canonical response profiles ("components") whose weighted combinations explained voxel responses throughout auditory cortex. This analysis revealed six components, each with interpretable response characteristics despite being unconstrained by prior functional hypotheses. Four components embodied selectivity for particular acoustic features (frequency, spectrotemporal modulation, pitch). Two others exhibited pronounced selectivity for music and speech, respectively, and were not explainable by standard acoustic features. Anatomically, music and speech selectivity concentrated in distinct regions of non-primary auditory cortex. However, music selectivity was weak in raw voxel responses, and its detection required a decomposition method. Voxel decomposition identifies the primary dimensions of response variation across natural sounds, revealing distinct cortical pathways for music and speech. National Eye Institute (Grant EY13455)

    Population receptive field estimates of human auditory cortex.

    Here we describe a method for measuring tonotopic maps and estimating bandwidth for voxels in human primary auditory cortex (PAC) using a modification of the population receptive field (pRF) model developed for retinotopic mapping in visual cortex by Dumoulin and Wandell (2008). The pRF method reliably estimates tonotopic maps in the presence of acoustic scanner noise and has two advantages over phase-encoding techniques. First, the stimulus design is flexible and need not be a frequency progression, reducing biases due to habituation, expectation, and estimation artifacts, as well as the effects of spatio-temporal BOLD nonlinearities. Second, the pRF method can provide estimates of bandwidth as a function of frequency. We find that bandwidth estimates are narrower for voxels within the PAC than in surrounding auditory-responsive regions (non-PAC).
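    In one dimension, a tonotopic pRF is just a tuning curve over (log-)frequency with a preferred frequency and a width, fitted per voxel by comparing predicted and measured responses. The grid-search sketch below is a simplified illustration of that fitting logic, assuming a Gaussian on a log2-frequency axis; it omits the HRF convolution a real fit would include.

```python
import numpy as np

def fit_prf(voxel_response, stim_freqs, centers, widths):
    """Grid-search a one-dimensional Gaussian pRF on the tonotopic axis.

    voxel_response : (n_stim,) measured response to each stimulus
    stim_freqs     : (n_stim,) stimulus frequencies in Hz
    centers, widths: candidate preferred frequencies (Hz) and tuning
                     widths (octaves) to search over

    Returns the (center, width) whose predicted tuning curve best
    correlates with the measured responses.
    """
    log_f = np.log2(stim_freqs)
    best, best_r = (None, None), -np.inf
    for c in centers:
        for w in widths:
            pred = np.exp(-0.5 * ((log_f - np.log2(c)) / w) ** 2)
            r = np.corrcoef(pred, voxel_response)[0, 1]
            if r > best_r:
                best, best_r = (c, w), r
    return best
```

    Because the predictions are built from arbitrary stimulus orderings, nothing in this fit requires a frequency progression, which is the flexibility the abstract highlights over phase encoding.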

    Neural Modeling and Imaging of the Cortical Interactions Underlying Syllable Production

    This paper describes a neural model of speech acquisition and production that accounts for a wide range of acoustic, kinematic, and neuroimaging data concerning the control of speech movements. The model is a neural network whose components correspond to regions of the cerebral cortex and cerebellum, including premotor, motor, auditory, and somatosensory cortical areas. Computer simulations of the model verify its ability to account for compensation to lip and jaw perturbations during speech. Specific anatomical locations of the model's components are estimated, and these estimates are used to simulate fMRI experiments of simple syllable production with and without jaw perturbations. National Institute on Deafness and Other Communication Disorders (R01 DC02852, R01 DC01925)
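    The compensation behavior the simulations reproduce comes from closed-loop sensory feedback control: a mismatch between the heard outcome and the auditory target drives a corrective motor command. The toy loop below illustrates only that control principle, in heavily simplified scalar form; it is not the model's actual architecture, and all names and values are assumptions.

```python
def simulate_perturbation(target, gain=0.5, shift=0.3, steps=20):
    """Minimal scalar sketch of feedback-based compensation: an
    auditory error (heard minus target) drives a corrective update
    to the motor command on each time step.

    target : desired auditory outcome (arbitrary units)
    shift  : constant perturbation added to the output (e.g. a jaw load)
    Returns the history of heard outcomes across time steps.
    """
    command = target
    history = []
    for _ in range(steps):
        heard = command + shift           # perturbed auditory feedback
        error = heard - target            # auditory error signal
        command -= gain * error           # corrective motor update
        history.append(heard)
    return history
```

    The first heard outcome is off by the full perturbation, and the error then decays geometrically (by a factor of 1 - gain per step), so the output converges back to the target despite the sustained load.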

    Neural representations used by brain regions underlying speech production

    Thesis (Ph.D.)--Boston University. Speech utterances are phoneme sequences but may not always be represented as such in the brain. For instance, electropalatography evidence indicates that as speaking rate increases, gestures within syllables are manipulated separately, but those within consonant clusters act as one motor unit. Moreover, speech-error data suggest that a syllable's phonological content is, at some stage, represented separately from its syllabic frame structure. These observations indicate that speech is neurally represented in multiple forms. This dissertation describes three studies exploring the representations of speech used by different brain regions to produce it. The first study investigated the motor units used to learn novel speech sequences. Subjects learned to produce a set of sequences with illegal consonant clusters (e.g. GVAZF) faster and more accurately than a similar novel set. Subjects then produced novel sequences that retained varying phonemic subsequences of previously learned sequences. Novel sequences were performed as quickly and accurately as learned sequences if they contained no novel consonant clusters, regardless of other phonemic content, implicating consonant clusters as important speech motor representations. The second study investigated the neural correlates of speech motor sequence learning. Functional magnetic resonance imaging (fMRI) revealed increased activity during novel sequence production in brain regions traditionally associated with non-speech motor sequence learning, including the basal ganglia and premotor cortex, as well as in regions associated with learning and updating speech motor representations based on sensory input, including the bilateral frontal operculum and left posterior superior temporal sulcus (pSTS). Behavioral learning measures correlated with increased responses to novel sequences in the frontal operculum and with white-matter integrity under the pSTS, implicating the functional and structural connectivity of these regions in learning success.

    Pitch discrimination in optimal and suboptimal acoustic environments : electroencephalographic, magnetoencephalographic, and behavioral evidence

    Pitch discrimination is a fundamental property of the human auditory system, and understanding its mechanisms is important from both theoretical and clinical perspectives. The discrimination of spectrally complex sounds is crucial in the processing of music and speech. Current methods of cognitive neuroscience can track the brain processes underlying sound processing with either precise temporal resolution (EEG and MEG) or precise spatial resolution (PET and fMRI), so a combination of techniques is required in contemporary auditory research. One obstacle to comparing EEG/MEG with fMRI, however, is the acoustic noise of the fMRI scanner. In the present thesis, EEG and MEG were used in combination with behavioral techniques, first, to define the ERP correlates of automatic pitch discrimination across a wide frequency range in adults and neonates and, second, to determine the effect of recorded fMRI acoustic noise on the adult ERP and ERF correlates during passive and active pitch discrimination. Pure tones and complex 3-harmonic sounds served as stimuli in oddball and matching-to-sample paradigms. The results suggest that pitch discrimination in adults, as reflected by MMN latency, is most accurate in the 1000-2000 Hz frequency range and is facilitated further by adding harmonics to the fundamental frequency. Newborn infants are able to discriminate a 20% frequency change in the 250-4000 Hz range, whereas discrimination of a 5% frequency change was unconfirmed. Furthermore, the effect of fMRI gradient noise on the automatic processing of pitch change was more prominent for tones with frequencies exceeding 500 Hz, which overlap with the spectral maximum of the noise. When the fundamental frequency of the tones was below the spectral maximum of the noise, fMRI noise had no effect on the MMN and P3a, whereas it delayed and suppressed the N1 and the exogenous N2. Noise also suppressed the N1 amplitude in a matching-to-sample working-memory task. However, the task-related difference observed in the N1 component, suggesting a functional dissociation between the processing of spatial and non-spatial auditory information, was partially preserved in the noise condition. Noise hampered feature-coding mechanisms more than it hampered the mechanisms of change detection, involuntary attention, and the segregation of the spatial and non-spatial domains of working memory. The data presented in the thesis can be used to develop clinical ERP-based frequency-discrimination protocols and combined EEG and fMRI experimental paradigms.
    The ability to tell high and low sounds apart is one of the brain's basic functions: without it we could not understand speech or enjoy music. Some patients and very young children cannot themselves report whether they hear a difference, but their brain responses can reveal it. Yet not enough is known about the brain processes underlying pitch discrimination even in healthy adults. More research on this topic is therefore needed, using modern brain-research methods such as event-related potentials (ERP) and functional magnetic resonance imaging (fMRI). The ERP method reveals when the brain detects a pitch difference, whereas fMRI reveals which brain areas are activated in doing so; combining the two methods can give a more comprehensive picture of the brain processes underlying pitch discrimination. fMRI has one drawback, however: the loud noise generated by the scanner, which can hamper auditory research. This dissertation examines how pitch discrimination can be demonstrated in the brains of adults and newborn infants, and how fMRI scanner noise affects the ERP responses elicited by auditory stimuli. The results show that the adult brain can discriminate frequency differences as small as 2.5%, but discrimination is faster at roughly 1000-2000 Hz than at lower or higher frequencies. The newborn brain discriminated only frequency changes larger than 20%. When fMRI scanner noise was played in the background, it attenuated brain responses to sounds at 500-2000 Hz more than to other sounds; it did not affect responses elicited by sounds below 500 Hz. Regardless of whether background noise was present, the ERP response elicited by a change in sound-source location was larger than that elicited by a change in pitch. This dissertation has shown that pitch discrimination can be studied effectively with the ERP method in both adults and infants, and that combining ERP and fMRI can be made more effective by taking the effects of scanner noise on ERP responses into account when designing experiments. The data can be used in designing complex pitch-discrimination experiments, for example with patients and children.
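    The MMN measure used throughout the thesis is a deviant-minus-standard difference wave, with discrimination accuracy read off its peak latency and amplitude. A minimal sketch of that computation (interface and names are illustrative):

```python
import numpy as np

def mismatch_negativity(standard_erps, deviant_erps):
    """Compute the MMN difference wave: deviant-minus-standard
    average ERP.

    standard_erps, deviant_erps : (n_trials, n_samples) single-trial
    epochs, time-locked to stimulus onset.
    Returns the difference wave and the sample index of its most
    negative point (the MMN peak latency).
    """
    diff = deviant_erps.mean(axis=0) - standard_erps.mean(axis=0)
    return diff, int(np.argmin(diff))
```

    Because both conditions share the scanner-noise background in the noise experiments, additive noise effects common to standards and deviants largely cancel in the subtraction, which is what makes the MMN comparatively robust to gradient noise for low-frequency tones.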

    Electrophysiological evidence for an early processing of human voices

    Background: Previous electrophysiological studies have identified a "voice specific response" (VSR) peaking around 320 ms after stimulus onset, a latency markedly longer than the 70 ms needed to discriminate living from non-living sound sources and the 150-200 ms needed for the processing of voice paralinguistic qualities. In the present study, we investigated whether an early electrophysiological difference between voice and non-voice stimuli could be observed. Results: ERPs were recorded from 32 healthy volunteers who listened to 200 ms long stimuli from three sound categories (voices, bird songs, and environmental sounds) whilst performing a pure-tone detection task. ERP analyses revealed voice/non-voice amplitude differences emerging as early as 164 ms post stimulus onset and peaking around 200 ms on fronto-temporal (positivity) and occipital (negativity) electrodes. Conclusion: Our electrophysiological results suggest a rapid brain discrimination of sounds of voice, termed the "fronto-temporal positivity to voices" (FTPV), at latencies comparable to the well-known face-preferential N170.
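    An onset latency like the 164 ms reported here is typically read off a pointwise comparison of the two condition averages: the earliest post-stimulus sample at which they diverge reliably. The sketch below uses a fixed amplitude threshold for simplicity, where a real analysis would use a statistical criterion per time point; names and parameters are illustrative.

```python
import numpy as np

def divergence_onset(cond_a, cond_b, threshold, fs):
    """Earliest post-stimulus latency (ms) at which two average ERPs
    differ by more than `threshold`.

    cond_a, cond_b : (n_samples,) average ERPs, time-locked so that
                     sample 0 is stimulus onset
    fs             : sampling rate in Hz
    Returns the latency in ms, or None if the waveforms never diverge.
    """
    diff = np.abs(cond_a - cond_b)
    above = np.flatnonzero(diff > threshold)   # samples exceeding the criterion
    return None if above.size == 0 else above[0] * 1000.0 / fs
```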