Dishabituation of the BOLD response to speech sounds
BACKGROUND: Neural systems show habituation responses at multiple levels, including relatively abstract language categories. Dishabituation (responses to non-habituated stimuli) can provide a window into the structure of these categories, without requiring an overt task. METHODS: We used an event-related fMRI design with short-interval habituation trials, in which trains of stimuli were presented passively during 1.5-second intervals of relative silence between clustered scans. Trains of four identical stimuli (standard trials) and trains of three identical stimuli followed by a stimulus from a different phonetic category (deviant trials) were presented. This paradigm allowed us to measure and compare the time course of overall responses to speech, and responses to phonetic change. RESULTS: Comparisons between responses to speech and silence revealed strong responses throughout the extent of superior temporal gyrus (STG) bilaterally. Comparisons between deviant and standard trials revealed dishabituation responses in a restricted region of left posterior STG, near the border with supramarginal gyrus (SMG). Novelty responses to deviant trials were also observed in right frontal regions and hippocampus. CONCLUSION: A passive dishabituation paradigm provides results similar to studies requiring overt responses. This paradigm can readily be extended to the study of pre-attentive processing of speech in populations such as children and second-language learners, whose overt behavior is often difficult to interpret because of ancillary task demands.
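The standard/deviant trial structure described in METHODS can be sketched in a few lines; the stimulus labels below ("ba", "da") are hypothetical stand-ins for the study's phonetic categories, not its actual materials:

```python
import random

random.seed(2)

# Hypothetical labels standing in for two phonetic categories;
# the study used recorded speech sounds, not these tokens.
STANDARD, DEVIANT = "ba", "da"

def make_trial(kind):
    """Four-stimulus train: all identical (standard trial), or three
    identical stimuli followed by one from a different phonetic
    category (deviant trial)."""
    if kind == "standard":
        return [STANDARD] * 4
    return [STANDARD] * 3 + [DEVIANT]

# A short randomized run mixing the two trial types.
trials = [make_trial(random.choice(["standard", "deviant"])) for _ in range(8)]
```

Comparing the BOLD response to the final stimulus across the two trial types is what isolates the dishabituation effect.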
Grammatical Bracketing Determines Learning of Non-adjacent Dependencies
Grammatical dependencies often involve elements that are not adjacent. However, most experiments in which non-adjacent dependencies (NADs) are learned bracketed the dependent material with pauses, which is not how dependencies appear in natural language. Here we report successful learning of embedded NADs without pause bracketing. Instead, we induced learners to compute structure in an artificial language by entraining them through processing English sentences. We also found that learning becomes difficult when grammatical entrainment causes learners to compute boundaries that are misaligned with NAD structures. In sum, we demonstrated that grammatical entrainment can induce boundaries that carry over to reveal structures in novel language materials, and this effect can be used to induce learning of non-adjacent dependencies.
Learning Non-Adjacent Dependencies in Continuous Presentation of an Artificial Language
Many grammatical dependencies in natural language involve elements that are not adjacent, such as between the subject and verb in "the child always runs". To date, most experiments showing evidence of learning non-adjacent dependencies have used artificial languages in which the to-be-learned dependencies are presented in isolation, as the minimal sequences that contain the dependent elements. However, dependencies in natural language are not typically isolated in this way. In this study we exposed learners to non-adjacent dependencies in long sequences of words. We accelerated the speed of presentation, and learners showed evidence of learning the non-adjacent dependencies. These results challenge previously proposed pause-based positional mechanisms for learning non-adjacent dependencies.
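The A-X-B structure at issue in both studies above can be made concrete with a small stream generator; the frames and middle items below are hypothetical illustrations, not the studies' actual materials:

```python
import random

random.seed(1)

# Hypothetical A_X_B frames: the first element (A) predicts the third (B)
# across a variable middle element (X). Item names are illustrative only.
frames = {"pel": "rud", "vot": "jic"}
middles = ["wadim", "kicey", "puser", "fengle"]

def make_stream(n_triplets):
    """Continuous stream of A-X-B triplets, concatenated without pauses
    so the dependencies are not bracketed or isolated."""
    stream = []
    for _ in range(n_triplets):
        a = random.choice(sorted(frames))
        stream += [a, random.choice(middles), frames[a]]
    return stream

stream = make_stream(6)
```

Because the triplets are concatenated back to back, a learner cannot rely on pauses to segment them; the A-B contingency must be extracted across the variable middle element.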
Auditory Selective Attention to Speech Modulates Activity in the Visual Word Form Area
Selective attention to speech versus nonspeech signals in complex auditory input could produce top-down modulation of cortical regions previously linked to perception of spoken, and even visual, words. To isolate such top-down attentional effects, we contrasted 2 equally challenging active listening tasks, performed on the same complex auditory stimuli (words overlaid with a series of 3 tones). Instructions required selectively attending to either the speech signals (in service of rhyme judgment) or the melodic signals (tone-triplet matching). Selective attention to speech, relative to attention to melody, was associated with blood oxygenation level-dependent (BOLD) increases during functional magnetic resonance imaging (fMRI) in left inferior frontal gyrus, temporal regions, and the visual word form area (VWFA). Further investigation of the activity in visual regions revealed overall deactivation relative to baseline rest for both attention conditions. Topographic analysis demonstrated that while attending to melody drove deactivation equivalently across all fusiform regions of interest examined, attending to speech produced a regionally specific modulation: deactivation of all fusiform regions, except the VWFA. Results indicate that selective attention to speech can topographically tune extrastriate cortex, leading to increased activity in VWFA relative to surrounding regions, in line with the well-established connectivity between areas related to spoken and visual word perception in skilled readers.
Stress Priming in Reading and the Selective Modulation of Lexical and Sub-Lexical Pathways
Four experiments employed a priming methodology to investigate different mechanisms of stress assignment and how they are modulated by lexical and sub-lexical mechanisms in reading aloud in Italian. Lexical stress is unpredictable in Italian and requires lexical look-up. The most frequent (dominant) stress pattern falls on the penultimate syllable [laVOro (work)], while stress on the antepenultimate syllable [MAcchina (car)] is relatively less frequent (non-dominant). Word and pseudoword naming responses primed by words with non-dominant stress, which require whole-word knowledge to be read correctly, were compared to those primed by nonwords. The percentage of errors to words and the percentage of dominant stress responses to nonwords were measured. In Experiments 1 and 2, stress errors increased for non-dominant stress words primed by nonwords, as compared to when they were primed by words. These results could be attributed to greater activation of sub-lexical codes, and an associated tendency to assign the dominant stress pattern by default in the nonword prime condition. Alternatively, they may have been the consequence of prosodic priming, inducing more errors on trials in which the stress patterns of primes and targets were not congruent. The two interpretations were investigated in Experiments 3 and 4. The results overall suggested a limited role of the default metrical pattern in word pronunciation, and showed a clear effect of prosodic priming, but only when the sub-lexical mechanism prevailed.
Ultralight vector dark matter search using data from the KAGRA O3GK run
Among the various candidates for dark matter (DM), ultralight vector DM can be probed by laser interferometric gravitational wave detectors through the measurement of oscillating length changes in the arm cavities. In this context, KAGRA has a unique feature due to differing compositions of its mirrors, enhancing the signal of vector DM in the length change in the auxiliary channels. Here we present the result of a search for U(1)B−L gauge boson DM using the KAGRA data from auxiliary length channels during the first joint observation run together with GEO600. By applying our search pipeline, which takes into account the stochastic nature of ultralight DM, upper bounds on the coupling strength between the U(1)B−L gauge boson and ordinary matter are obtained for a range of DM masses. While our constraints are less stringent than those derived from previous experiments, this study demonstrates the applicability of our method to the lower-mass vector DM search, which is made difficult in this measurement by the short observation time compared to the auto-correlation time scale of DM.
Dynamics of speech sound categorization for continua defined by spectral vs. duration contrast
Statistical learning of auditory patterns as trajectories through a perceptually defined similarity space
Most accounts of statistical learning (e.g., Saffran et al., 1996) assume that the learner computes co-occurrence statistics over units, such as syllables, that abstract away from the physical features of the input. This assumption need not hold when the units are underlearned, unfamiliar, or uncategorizable. We tested statistical learning with variable, unfamiliar units and show that if the featural variation is small, adults treat words differently from part-words and non-words, as if learning were occurring over abstract units. When the featural variation is large, and the categorical boundaries are unclear, participants can still learn statistical regularities defined over trajectories through this space: words, part-words, and even non-words that follow trajectories consistent with the familiarization set are rated as equally familiar, whereas non-words that take trajectories opposite the familiarized direction are rated as less familiar. Conceiving of statistical learning over trajectories through perceptual space explains the results under both conditions.
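The co-occurrence statistics that standard accounts assume are typically transitional probabilities between adjacent syllables. A minimal sketch, with hypothetical tri-syllabic "words" in the spirit of Saffran et al. (1996) rather than this study's actual materials:

```python
import random
from collections import Counter

random.seed(0)

def transitional_probabilities(syllables):
    """P(next | current) for each adjacent syllable pair in a stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Hypothetical tri-syllabic words concatenated in random order with no
# pauses; segmentation must come from the statistics alone.
words = [["tu", "pi", "ro"], ["go", "la", "bu"], ["bi", "da", "ku"]]
stream = [syl for w in random.choices(words, k=200) for syl in w]

tp = transitional_probabilities(stream)
# Within-word transitions are perfectly predictive; transitions across
# word boundaries are split among the possible following words.
```

The abstract's point is that this computation presupposes stable, categorizable units; when featural variation blurs the categories, learners may instead track trajectories through a perceptual similarity space.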