
    Exposure of Preadolescent Children to Nonnative Accents and its Effect on Linguistic Trajectory

    Children and adults are often presented with accents outside their realm of familiarity. The purpose of this study was to examine how exposure of preadolescent children to nonnative accents during their linguistic development increases their linguistic flexibility in adulthood. By examining the processes of speech intake, the stages of linguistic development, and the role of experience versus perception, the research clarifies which elements most significantly alter a listener’s ability to interpret unfamiliar speech and during which periods a person is most developmentally receptive to a streamlined understanding of nonnative speech. This study challenges the argument that direct exposure is the only way to understand nonnative accents and the argument that all adults have the ability to decipher unfamiliar speech. Through exposure, which leads to familiarity and the development of mechanisms to isolate essential and nonessential linguistic information, listeners increase their store of context and speaker-specific characteristics and their ability to navigate nonnative speech.

    American English Speakers' Perception of Non-Native Phonotactic Constraints: The Influence of Training in Phonology

    The purpose of the present study was to examine the differences between perceptions of non-native phonotactic rules and constraints by monolingual English-speaking undergraduate students in a program of communication disorders who had taken and passed a course in the study of phonology and by undergraduate students in communication disorders who had not yet taken a course in phonology. Participants listened to audio recordings of words from Hindi, Hmong, Kurdish, Russian, and Swedish recorded by speakers fluent in those languages. Each of the words contained at least one phonotactic constraint that is not permitted in American English phonology. Participants were instructed to write exactly what they heard after each word in the recordings, and their perceptions of the illegal constraints were scored as correct or incorrect. No significant difference was found between the students who had taken a phonology course and the students who had not. Additionally, participants in neither group performed significantly better on any one language than on the others, though Group A performed best on Swedish, while Group B performed best on Russian. The most common misperception was the omission of one phoneme when two were illegally combined. The results of this study, though not consistent with anticipated results, have many implications for issues concerning the linguistic diversity of the United States, among other language-related issues.
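The scoring procedure described above — marking each written response correct or incorrect against its target word — can be sketched in a few lines. This is an illustrative reconstruction, not the study's actual code, and the stimuli shown are invented placeholders rather than the real Hindi, Hmong, Kurdish, Russian, or Swedish items.

```python
from statistics import mean

def score_transcriptions(responses, targets):
    """Score each written response against its target: 1 = correct, 0 = incorrect."""
    return [int(r.strip().lower() == t.strip().lower())
            for r, t in zip(responses, targets)]

# Invented placeholder items; the middle response omits one phoneme of an
# illegal two-phoneme combination, the most common misperception reported.
targets = ["mbali", "tlaso", "sgren"]
responses = ["mbali", "taso", "sgren"]

scores = score_transcriptions(responses, targets)
print(scores)        # [1, 0, 1]
print(mean(scores))  # per-participant accuracy
```

Per-participant accuracies computed this way could then feed a between-groups test of the phonology-trained versus untrained students.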

    Processing of false belief passages during natural story comprehension: An fMRI study

    The neural correlates of theory of mind (ToM) are typically studied using paradigms which require participants to draw explicit, task-related inferences (e.g., in the false belief task). In a natural setup, such as listening to stories, false belief mentalizing occurs incidentally as part of narrative processing. In our experiment, participants listened to auditorily presented stories with false belief passages (implicit false belief processing) and immediately after each story answered comprehension questions (explicit false belief processing), while neural responses were measured with functional magnetic resonance imaging (fMRI). All stories included (among other situations) one false belief condition and one closely matched control condition. For the implicit ToM processing, we modeled the hemodynamic response during the false belief passages in the story and compared it to the hemodynamic response during the closely matched control passages. For implicit mentalizing, we found activation in typical ToM processing regions, that is, the angular gyrus (AG), superior medial frontal gyrus (SmFG), precuneus (PCUN), and middle temporal gyrus (MTG), as well as in the inferior frontal gyrus (IFG) bilaterally. For explicit ToM, we only found AG activation. The conjunction analysis highlighted the left AG and MTG as well as the bilateral IFG as overlapping ToM processing regions for both implicit and explicit modes. Implicit ToM processing during listening to false belief passages recruits the left SmFG and bilateral PCUN in addition to the “mentalizing network” known from explicit processing tasks.

    Brain networks involved in accented speech processing

    We investigated the neural correlates of accented speech processing (ASP) with an fMRI study that overcame prior limitations in this line of research: we preserved intelligibility by using two regional accents that differ in prosody but only mildly in phonetics (Latin American and Castilian Spanish), and we used independent component analysis to identify brain networks as opposed to isolated regions. ASP engaged a speech perception network composed primarily of structures related to the processing of prosody (cerebellum, putamen, and thalamus). This network also included anterior fronto-temporal areas associated with lexical-semantic processing and a portion of the inferior frontal gyrus linked to executive control. ASP also recruited domain-general executive control networks associated with cognitive demands (dorsal attentional and default mode networks) and with the processing of salient events (salience network). Finally, the reward network showed a preference for the native accent, presumably revealing people's sense of social belonging.

    Brain-behavior relationships in incidental learning of non-native phonetic categories

    Available online 12 September 2019. Research has implicated the left inferior frontal gyrus (LIFG) in mapping acoustic-phonetic input to sound category representations, both in native speech perception and non-native phonetic category learning. At issue is whether this sensitivity reflects access to phonetic category information per se or to explicit category labels, the latter often being required by experimental procedures. The current study employed an incidental learning paradigm designed to increase sensitivity to a difficult non-native phonetic contrast without inducing explicit awareness of the categorical nature of the stimuli. Functional MRI scans revealed frontal sensitivity to phonetic category structure both before and after learning. Additionally, individuals who succeeded most on the learning task showed the largest increases in frontal recruitment after learning. Overall, results suggest that processing novel phonetic category information entails a reliance on frontal brain regions, even in the absence of explicit category labels. This research was supported by NIH grant R01 DC013064 to EBM and NIH NIDCD Grant R01 DC006220 to SEB. The authors thank F. Sayako Earle for assistance with stimulus development; members of the Language and Brain lab for help with data collection and their feedback throughout the project; Elisa Medeiros for assistance with collection of fMRI data; Paul Taylor for assistance with neuroimaging analyses; and attendees of the 2016 Meeting of the Psychonomic Society and the 2017 Meeting of the Society for Neurobiology of Language for helpful feedback on this project. We also extend thanks to two anonymous reviewers for helpful feedback on a previous version of this manuscript.

    Predicting and imagining language

    To what extent is predicting language akin to imagining language? Recently, researchers have argued that covert simulation of the production system underlies both articulation imagery and predicting what somebody is about to say. Moreover, experimental evidence implicates potentially similar production-related mechanisms in prediction during language comprehension and in mental imagery tasks. We discuss evidence in favour of this proposal and argue that imagining others’ utterances can also implicate covert simulation. Finally, we briefly review evidence that speakers in joint language tasks cannot help but mentally represent (i.e., imagine) whether others are engaging in language production, and that they do so using mechanisms that are also implicated in preparing to speak.

    Lower Beta: A Central Coordinator of Temporal Prediction in Multimodal Speech

    How the brain decomposes and integrates information in multimodal speech perception is linked to oscillatory dynamics. However, how speech takes advantage of redundancy between different sensory modalities, and how this translates into specific oscillatory patterns, remains unclear. We address the role of lower beta activity (~20 Hz), generally associated with motor functions, as an amodal central coordinator that receives bottom-up delta-theta copies from specific sensory areas and generates top-down temporal predictions for auditory entrainment. Dissociating temporal prediction from entrainment may explain how and why visual input benefits speech processing rather than adding cognitive load in multimodal speech perception. On the one hand, body movements convey prosodic and syllabic features at delta and theta rates (i.e., 1–3 Hz and 4–7 Hz). On the other hand, the natural precedence of visual input before auditory onsets may prepare the brain to anticipate and facilitate the integration of auditory delta-theta copies of the prosodic-syllabic structure. Here, we identify three fundamental criteria based on recent evidence and hypotheses, which support the notion that lower motor beta frequency may play a central and generic role in temporal prediction during speech perception. First, beta activity must respond to rhythmic stimulation across modalities. Second, beta power must respond to biological motion and speech-related movements conveying temporal information in multimodal speech processing. Third, temporal prediction may recruit a communication loop between motor and primary auditory cortices (PACs) via delta-to-beta cross-frequency coupling. We discuss evidence related to each criterion and extend these concepts to a beta-motivated framework of multimodal speech processing.
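The delta-to-beta cross-frequency coupling invoked in the third criterion is commonly quantified as phase–amplitude coupling. The sketch below computes a Canolty-style modulation index on synthetic signals; it is a minimal illustration that assumes phase and amplitude time series have already been extracted (in practice via band-pass filtering and a Hilbert transform, which this sketch sidesteps by constructing them directly).

```python
import cmath
import math

def modulation_index(phases, amplitudes):
    """Canolty-style phase-amplitude coupling: |mean(A(t) * e^{i*phi(t)})|."""
    z = sum(a * cmath.exp(1j * p) for p, a in zip(phases, amplitudes))
    return abs(z / len(phases))

# Synthetic 4-s recording at 500 Hz: instantaneous phase of a 2 Hz delta rhythm
fs, n = 500, 2000
delta_phase = [(2 * math.pi * 2 * i / fs) % (2 * math.pi) - math.pi
               for i in range(n)]

# Beta-band envelope locked to delta phase (coupled) vs. flat (uncoupled)
coupled_amp = [1 + math.cos(p) for p in delta_phase]
uncoupled_amp = [1.0] * n

print(modulation_index(delta_phase, coupled_amp))    # ~0.5: strong coupling
print(modulation_index(delta_phase, uncoupled_amp))  # ~0: no coupling
```

A high index means beta amplitude systematically peaks at a preferred delta phase, which is the signature the proposed motor-auditory communication loop would leave in the data.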

    Fiber Pathways for Language in the Developing Brain: A Diffusion Tensor Imaging (DTI) Study

    The present study characterized two fiber pathways important for language, the superior longitudinal fasciculus/arcuate fasciculus (SLF/AF) and the frontal aslant tract (FAT), and related these tracts to speech, language, and literacy skill in children five to eight years old. We used Diffusion Tensor Imaging (DTI) to characterize the fiber pathways and administered several language assessments. The FAT was identified for the first time in children. Results showed no age-related change in integrity of the FAT, but did show age-related change in the left (but not right) SLF/AF. Moreover, only the integrity of the right FAT was related to phonology; the FAT was not related to audiovisual speech perception, articulation, language, or literacy. Both the left and right SLF/AF were related to language measures, specifically receptive and expressive language, and language content. These findings are important for understanding the neurobiology of language in the developing brain, and can be incorporated within contemporary dorsal-ventral-motor models for language.
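Tract "integrity" in DTI studies such as this one is most often indexed by fractional anisotropy (FA), computed from the three eigenvalues of the diffusion tensor at each voxel. A minimal sketch of the standard formula follows; the eigenvalues are invented illustrative values, not data from this study.

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """FA from the diffusion tensor's three eigenvalues (Basser-Pierpaoli formula)."""
    md = (l1 + l2 + l3) / 3.0  # mean diffusivity
    num = math.sqrt((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
    den = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    return math.sqrt(1.5) * num / den

# Invented eigenvalues (units of 10^-3 mm^2/s)
print(fractional_anisotropy(1.7, 0.3, 0.3))    # anisotropic voxel (coherent tract): high FA
print(fractional_anisotropy(1.0, 0.95, 0.9))   # near-isotropic voxel: low FA
```

FA ranges from 0 (diffusion equal in all directions) to 1 (diffusion along a single axis), so higher mean FA along the SLF/AF or FAT is read as greater microstructural coherence of the tract.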

    Functional brain outcomes of L2 speech learning emerge during sensorimotor transformation

    Sensorimotor transformation (ST) may be a critical process in mapping perceived speech input onto non-native (L2) phonemes, in support of subsequent speech production. Yet, little is known concerning the role of ST with respect to L2 speech, particularly where learned L2 phones (e.g., vowels) must be produced in more complex lexical contexts (e.g., multi-syllabic words). Here, we charted the behavioral and neural outcomes of producing trained L2 vowels at word level, using a speech imitation paradigm and functional MRI. We asked whether participants would be able to faithfully imitate trained L2 vowels when they occurred in non-words of varying complexity (one or three syllables). Moreover, we related individual differences in imitation success during training to BOLD activation during ST (i.e., pre-imitation listening), and during later imitation. We predicted that superior temporal and peri-Sylvian speech regions would show increased activation as a function of item complexity and non-nativeness of vowels, during ST. We further anticipated that pre-scan acoustic learning performance would predict BOLD activation for non-native (vs. native) speech during ST and imitation. We found individual differences in imitation success for training on the non-native vowel tokens in isolation; these were preserved in a subsequent task, during imitation of mono- and trisyllabic words containing those vowels. fMRI data revealed a widespread network involved in ST, modulated by both vowel nativeness and utterance complexity: superior temporal activation increased monotonically with complexity, showing greater activation for non-native than native vowels when presented in isolation and in trisyllables, but not in monosyllables. Individual differences analyses showed that learning versus lack of improvement on the non-native vowel during pre-scan training predicted increased ST activation for non-native compared with native items, at insular cortex, pre-SMA/SMA, and cerebellum. Our results underscore the importance of ST as a process underlying successful imitation of non-native speech.