
    Towards a Revised Motor Theory of L2 Speech Perception


    Embodied & Situated Language Processing


    Neural overlap of L1 and L2 semantic representations across visual and auditory modalities: a decoding approach

    This study investigated whether brain activity in Dutch-French bilinguals during semantic access to concepts from one language could be used to predict neural activation during access to the same concepts from the other language, across different language modalities and tasks. This was tested using multi-voxel pattern analysis (MVPA), within and across language comprehension (word listening and word reading) and production (picture naming). It was possible to identify the picture or word named, read or heard in one language (e.g. maan, meaning moon) based on the brain activity in a distributed bilateral brain network while, respectively, naming, reading or listening to the picture or word in the other language (e.g. lune). The brain regions identified differed across tasks. During picture naming, brain activation in occipital and temporal regions allowed concepts to be predicted across languages. During word listening and word reading, across-language predictions were observed in the rolandic operculum and several motor-related areas (the pre- and postcentral gyri and the cerebellum). In addition, across-language predictions during reading were identified in regions typically associated with semantic processing (left inferior frontal and middle temporal cortex, right cerebellum and precuneus) and visual processing (inferior and middle occipital regions and the calcarine sulcus). Furthermore, across modalities and languages, the left lingual gyrus showed semantic overlap between production and word reading. These findings support the idea of at least partially language- and modality-independent semantic neural representations.
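    The cross-language decoding logic described above — train a classifier on activity patterns evoked by concepts in one language, then test it on patterns evoked by the same concepts in the other language — can be sketched as follows. This is a minimal illustration with synthetic voxel data and scikit-learn, not the study's actual pipeline; all dimensions, noise levels and the use of a linear SVM are assumptions for the sketch.

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    n_voxels, n_trials_per_concept, n_concepts = 200, 20, 4

    # Simulate a concept-specific activation pattern shared across languages,
    # so that trials for e.g. "maan" (Dutch) and "lune" (French) carry a
    # common semantic component plus trial-by-trial noise.
    concept_patterns = rng.normal(size=(n_concepts, n_voxels))

    def make_trials(noise=1.0):
        X, y = [], []
        for c in range(n_concepts):
            X.append(concept_patterns[c]
                     + noise * rng.normal(size=(n_trials_per_concept, n_voxels)))
            y += [c] * n_trials_per_concept
        return np.vstack(X), np.array(y)

    X_l1, y_l1 = make_trials()  # e.g. Dutch word-reading trials
    X_l2, y_l2 = make_trials()  # e.g. French word-reading trials

    # Cross-language decoding: fit on L1 patterns, score on L2 patterns.
    # Above-chance accuracy indicates a shared semantic representation.
    clf = LinearSVC().fit(X_l1, y_l1)
    accuracy = clf.score(X_l2, y_l2)
    chance = 1 / n_concepts
    print(f"cross-language accuracy: {accuracy:.2f} (chance = {chance:.2f})")
    ```

    In real MVPA the train and test sets would be fMRI beta estimates from independent runs, and significance would be assessed against a permutation-derived null distribution rather than the nominal chance level.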

    The interrelation between the perception and production of English monophthongs by speakers of Iraqi Arabic

    The assumption that performance in second language (L2) speech perception and speech production is aligned has received much debate in L2 research. Theoretical models such as the Motor Theory (MT) and the Speech Learning Model (SLM) have described the relation between these two processes on the assumption that speech is perceived with reference to how it is produced, and that speech production is in turn influenced by how well a speech contrast is perceptible to the second-language learner. The present study investigates this relation with regard to Iraqi learners' perception and production of English vowels, focussing on the role of L1 interference and English proficiency level in shaping it. The results showed that accurate perception may not necessarily be a prerequisite for accurate production, especially for EFL learners at the elementary level. Perception and production score means were significantly different, revealing an asymmetrical relation between the two processes. Speech production of L2 learners at the elementary level exceeded their ability in speech perception; for the other three proficiency levels, however, perception and production seemed to develop in synchrony. The level of difficulty encountered in the perception and production tasks could be attributed to L1 interference, since the vowels that were better produced than perceived are all found in the L1 vowel system, while the only vowel that was better perceived is not in the L1 vowel system.

    Singing and Pronunciation: A Review of the Literature

    Observed differences exist in the pronunciation abilities of individual language learners, especially adult learners. Musical ability and experience are possible factors to which differences in language pronunciation ability have been attributed. Although there has been a large amount of research concerning the effects of general musical ability and training on language abilities, very few studies have investigated the musical sub-category of singing. Research on the use of songs in the language classroom has largely tested the effects of song on vocabulary acquisition, while very few studies have explored the effects of song on pronunciation. Given that singing and pronunciation both use similar productive systems, the relationship between singing and pronunciation merits investigation. This review looks critically at the current research on singing and pronunciation abilities. Evidence from the current research shows that both singers and instrumental musicians perform better than non-musicians on language imitation tasks, and in some cases higher singing ability has a stronger effect on pronunciation performance than musicality alone. There is also evidence that singing and songs support sound memory and the verbatim recall of words when associated with simple melodies. The studies also indicate that working memory plays a large role in pronunciation performance, but this may be due to the studies' experimental setups, which use working-memory-heavy tasks. Rhythmic perception abilities and the use of distinct pitches for syllables may contribute to better word segmentation. Researchers' conclusions concerning the relationship between singing and pronunciation abilities address the multi-dimensional nature of pronunciation ability, similarities between song and infant-directed language input, and the neurological overlap of language, music, singing, and memory. The limitations of current research are that most of the studies relied on languages unfamiliar to subjects to test pronunciation, which could disproportionately represent the importance of working memory as a factor in pronunciation. Research on the benefits of song for pronunciation is promising, but because the current pool of research on singing and pronunciation is very limited, more research is needed.

    Age of second language acquisition affects nonverbal conflict processing in children : an fMRI study

    Background: In their daily communication, bilinguals switch between two languages, a process that involves selecting a target language and minimizing interference from the nontarget language. Previous studies have characterized the neural structures of bilinguals and the activation patterns associated with performing verbal conflict tasks. One question that remains, however, is whether this extra verbal switching affects brain function during nonverbal conflict tasks. Methods: In this study, we used fMRI to investigate the impact of bilingualism in children performing two nonverbal tasks involving stimulus-stimulus and stimulus-response conflicts. Three groups of 8-11-year-old children - bilinguals from birth (2L1), second language learners (L2L), and a control group of monolinguals (1L1) - were scanned while performing a color Simon task and a numerical Stroop task. Reaction times and accuracy were logged. Results: Compared to monolingual controls, bilingual children showed a higher behavioral congruency effect in these tasks, matched by the recruitment of brain regions generally involved in cognitive control, language processing, or resolving language-conflict situations in bilinguals (caudate nucleus, posterior cingulate gyrus, STG, precuneus). Further, activation of these areas was higher in 2L1 than in L2L children. Conclusion: The coupling of longer reaction times with the recruitment of extra language-related brain areas supports the hypothesis that the specialization of bilinguals for dealing with language conflicts hampers how they process nonverbal conflicts, at least at early stages of life.

    Restructuring multimodal corrective feedback through Augmented Reality (AR)-enabled videoconferencing in L2 pronunciation teaching

    The problem of cognitive overload is particularly pertinent in multimedia L2 classroom corrective feedback (CF), which involves rich communicative tools to help the class notice the mismatch between the target input and learners' pronunciation. Based on multimedia design principles, this study developed a new multimodal CF model using augmented reality (AR)-enabled videoconferencing to eliminate extraneous cognitive load and guide learners' attention to the essential material. Using a quasi-experimental design, the study examines the effectiveness of this new CF model in improving Chinese L2 students' segmental production and identification of the targeted English consonants (dark /ɫ/, /ð/ and /θ/), as well as their attitudes towards the application. Results indicated that the online multimodal CF environment, equipped with AR annotation and filters, played a significant role in improving the participants' production of the target segments. However, this advantage was not found in the auditory identification tests compared to the offline multimedia CF class. In addition, the learners reported that the new CF model helped direct their attention to the articulatory gestures of the student being corrected and enhanced class efficiency. Implications for computer-assisted pronunciation training and the construction of online/offline multimedia learning environments are also discussed.