
    Perception of FA by non-native listeners in a study abroad context

    The present study aims to explore the under-investigated interface between study abroad (SA) and L2 phonological development by assessing the impact of a 3-month SA programme on the pronunciation of a group of 23 Catalan/Spanish learners of English (NNSs) by means of phonetic measures and perceived foreign accent (FA) measures. Six native speakers (NSs) in an exchange programme in Spain provided baseline data for comparison purposes. The participants were recorded performing a read-aloud task before (pre-test) and immediately after (post-test) the SA. Another group of 37 proficient non-native listeners, also bilingual in Catalan/Spanish and trained in English phonetics, assessed the NNSs' speech samples for degree of FA. Phonetic measures consisted of pronunciation accuracy scores computed by counting pronunciation errors (phonemic deletions, insertions and substitutions, and stress misplacement). Measures of perceived FA were obtained with two experiments. In experiment 1, the listeners heard a random presentation of the sentences produced by the NSs and by the NNSs at pre-test and post-test and rated them on a 7-point Likert scale for degree of FA (1 = “native”, 7 = “heavy foreign accent”). In experiment 2, they heard paired pre-test/post-test sentences (i.e. produced by the same NNS at pre-test and post-test) and indicated which of the two sounded more native-like. They then stated their judgement confidence on a 7-point scale (1 = “unsure”, 7 = “sure”). Results indicated a slight, non-significant improvement in perceived FA after SA. However, a significant decrease was found in pronunciation accuracy scores after SA. Measures of pronunciation accuracy and FA ratings were also found to be strongly correlated. These findings are discussed in light of the often-reported mixed results regarding pronunciation improvement during short-term immersion.
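
    The correlation between error-based accuracy scores and perceived FA ratings reported above can be illustrated with a brief sketch. This is not the authors' analysis code; the speaker IDs, error counts, and ratings are invented, and the accuracy formula (errors per word read) is only one plausible way to operationalise the score.

```python
# Minimal sketch (not the authors' code): computing an error-based pronunciation
# accuracy score per speaker and correlating it with mean perceived-FA ratings.
# The error counts, word totals, and ratings below are invented for illustration.
from scipy.stats import pearsonr

# errors per speaker: phonemic deletions, insertions, substitutions, stress misplacements
errors = {"S01": 12, "S02": 7, "S03": 19}
words_read = {"S01": 180, "S02": 180, "S03": 180}

# mean 7-point FA ratings across listeners (1 = native, 7 = heavy foreign accent)
mean_fa = {"S01": 4.1, "S02": 2.8, "S03": 5.6}

speakers = sorted(errors)
accuracy = [1 - errors[s] / words_read[s] for s in speakers]  # higher = more accurate
ratings = [mean_fa[s] for s in speakers]

r, p = pearsonr(accuracy, ratings)
print(f"accuracy vs. perceived FA: r = {r:.2f}, p = {p:.3f}")
```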

    A computational model for studying L1’s effect on L2 speech learning

    Much evidence has shown that the first language (L1) plays an important role in the formation of the L2 phonological system during the second language (L2) learning process. Combined with the fact that different L1s have distinct phonological patterns, this helps explain the diverse L2 speech learning outcomes of speakers from different L1 backgrounds. This dissertation hypothesizes that phonological distances between accented speech and speakers' L1 speech are also correlated with perceived accentedness, and that the correlations are negative for some phonological properties. Moreover, contrastive phonological distinctions between L1s and the L2 will manifest themselves in the accented speech produced by speakers from these L1s. To test these hypotheses, this study proposes a computational model to analyze accented speech properties in both the segmental (short-term speech measurements at the short-segment or phoneme level) and suprasegmental (long-term speech measurements at the word, long-segment, or sentence level) feature spaces. The benefit of using a computational model is that it enables quantitative analysis of the L1's effect on accent in terms of different phonological properties. The core parts of this computational model are feature extraction schemes that derive pronunciation and prosody representations of accented speech based on existing techniques from the speech processing field. Correlation analysis on both segmental and suprasegmental feature spaces is conducted to examine the relationship between acoustic measurements related to L1s and perceived accentedness across several L1s. Multiple regression analysis is employed to investigate how the L1's effect impacts the perception of foreign accent, and how accented speech produced by speakers from different L1s behaves distinctly in the segmental and suprasegmental feature spaces. The results reveal the potential of this methodology to provide quantitative analyses of accented speech and to extend current studies in L2 speech learning theory to a larger scale. Practically, this study further shows that the proposed computational model can benefit automatic accentedness evaluation systems by adding features related to speakers' L1s. (Doctoral dissertation, Speech and Hearing Science.)
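
    As a rough illustration of the kind of analysis the dissertation describes, the sketch below regresses perceived accentedness on two hypothetical L1-related distance features, one segmental and one suprasegmental. The feature names, values, and coefficients are invented; the actual pronunciation and prosody representations are not reproduced here.

```python
# Sketch only: regressing perceived accentedness on L1-related distance features.
# The feature names and values are invented; the dissertation's actual feature
# extraction (pronunciation/prosody representations) is not reproduced here.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 40  # hypothetical number of accented-speech samples

# hypothetical distances between each L2 sample and the speaker's L1 speech
segmental_dist = rng.uniform(0.0, 1.0, n)       # e.g. phoneme-level acoustic distance
suprasegmental_dist = rng.uniform(0.0, 1.0, n)  # e.g. prosodic/rhythm distance
accentedness = 2 + 3 * segmental_dist + 1.5 * suprasegmental_dist + rng.normal(0, 0.5, n)

X = sm.add_constant(np.column_stack([segmental_dist, suprasegmental_dist]))
model = sm.OLS(accentedness, X).fit()
print(model.summary())  # coefficients indicate each feature space's contribution
```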

    Directions for the future of technology in pronunciation research and teaching

    This paper reports on the role of technology in state-of-the-art pronunciation research and instruction, and makes concrete suggestions for future developments. The point of departure for this contribution is that the goal of second language (L2) pronunciation research and teaching should be enhanced comprehensibility and intelligibility as opposed to native-likeness. Three main areas are covered here. We begin with a presentation of advanced uses of pronunciation technology in research, with a special focus on the expertise required to carry out even small-scale investigations. Next, we discuss the nature of data in pronunciation research, pointing to ways in which future work can build on advances in corpus research and crowdsourcing. Finally, we consider how these insights pave the way for researchers and developers working to create research-informed, computer-assisted pronunciation teaching resources. We conclude with predictions for future developments.

    Flawed self-assessment: investigating self- and other-perception of second language speech

    This study targeted the relationship between self- and other-assessment of accentedness and comprehensibility in second language (L2) speech, extending prior social and cognitive research documenting weak or nonexistent links between people's self-assessment and objective measures of performance. Results of two experiments (N = 134) revealed mostly inaccurate self-assessment: speakers at the low end of the accentedness and comprehensibility scales overestimated their performance; speakers at the high end of each scale underestimated it. For both accent and comprehensibility, discrepancies in self- versus other-assessment were associated with listener-rated measures of phonological accuracy and temporal fluency, but not with listener-rated measures of lexical appropriateness and richness, grammatical accuracy and complexity, or discourse structure. Findings suggest that inaccurate self-assessment is linked to the inherent complexity of L2 perception and production as cognitive skills, and point to several ways of helping L2 speakers align or calibrate their self-assessment with their actual performance.
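
    A minimal sketch of the kind of discrepancy analysis described above: a signed self-minus-listener difference per speaker, correlated with a listener-rated measure. The ratings and the scale are invented and stand in for the study's materials.

```python
# Illustrative sketch (invented data, not the study's materials): a signed
# discrepancy score (self-rating minus mean listener rating) per speaker,
# correlated with a listener-rated temporal fluency measure.
import numpy as np
from scipy.stats import spearmanr

self_comp = np.array([7.0, 5.5, 3.0, 6.0, 2.5])      # self-rated comprehensibility (1-9)
listener_comp = np.array([4.2, 5.1, 4.8, 3.9, 5.0])  # mean listener rating (1-9)
fluency = np.array([3.1, 5.0, 6.2, 2.8, 6.5])        # listener-rated temporal fluency

discrepancy = self_comp - listener_comp  # positive = overestimation
rho, p = spearmanr(discrepancy, fluency)
print(f"discrepancy vs. fluency: rho = {rho:.2f}, p = {p:.3f}")
```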

    English read by Japanese phonetic corpus: an interim report

    The primary purpose of this paper is to explain the procedure of developing the English Read by Japanese Phonetic Corpus. A series of preliminary studies (Makino 2007, 2008, 2009) made it clear that a phonetically transcribed, computerized corpus of Japanese speakers’ English speech was worth making. Because corpus studies on L2 pronunciation have been very rare, we intend to fill this gap. For the corpus building, we chose the 1,902 sentence files in the English Read by Japanese speech database whose individual sounds were scored by phonetically trained American English teachers in Minematsu et al. (2002b). The files were pre-processed with the Penn Phonetics Lab Forced Aligner to generate Praat TextGrids in which the target English words and phonemes were force-aligned to the speech files. Two additional tiers (actual phones and substitutions) were added to those TextGrids; the actual phones were transcribed manually, and the other tiers were aligned to that tier. The TextGrids were then imported into ELAN, which offers much better search functionality. So far, fewer than 10% of the files have been completed, and the corpus-building is still at an initial stage. The secondary purpose of this paper is to report on some findings from the small part of the corpus that has been completed. Although it is still premature to talk of any tendency in the corpus, it is worth noting that we have found evidence of phenomena that are not readily predicted from L1 phonological transfer, such as the spirantization of voiceless plosives, which is not considered normal in the pronunciation of Japanese.
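
    The substitutions tier described above can be thought of as a comparison between the force-aligned target-phoneme tier and the manually transcribed actual-phone tier. The sketch below illustrates that comparison on invented intervals and is not the corpus tooling; it also assumes the two tiers share interval boundaries, whereas real TextGrids would be read with a TextGrid parser.

```python
# Minimal sketch of deriving a "substitutions" tier by comparing a forced-aligned
# target-phoneme tier with a manually transcribed actual-phone tier. Not the
# corpus tooling: the intervals below are invented, and the two tiers are assumed
# to share interval boundaries.
target_tier = [(0.00, 0.12, "DH"), (0.12, 0.25, "AH"), (0.25, 0.40, "K")]
actual_tier = [(0.00, 0.12, "D"),  (0.12, 0.25, "AH"), (0.25, 0.40, "KX")]

substitutions = []
for (start, end, target), (_, _, actual) in zip(target_tier, actual_tier):
    label = "" if target == actual else f"{target}->{actual}"
    substitutions.append((start, end, label))

for start, end, label in substitutions:
    if label:
        print(f"{start:.2f}-{end:.2f}s  {label}")
```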

    Persian ITAs and Speech Comprehensibility: Using CAPT for Pronunciation Improvement

    It has been shown that International Teaching Assistants (ITAs) struggle with phonological and communication issues in the classroom (Pickering, 1999, 2001). This leads to misunderstandings between ITAs and undergraduate students, frustrating both parties as well as the students’ parents and the departments. However, studies have shown that with the right training, ITAs can focus on suprasegmental features and improve their speech comprehensibility and intelligibility (Gorsuch, 2011). This study investigates the effect of Computer Assisted Pronunciation Teaching (CAPT), via tutorial videos and visual feedback, on the improvement of ITAs’ speech comprehensibility. Across five US universities, 60 Persian ITAs, divided into a video group (n = 20), a visual feedback group (n = 21), and a control group (n = 19), completed an oral production pretest and recorded five diagnostic sentences plus spontaneous speech files. Over the next six weeks, all groups received in-person non-CAPT instruction, but the video group also watched eight tutorial videos designed to target suprasegmental features, and the feedback group was exposed to Praat visual feedback. Participants were also paired with a pronunciation tutor who provided instruction and feedback once a week. A perception posttest was administered, and the same five sentences and the spontaneous talk were recorded again. The pre- and post-treatment sentences were then rated for comprehensibility by 169 undergraduate students. The findings of this study provide a greater understanding of how explicit pronunciation instruction through CAPT can improve the speech comprehensibility of ITAs. As the number of international people in academic and professional contexts rises, it is necessary to guide them through appropriate instruction to improve the quality of their communication. The results of this study suggest that even short intervention programs that include targeted in-person tutoring, tutorial videos, and visual feedback may improve ITAs’ communication. The results also imply the need for pronunciation support for ITAs in their respective academic institutions.
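
    A first-pass look at such rating data might simply compare mean comprehensibility gains across the three groups, as in the sketch below. The data frame, column names, and values are hypothetical and do not come from the study.

```python
# Sketch of a first-pass summary of listener comprehensibility ratings by group;
# the data frame, column names, and values are hypothetical, not the study's data.
import pandas as pd

ratings = pd.DataFrame({
    "group":   ["video", "video", "feedback", "feedback", "control", "control"],
    "speaker": ["P01", "P02", "P03", "P04", "P05", "P06"],
    "pre":     [4.2, 3.8, 4.0, 3.5, 4.1, 3.9],  # mean comprehensibility, pretest
    "post":    [5.1, 4.6, 4.9, 4.4, 4.2, 4.0],  # mean comprehensibility, posttest
})

ratings["gain"] = ratings["post"] - ratings["pre"]
print(ratings.groupby("group")["gain"].agg(["mean", "std"]))
```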

    The Effect of Speech Elicitation Method on Second Language Phonemic Accuracy

    The present study, a one-group posttest-only repeated-measures design, examined the effect of speech elicitation method on second language (L2) phonemic accuracy for high functional load initial phonemes found in frequently occurring nouns in American English. This effect was further analyzed by including the variable of first language (L1) to determine whether L1 moderated any effects found. The data consisted of audio recordings of 61 adult English learners (ELs) enrolled in English for Academic Purposes (EAP) courses at a large, public, post-secondary institution in the United States. Phonemic accuracy was judged by two independent raters as either approximating a standard American English (SAE) pronunciation of the intended phoneme or not (thus a dichotomous scale), and scores were assigned to each participant for the three speech elicitation methods of word reading, word repetition, and picture naming. Results from a repeated-measures ANOVA revealed a statistically significant difference in phonemic accuracy (F(1.47, 87.93) = 25.94, p < .001) based on speech elicitation method, while a two-factor mixed-design ANOVA indicated no statistically significant differences for the moderator variable of native language. Post-hoc analyses revealed that mean scores on the picture naming task differed significantly from those of the other two elicitation methods, word reading and word repetition. The results of this study should heighten attention to the role that various speech elicitation methods, or input modalities, might play in L2 productive accuracy. For practical application, the findings suggest that caution should be used when utilizing pictures to elicit specific vocabulary words, even high-frequency words, as they might result in erroneous productions or no utterance at all. These findings could inform pronunciation instructors about best teaching practices when pronunciation accuracy is the objective. Finally, the impact of L1 on L2 pronunciation accuracy might not be as important as once thought.
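
    The within-subjects comparison reported above can be illustrated with a one-way repeated-measures ANOVA over the three elicitation methods, for example with statsmodels' AnovaRM. The long-format data below are invented; real data would hold one accuracy score per participant per method.

```python
# Sketch of the within-subjects comparison across elicitation methods using a
# repeated-measures ANOVA. The long-format data below are invented.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

data = pd.DataFrame({
    "participant": ["P1", "P1", "P1", "P2", "P2", "P2", "P3", "P3", "P3"],
    "method":      ["reading", "repetition", "naming"] * 3,
    "accuracy":    [0.92, 0.95, 0.78, 0.88, 0.90, 0.70, 0.85, 0.89, 0.74],
})

result = AnovaRM(data, depvar="accuracy", subject="participant", within=["method"]).fit()
print(result)  # F-test for the within-subjects factor "method"
```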

    English Pronunciation Skills and Intelligibility of Native Russian Speakers

    The rapid growth of the native Russian-speaking population in the United States has created an urgent need to improve their pronunciation skills and increase their second language speech intelligibility. The purpose of this field project was to present a research-based curriculum, with the use of embedded technology, that can be utilized to improve the American English pronunciation skills and intelligibility of native Russian speakers. The body of analyzed scholarship demonstrated that speech intelligibility is the primary goal of second language pronunciation teaching, justified the importance of research-based pronunciation teaching, emphasized the significant role of technology in pronunciation research and teaching, and revealed the lack of, and need for, resources for teaching American English pronunciation to native Russian speakers. The Affective Filter Hypothesis, one of the five hypotheses that form Krashen’s Theory of Second Language Acquisition, serves as the theoretical framework for this field project. The resulting field project, English Pronunciation with ZOYA, is an eLearning platform grounded in a research-based curriculum, tailored in line with the in-depth literature review and personal teaching experience, to improve the American English pronunciation skills and intelligibility of adult native Russian-speaking learners. The eLearning platform is available at englishpronunciationwithzoya.tilda.ws. The platform is designed as a user-friendly website with an easy-to-navigate, module-structured course curriculum that creates a safe, self-paced learning environment for pronunciation teaching and learning. The developed curriculum is recommended for adult native Russian-speaking learners at intermediate to advanced proficiency levels (B1-C2, CEFR), their instructors, and curriculum developers interested in improving the pronunciation skills and intelligibility of native Russian-speaking learners.

    The Effects of English Pronunciation Instruction on Listening Skills among Vietnamese Learners

    Listening has been a neglected skill in both second language research and teaching practice (Khaghaninejad & Maleki, 2015; Nowrouzi, Tam, Zareian & Nimehchisalem, 2015), and recent research has shown that second language (L2) listening difficulties might relate to phonological problems in addition to syntactic and lexical knowledge (e.g., Suristro, 2018). Some empirical studies examining the effects of phonetic instruction on perceptual skills have shown promising results (e.g., Aliaga-Garcia & Mora, 2009; Linebaugh & Roche, 2013). This study contributes to this area by investigating the impact of English pronunciation instruction on listening skills among Vietnamese English as a Foreign Language (EFL) learners, targeting four English phonemes: the word-final stop consonants /t/ and /d/, the lax high front vowel /ɪ/, and the tense high front vowel /i/. Specifically, it examines whether pronunciation instruction has effects on (a) students’ ability to listen to and distinguish the target phonemes, and (b) students’ ability to listen to and transcribe monosyllabic words containing the target sounds. To isolate the effects of explicit pronunciation instruction on perception, the study excluded perceptual training from the treatment. Sixteen Vietnamese learners were recruited and divided into two groups: an experimental group (n = 10) and a control group (n = 6). Only the experimental group received five hours of online phonetic instruction emphasizing the four target phonemes, along with distractors. A pre-test and a post-test in listening skills measured differences between and within groups. In addition, a post-instructional survey was administered to collect qualitative data in an attempt to explain the results. Non-parametric tests (the Wilcoxon rank sum and Wilcoxon signed rank tests) were used to analyze the quantitative data. The results revealed no difference in listening performance either between the two groups or within each group, which might suggest that the impact of pronunciation instruction alone on perceptual skills is unclear. Perceptual training, which has often been used in research on pronunciation instruction, is discussed, and suggestions for future research are made.
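
    The two non-parametric tests named above can be run with scipy, as in the sketch below. The listening-test scores are invented: a Wilcoxon rank-sum test compares the groups at posttest, and a Wilcoxon signed-rank test compares the experimental group's pre- and post-test scores.

```python
# Sketch of the two non-parametric comparisons described above, with invented
# listening-test scores: a Wilcoxon rank-sum test between groups at posttest and
# a Wilcoxon signed-rank test on the experimental group's pre/post scores.
from scipy.stats import ranksums, wilcoxon

exp_pre   = [12, 14, 10, 15, 11, 13, 12, 14, 10, 13]
exp_post  = [13, 15, 11, 15, 12, 14, 12, 15, 11, 14]
ctrl_post = [12, 13, 11, 14, 12, 13]

stat_between, p_between = ranksums(exp_post, ctrl_post)
stat_within, p_within = wilcoxon(exp_pre, exp_post)

print(f"between groups (posttest): p = {p_between:.3f}")
print(f"within experimental group (pre vs. post): p = {p_within:.3f}")
```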

    ShefCE: A Cantonese-English Bilingual Speech Corpus for Pronunciation Assessment

    This paper introduces the development of ShefCE, a Cantonese-English bilingual speech corpus from L2 English speakers in Hong Kong. Bilingual parallel recording materials were chosen from TED online lectures. Script selection was carried out according to bilingual consistency (evaluated using a machine translation system) and the distribution balance of phonemes. Thirty-one undergraduate and postgraduate students in Hong Kong aged 20-30 were recruited and recorded a 25-hour speech corpus (12 hours in Cantonese and 13 hours in English). Baseline phoneme/syllable recognition systems were trained on background data with and without the ShefCE training data. The final syllable error rate (SER) for Cantonese was 17.3% and the final phoneme error rate (PER) for English was 34.5%. The automatic speech recognition performance on English showed a significant mismatch when applying L1 models to L2 data, suggesting the need for explicit accent adaptation. ShefCE and the corresponding baseline models will be made openly available for academic research.
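
    A phoneme error rate like the one reported above is conventionally computed as the Levenshtein (edit) distance between the reference and hypothesis phoneme sequences divided by the reference length. The sketch below shows that computation on invented sequences; it is not the evaluation code used for ShefCE.

```python
# Sketch of a standard phoneme error rate (PER) computation: Levenshtein distance
# between reference and hypothesis phoneme sequences, divided by the reference
# length. The phoneme sequences below are invented.
def edit_distance(ref, hyp):
    # dynamic-programming Levenshtein distance over phoneme symbols
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)]

reference  = ["dh", "ah", "k", "ae", "t"]
hypothesis = ["d", "ah", "k", "ae"]
per = edit_distance(reference, hypothesis) / len(reference)
print(f"PER = {per:.1%}")  # 2 edits over 5 reference phonemes = 40.0%
```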