    Automatic Pronunciation Assessment -- A Review

    Pronunciation assessment and its application in computer-aided pronunciation training (CAPT) have seen impressive progress in recent years. With the rapid growth in language processing and deep learning, there is a need for an updated review. In this paper, we review methods employed in pronunciation assessment at both the phonemic and prosodic levels. We categorize the main challenges observed in prominent research trends, and highlight existing limitations and available resources. This is followed by a discussion of the remaining challenges and possible directions for future work.
    Comment: 9 pages, accepted to EMNLP Findings

    Waltham Forest College: report from the Inspectorate (FEFC inspection report; 86/95 and 92/99)

    Comprises two Further Education Funding Council (FEFC) inspection reports, for the periods 1994-95 and 1998-99.

    Teacher and Student Perceptions of DynEd Multimedia Courseware: An Evaluation of CALL in an American Technical College

    This study examines the perceptions of teachers and students using the DynEd Multimedia Courseware in adult ESL workshops at an American technical college. The goals of the study were to determine (a) the teachers’ perceptions of their training to facilitate DynEd, (b) the teachers’ and students’ perceptions of the facilitators’ role in supporting students, and (c) the teachers’ and students’ perceptions of DynEd’s appropriateness for adult learners at our institution. Data from questionnaires and focus group interviews were analyzed using Chapelle’s (2001) Criteria for CALL Task Appropriateness as the conceptual framework. Findings suggest that both teachers and students need training and support to use DynEd effectively. Findings also indicate that the students’ perceptions of DynEd are more positive than the teachers’ perceptions.

    Motivating EFL Students with Conversation Data

    Motivating learners of English as a Foreign Language (EFL) to improve their speaking fluency is challenging in environments where institutions emphasize reading and listening test performance. The focus tends to shift first to strategic reading and listening in order to attain acceptable test results, often at the expense of communicative competence. Computer Assisted Language Learning (CALL) is well positioned to assess and develop communicative competence for EFL learners, and to motivate them to speak. This article introduces the Objective Subjective (OS) Scoring system, a CALL system that sets clear, immediate goals on the path to better communicative competence using data from videoed conversation sessions. It motivates learners to improve on their data in each successive conversation session, creating an environment that facilitates both conversation practice and individual error correction.
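    The abstract describes what the OS system does, not how it scores, so the sketch below is a purely hypothetical illustration of session-over-session goal setting: assumed objective metrics are computed from a session's conversation data, and the next session's targets are set slightly above the learner's previous data. The Session fields, both metrics, and the margin parameter are invented for illustration, not taken from the article.

```python
# Hypothetical sketch of session-over-session goal setting with conversation
# data. The OS Scoring system's actual metrics and formulas are not given in
# the abstract; every name and number below is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class Session:
    transcript: str   # learner speech transcribed from the videoed session
    minutes: float    # speaking time in minutes

def objective_metrics(s: Session) -> dict:
    # Two assumed objective measures of fluency and lexical range.
    words = s.transcript.split()
    return {
        "words_per_minute": len(words) / s.minutes,
        "unique_words": len({w.lower() for w in words}),
    }

def next_goals(prev: Session, margin: float = 0.05) -> dict:
    """Target a small, immediate improvement over the previous session's data."""
    return {name: value * (1 + margin) for name, value in objective_metrics(prev).items()}

last = Session("I went to the station and I bought a ticket", minutes=0.5)
print(next_goals(last))  # {'words_per_minute': 21.0, 'unique_words': 9.45}
```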

    A corpus-based study of Spanish L2 mispronunciations by Japanese speakers

    In a companion paper (Carranza et al.) submitted to this conference we discuss the importance of collecting specific L1-L2 speech corpora for the sake of developing effective Computer Assisted Pronunciation Training (CAPT) programs. In this paper we examine this point more deeply by reporting on a study aimed at compiling and analysing such a corpus to draw up an inventory of recurrent pronunciation errors to be addressed in a CAPT application that makes use of Automatic Speech Recognition (ASR). In particular, we discuss some of the results obtained in the analyses of this corpus and some of the methodological issues we had to deal with. The corpus features 8.9 hours of spontaneous, semi-spontaneous and read speech recorded from 20 Japanese students of Spanish L2. The speech data were segmented and transcribed at the orthographic, canonical-phonemic and narrow-phonetic levels using the Praat software [1]. We adopted the SAMPA phonemic inventory adapted to Spanish [2] for the phonemic transcription, and added 11 new symbols and 7 diacritics taken from X-SAMPA [3] for the narrow-phonetic transcription. Non-linguistic phenomena and incidents were also annotated with XML tags in independent tiers. Standards for transcribing and annotating non-native spontaneous speech ([4], [5]), as well as the error-encoding system used in the project, are also addressed. In total, 13,410 errors were segmented, aligned with the canonical-phonemic and narrow-phonetic tiers, and annotated following an encoding system that specifies the type of error (substitution, insertion or deletion), the affected phone, and the preceding and following phonemic contexts in which the error occurred. To check the accuracy of the transcriptions, we asked two other annotators to transcribe a subset of the speech material and calculated inter-transcriber agreement coefficients. The data were automatically extracted with Praat scripts and statistically analysed with R. The resulting frequency ratios for the most frequent errors and their most frequent contexts of appearance were tested for statistical significance. We report on the analyses of the combined annotations and draw up an inventory of errors that should be addressed in the training. We then consider how ASR can be employed to properly detect these errors, and suggest possible exercises that may be included in the training to remediate them.
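    As a minimal sketch of the inter-transcriber agreement step mentioned above (not the authors' actual Praat or R scripts), Cohen's kappa can be computed over two aligned phone-level transcriptions; the SAMPA-style labels in the example are made up for illustration.

```python
# Minimal sketch: Cohen's kappa as an inter-transcriber agreement coefficient
# over two aligned phone-level transcriptions (illustrative labels only).
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa for two equally long sequences of categorical labels."""
    assert len(labels_a) == len(labels_b), "transcriptions must be aligned"
    n = len(labels_a)
    # Observed agreement: fraction of positions where both transcribers agree.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    # Chance agreement: probability that two independent transcribers with
    # these label distributions would agree by accident.
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical aligned SAMPA-style phone labels from two transcribers.
t1 = ["r", "r\\", "T", "s", "x", "B", "d", "l"]
t2 = ["r", "r\\", "s", "s", "x", "B", "D", "l"]
print(f"kappa = {cohen_kappa(t1, t2):.3f}")  # kappa = 0.719
```

    Pure Python keeps the sketch self-contained; the same coefficient is available in standard statistics packages.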