
    Using generalizability theory to investigate the variability and reliability of EFL composition scores by human raters and e-rater

    Using generalizability theory (G-theory) as a theoretical framework, this study investigated the variability and reliability of holistic scores assigned by human raters and e-rater to the same EFL essays. Eighty argumentative essays written on two different topics by tertiary-level Turkish EFL students were scored holistically by e-rater and by eight human raters who had received detailed rater training. The results showed that e-rater and the human raters assigned significantly different holistic scores to the same EFL essays. The G-theory analyses revealed that the human raters scored the same essays considerably inconsistently despite the detailed rater training, and that more reliable ratings were attained when e-rater was integrated into the scoring procedure. Implications are offered for EFL writing assessment practices.
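
    For context, the reliability index at the heart of such G-theory analyses is the generalizability coefficient. A minimal sketch for a persons-by-raters (p x r) design, which may differ in detail from the study's actual design:

        % Generalizability (G) coefficient for a p x r design:
        % person variance over person variance plus rater-linked error,
        % averaged over n_r raters.
        E\rho^2 = \frac{\sigma^2_{p}}{\sigma^2_{p} + \sigma^2_{pr,e} / n_{r}}

    Here \sigma^2_{p} is the variance among essays (the object of measurement) and \sigma^2_{pr,e} is the person-by-rater interaction confounded with residual error. Increasing n_r, for instance by adding e-rater as an additional rater, shrinks the error term, which is one way integrating e-rater into the scoring procedure can raise reliability.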

    Automated assessment of non-native learner essays: Investigating the role of linguistic features

    Automatic essay scoring (AES) refers to the process of scoring free-text responses to given prompts, with human grader scores taken as the gold standard. Writing such essays is an essential component of many language and aptitude exams, so AES has become an active and established area of research, and many proprietary systems are used in real-life applications today. However, little is known about which specific linguistic features are useful for prediction and how consistent this is across datasets. This article addresses that gap by exploring the role of various linguistic features in automatic essay scoring, using two publicly available datasets of non-native English essays written in test-taking scenarios. The linguistic properties are modeled by encoding lexical, syntactic, discourse, and error features of learner language in the feature set. Predictive models are then developed using these features on both datasets, and the most predictive features are compared. While the results show that the feature set yields good predictive models on both datasets, the question "what are the most predictive features?" has a different answer for each dataset.
    Comment: Article accepted for publication in the International Journal of Artificial Intelligence in Education (IJAIED); to appear in early 2017 (journal URL: http://www.springer.com/computer/ai/journal/40593).
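
    As an illustration of the kind of pipeline such work describes, the sketch below extracts toy lexical and syntactic features and fits a regression against human scores. The features, data, and model choice are placeholders, not the article's actual feature set or learner:

        # Illustrative AES pipeline: linguistic features -> score prediction.
        # Real systems encode lexical, syntactic, discourse and error features;
        # these three are deliberately simplistic stand-ins.
        from sklearn.linear_model import LinearRegression

        def extract_features(essay: str) -> list[float]:
            tokens = essay.split()
            sentences = [s for s in essay.split(".") if s.strip()]
            return [
                float(len(tokens)),                       # essay length
                len(set(tokens)) / max(len(tokens), 1),   # type-token ratio (lexical diversity)
                len(tokens) / max(len(sentences), 1),     # mean sentence length (syntactic proxy)
            ]

        essays = [
            "The internet has transformed education. Students now learn online at their own pace.",
            "School good. I like learn.",
        ]
        human_scores = [4.0, 2.0]  # gold-standard grader scores

        model = LinearRegression().fit([extract_features(e) for e in essays], human_scores)
        # Comparing fitted coefficients across datasets is one way to ask
        # "what are the most predictive features?"
        print(model.coef_)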

    The Use of Online Automated Writing Checkers among EFL Learners

    Writing is regarded as a vital learning tool across all subject areas. However, it is difficult for EFL students in college programmes to develop excellent writing skills. This paper describes the findings of a study conducted to better understand EFL learners’ perceptions of using online automated writing checkers (OAWCs). The study aims to elicit learners’ perspectives on enhancing their writing skills with OAWCs. A questionnaire was administered to sixty Saudi female students in the College of Science and Arts, Unizah, Qassim University. The results demonstrate the learners’ positive perceptions of these technologies. Based on the findings, educational implications are proposed for this descriptive study and for future research.

    The effectiveness of automated writing evaluation: a structural analysis approach

    Modern advances in learning technologies and tools have introduced innovative written corrective feedback (WCF) methods based on artificial intelligence (AI) and existing corpora. Research has shown that students perceive these tools as exciting and useful, yet studies on their effectiveness and impact on students’ writing remain relatively scarce. To this end, the present study investigated the effectiveness of the Grammarly writing assistant as perceived by 98 undergraduates who used the tool over a 14-week semester. The study adopted a questionnaire based on a modified technology acceptance model (TAM), and the gathered data were analyzed using SmartPLS 3 software. The results revealed that different factors predict students’ perceptions of Grammarly and their intention to use it, some of which were not presupposed. The findings suggest using Grammarly as a supplementary learning tool rather than a primary one. Future research on the efficacy of Grammarly should adopt longitudinal and experimental approaches.
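
    The path-analytic idea behind such a TAM study can be caricatured with an ordinary regression; the study itself used PLS-SEM in SmartPLS 3, so the sketch below, with synthetic Likert-style data, only illustrates predicting behavioural intention from TAM constructs:

        # Simplified stand-in for a TAM path analysis: does intention to use
        # a tool follow from perceived usefulness and perceived ease of use?
        # All data are synthetic; SmartPLS-style PLS-SEM is not reproduced here.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        n = 98  # same sample size as the study above; responses invented
        usefulness = rng.normal(4.0, 0.6, n)     # mean responses on 5-point scales
        ease_of_use = rng.normal(3.8, 0.7, n)
        intention = 0.5 * usefulness + 0.3 * ease_of_use + rng.normal(0, 0.4, n)

        X = np.column_stack([usefulness, ease_of_use])
        model = LinearRegression().fit(X, intention)
        print("path weights (usefulness, ease of use):", model.coef_)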

    The role of feedback in the processes and outcomes of academic writing in English as a foreign language at intermediate and advanced levels

    Providing feedback on students’ texts is one of the essential components of teaching second language writing. However, whether and to what extent students benefit from feedback has been a matter of considerable debate in the literature: while many researchers have stressed its importance, others have expressed doubts about its effectiveness. Regardless of these continuing and well-established debates, instructors consider feedback a worthwhile pedagogical practice for second language learning. Based on this premise, I conducted three experimental studies to investigate the role of written feedback in Myanmar and Hungarian tertiary EFL classrooms. Additionally, I studied syntactic features and language-related error patterns in Hungarian and Myanmar students’ writing, in order to understand how students with different writing proficiency acted upon teacher and automated feedback.

    The first study examined the efficacy of feedback on Myanmar students’ writing over a 13-week semester and how automated feedback provided by Grammarly could be integrated into writing instruction as an assistance tool for writing teachers. Results from pre- and post-tests demonstrated that students’ writing performance improved along four assessment criteria: task achievement, coherence and cohesion, grammatical range and accuracy, and lexical range and accuracy. Further results from a written feedback analysis revealed that the free version of Grammarly provided feedback on lower-level writing issues such as articles and prepositions, whereas teacher feedback covered both lower- and higher-level writing concerns. These findings suggested a potential for integrating automated feedback into writing instruction.

    As limited attention had been given to how feedback influences aspects of writing development beyond accuracy, the second study examined how feedback influences the syntactic complexity of Myanmar students’ essays. Results from paired-samples t-tests revealed no significant differences in the syntactic complexity of students’ writing, whether the comparison was made between initial and revised texts or between pre- and post-tests. These findings suggested that feedback does not lead students to write less structurally complex texts, despite not resulting in syntactic complexity gains. The syntactic complexity of students’ revised texts varied among high-, mid-, and low-achieving students; these variations could be attributed to proficiency levels, writing prompts, genre differences, and feedback sources.

    The rationale for the third study was the theoretical orientation that differential success in learners’ gains from feedback depends largely on their engagement with the feedback rather than on the feedback itself. Along these lines, I examined Hungarian students’ behavioural engagement (i.e., uptake, or revisions prompted by written feedback) with teacher and automated feedback in an EFL writing course. In addition to the form-focused feedback examined in the first study, I considered meaning-focused feedback, as feedback in a writing course typically covers both linguistic and rhetorical aspects of writing. The results showed differences in feedback focus (the teacher provided both form- and meaning-focused feedback) with unexpected outcomes: students’ uptake of feedback reflected moderate to low levels of engagement, and participants incorporated more form-focused than meaning-focused feedback into their revisions. These findings contribute to our understanding of students’ engagement with writing tasks, levels of trust, and the possible impact of students’ language proficiency on their engagement with feedback.

    Following the finding that Myanmar and Hungarian students responded to feedback on their writing differently, I designed a follow-up study to compare syntactic features of their writing as indices of their English writing proficiency, and examined language-related errors in their texts to capture differences in error patterns between the two groups. Results from paired-samples t-tests showed that most syntactic complexity indices distinguished the essays produced by the two groups: length of production units, sentence complexity, and subordination indices. Similarly, statistically significant differences were found in language-related error patterns: errors were more prevalent in Myanmar students’ essays. The implications for research and pedagogical practices in EFL writing classes are discussed with reference to the rationale for each study.
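
    The paired-samples comparison reported above can be sketched in a few lines; the complexity index and the values below are invented for illustration:

        # Same essays measured before and after feedback on one syntactic
        # complexity index (e.g., mean length of T-unit); values are invented.
        from scipy import stats

        pre  = [11.2, 9.8, 13.1, 10.4, 12.0, 8.9]
        post = [11.5, 10.1, 12.8, 10.9, 12.3, 9.4]

        t, p = stats.ttest_rel(pre, post)
        print(f"t = {t:.3f}, p = {p:.3f}")  # p >= .05 would mirror the 'no significant difference' finding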

    Exploring the College EFL Self-access Writing Mode Based on Automated Feedback

    The present study constructs a college EFL self-access writing mode based on automated feedback, guided by Formative Assessment Theory and Autonomous Learning Theory, and applies it in college EFL teaching practice. Findings of this empirical study suggest that this self-access writing mode contributes to the enhancement of students’ English writing competence and writing motivation, as well as their autonomy in self-revision.

    Exploring the effectiveness of ChatGPT-based feedback compared with teacher feedback and self-feedback: Evidence from Chinese to English translation

    ChatGPT, a cutting-edge AI-powered chatbot, can quickly generate responses to given commands. While ChatGPT has been reported to deliver useful feedback, its effectiveness compared with conventional feedback approaches, such as teacher feedback (TF) and self-feedback (SF), remains unclear. To address this issue, this study compared revised Chinese-to-English translation texts produced by Chinese Master of Translation and Interpretation (MTI) students, who learned English as a Second/Foreign Language (ESL/EFL), under three feedback types (ChatGPT-based feedback, TF, and SF). The data were analyzed using the BLEU score to gauge overall translation quality and Coh-Metrix to examine linguistic features across three dimensions: lexicon, syntax, and cohesion. The findings revealed that TF- and SF-guided translation texts surpassed those guided by ChatGPT-based feedback, as indicated by the BLEU score. In terms of linguistic features, ChatGPT-based feedback demonstrated superiority, particularly in enhancing lexical capability and referential cohesion in the translation texts. However, TF and SF proved more effective in developing syntax-related skills, as they addressed instances of incorrect use of the passive voice. These diverse outcomes indicate ChatGPT’s potential as a supplementary resource, complementing traditional teacher-led methods in translation practice.
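
    For readers unfamiliar with the metric, BLEU scores a candidate text by its n-gram overlap with one or more reference texts. A minimal sketch with NLTK follows; the sentences are invented, the study’s exact BLEU configuration is not specified, and Coh-Metrix has no drop-in Python equivalent:

        # Minimal BLEU computation for one revised translation against a reference.
        from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

        reference = [["the", "committee", "approved", "the", "proposal"]]
        revision = ["the", "committee", "has", "approved", "the", "proposal"]

        smooth = SmoothingFunction().method1  # avoids zero scores on short sentences
        print(sentence_bleu(reference, revision, smoothing_function=smooth))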

    20 years of technology and language assessment in Language Learning & Technology


    Generative artificial intelligence in EFL writing: A pedagogical stance of pre-service teachers and teacher trainers

    This study examines pre-service English language teachers’ perspectives on the use of generative Artificial Intelligence (AI) tools for EFL writing and their prospects for integrating such tools into their future teaching practices. Employing a qualitative research paradigm, a researcher-developed survey was used to elicit the perspectives of 28 pre-service English language teachers and 10 teacher trainers. Following the stages of qualitative data analysis, emergent ideas embedded in the responses were labeled, and the codes were clustered into broader themes to obtain a description of the participants’ reflections. The study documented reflections on the transformative impact of generative AI in EFL writing. Reported benefits included using AI tools to overcome writer’s block, obtain language support, and receive instantaneous, personalized feedback on texts. Foregrounding concerns about academic misconduct, participants highlighted the need for ethical guidelines and improved AI literacy to ensure the validity of AI-generated content. They further suggested reformulating assessment and evaluation in EFL writing by moving away from result-oriented exams toward performance-based and process-oriented assessments. Accordingly, ethical and pedagogical implications are offered for adopting a critical stance and improving AI literacy skills in EFL writing development.

    Automated writing evaluation tools for Indonesian undergraduate English as a foreign language students’ writing

    Nowadays, many computer programs are used in the teaching of writing in the context of English as a foreign language (EFL). One function of these programs is to provide feedback on EFL students’ writing so that its quality can be improved. This study investigated whether the use of free automated writing evaluation (AWE) tools affects undergraduate EFL students’ writing skills. In this experimental study, 35 Indonesian undergraduate students in an English education department were asked to use two AWE tools, Grammarly and Grammark, in a writing course over four months. Data were collected using tests and a questionnaire: a pre-test, mid-test, and post-test were administered to examine the students’ improvement in writing skill. The findings indicate that the sequenced use of the two AWE tools, Grammarly followed by Grammark, had a beneficial effect on students’ writing skill, confirming the benefits of free AWE tools in enhancing EFL students’ writing.
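
    One conventional way to test improvement across the three test points is a one-way repeated-measures ANOVA, sketched below with invented scores (the study’s own statistical procedure may differ):

        # Pre-, mid- and post-test scores for the same students (invented data),
        # tested with a repeated-measures ANOVA via statsmodels.
        import pandas as pd
        from statsmodels.stats.anova import AnovaRM

        data = pd.DataFrame({
            "student": [1, 1, 1, 2, 2, 2, 3, 3, 3],
            "time":    ["pre", "mid", "post"] * 3,
            "score":   [58, 63, 70, 61, 64, 72, 55, 60, 66],
        })
        print(AnovaRM(data, depvar="score", subject="student", within=["time"]).fit())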