Informing Writing: The Benefits of Formative Assessment
Examines whether classroom-based formative writing assessment - designed to provide students with feedback and modified instruction as needed - improves student writing and how teachers can improve such assessment. Suggests best practices
Beyond the design of automated writing evaluation: Pedagogical practices and perceived learning effectiveness in EFL writing classes
Automated writing evaluation (AWE) software is designed to provide instant computer-generated scores for a submitted essay along with diagnostic feedback. Most studies on AWE have been conducted on psychometric evaluations of its validity; however, studies on how effectively AWE is used in writing classes as a pedagogical tool are limited. This study employs a naturalistic classroom-based approach to explore the interaction between how an AWE program, MY Access!, was implemented in three different ways in three EFL college writing classes in Taiwan and how students perceived its effectiveness in improving writing. The findings show that, although the implementation of AWE was not in general perceived very positively by the three classes, it was perceived comparatively more favorably when the program was used to facilitate students' early drafting and revising process, followed by human feedback from both the teacher and peers during the later process. This study also reveals that the autonomous use of AWE as a surrogate writing coach with minimal human facilitation caused frustration to students and limited their learning of writing. In addition, teachers' attitudes toward AWE use and their technology-use skills, as well as students' learner characteristics and goals for learning to write, may also play vital roles in determining the effectiveness of AWE. Given the limitations inherent in the design of AWE technology, language teachers need to be critically aware that implementing AWE requires well-thought-out pedagogical designs and thorough consideration of its relevance to the objectives of the learning of writing
How to design for persistence and retention in MOOCs?
Design of educational interventions is typically carried out following a design cycle involving phases of investigation, conceptualization, prototyping, implementation, execution and evaluation. This cycle can be applied at different levels of granularity e.g. learning activity, module, course or programme.
In this paper we consider an aspect of learner behavior that can be critical to the success of many MOOCs, namely their persistence to study, and the related theme of learner retention. We reflect on the impact that consideration of these factors can have on design decisions at different stages in the design cycle, with the aim of enhancing MOOC design in relation to learner persistence and retention, with particular attention to the European context
Edu-ConvoKit: An Open-Source Library for Education Conversation Data
We introduce Edu-ConvoKit, an open-source library designed to handle pre-processing, annotation and analysis of conversation data in education. Resources for analyzing education conversation data are scarce, making the research challenging to perform and therefore hard to access. We address these challenges with Edu-ConvoKit. Edu-ConvoKit is open-source (https://github.com/stanfordnlp/edu-convokit), pip-installable (https://pypi.org/project/edu-convokit/), with comprehensive documentation (https://edu-convokit.readthedocs.io/en/latest/). Our demo video is available at: https://youtu.be/zdcI839vAko?si=h9qlnl76ucSuXb8- . We include additional resources, such as Colab applications of Edu-ConvoKit to three diverse education datasets and a repository of Edu-ConvoKit related papers, that can be found in our GitHub repository
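The kind of turn-level conversation analysis the library targets can be sketched in a few lines of plain pandas; this is an illustrative assumption about the workflow, not Edu-ConvoKit's actual API or required schema (install the library itself with `pip install edu-convokit` and consult its documentation for the real interface).

```python
# A minimal sketch: represent a classroom transcript as (speaker, text) turns,
# the shape of data that pre-processing/annotation/analysis pipelines operate on.
# Column names and the talk-share statistic here are illustrative, not the
# library's schema.
import pandas as pd

transcript = pd.DataFrame({
    "speaker": ["teacher", "student", "teacher"],
    "text": ["What do we know about fractions?",
             "They are parts of a whole.",
             "Great - can you give an example?"],
})

# A simple turn-level statistic of the kind such analysis produces:
# total word count contributed by each speaker.
talk_share = transcript.groupby("speaker")["text"].apply(
    lambda turns: turns.str.split().str.len().sum())
print(talk_share.to_dict())  # word counts per speaker
```

Real transcripts would first be anonymized and cleaned; the point of a dedicated library is that such pre-processing, annotation, and analysis steps come ready-made rather than being rebuilt per study.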
Towards Effective Integration and Positive Impact of Automated Writing Evaluation in L2 Writing
The increasing dominance of English has elevated the need to develop an ability to communicate effectively in writing, and this has put a strain on second language education programs worldwide. Faced with time-consuming and copious commenting on student drafts and inspired by the promise of computerized writing assessment, many educational technology enthusiasts are looking to AWE [automated writing evaluation] as a "silver bullet for language and literacy development" (Warschauer & Ware, 2006, p. 175). This chapter reviews what AWE offers for learners and teachers and raises a number of controversies regarding AWE effectiveness, with the underlying message that clear milestone targets need to be set for AWE development, implementation, and evaluation in order to ensure a positive impact of this technology on L2 writing. In support of this message, the chapter introduces an example: IADE, a prototype of context-based AWE conceptualized and operationalized to address latent issues through a synthesis of theoretical premises and learning needs. A multifaceted empirical evaluation of IADE further provides insights into the processes triggered by interaction with AWE technology and foregrounds a call for future research needed to inform effective application of AWE in L2 writing classrooms
Formative assessment feedback to enhance the writing performance of Iranian IELTS candidates: Blending teacher and automated writing evaluation
With the incremental integration of technology in writing assessment, technology-generated feedback has taken further steps toward replacing human corrective feedback and rating. Yet, further investigation is deemed necessary regarding its potential use either as a supplement to or a replacement for human feedback. This study aims to investigate the effect of blending teacher and automated writing evaluation, as formative assessment feedback, on enhancing the writing performance of Iranian IELTS candidates. In this explanatory mixed-methods research, three groups of Iranian intermediate learners (N=31) completed six IELTS writing tasks during six consecutive weeks and received automated, teacher, and blended (automated + teacher) feedback modes respectively on different components of writing (task response, coherence and cohesion, lexical resource, grammatical range and accuracy). A structured written interview was also conducted to explore learners' perceptions (attitude, clarity, preference) of the mode of feedback they received. Findings revealed that students who received teacher-only and blended feedback performed better in writing. Also, the blended feedback group outperformed the others in task response, the teacher feedback group in coherence and cohesion, and the automated feedback group in lexical resource. The analysis of the interviews revealed that the majority of the learners confirmed the clarity of all feedback modes and that learners' attitudes toward the feedback modes were positive, although they strongly preferred the blended one. The findings suggest new ideas to facilitate learning and assessing writing and support the evidence that teachers can provide comprehensive, accurate, and continuous feedback as a means of formative assessment
Qualitative and mixed methodology for online language teaching research
This paper provides an overview of CALL (Computer Assisted Language Learning), its history and current developments. It presents a rationale for moving CALL research forward, and outlines a particular approach to researching online language teaching and learning: the use of qualitative methodology. It is in this historical context that a case for more qualitative and integrative research designs is made. Examples of qualitative and mixed method studies are taken from the context of language teaching at the Open University in the United Kingdom, the largest institution of its kind in Europe, with a remit of teaching all subjects at university level to adults, regardless of their prior qualifications. With the help of these examples the scope and promise of qualitative approaches are discussed
Sustaining Knowledge Building as a Principle-Based Innovation at an Elementary School
This study explores Knowledge Building as a principle-based innovation at an elementary school and makes a case for a principle- versus procedure-based approach to educational innovation, supported by new knowledge media. Thirty-nine Knowledge Building initiatives, each focused on a curriculum theme and facilitated by nine teachers over eight years, were analyzed using measures of student discourse in a Knowledge Building environment--Knowledge Forum. Results were analyzed from the perspective of student, teacher, and principal engagement to identify conditions for Knowledge Building as a school-wide innovation. Analyses of student discourse showed interactive and complementary contributions to a community knowledge space, conceptual content of growing scope and depth, and collective responsibility for knowledge advancement. Analyses of teacher and principal engagement showed supportive conditions such as shared vision; trust in student competencies to the point of enabling transfer of agency for knowledge advancement to students; ever-deepening understanding of Knowledge Building principles; knowledge emergent through collective responsibility; a coherent systems perspective; teacher professional Knowledge Building communities; and leadership supportive of innovation at all levels. More substantial advances for students were related to years of teachers' experience in this progressive knowledge-advancing enterprise
Use of automated coding methods to assess motivational behaviour in education
Teachers' motivational behaviour is related to important student outcomes. Assessing teachers' motivational behaviour has been helpful in improving teaching quality and enhancing student outcomes. However, researchers in educational psychology have relied on self-report or observer ratings. These methods face limitations in accurately and reliably assessing teachers' motivational behaviour, thus restricting the pace and scale of research. One potential way to overcome these restrictions is automated coding methods, which are capable of analysing behaviour at a large scale in less time and at low cost. In this thesis, I conducted three studies to examine the application of automated coding methods to assess teacher motivational behaviours. First, I systematically reviewed the applications of automated coding methods used to analyse helping professionals' interpersonal interactions through their verbal behaviour. The findings showed that automated coding methods were used in psychotherapy to predict the codes of a well-developed behavioural coding measure, in medical settings to predict conversation patterns or topics, and in education to predict simple concepts, such as the number of open/closed questions or the class activity type (e.g., group work or teacher lecturing). In certain circumstances, these models achieved near-human-level performance. However, few studies adhered to best-practice machine learning guidelines. Second, I developed a dictionary of teachers' motivational phrases and used it to automatically assess teachers' motivating and de-motivating behaviours. Results showed that the dictionary ratings of teacher need support achieved a strong correlation with observer ratings of need support (r = .73 for the full dictionary). Third, I developed a classification of teachers' motivational behaviour that would enable more advanced automated coding of teacher behaviours at the utterance level.
In this study, I created a classification that includes 57 teacher motivating and de-motivating behaviours consistent with self-determination theory. Automatically assessing teachers' motivational behaviour with automated coding methods can provide accurate, fast, and large-scale analysis of teacher motivational behaviour. This could allow for immediate feedback as well as the development of theoretical frameworks. The findings in this thesis can contribute to the improvement of student motivation and other consequent student outcomes
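The dictionary-based approach described in the second study can be illustrated with a toy sketch: match utterances against lists of motivating and de-motivating phrases and aggregate the hits into a crude score. The phrases and scoring rule below are invented for illustration and are not the thesis's actual dictionary or 57-behaviour classification.

```python
# Toy dictionary-based coding of teacher utterances: +1 per motivating phrase
# match, -1 per de-motivating phrase match. Phrase lists are hypothetical.
import re

MOTIVATING = ["you can choose", "good thinking", "why do you think"]
DEMOTIVATING = ["you must", "hurry up", "that's wrong"]

def code_utterance(utterance: str) -> int:
    """Return a crude need-support score for one teacher utterance."""
    text = utterance.lower()
    score = 0
    for phrase in MOTIVATING:
        score += len(re.findall(re.escape(phrase), text))
    for phrase in DEMOTIVATING:
        score -= len(re.findall(re.escape(phrase), text))
    return score

print(code_utterance("You can choose any topic - why do you think that works?"))  # 2
print(code_utterance("Hurry up, you must finish now."))  # -2
```

A validated dictionary would contain far more phrases with weights, and its utterance scores would be aggregated per lesson before being correlated with observer ratings, as in the reported r = .73 result.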