
    Student engagement with teacher and automated feedback on L2 writing

    Research on feedback in second language writing has grown enormously in the past 20 years and has expanded to include studies comparing human raters and automated writing evaluation (AWE) programs. However, we know little about how students engage with these different sources of feedback or about their relative impact on writing over time. This naturalistic case study addresses this gap by examining how two Chinese students of English engage with both teacher and AWE feedback on their writing over a 16-week semester. Drawing on student texts, teacher feedback, AWE feedback, and student interviews, we identify the strengths and weaknesses of both types of feedback and show that engagement is a crucial mediating variable in the use students make of feedback and in its impact on their writing development. We argue that engagement is a key factor in the success of formative assessment in teaching contexts where multiple drafting is employed. Our results show that different sources of formative assessment have great potential to facilitate student involvement in writing tasks, and we highlight pedagogical implications for promoting student engagement with teacher and AWE feedback.

    Using language technologies to support individual formative feedback

    In modern educational environments for group learning, it is often challenging for tutors to provide timely individual formative feedback to learners. Taking the case of undergraduate Medicine, we have found that formative feedback is generally provided to learners on an ad-hoc basis, usually at the group rather than the individual level. Consequently, conceptual issues for individuals often remain undetected until summative assessment. In many subject domains, learners typically produce written materials to record their study activities. One way for tutors to diagnose conceptual development issues for an individual learner would be to analyse the contents of the learning materials they produce, but doing so manually would be a significant undertaking. CONSPECT is one of six core web-based services of the Language Technologies for Lifelong Learning (LTfLL) project. This European Union Framework 7-funded project seeks to use language technologies to provide semi-automated analysis of the large quantities of text generated by learners through the course of their learning. CONSPECT aims to provide formative feedback and monitoring of learners' conceptual development. It uses a Natural Language Processing method, based on Latent Semantic Analysis, to compare learner materials to reference models generated from reference or learning materials. This paper summarises the development of the service alongside results from the validation of Version 1.0.
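    The comparison step described above can be pictured with a minimal sketch: represent the learner text and a reference model as term vectors and score their overlap with cosine similarity. This is a simplified illustration with made-up example texts, and it omits the singular-value-decomposition step that full Latent Semantic Analysis applies before comparison.

    ```python
    from collections import Counter
    import math

    def cosine_similarity(text_a: str, text_b: str) -> float:
        """Cosine similarity between bag-of-words term vectors."""
        va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
        dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
        norm = math.sqrt(sum(c * c for c in va.values())) * \
               math.sqrt(sum(c * c for c in vb.values()))
        return dot / norm if norm else 0.0

    # Hypothetical reference model and learner text, for illustration only.
    reference = "the heart pumps blood through the circulatory system"
    learner = "blood is pumped by the heart around the body"
    score = cosine_similarity(reference, learner)  # higher = closer to the model
    ```

    A real LSA pipeline would project both vectors into a reduced latent space first, so that near-synonyms also contribute to the score rather than only exact word matches.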

    A report on the piloting of a novel computer-based medical case simulation for teaching and formative assessment of diagnostic laboratory testing

    Objectives: Insufficient attention has been given to how information from computer-based clinical case simulations is presented, collected, and scored. Research is needed on how best to design such simulations to acquire valid performance assessment data that can act as useful feedback for educational applications. This report describes a study of a new simulation format with design features aimed at improving both its formative assessment feedback and its educational function. Methods: Case simulation software (LabCAPS) was developed to target a highly focused and well-defined measurement goal with a response format that allowed objective scoring. Data from an eight-case computer-based performance assessment administered in a pilot study to 13 second-year medical students were analyzed using classical test theory and generalizability analysis. In addition, a similar analysis was conducted on an administration in a less controlled setting, but with a much larger sample (n=143), within a clinical course that utilized two random case subsets from a library of 18 cases. Results: Classical test theory case-level item analysis of the pilot assessment yielded an average case discrimination of 0.37, and all eight cases were positively discriminating (range=0.11–0.56). Coefficient alpha and the decision study showed the eight-case performance assessment to have an observed reliability of G=0.70. The decision study further demonstrated that G=0.80 could be attained with approximately 3 h and 15 min of testing. The less controlled educational application within a large medical class produced a somewhat lower reliability for eight cases (G=0.53). Students gave high ratings to the logic of the simulation interface, its educational value, and the fidelity of its tasks. Conclusions: LabCAPS software shows the potential to provide formative assessment of medical students' skill at diagnostic test ordering and to provide valid feedback to learners. The perceived fidelity of the performance tasks and the statistical reliability findings support the validity of using the automated scores for formative assessment and learning. LabCAPS cases appear well suited for use as a scored assignment, for stimulating discussion in small-group educational settings, for self-assessment, and for independent learning. Extension of the more highly controlled pilot assessment study with a larger sample will be needed to confirm its reliability in other assessment applications.
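    The test-lengthening projection in the decision study can be approximated with the classical Spearman-Brown prophecy formula: given the observed eight-case reliability of 0.70, it estimates how many cases a reliability of 0.80 would require. This is a sketch of the standard formula, not the authors' exact generalizability computation.

    ```python
    def spearman_brown_length(n_items: int, rel_observed: float, rel_target: float) -> float:
        """Number of items needed to reach rel_target, given rel_observed on n_items."""
        # Reliability of a single item, by inverting the Spearman-Brown formula.
        rel_one = rel_observed / (n_items - (n_items - 1) * rel_observed)
        # Lengthening factor (= item count) needed to reach the target reliability.
        return rel_target * (1 - rel_one) / (rel_one * (1 - rel_target))

    cases_needed = spearman_brown_length(8, 0.70, 0.80)  # about 13.7, so 14 cases
    ```

    At roughly 14 minutes of testing per case, 14 cases comes out close to the 3 h 15 min figure reported above.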

    Semi-automated assessment of programming languages for novice programmers

    There has recently been an increased emphasis on the importance of learning programming languages, not only in higher education but also in secondary schools. Students in a variety of disciplines, such as physics, mathematics, and engineering, have also started learning programming languages as part of their academic courses. Assessment of students' programming solutions is therefore important for developing their programming skills. Many Computer Based Assessment (CBA) systems utilise multiple-choice questions (MCQs) to evaluate students' performance. However, MCQs lack the ability to comprehensively assess students' knowledge, so other forms of programming solutions are required to assess it. This research aims to develop a semi-automated assessment framework for novice programmers, utilising a computer to support the marking process, with a particular focus on ensuring the consistency of feedback. A novel marking process model is developed based on the semi-automated assessment approach, supporting a new way of marking termed 'segmented marking'. A study is carried out to investigate and demonstrate the feasibility of the segmented marking technique, and based on its results two marking process models are presented: the full-marking and partial-marking process models. The Case-Based Reasoning (CBR) cycle is adopted in the marking process models in order to ensure the consistency of feedback. User interfaces of the prototype marking tools (full and partial) are designed and developed based on the marking process models and the user interface design requirements. The experimental results show that the full and partial marking techniques are feasible for use in formative assessment. Furthermore, the results highlight that the tools are capable of providing consistent and personalised feedback, and that they considerably reduce markers' workload.
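    One way to picture how a CBR cycle keeps segmented marking consistent: each marked segment and its feedback is retained in a case base, and when a later submission contains an identical (normalized) segment, the stored feedback is reused instead of being rewritten. All names below are hypothetical illustrations, not the thesis's actual implementation.

    ```python
    class SegmentMarker:
        """Reuses feedback for previously seen code segments (CBR-style retrieve/reuse/retain)."""

        def __init__(self):
            self.case_base = {}  # normalized segment -> (mark, feedback)

        @staticmethod
        def normalize(segment: str) -> str:
            # Collapse whitespace so trivially different layouts match the same case.
            return " ".join(segment.split())

        def mark(self, segment: str, marker):
            key = self.normalize(segment)
            if key in self.case_base:              # reuse: consistent feedback, no re-marking
                return self.case_base[key]
            mark, feedback = marker(segment)       # human marks a genuinely new case
            self.case_base[key] = (mark, feedback) # retain for future submissions
            return mark, feedback

    human = lambda seg: (2, "Correct loop bound")
    marker = SegmentMarker()
    first = marker.mark("for i in range(n): total += i", human)
    second = marker.mark("for i in range(n):   total += i", human)  # stored feedback reused
    ```

    Because the second submission differs only in whitespace, it retrieves the same case, so both students receive identical marks and comments.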

    Using LMS "Elsakti" as an Effective Assessment Medium for English EFL Learning

    This study was carried out to seek information on the roles of Elsakti as a medium of assessment in English EFL learning, focusing on the opportunities and challenges perceived by EFL learners. It employed a case study design, using interviews to obtain in-depth and detailed information from the participants: six EFL learners at Universitas Pancasakti Tegal, interviewed by telephone. The results reveal that Elsakti supports digitalization, giving students as well as lecturers greater opportunities to use digital tools to complete assignments and other work more effectively and efficiently. Its features also play essential roles as a medium of assessment in language teaching, facilitating a fun learning environment, practicality, automated scoring, and direct feedback. The findings also indicate that Elsakti has strong potential to be utilized to implement formative assessment more often, since it plays important roles in the EFL learning and teaching process, including formative assessment. Meanwhile, the challenges perceived by the participating learners were mainly technical problems, which the learners did not encounter often or at every meeting.

    Semi-automatic assessment of basic SQL statements

    Learning and assessing the Structured Query Language (SQL) is an important step in developing students' database skills. However, with increasing numbers of students learning SQL, assessing their work and providing detailed feedback can be time consuming and prone to errors. The main purpose of this research is to reduce or remove as many of the repetitive tasks in each phase of the assessment process of SQL statements as possible, so as to achieve consistency of marking and feedback on SQL answers. This research examines existing SQL assessment tools and their limitations by testing them on SQL questions. The results reveal that students must attain essential skills to be able to formulate basic SQL queries, because formulating SQL statements requires practice and effort. In addition, the standard steps adopted in many SQL assessment tools were found to be insufficient for successfully assessing our sample of exam scripts. The analysis of the outcomes identified several ways of solving the same query, and categories of errors based on common student mistakes in SQL statements. Based on this, the research proposes a semi-automated assessment approach to improve students' SQL formulation process and to ensure the consistency of SQL grading and of the feedback generated during the marking process. The semi-automatic marking method utilises both Case-Based Reasoning (CBR) and Rule-Based Reasoning (RBR) methodologies. The approach aims to reduce the workload of marking by reducing or removing as many of the repetitive tasks in each phase of the marking process as possible. It also targets improvement of the feedback that can be given to students. In addition, the research implemented a prototype of the SQL assessment framework which supports the semi-automated assessment approach.
    The prototype aims to enhance the SQL formulation process for students and to minimise the human effort required to assess and evaluate SQL statements, while providing timely, individual, and detailed feedback. The tool allows students to formulate SQL statements using a point-and-click approach in the SQL Formulation Editor (SQL-FE), and supports marking through the SQL Marking Editor (SQL-ME). To ensure the effectiveness of SQL-FE, the research conducted two studies: a pilot study comparing the tool with the paper-based manual method, and a full experiment comparing it with SQL Management Studio. The results provided reasonable evidence that using SQL-FE can benefit the formulation of SQL statements and improve students' SQL learning; students were able to formulate SQL queries on time, and their performance showed significant improvement. The research also carried out an experiment to examine the viability of the SQL Marking Editor by testing SQL partial marking, the grouping of identical SQL statements, and the marking process that results from applying generic marking rules. The experimental results demonstrated that the editor was able to provide consistent marking and individual feedback for all SQL parts. The main aim of the research has thus been fulfilled: the workload of lecturers has been reduced, and students' performance in formulating SQL statements has been improved.
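    The grouping of identical SQL statements mentioned above can be sketched as a simple normalize-and-bucket step: answers that differ only in case, whitespace, or a trailing semicolon collapse into one group, which a marker then marks once. The normalization rules and sample answers below are illustrative assumptions, far cruder than what a real SQL marking tool would need.

    ```python
    from collections import defaultdict
    import re

    def normalize_sql(sql: str) -> str:
        """Canonicalize case and whitespace so equivalent spellings group together."""
        sql = re.sub(r"\s+", " ", sql.strip().rstrip(";"))
        return sql.upper()

    def group_answers(answers: dict) -> dict:
        """Map each normalized statement to the list of students who submitted it."""
        groups = defaultdict(list)
        for student, sql in answers.items():
            groups[normalize_sql(sql)].append(student)
        return dict(groups)

    answers = {
        "s1": "select name from staff;",
        "s2": "SELECT name\nFROM staff",
        "s3": "select surname from staff;",
    }
    groups = group_answers(answers)  # s1 and s2 share a group; s3 is marked separately
    ```

    Marking one representative per group and propagating the mark and feedback to the rest is what keeps grading consistent while cutting repetitive work.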

    A proposal for enhancing the motivation in students of computer programming

    Computer programming is known to be one of the most difficult courses for first-year engineering students, who face the challenge of abstract thinking and of gaining programming skills for the first time. These skills are acquired through continuous practice from the start of the course. In order to enhance the motivation and dynamism of the learning and assessment processes, we have proposed the use of three educational resources: screencasts, self-assessment questionnaires, and automated grading of assignments. These resources have been made available in Moodle, a Learning Management System widely used in educational environments and adopted by the Telecommunications Engineering School at the Universidad Politécnica de Madrid (UPM). Both teachers and students can enhance the learning and assessment processes through new educational activities such as self-assessment questionnaires and automated grading of assignments, while multimedia resources such as screencasts can guide students through complex topics. The proposed resources allow teachers to improve their tutorial action, since they provide immediate feedback and comments to students without the enormous effort of manual correction and evaluation, especially given the large number of students enrolled in the course. In this paper we present a case study in which the three proposed educational resources were applied. We describe the special features of the course and explain why the use of these resources can both enhance students' motivation and improve the teaching and learning processes. Our research was carried out on students attending the "Computer Programming" course offered in the first year of a Telecommunications Engineering degree at UPM; this mandatory course has more than 450 enrolled students.
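    The automated-grading idea can be sketched as a tiny test-case runner that checks a student's function against expected outputs and returns a score with immediate per-case feedback. The function names and test data are hypothetical illustrations, not the Moodle grader the course actually used.

    ```python
    def grade(func, test_cases):
        """Run func on each (args, expected) pair; return a score and per-case feedback."""
        passed, feedback = 0, []
        for args, expected in test_cases:
            try:
                got = func(*args)
            except Exception as exc:                 # a crash still produces feedback
                feedback.append(f"{args}: raised {exc!r}")
                continue
            if got == expected:
                passed += 1
                feedback.append(f"{args}: correct")
            else:
                feedback.append(f"{args}: expected {expected}, got {got}")
        return passed / len(test_cases), feedback

    # A student's attempt at integer division rounding toward zero (buggy for negatives).
    student_solution = lambda a, b: a // b
    score, comments = grade(student_solution, [((7, 2), 3), ((-7, 2), -3)])
    ```

    Returning the failing inputs alongside expected and actual values is what turns a pass/fail grade into immediate formative feedback.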

    Marking complex assignments using peer assessment with an electronic voting system and an automated feedback tool

    The work described in this paper relates to the development and use of a range of initiatives for marking complex master's-level assignments on the development of computer web applications. In the past, such assignments have proven difficult to mark, since they assess a range of skills including programming, human-computer interaction, and design. Based on the experience of several years marking such assignments, the module delivery team adopted an approach whereby the students marked each other's practical work using an electronic voting system (EVS). The results are presented in the paper along with a statistical comparison with the tutors' marking, providing evidence for the efficacy of the approach. The second part of the assignment, covering theory and documentation, was marked by the tutors using an automated feedback tool. It was found that the time taken to mark the work was reduced by more than 30% in all cases compared to previous years. More importantly, it was possible to provide good-quality individual feedback to learners rapidly: feedback was delivered to all students within three weeks of the submission date.
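    A statistical comparison between peer (EVS) marks and tutor marks of the kind described above is often done with a Pearson correlation; a high coefficient indicates the peer marks track the tutors'. The mark vectors below are invented for the sketch and are not the paper's data.

    ```python
    import math

    def pearson(xs, ys):
        """Pearson correlation coefficient between two equal-length mark lists."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    peer_marks  = [62, 70, 55, 81, 68, 74]   # illustrative EVS peer averages
    tutor_marks = [60, 72, 58, 79, 65, 76]   # illustrative tutor marks
    r = pearson(peer_marks, tutor_marks)     # high r = peer marks agree with tutors'
    ```

    Correlation alone does not catch a systematic offset (peers marking uniformly higher or lower), so a comparison of means or a paired test is usually reported alongside it.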
