Making Assessment Meaningful
Our investigation highlighted practical implications of, and barriers to, implementing assessment for formative purposes. With larger student numbers it is becoming harder for academics to find the time to engage in formative assessment, so the growth in class sizes comes at a cost to the learning experience in terms of formative feedback. Thus, whilst our respondents showed a commitment to using assessment for formative purposes, practical constraints may prevent this from actually happening.
As an institution we also need to look at assessment timing. If assessment is to be formative, it needs to happen at a time when students can act on feedback constructively. We also need to create activities that allow students to engage with the feedback: simply handing students a page of written comments will not encourage all of them to act and learn. Building discussion into teaching time after an assessment, so that students can talk about the feedback they received, will encourage them to read and reflect on it.
It is clear that assessment for formative purposes is at the heart of most lecturers’ practice within London Metropolitan University, but now we need to place it firmly at the heart of the student experience, in a meaningful and real way.
Towards a personal best: a case for introducing ipsative assessment in higher education
The central role that assessment plays is recognised in higher education, in particular how formative feedback guides learning. A model for effective feedback practice is used to argue that, in current schemes, formative feedback is often not usable because it is strongly linked to external criteria and standards, rather than to the processes of learning. By contrast, ipsative feedback, which is based on a comparison with the learner's previous performance and linked to long-term progress, is likely to be usable and may have additional motivational effects. After recommending a move towards ipsative formative assessment, a further step would be ipsative grading. However, such a radical shift towards a fully ipsative regime might pose new problems and these are discussed. The article explores a compromise of a combined assessment regime. The rewards for learners are potentially high, and the article concludes that ipsative assessment is well worth further investigation. © 2011 Society for Research into Higher Education
Experiences of using student workbooks for formative and summative assessment
In response to poor student attainment rates, the teaching, learning and assessment strategy of a Level 1 circuit theory unit has been revised to emphasise the importance of regular attendance at teaching sessions, and also to provide regular formative feedback. As part of the assessment scheme a tutorial workbook has been used for both formative and summative assessment. The workbook is assessed regularly during scheduled teaching sessions. The use of objective questions has reduced the time taken to assess the work, while the regular assessments help with student motivation, provide formative feedback, and help students to structure and pace their learning.
Stability and sensitivity of Learning Analytics based prediction models
Learning analytics seeks to enhance learning processes through systematic measurement of learning-related data and to provide informative feedback to learners and educators. Track data from Learning Management Systems (LMSs) constitute a main data source for learning analytics. This empirical contribution provides an application of Buckingham Shum and Deakin Crick’s theoretical framework of dispositional learning analytics: an infrastructure that combines learning dispositions data with data extracted from computer-assisted, formative assessments and LMSs. In two cohorts of a large introductory quantitative methods module, 2049 students were enrolled in a module based on principles of blended learning, combining face-to-face Problem-Based Learning sessions with e-tutorials. We investigated the predictive power of learning dispositions, outcomes of continuous formative assessments and other system-generated data in modelling student performance, and their potential to generate informative feedback. Using a dynamic, longitudinal perspective, computer-assisted formative assessments appear to be the best predictor of academic performance and of detecting underperforming students, while basic LMS data did not substantially predict learning. If timely feedback is crucial, both use-intensity-related track data from e-tutorial systems and learning dispositions are valuable sources for feedback generation.
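The contrast the abstract draws between formative assessment scores and raw LMS activity as predictors can be illustrated with a toy correlation check. All data below are invented for illustration and do not come from the study:

```python
# Hypothetical sketch: comparing how strongly two LMS-style signals
# correlate with a final grade, in the spirit of the study above.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

final_grade  = [52, 68, 74, 45, 81, 63, 90, 58]
quiz_average = [50, 70, 72, 48, 85, 60, 92, 55]  # formative e-assessment scores
lms_logins   = [30, 28, 25, 22, 27, 33, 24, 31]  # raw platform activity

r_quiz = pearson(quiz_average, final_grade)
r_click = pearson(lms_logins, final_grade)
print(f"quizzes vs grade: r = {r_quiz:.2f}")
print(f"logins  vs grade: r = {r_click:.2f}")
```

In this fabricated data the formative quiz scores track the final grade closely while login counts barely do, mirroring the paper's finding that assessment outcomes outpredict basic LMS data; a real analysis would of course use regression models over full cohort data.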
Feeding back to feed forward: formative assessment as a platform for effective learning
Students construct meaning through relevant learning activities (Biggs, 2003), which are largely determined by the type, amount, and timing of feedback (Carless, 2006). The aim of the present study was to develop a greater awareness and understanding of formative assessment and feedback practices and their relationship with learning. During 2011, five focus group discussions were undertaken with students and academic staff involved with a range of modules and degree pathways at a UK university. Three of the focus groups were with undergraduate students (one at each level of study), and one was with taught postgraduate students. Discussions focussed on the integration of formative assessment and feedback into modules, as well as an exploration of the effectiveness of feedback on future learning. The findings revealed that in order to emphasise continuous learning – feeding back to feed forward (Rushton, 2005) – and to encourage self-regulated learning (Nicol & Macfarlane-Dick, 2006), students need opportunities to make mistakes and to learn from them prior to summative assessment (through formative assessment and feedback). There was also firm evidence of different approaches to learning, emphasising in particular the transitional importance of the first year of study as the foundation upon which future achievement is built.
What Motivates Students to Provide Feedback to Teachers About Teaching and Learning? An Expectancy Theory Perspective
The purpose of this empirical research study was to investigate what motivates students to provide formative anonymous feedback to teachers regarding their perceptions of the teaching and learning experience in order to improve student learning. Expectancy theory, specifically Vroom’s Model, was used as the conceptual framework for the study. Multiple regression analysis was employed to test both the valence and force equations. Statistically significant results indicated that students’ motivation was dependent upon the importance to them of improving the value of the class and of future classes, and the expectation that their formative feedback would lead to increased value for them, their peers in the classroom and for students in future classes. Based on these findings, it is important for teachers who request students to participate in providing anonymous feedback to emphasize that this feedback is a valuable tool to assist in improving current and future teaching and learning experiences.
Reliability in the assessment of program quality by teaching assistants during code reviews
It is of paramount importance that formative feedback is meaningful in order to drive student learning. Achieving this, however, relies upon a clear and constructively aligned model of quality being applied consistently across submissions. This poster presentation raises concerns about the inter-rater reliability of code reviews conducted by teaching assistants in the absence of such a model. Five teaching assistants each reviewed 12 purposely selected programs submitted by introductory programming students. An analysis of their reliability revealed that while teaching assistants were self-consistent, they each assessed code quality in different ways. This suggests a need for standard models of program quality and rubrics, alongside supporting technology, to be used during code reviews to improve the reliability of formative feedback.
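One common way to quantify the inter-rater reliability the poster examines is a chance-corrected agreement statistic. The sketch below computes Cohen's kappa for two hypothetical raters; the labels and verdicts are invented, and a five-rater analysis like the poster's would call for a multi-rater statistic such as Fleiss' kappa instead:

```python
# Hypothetical sketch: Cohen's kappa for two TAs who each labelled the
# same 12 programs as "pass" or "revise" during a code review.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    # Probability the raters agree purely by chance, from label frequencies.
    expected = sum(freq_a[lab] * freq_b[lab] for lab in labels) / n ** 2
    return (observed - expected) / (1 - expected)

ta1 = ["pass", "pass", "revise", "pass", "revise", "revise",
       "pass", "pass", "revise", "pass", "pass", "revise"]
ta2 = ["pass", "revise", "revise", "pass", "revise", "pass",
       "pass", "pass", "revise", "revise", "pass", "revise"]
print(f"kappa = {cohens_kappa(ta1, ta2):.2f}")  # prints "kappa = 0.50"
```

A kappa well below 1.0, as here, is the kind of signal that would motivate the shared rubrics and quality models the poster recommends.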
TOWARDS FORMATIVE E-ASSESSMENT IN PROJECT MANAGEMENT THROUGH PERSONALIZED AUTOMATED FEEDBACK
Formative e-assessment is a complex process in which learners can build their knowledge, fill their knowledge gaps, or increase their learning abilities. The feedback mechanism is considered highly important for the formative dimension of e-assessment. This paper proposes a model for automated feedback in a project management e-assessment environment: the model blends a built-in feedback sheet (a document containing the correct answers) with a recommender engine, which searches the web for references related to the incorrectly answered questions. The feedback model is personalized, because the web search takes the user profile into account: the list of concepts which were not correctly understood. This list of concepts is mapped onto a project management domain ontology.
Keywords: e-assessment, project management, automated feedback, ontology, knowledge system
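The blended model the abstract describes can be sketched in miniature: wrong answers are checked against a feedback sheet, mapped to domain concepts, and each missed concept seeds a search query for further reading. Every name, question, and concept below is illustrative, not taken from the paper:

```python
# Hypothetical sketch of the blended automated-feedback model: a built-in
# feedback sheet plus concept-based search-query generation.

FEEDBACK_SHEET = {       # built-in correct answers (illustrative)
    "q1": "critical path",
    "q2": "b",
}
QUESTION_CONCEPTS = {    # mapping of questions onto domain concepts
    "q1": "critical path method",
    "q2": "earned value analysis",
}

def build_feedback(answers):
    """Return corrections plus web-search queries for missed concepts."""
    missed = [q for q, a in answers.items() if FEEDBACK_SHEET.get(q) != a]
    return {
        "corrections": {q: FEEDBACK_SHEET[q] for q in missed},
        "search_queries": [
            f"project management {QUESTION_CONCEPTS[q]} tutorial"
            for q in missed
        ],
    }

fb = build_feedback({"q1": "critical path", "q2": "c"})
print(fb["search_queries"])
```

In the paper's model the concept list comes from a project management ontology and the queries feed a recommender engine; the dictionaries above stand in for both.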
Using language technologies to support individual formative feedback
In modern educational environments for group learning, it is often challenging for tutors to provide timely individual formative feedback to learners. Taking the case of undergraduate Medicine, we have found that formative feedback is generally provided to learners on an ad-hoc basis, usually at the group rather than the individual level. Consequently, conceptual issues for individuals often remain undetected until summative assessment. In many subject domains, learners will typically produce written materials to record their study activities. One way for tutors to diagnose conceptual development issues for an individual learner would be to analyse the contents of the learning materials they produce; done manually, however, this would be a significant undertaking.
CONSPECT is one of six core web-based services of the Language Technologies for Lifelong Learning (LTfLL) project. This European Union Framework 7-funded project seeks to make use of Language Technologies to provide semi-automated analysis of the large quantities of text generated by learners through the course of their learning. CONSPECT aims to provide formative feedback and monitoring of learners’ conceptual development. It uses a Natural Language Processing method, based on Latent Semantic Analysis, to compare learner materials to reference models generated from reference or learning materials.
This paper provides a summary of the development of the service, alongside results from the validation of Version 1.0.
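The comparison step CONSPECT automates can be illustrated in simplified form. Real LSA first reduces a term-document matrix with singular value decomposition before comparing documents; the toy below skips that reduction and stops at plain bag-of-words cosine similarity, which is the final scoring step of such a pipeline. The texts are invented examples:

```python
# Simplified, hypothetical sketch of scoring a learner's text against
# reference material (bag-of-words cosine; real LSA adds SVD reduction).
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity of two texts as raw term-count vectors."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

reference = "insulin lowers blood glucose by promoting uptake into cells"
learner = "insulin raises blood glucose levels in cells"  # conceptual error

score = cosine_similarity(reference, learner)
print(f"similarity = {score:.2f}")  # texts below a threshold could be flagged
```

A service like CONSPECT would compare such scores against reference models built from learning materials and surface low-similarity (or divergent) learner texts to tutors for individual follow-up.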
