
    Using teacher inquiry to support technology-enhanced formative assessment: a review of the literature to inform a new method

    In this paper we review the literature on teacher inquiry (TI) to explore the possibility that this process can equip teachers to investigate students’ learning as a step towards the process of formative assessment. We draw a distinction between formative assessment and summative forms of assessment [CRELL. (2009). The transition to computer-based assessment: New approaches to skills assessment and implications for large-scale testing. In F. Scheuermann & J. Björnsson (Eds.), JRC Scientific and technical reports. Ispra: Author; Webb, M. (2010). Beginning teacher education and collaborative formative e-assessment. Assessment & Evaluation in Higher Education, 35, 597–618; EACEA. (2009). National testing of pupils in Europe: Objectives, organisation and use of results. Brussels: Eurydice; OECD. (2010b). Assessing the effects of ICT in education (F. Scheuermann & E. Pedró, Eds.). Paris: JRC, OECD]. Our review of TI is combined with a review of the research concerning the way that practices with technology can support the assessment process. We conclude with a comparison of TI and teacher design research, from which we extract the characteristics for a method of TI that can be used to develop technology-enhanced formative assessment: teacher inquiry into student learning. In this review, our primary focus is upon enabling teachers to use technology effectively to inquire about their students’ learning progress.

    Designing and using adaptive tests for large scale formative assessment: 1999 to 2008

    This presentation is concerned with the development and trialling of sets of adaptive computer-based and (non-adaptive) paper-based tools used primarily to assess the literacy, language and numeracy skills of adult learners in both academic and work settings in the UK. In 2002, the Skills for Life Unit of the Department for Education and Skills (DfES) commissioned AlphaPlus Consultancy Ltd and BTL Group Ltd to develop suites of assessment tools to be used to support learners and teachers embarked on its Skills for Life 'Learning Journey'. The DfES was particularly concerned to have in place a collection of reliable and valid paper-based and computer-based ‘tools’ for assessing learners during the early stages of their journey towards improved literacy, language and numeracy skills. After an intensive period of development and trialling, a collection of fifteen computer-based assessment tools and twenty-five paper-based assessment tools and associated guidance materials was delivered to the DfES Skills for Life Unit in 2006. These tools have been used very extensively in hundreds of centres with thousands of learners for diagnostic and formative assessment purposes and are one of several outputs from an ongoing development of computer-based adaptive formative assessment tools. As this report will demonstrate, trials confirmed that the tools are both powerful and engaging for teachers and learners alike. Although the expectation was that most centres would use the computer versions of the tools, paper-based versions were also commissioned for use in situations where computers were unavailable or assessors considered them inappropriate.
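
    A note on the technique: the abstract does not describe the item-selection algorithm behind these adaptive tools, so the sketch below illustrates just one common adaptive pattern, a difficulty "staircase" that rises after a correct answer and falls after an incorrect one. All names, the five-level scale, and the ask callback are hypothetical, not taken from the AlphaPlus/BTL tools.

```python
# Minimal, hypothetical sketch of staircase-style adaptive item selection.
# Not the Skills for Life tools' actual algorithm (the abstract does not
# specify one).
from dataclasses import dataclass

@dataclass
class Item:
    prompt: str
    difficulty: int  # 1 (easiest) .. 5 (hardest), assumed scale

def run_adaptive_test(items_by_level, ask, n_items=10):
    """Administer up to n_items, adapting difficulty to the responses."""
    level = 3                                   # start mid-scale
    pools = {lvl: list(p) for lvl, p in items_by_level.items()}
    results = []
    for _ in range(n_items):
        pool = pools.get(level, [])
        if not pool:
            break                               # no items left at this level
        item = pool.pop(0)
        correct = ask(item)                     # render the item, score the answer
        results.append((item, correct))
        level = min(5, level + 1) if correct else max(1, level - 1)
    return results

# Demo: a learner who answers everything correctly climbs to level 5.
bank = {lvl: [Item(f"Q{lvl}.{i}", lvl) for i in range(3)] for lvl in range(1, 6)}
print([item.prompt for item, _ in run_adaptive_test(bank, ask=lambda item: True, n_items=5)])
```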

    Learning support for mature, part-time, evening students: providing feedback via frequent, computer-based assessments

    A new module in our first year Biology curriculum was used as a vehicle to test strategies for improving learning support. To this end, we have administered frequent CBA, incorporating extensive feedback, both to pace the students’ study efforts and to pinpoint areas in which additional help from lecturers may be required. Three of the 7 CBA provided through the 15-week course were initially given as open-book summative tests, thus contributing to the overall mark for the module. Other CBA were formative: these included repeats of the summative CBA made available for revision purposes, as well as other CBA which focused mainly on aspects of the course that were summatively assessed by other means. A closed-book final exam, also computer-based, was given in the final week as a comprehensive assessment. We have evaluated the utility and effectiveness of our approach by surveying student opinion via questionnaires, examining patterns and extent of student use of formative assessments, and by analysing grades for the summative CBA. We have found the students’ perceptions of the approach to be largely positive and that the formative CBA were well-used, especially as revision aids for the final exam. Our analysis further indicates that the style of the assessments may have been especially helpful to students whose first language is not English.

    Formative assessment strategies for students' conceptions—The potential of learning analytics

    Formative assessment is considered to be helpful for supporting students' learning and for teaching design. Following Aufschnaiter and Alonzo's framework, teachers' formative assessment practices can be subdivided into three practices: eliciting evidence, interpreting evidence, and responding. Since students' conceptions are judged to be important for meaningful learning across disciplines, teachers are required to assess their students' conceptions. The focus of this article is the discussion of learning analytics for supporting the assessment of students' conceptions in class. The existing and potential contributions of learning analytics are discussed in relation to this formative assessment framework, in order to enhance teachers' options for considering individual students' conceptions. We refer to findings from biology and computer science education on existing assessment tools and identify limitations and potentials with respect to the assessment of students' conceptions.

    Practitioner notes

    What is already known about this topic
    - Students' conceptions are considered to be important for learning processes, but interpreting evidence for learning with respect to students' conceptions is challenging for teachers.
    - Assessment tools have been developed in different educational domains for teaching practice.
    - Techniques from artificial intelligence and machine learning have been applied for automated assessment of specific aspects of learning.

    What does the paper add
    - Findings on existing assessment tools from two educational domains are summarised, and limitations with respect to the assessment of students' conceptions are identified.
    - Relevant data that needs to be analysed for insights into students' conceptions is identified from an educational perspective.
    - Potential contributions of learning analytics to supporting the challenging task of eliciting students' conceptions are discussed.

    Implications for practice and/or policy
    - Learning analytics can enhance the eliciting of students' conceptions.
    - Based on the analysis of existing works, further exploration and development of analysis techniques for unstructured text and multimodal data are desirable to support the eliciting of students' conceptions.
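
    As a concrete illustration of the "eliciting evidence" step discussed above, the sketch below uses a standard scikit-learn text-classification pipeline to sort short free-text answers into known conception categories. The conception labels and example answers are hypothetical toy data, not drawn from the article; a real deployment would need a substantial labelled corpus and validation against expert coding.

```python
# Hypothetical sketch: classify free-text answers into known conception
# categories so a teacher can see which conceptions are present in class.
# Toy data; illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_answers = [
    "plants get their food from the soil",
    "plants make their own food using sunlight",
    "the soil feeds the plant through its roots",
    "photosynthesis turns light, water and CO2 into glucose",
]
train_labels = [
    "misconception: soil-as-food",
    "scientific: photosynthesis",
    "misconception: soil-as-food",
    "scientific: photosynthesis",
]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_answers, train_labels)

# Eliciting evidence: flag which conception a new answer most resembles.
new_answer = "plants eat nutrients in the dirt"
print(new_answer, "->", model.predict([new_answer])[0])
```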

    A Systematic Review of Formative Assessment to Support Students Learning Computer Programming

    Formative assessment aims to improve student understanding, instruction, and learning by providing feedback on students' progress. The goal of this systematic review is to identify trends in formative assessment techniques used to support computer programming learners by synthesizing literature published between 2013 and 2023. From an initial search of 197 studies, 17 peer-reviewed journal articles were examined. According to the findings, nearly all of the studies were conducted at the higher education level, with only a small number at the secondary school level. Overall, most studies found that motivation, scaffolding, and engagement were the three main goals of feedback, with less research finding that metacognitive goals were the intended outcomes. The two most frequently used techniques for facilitating formative feedback were compiler- or testing-based error messages and customised error messages. The reviewed articles highlight the importance of formative feedback, supporting the contention that assessments used in programming courses should place a heavy emphasis on motivating students to increase their level of proficiency. This study also suggests a formative assessment that employs an adaptive strategy to evaluate the ability level of novice students and motivate them to learn programming and acquire the necessary knowledge.
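
    To make the "customised error message" technique concrete, here is a minimal sketch in Python: run a submission, catch the raw interpreter error, and reword it as novice-friendly formative feedback. The rewording table is a hypothetical example, not taken from the reviewed studies.

```python
# Hypothetical sketch of customised error messages: translate raw runtime
# errors from a student's submission into formative hints.
FRIENDLY = {
    NameError: "You used a name that has not been defined yet. Check the "
               "spelling, and make sure you assign a variable before using it.",
    ZeroDivisionError: "Your program divided by zero. Could the denominator "
                       "ever be 0 on this path?",
    IndexError: "You indexed past the end of a sequence. The last valid "
                "index of a list is len(lst) - 1.",
}

def run_with_feedback(student_source: str) -> None:
    """Execute the submission and reword any runtime error as a hint."""
    try:
        exec(student_source, {})               # run in a fresh namespace
    except Exception as exc:
        hint = FRIENDLY.get(type(exc),
                            "An error occurred; read the message carefully "
                            "and ask for help if it is unclear.")
        print(f"{type(exc).__name__}: {exc}")
        print(f"Hint: {hint}")

run_with_feedback("print(totl)")   # NameError -> spelling/assignment hint
```

    Compiler- or testing-based feedback works the same way one level up: a test harness runs the code, and the failing check, rather than the raw traceback, is what gets translated for the learner.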

    Integrating CAA within the University of Ulster

    Marking coursework is a time-consuming activity, further exacerbated by the need for regular submission and timely, informative feedback. Increasing student numbers (particularly within computing and management), coupled with a decline in resources, mean that staff are unable to give formative feedback on student learning to the extent they may wish. Understandably, there is concern that as the quantity of marking increases, there is a corresponding deterioration in the quality of assessment. This has led staff at the University of Ulster to investigate new ways to assess students. Computer Assisted Assessment (CAA) offers the opportunity to assess students more regularly without increasing staff workload. This paper outlines how WebCT has been used to support computer assisted assessment throughout the Faculty of Informatics and the Faculty of Business and Management at the University of Ulster.

    A report on the piloting of a novel computer-based medical case simulation for teaching and formative assessment of diagnostic laboratory testing

    Objectives: Insufficient attention has been given to how information from computer-based clinical case simulations is presented, collected, and scored. Research is needed on how best to design such simulations to acquire valid performance assessment data that can act as useful feedback for educational applications. This report describes a study of a new simulation format with design features aimed at improving both its formative assessment feedback and educational function. Methods: Case simulation software (LabCAPS) was developed to target a highly focused and well-defined measurement goal with a response format that allowed objective scoring. Data from an eight-case computer-based performance assessment administered in a pilot study to 13 second-year medical students were analyzed using classical test theory and generalizability analysis. In addition, a similar analysis was conducted on an administration in a less controlled setting, but to a much larger sample (n=143), within a clinical course that utilized two random case subsets from a library of 18 cases. Results: Classical test theory case-level item analysis of the pilot assessment yielded an average case discrimination of 0.37, and all eight cases were positively discriminating (range=0.11–0.56). Classical test theory coefficient alpha and the decision study showed the eight-case performance assessment to have an observed reliability of α=G=0.70. The decision study further demonstrated that a G=0.80 could be attained with approximately 3 h and 15 min of testing. The less-controlled educational application within a large medical class produced a somewhat lower reliability for eight cases (G=0.53). Students gave high ratings to the logic of the simulation interface, its educational value, and to the fidelity of the tasks. Conclusions: LabCAPS software shows the potential to provide formative assessment of medical students’ skill at diagnostic test ordering and to provide valid feedback to learners. The perceived fidelity of the performance tasks and the statistical reliability findings support the validity of using the automated scores for formative assessment and learning. LabCAPS cases appear well designed for use as a scored assignment, for stimulating discussions in small group educational settings, for self-assessment, and for independent learning. Extension of the more highly controlled pilot assessment study with a larger sample will be needed to confirm its reliability in other assessment applications.
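
    The decision-study projection quoted above (G=0.70 for eight cases; G=0.80 within roughly 3 h and 15 min) can be sanity-checked with standard test-lengthening arithmetic. A worked sketch, assuming the Spearman-Brown form that D-studies conventionally use (the report's own formula is not shown in the abstract):

```latex
% Assumed Spearman-Brown lengthening form for the D-study (not printed in
% the abstract). From G(8) = 0.70 for n = 8 cases, the per-case
% coefficient is:
\[
  G(n) = \frac{n\,G(1)}{1 + (n-1)\,G(1)}, \qquad
  G(1) = \frac{G(8)}{8 - 7\,G(8)} = \frac{0.70}{3.1} \approx 0.226.
\]
% Solving G(n) = 0.80 for n gives
\[
  n = \frac{0.80\,(1 - G(1))}{0.20\,G(1)} \approx 13.7,
\]
% i.e. about 14 cases, which at roughly 14 minutes per case is consistent
% with the quoted 3 h and 15 min of testing.
```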

    Introducing and Using Electronic Voting Systems in a Large Scale Project With Undergraduate Students : Reflecting on the Challenges and Successes

    Electronic Voting Systems (EVS) have become a popular medium for encouraging student engagement in class-based activities and for managing swift feedback in formative and summative assessments. Since their introduction and early popularity some five or more years ago, the author’s UK-based university has been successful in refining strategies for their use across individual academic Schools and Departments, as previously reported at ECEL (e.g. Lorimer and Hilliard, 2008). The focus of this paper is a reflection on the introduction of EVS with 300 first-year undergraduate students in the School of Computer Science, within the context of a wider ‘change’ project in teaching and learning affecting the whole institution. The author examines what lessons can be learnt from this rapid scaling up of EVS activity, both at a local level and more widely across an HE institution, and, in reflecting on the successes and challenges of this experience, provides key indicators for success and useful support for others considering using EVS. The paper first considers the landscape of EVS use within the UK and then the specific introduction of EVS at the author’s own institution, before exploring the issues in her own academic School around the latest phase of their introduction as part of an institution-wide project to review measures to support assessment and feedback.