International Journal of e-Assessment (IJEA)
    64 research outputs found

    Should and could undergraduate mathematics coursework be automated in the UK?

    It is over ten years since the first large-scale computer algebra assessment packages were deployed for undergraduate mathematics coursework at UK universities, via projects such as CALM and HELM at Heriot-Watt and Loughborough Universities. Yet for the average mathematics lecturer, a non-expert programmer who may nonetheless be interested in introducing such methods into their teaching, questions remain over whether these packages are effective at promoting learning at this level, and whether the skills required to implement, organise and deliver such assessments are worth the time outlay and the investment in training academic, postgraduate and support staff. As higher education in the UK faces new market pressures, we present an overview of the functionality of these computer algebra assessment methods for undergraduate mathematics in the context of the QAA and HEFCE guidelines for undergraduate mathematics assessment design, present survey and grade data from a cohort of undergraduate mathematics students assessed in this way at Leeds University, and ask whether the simplest route forward for the UK university sector is simply to commercialise such teaching innovations.
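
    As an illustration of the core functionality such packages provide, a minimal sketch (not the CALM or HELM implementation; the expressions are invented for the example) of how a computer algebra system can mark a response by testing algebraic equivalence against a model answer, here using SymPy:

        import sympy as sp

        def mark_answer(student_str, model_str):
            """Return True if the student's expression is algebraically
            equivalent to the model answer (difference simplifies to zero)."""
            try:
                student = sp.sympify(student_str)
                model = sp.sympify(model_str)
            except sp.SympifyError:
                return False  # unparseable input is marked incorrect
            return sp.simplify(student - model) == 0

        # Two equivalent forms of the same polynomial are accepted;
        # a wrong coefficient is rejected.
        print(mark_answer("2*x*(x + 1) + x**2", "3*x**2 + 2*x"))  # True
        print(mark_answer("3*x**2 + x", "3*x**2 + 2*x"))          # False

    Checking that the difference simplifies to zero, rather than comparing strings, is what lets such systems accept any correct form of an answer.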

    Afterword


    Marking complex assignments using peer assessment with an electronic voting system and an automated feedback tool

    This paper describes the development and use of a range of initiatives for marking complex master's-level assignments on the development of web applications. Such assignments have proven difficult to mark in the past because they assess a range of skills, including programming, human-computer interaction and design. Drawing on several years' experience of marking such assignments, the module delivery team adopted an approach in which students marked each other's practical work using an electronic voting system (EVS). The results are presented in the paper alongside a statistical comparison with the tutors' marking, providing evidence for the efficacy of the approach. The second part of the assignment, covering theory and documentation, was marked by the tutors using an automated feedback tool. The time taken to mark the work was reduced by more than 30% in all cases compared with previous years. More importantly, it was possible to provide good-quality individual feedback to learners rapidly: feedback was delivered to all students within three weeks of the test submission date.
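
    As a sketch of one plausible form such a statistical comparison could take (the marks and the choice of Pearson's r here are invented for illustration, not data from the paper), peer EVS marks can be correlated with tutor marks for the same submissions:

        from statistics import correlation, mean  # correlation needs Python 3.10+

        peer_marks  = [62, 71, 55, 80, 68, 74, 59, 66]   # invented EVS peer marks
        tutor_marks = [60, 73, 52, 78, 70, 75, 61, 64]   # invented tutor marks

        r = correlation(peer_marks, tutor_marks)          # Pearson's r
        diff = mean(p - t for p, t in zip(peer_marks, tutor_marks))
        print(f"Pearson r = {r:.2f}, mean peer-tutor difference = {diff:+.1f}")

    A high correlation with a near-zero mean difference would support the claim that peer marking agrees with tutor marking.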

    Extended essay marking: Does the transition from paper to screen influence examiners’ cognitive workload?

    In the UK and elsewhere, awarding bodies increasingly require examiners to mark examination scripts on screen rather than on paper. Research into the consequences of this transition has shown that examiners can mark essays as accurately on screen as on paper. However, it has also raised important questions about how the mode of marking might influence examiners' marking processes, particularly for extended essay responses. In response to these questions, this study explored whether the mode in which extended essays are marked influences the cognitive workload examiners experience when marking. Data were collected from 11 experienced examiners working within a large UK-based awarding body. Each examiner marked a sample of 90 Advanced GCE essays on paper and a matched sample of 90 essays on screen. Midway through marking in each mode, the NASA-TLX measurement instrument was used to gather quantitative data on each examiner's experience of cognitive workload, supplemented with qualitative data from semi-structured follow-up interviews. Findings from these sources revealed that the examiners experienced significantly greater cognitive workload while marking on screen than while marking on paper, with important implications for the training, preparation and support offered to examiners across the marking-mode transition.
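
    For readers unfamiliar with the instrument: in its classic weighted form, NASA-TLX combines six subscale ratings on a 0-100 scale with weights derived from 15 pairwise comparisons between the dimensions. The sketch below shows that standard calculation with invented ratings; the abstract does not specify whether the raw or weighted variant was used in the study:

        # Weighted NASA-TLX: six 0-100 ratings, weights from 15 pairwise
        # comparisons (so the weights sum to 15). All values here are invented.
        ratings = {
            "mental_demand": 75, "physical_demand": 20, "temporal_demand": 60,
            "performance": 45, "effort": 70, "frustration": 55,
        }
        weights = {  # times each dimension won a pairwise comparison
            "mental_demand": 5, "physical_demand": 0, "temporal_demand": 3,
            "performance": 2, "effort": 4, "frustration": 1,
        }
        assert sum(weights.values()) == 15

        overall = sum(ratings[d] * weights[d] for d in ratings) / 15
        print(f"Overall weighted workload: {overall:.1f} / 100")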

    0 full texts; 64 metadata records (updated in the last 30 days). International Journal of e-Assessment (IJEA) is based in the United Kingdom.