10 research outputs found

    E-assessment: past, present and future

    This review of e-assessment takes a broad definition, including any use of a computer in assessment, whilst focusing on computer-marked assessment. Drivers include increased variety of assessed tasks and the provision of instantaneous feedback, as well as increased objectivity and resource saving. From the early use of multiple-choice questions and machine-readable forms, computer-marked assessment has developed to encompass sophisticated online systems, which may incorporate interoperability and be used in students’ own homes. Systems have been developed by universities, companies and as part of virtual learning environments. Some of the disadvantages of selected-response question types can be alleviated by techniques such as confidence-based marking. The use of electronic response systems (‘clickers’) in classrooms can be effective, especially when coupled with peer discussion. Student authoring of questions can also encourage dialogue around learning. More sophisticated computer-marked assessment systems have enabled mathematical questions to be broken down into steps and have provided targeted and increasing feedback. Systems that use computer algebra and provide answer matching for short-answer questions are discussed. Computer-adaptive tests use a student’s response to previous questions to alter the subsequent form of the test. More generally, e-assessment includes the use of peer-assessment and assessed e-portfolios, blogs, wikis and forums. Predictions for the future include the use of e-assessment in MOOCs (massive open online courses); the use of learning analytics; a blurring of the boundaries between teaching, assessment and learning; and the use of e-assessment to free human markers to assess what they can assess more authentically.
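    The review mentions confidence-based marking only in passing. As a minimal illustrative sketch, not taken from the review, the Python below scores a selected-response answer against a declared confidence level; the three-level scale and the mark/penalty weights are assumptions, loosely modelled on published certainty-based marking schemes.

        # Sketch of confidence-based marking for selected-response questions.
        # The 3-level scale and the (reward, penalty) weights below are
        # assumptions for illustration; real systems may use different values.

        # confidence level -> (marks if correct, marks if incorrect)
        CBM_SCHEME = {
            1: (1, 0),    # low confidence: small reward, no penalty
            2: (2, -2),   # medium confidence
            3: (3, -6),   # high confidence: large reward, large penalty
        }

        def cbm_score(correct: bool, confidence: int) -> int:
            """Mark a single response under the assumed scheme."""
            reward, penalty = CBM_SCHEME[confidence]
            return reward if correct else penalty

        # A confidently wrong answer (-6) costs far more than a cautiously
        # wrong one (0), so guessing at high confidence is a losing strategy.
        assert cbm_score(True, 3) == 3
        assert cbm_score(False, 3) == -6
        assert cbm_score(False, 1) == 0

    The asymmetric penalties are the point of the technique: a confidently wrong answer costs more than a cautious one, which discourages the blind guessing that plagues plain multiple-choice scoring.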

    Student Experiences with a Bring Your Own Laptop e-Exam System in Pre-university College

    Part 7: Self-assessment, e-Assessment and e-Examinations
    This study investigated students’ perceptions of a bring-your-own (BYO) laptop-based e-Examination system used in trials conducted at an Australian pre-university college in 2016 and 2017. The trials were conducted in two subjects, geography and globalisation. Data were gathered using pre- and post-surveys (n = 128) comprising qualitative comments and Likert items. Students’ perceptions were gathered on the ease of use of the e-Examination system, its technical reliability, the suitability of the assessment task to computerisation and the logistical aspects of the examination process. Many of the typists were taking a computerised supervised test for the first time. Opinions about future use intentions diverged between those who typed and those who hand-wrote, and this divergence became more pronounced after the examination.

    Writing e-Exams in Pre-University College

    Part 7: Self-assessment, e-Assessment and e-Examinations
    This study examined students’ expressed strategies, habits and preferences when responding to supervised text-based assessments. Two trials of a computerised examination system took place at an Australian pre-university college in 2016 and 2017. Students in several classes studying geography and globalisation completed a sequence of practice and assessed work. Data were collected using pre- and post-surveys about their preferred writing styles, habits and strategies in light of their choice to type or handwrite essay and short-answer examinations. Comparisons were made between those who elected to handwrite and those who chose to type the examination, with several areas showing significant differences. The performance (grades) and production (word count) of the typists and hand-writers were also correlated and compared.

    Designing for Learner Engagement with Computer-Based Testing

    The issues influencing student engagement with high-stakes computer-based exams were investigated, drawing on feedback from two cohorts of international MA Education students encountering this assessment method for the first time. Qualitative data from surveys and focus groups on the students’ examination experience were analysed, leading to the identification of engagement issues in the delivery of high-stakes computer-based assessments. The exam combined short-answer open-response questions with multiple-choice-style items to assess knowledge and understanding of research methods. The findings suggest that engagement with computer-based testing depends less on students’ general levels of digital literacy and more on their information technology (IT) proficiency for assessment and their ability to adapt their test-taking strategies, including organisational and cognitive strategies, to the online assessment environment. The socialisation and preparation of students for computer-based testing therefore emerge as key responsibilities for instructors, with students requesting increased opportunities for practice and training to develop the IT skills and test-taking strategies necessary to succeed in computer-based examinations. These findings and their implications for instructional responsibilities form the basis of a proposed framework for Learner Engagement with e-Assessment Practices.