    Implementation of computer assisted assessment: lessons from the literature

    This paper draws attention to the literature surrounding computer-assisted assessment (CAA). A brief overview of traditional methods of assessment is presented, highlighting areas of concern in existing techniques. CAA is then defined, and instances of its introduction in various educational spheres are identified, with the main focus of the paper on the implementation of CAA. Through referenced articles, evidence is offered to inform practitioners and to direct further research into CAA from technological and pedagogical perspectives. This includes issues relating to interoperability of questions, security, test construction and the testing of higher cognitive skills. The paper concludes by suggesting that an institutional strategy for CAA, coupled with staff development in test construction for a CAA environment, can increase the chances of successful implementation.

    Allowing for guessing and for the expectations from the learning outcomes in computer-based assessments

    Computer-based assessments usually generate a percentage mark. It is not self-evident how this relates to the final percentage mark or final grade for the work, since this depends on (i) its relationship to the "baseline" mark expected for someone who only guesses, (ii) its relationship to the "expectations" for the piece of work in relation to the learning objectives, and (iii) the grading scheme employed. For some question types it is possible to allow for guessing within the marking scheme for the question, using negative marking, but in general it is preferable to correct for guessing within a post-test grading scheme. The paper also considers how the learning objectives relate to essays, where choice is available and topics can be avoided, compared with computer-based assessments, where no choice is available and topics cannot be avoided. It is concluded that maximum performance should not usually be set at a mark of 100%; rather, an allowance should be made for the maximum expected performance based on the learning objectives. The use of spreadsheet formulae to convert marks into grades, based on a statistical allowance for guessing or additionally allowing for the maximum expected mark, is demonstrated. A spreadsheet pro forma containing all of the formulae for adjusting marks and determining grades can be obtained by selecting "Grading" from the menu at http://students.luton.ac.uk/biology/webol/.
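
    As an illustration of the kind of correction the paper's spreadsheet formulae perform, here is a minimal sketch in Python; the function name and default values are placeholders, not taken from the paper. It assumes k answer options per question, so the baseline mark for pure guessing is 100/k per cent, and it optionally caps the maximum expected mark below 100%:

        def adjusted_mark(raw_pct, options_per_question=4, max_expected_pct=100.0):
            """Rescale a raw percentage so that the guessing baseline maps to 0
            and the maximum mark expected from the learning objectives maps to
            100. Illustrative only; the paper's own formulae are in its
            spreadsheet pro forma."""
            baseline = 100.0 / options_per_question      # expected mark from guessing alone
            span = max_expected_pct - baseline           # usable mark range above the baseline
            adjusted = 100.0 * (raw_pct - baseline) / span
            return max(0.0, min(100.0, adjusted))        # clamp to the 0-100 scale

        # Four-option questions, maximum expected mark 90%:
        # a raw 70% becomes 100 * (70 - 25) / (90 - 25), about 69.2%
        print(round(adjusted_mark(70, 4, 90), 1))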

    Investigating students' seriousness during selected conceptual inventory surveys

    Conceptual inventory surveys are routinely used in education research to identify student learning needs and to assess instructional practices. Students might not fully engage with these instruments because of the low stakes attached to them. This paper explores tests that can be used to estimate the percentage of students in a population who might not have taken such surveys seriously: the pattern recognition test, the easy questions test, and the uncommon answers test. These three tests are applied to sets of students who were assessed with the Force Concept Inventory, the Conceptual Survey of Electricity and Magnetism, or the Brief Electricity and Magnetism Assessment. The results of our investigation are compared to computer-simulated populations of random answers.
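
    The abstract does not give the tests' exact decision rules, but the idea behind the easy questions test can be sketched as follows: flag respondents who miss items that most of the class answered correctly. This is a hypothetical reconstruction; the thresholds below are illustrative, not the paper's:

        def flag_unserious(responses, facility_cut=0.8, miss_cut=0.5):
            """responses: one list of 0/1 item scores per student.
            Flags students who miss more than miss_cut of the 'easy' items,
            where an item counts as easy if at least facility_cut of students
            answered it correctly. Thresholds are illustrative assumptions."""
            n_students = len(responses)
            n_items = len(responses[0])
            # item facility = fraction of students answering the item correctly
            facility = [sum(s[i] for s in responses) / n_students
                        for i in range(n_items)]
            easy = [i for i in range(n_items) if facility[i] >= facility_cut]
            flagged = []
            for idx, s in enumerate(responses):
                if easy and sum(1 - s[i] for i in easy) / len(easy) > miss_cut:
                    flagged.append(idx)
            return flagged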

    Exploration of a Confidence-Based Assessment Tool within an Aviation Training Program

    Traditional use of multiple-choice questions rewards a student for guessing. This encourages rote memorization of questions in order to pass a lengthy exam, and does not promote comprehensive understanding or subject correlation. In an effort to identify guessing during an exam within a safety-critical aviation pilot training course, a qualitative research study introduced a confidence-based element to the end-of-ground-school exam. Confidence-based assessments ask students to self-report their level of certainty in each response, indicating which answers they believe are correct and how confident they feel in their selections. The research goals were to clearly identify correct, or misinformed, guesses, and to provide an evidence-based snapshot of aircraft systems knowledge to be used as a formative study aid for the remainder of the course. Pilot and instructor interviews were conducted to gather perceptions and opinions about the effectiveness of the confidence-based assessment tool. The interviews revealed an overall positive experience and confirmed that the confidence-based assessments were used as intended: to identify weak knowledge areas and to serve as study aids during remaining study time or pre-simulator briefing sessions. The study found that, if properly trained for and administered, a robust confidence-based assessment tool would be minimally burdensome while offering worthwhile benefits.
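
    The abstract does not publish the study's rubric, but the way a confidence tag separates lucky guesses from misinformation can be sketched with a two-level confidence scale; the labels and scale here are assumptions for illustration:

        def classify(correct, confident):
            """Bucket one confidence-tagged answer.
            Labels and two-level scale are illustrative assumptions,
            not the study's instrument."""
            if correct and confident:
                return "known"            # solid knowledge
            if correct and not confident:
                return "correct guess"    # right answer, weak knowledge
            if confident:
                return "misinformed"      # confidently wrong: priority for restudy
            return "admitted gap"         # wrong and unsure

        for ans in [(True, True), (True, False), (False, True), (False, False)]:
            print(classify(*ans))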

    Quantifying critical thinking: Development and validation of the Physics Lab Inventory of Critical thinking (PLIC)

    Introductory physics lab instruction is undergoing a transformation, with increasing emphasis on developing experimentation and critical thinking skills. These changes present a need for standardized assessment instruments to determine the degree to which students develop these skills through instructional labs. In this article, we present the development and validation of the Physics Lab Inventory of Critical thinking (PLIC). We define critical thinking as the ability to use data and evidence to decide what to trust and what to do. The PLIC is a 10-question, closed-response assessment that probes student critical thinking skills in the context of physics experimentation. Using interviews and data from 5584 students at 29 institutions, we demonstrate, through qualitative and quantitative means, the validity and reliability of the instrument at measuring student critical thinking skills. This establishes a valuable new assessment instrument for instructional labs.
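
    The abstract does not name the statistics used; as one common quantitative reliability measure for a closed-response instrument, Cronbach's alpha can be computed as in this sketch (whether the PLIC team used this particular statistic is an assumption):

        def cronbach_alpha(scores):
            """scores: one list of item scores per student.
            Standard internal-consistency estimate; assumed here, not
            confirmed as the statistic the PLIC authors report."""
            k = len(scores[0])                  # number of items

            def var(xs):                        # population variance
                m = sum(xs) / len(xs)
                return sum((x - m) ** 2 for x in xs) / len(xs)

            item_var = sum(var([s[i] for s in scores]) for i in range(k))
            total_var = var([sum(s) for s in scores])
            return (k / (k - 1)) * (1 - item_var / total_var)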

    Initial correction versus negative marking in multiple choice examinations

    Optimal assessment tools should measure students' knowledge correctly and without bias within a limited time. One method for automating the scoring is multiple-choice scoring. This article compares scoring methods from a probabilistic point of view by modelling the probability to pass: number-right scoring, the initial correction (IC) method and the negative marking (NM) method. We compare the probabilities for students to pass when their assessment is translated into a score by means of the NM and the IC methods. Moreover, given a student's knowledge level, the variance of this probability is discussed for both methods.
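
    A minimal sketch of the kind of binomial model such comparisons rest on (the concrete pass mark and penalty values here are assumptions, not the article's): a student who knows a fraction p of n questions with k options each answers those correctly and guesses the rest uniformly, and each scoring rule turns the number of lucky guesses into a pass probability:

        from math import comb

        def pass_probability(n, k, p, pass_frac, penalty):
            """Probability of reaching pass_frac * n marks when each wrong
            answer costs `penalty` marks: penalty = 0 gives number-right
            scoring, penalty = 1 / (k - 1) classical negative marking.
            Pass mark and penalties are illustrative assumptions."""
            known = round(p * n)                 # questions answered from knowledge
            guessed = n - known                  # questions answered by guessing
            q = 1.0 / k                          # chance a single guess is correct
            total = 0.0
            for g in range(guessed + 1):         # g = number of lucky guesses
                score = known + g - penalty * (guessed - g)
                if score >= pass_frac * n:
                    total += comb(guessed, g) * q ** g * (1 - q) ** (guessed - g)
            return total

        # 40 four-option questions, student knows half, pass mark 55%:
        print(pass_probability(40, 4, 0.5, 0.55, 0.0))    # number-right
        print(pass_probability(40, 4, 0.5, 0.55, 1 / 3))  # negative marking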

    Psychometrics in Practice at RCEC

    A broad range of topics is dealt with in this volume: from combining the psychometric generalizability and item response theories to ideas for an integrated formative use of data-driven decision making, assessment for learning and diagnostic testing. A number of chapters pay attention to computerized (adaptive) and classification testing. Other chapters treat the quality of testing in a general sense, but for topics like maintaining standards or the testing of writing ability, the quality of testing is dealt with more specifically. All authors are connected to RCEC as researchers. Each presents one of their current research topics, providing some insight into the focus of RCEC. The topics were selected and edited so that the book should be of special interest to educational researchers, psychometricians and practitioners in educational assessment.