
    A comparison of integrated testlet and constructed-response question formats

    Constructed-response (CR) questions are a mainstay of introductory physics textbooks and exams. However, because of the time, cost, and scoring-reliability constraints associated with this format, CR questions are increasingly being replaced by multiple-choice (MC) questions in formal exams. The integrated testlet (IT) is a recently developed question structure designed to provide a proxy for the pedagogical advantages of CR questions while procedurally functioning as a set of MC questions. ITs use an answer-until-correct response format that provides immediate confirmatory or corrective feedback, and they thus allow not only the granting of partial credit in cases of initially incorrect reasoning but also the building of cumulative question structures. Here, we report on a study that directly compares the functionality of ITs and CR questions in introductory physics exams. To do this, CR questions were converted to concept-equivalent ITs, and both sets of questions were deployed in midterm and final exams. We find that both question types provide adequate discrimination between stronger and weaker students, with CR questions discriminating slightly better than ITs. Meanwhile, an analysis of inter-rater scoring of the CR questions raises serious concerns about the reliability of granting partial credit when this traditional assessment technique is used in a realistic (but non-optimized) setting. Furthermore, we show evidence that partial credit is granted in a valid manner in the ITs. Thus, together with the vastly reduced cost of administering IT-based examinations compared to CR-based ones, our findings indicate that ITs are viable replacements for CR questions in formal examinations where it is desirable both to assess concept integration and to reward partial knowledge while scoring examinations efficiently.

    Comment: 14 pages, 3 figures, with appendix. Accepted for publication in PRST-PER (August 2014).
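
    To illustrate how partial credit can be granted under an answer-until-correct format, the following Python sketch implements one common scoring rule in which credit decreases with each incorrect attempt. The function name and the linear weighting are illustrative assumptions, not the specific scheme reported in the paper.

    def answer_until_correct_score(attempts_to_correct, n_options=4):
        # Illustrative partial-credit rule, not the study's actual scheme:
        # full credit for a first-attempt success, decreasing linearly to
        # zero credit when every option must be tried.
        if not 1 <= attempts_to_correct <= n_options:
            raise ValueError("attempts must lie between 1 and n_options")
        return (n_options - attempts_to_correct) / (n_options - 1)

    # On a 4-option item this yields credit 1.0, 2/3, 1/3, and 0.0 for
    # success on the first, second, third, and fourth attempt.
    print([answer_until_correct_score(k) for k in range(1, 5)])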

    Context-aware Assessment Using QR-codes

    In this paper we present the implementation of a general mechanism to deliver tests on mobile devices using matrix codes. The system is an extension of Siette and has not been developed for any specific subject matter. To evaluate the performance of the system and show some of its capabilities, we developed a test for a second-year college course on Botany at the School of Forestry Engineering. Students were equipped with iPads and took an outdoor test on plant-species identification. All students were able to complete the test in a reasonable time, and the opinions they expressed anonymously in a survey about the usability of the system and the usefulness of the test were very favorable. We believe the application presented in this paper can broaden the applicability of automatic assessment techniques.

    The presentation of this work was co-funded by the Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech.
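
    The abstract does not describe Siette's internal mechanism, but the delivery idea can be sketched: each test item is encoded as a matrix code that a mobile device scans to open the corresponding question. The snippet below uses the third-party Python qrcode package (pip install qrcode[pil]); the URL and filename are hypothetical.

    import qrcode  # third-party package: pip install qrcode[pil]

    # Hypothetical item URL; Siette's real URL scheme is not given in the abstract.
    item_url = "https://siette.example.org/test?item=quercus-suber"

    # qrcode.make returns a PIL image of the matrix code, which can be
    # printed and placed beside a plant specimen for students to scan.
    qrcode.make(item_url).save("item_qr.png")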

    Deep Engagement as a Complex System: Identity, Learning Power and Authentic Enquiry


    The role of pedagogical tools in active learning: a case for sense-making

    Evidence from the research literature indicates that both audience response systems (ARS) and guided-inquiry worksheets (GIW) can lead to greater student engagement, learning, and equity in the STEM classroom. We compare the use of these two tools in large-enrollment STEM courses delivered in different contexts, one in biology and one in engineering. The instructors studied used each active-learning tool differently. In the biology course, ARS questions were used mainly to check in with students and assess whether they were correctly interpreting and understanding the worksheet questions. The engineering course presented ARS questions that gave students the opportunity to apply learned concepts to new scenarios, with the aim of improving students' conceptual understanding. In the biology course, the GIWs were used primarily in stand-alone activities, and most of the information students needed to answer the questions was contained within the worksheet, in a context aligned with a disciplinary model. In the engineering course, the instructor intended students to reference their lecture notes and rely on their conceptual knowledge of fundamental principles from the previous ARS class session to answer the GIW questions successfully. However, while their specific implementation structures and practices differed, both instructors used these tools to build toward the same basic disciplinary thinking and sense-making processes: conceptual reasoning, quantitative reasoning, and metacognitive thinking.

    Comment: 20 pages, 5 figures

    Assessment @ Bond
