
    Multiple Choice and Constructed Response Tests: Do Test Format and Scoring Matter?

    Problem Statement: Multiple choice (MC) tests are now very common and replace many constructed response (CR) tests. However, the literature reveals no consensus on whether both test formats are equally suitable for measuring students' ability or knowledge. This may be because studies comparing test formats often do not specify the type of MC question or the scoring rule used. Hence, educators have no guidelines on which test format or scoring rule is appropriate. Purpose of Study: The study focuses on the comparison of CR and MC tests. More precisely, short-answer questions are contrasted with equivalent MC questions with multiple responses, which are graded with three different scoring rules. Research Methods: An experiment was conducted based on three instruments: a CR test and an MC test using a similar stem, to ensure that the questions were of an equivalent level of difficulty. This procedure enables the comparison of the scores students gained in the two forms of examination. Additionally, a questionnaire was handed out for further insight into students' learning strategies, test preferences, motivation, and demographics. In contrast to previous studies, the present study applies the many-facet Rasch measurement approach for analyzing the data, which improves the reliability of an assessment and can be applied to small datasets. Findings: Results indicate that CR tests are equivalent to MC tests with multiple responses if Number Correct (NC) scoring is used. An explanation seems straightforward: the grader of the CR tests did not penalize wrong answers and rewarded partially correct answers, that is, s/he used the same logic as NC scoring. The other scoring methods, such as the All-or-Nothing or University-Specific rule, neither reward partial knowledge nor penalize guessing. Therefore, these methods are found to be stricter than NC scoring or CR tests and cannot be used interchangeably. Conclusions: CR tests can be replaced by MC tests with multiple responses if NC scoring is used, because the multiple-response format measures more complex thinking skills than conventional MC questions. Hence, educators can take advantage of low grading costs, consistent grading, no scoring biases, and greater coverage of the syllabus, while students benefit from timely feedback. (authors' abstract)
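
    The comparison hinges on how a multiple-response item is scored. As a minimal sketch only, assuming an item is stored as a set of keyed (correct) options plus distractors and a student response as a set of selected options, the two rules contrasted in the abstract might look as follows; the exact formulas used in the study (including its University-Specific rule) are not given in the abstract, so these definitions are purely illustrative.

```python
# Illustrative scoring rules for a multiple-response MC item.
# The item's key and the student's selection are sets of option labels.
# These formulas are assumptions for illustration; the study's exact
# rules (e.g. its University-Specific rule) are not described in the abstract.

def number_correct_score(key: set, distractors: set, response: set) -> float:
    """Partial-credit (NC-style) scoring: each option judged correctly
    (selected if keyed, left unselected if a distractor) earns a
    proportional share of one point."""
    options = key | distractors
    judged_correctly = sum(
        1 for opt in options
        if (opt in key) == (opt in response)
    )
    return judged_correctly / len(options)

def all_or_nothing_score(key: set, distractors: set, response: set) -> float:
    """All-or-Nothing scoring: full credit only if the selection matches
    the key exactly; otherwise zero."""
    return 1.0 if response == key else 0.0

# Example: key = {A, C}, but the student picks A and the distractor B.
key, distractors = {"A", "C"}, {"B", "D"}
response = {"A", "B"}
print(number_correct_score(key, distractors, response))  # 0.5 - partial credit
print(all_or_nothing_score(key, distractors, response))  # 0.0 - stricter rule
```

    The example shows why the formats can align: a partial-credit rule behaves like a lenient CR grader, awarding credit for partial knowledge, whereas All-or-Nothing gives zero for anything short of a perfect selection.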

    E-assessment: Past, present and future

    This review of e-assessment takes a broad definition, including any use of a computer in assessment, whilst focusing on computer-marked assessment. Drivers include increased variety of assessed tasks and the provision of instantaneous feedback, as well as increased objectivity and resource saving. From the early use of multiple-choice questions and machine-readable forms, computer-marked assessment has developed to encompass sophisticated online systems, which may incorporate interoperability and be used in students’ own homes. Systems have been developed by universities, companies and as part of virtual learning environments. Some of the disadvantages of selected-response question types can be alleviated by techniques such as confidence-based marking. The use of electronic response systems (‘clickers’) in classrooms can be effective, especially when coupled with peer discussion. Student authoring of questions can also encourage dialogue around learning. More sophisticated computer-marked assessment systems have enabled mathematical questions to be broken down into steps and have provided targeted and increasing feedback. Systems that use computer algebra and provide answer matching for short-answer questions are discussed. Computer-adaptive tests use a student’s response to previous questions to alter the subsequent form of the test. More generally, e-assessment includes the use of peer-assessment and assessed e-portfolios, blogs, wikis and forums. Predictions for the future include the use of e-assessment in MOOCs (massive open online courses); the use of learning analytics; a blurring of the boundaries between teaching, assessment and learning; and the use of e-assessment to free human markers to assess what they can assess more authentically.
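
    As a concrete illustration of confidence-based marking, one widely cited mark scheme (Gardner-Medwin's certainty-based marking) asks students to rate their certainty on a three-point scale and penalises confident errors heavily. The sketch below encodes that scheme; the review itself does not prescribe any particular scheme, so treat the numbers as one example among several.

```python
# Minimal sketch of confidence-based marking, using one commonly cited
# mark scheme (Gardner-Medwin's certainty-based marking); the review
# above does not prescribe a particular scheme.
# Confidence levels: 1 (low), 2 (medium), 3 (high).
MARKS = {
    1: (1, 0),    # (mark if correct, mark if wrong)
    2: (2, -2),
    3: (3, -6),
}

def cbm_mark(correct: bool, confidence: int) -> int:
    """Reward confident correct answers, penalise confident errors."""
    if confidence not in MARKS:
        raise ValueError("confidence must be 1, 2 or 3")
    when_correct, when_wrong = MARKS[confidence]
    return when_correct if correct else when_wrong

# A confident wrong answer costs far more than a cautious one,
# which discourages blind guessing at high confidence.
print(cbm_mark(correct=True, confidence=3))   # 3
print(cbm_mark(correct=False, confidence=3))  # -6
print(cbm_mark(correct=False, confidence=1))  # 0
```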

    Psychometrics in Practice at RCEC

    A broad range of topics is dealt with in this volume: from combining the psychometric generalizability and item response theories to the ideas for an integrated formative use of data-driven decision making, assessment for learning and diagnostic testing. A number of chapters pay attention to computerized (adaptive) and classification testing. Other chapters treat the quality of testing in a general sense, but for topics like maintaining standards or the testing of writing ability, the quality of testing is dealt with more specifically. All authors are connected to RCEC as researchers. They each present one of their current research topics and provide some insight into the focus of RCEC. The topics were selected and edited so that the book should be of special interest to educational researchers, psychometricians and practitioners in educational assessment.
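
    For readers new to the item response theory that several chapters build on, the dichotomous Rasch model is the simplest case: the probability of a correct response depends only on the difference between person ability and item difficulty. The sketch below uses standard IRT notation (theta for ability, b for difficulty) and is not tied to any particular chapter in the volume.

```python
import math

# Minimal sketch of the dichotomous Rasch model, the simplest item
# response theory model; theta is person ability and b is item
# difficulty (standard IRT notation, not specific to the RCEC chapters).
def rasch_probability(theta: float, b: float) -> float:
    """Probability of a correct response: exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability matches difficulty the chance of success is 0.5;
# computerized adaptive tests exploit this by selecting items near
# the current ability estimate.
print(rasch_probability(theta=0.0, b=0.0))   # 0.5
print(rasch_probability(theta=1.5, b=0.0))   # ~0.82
```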

    Practice and Assessment of Reading Classes Using Moodle

    This research paper details the extensive use of Computer Assisted Language Learning (CALL) for a content-based reading syllabus at Gunma University, through the software program Moodle (Modular Object-Oriented Dynamic Learning Environment), a free and open-source learning management system used at Gunma University. The research basis of this paper lies within the sphere of Action Research, as a valuable professional development tool (Nunan, 2001), and stems from this researcher’s view that the system could help students perform better in, and be more motivated towards, their English language and reading studies, introduce new technological skills and abilities, and aid teachers in better preparation, teaching and assessment of reading classes. Moodle’s documentation enthuses that the Lesson Module ‘enables a teacher to deliver content and/or practice activities in interesting and flexible ways...teachers can choose to increase engagement and ensure understanding by including a variety of questions, such as multiple choice, matching and short answer.’ (Moodle, 2016). Therefore, this paper will ascertain whether the syllabus achieved greater engagement and enjoyment by the students, and ensured better comprehension and understanding of key tasks and instructions. In addition, it will detail how teachers can improve course management by employing such technology within the classroom.

    Assessment @ Bond


    The influence of online problem-based learning on teachers' professional practice and identity

    In this paper we describe the design of a managed learning environment called MTutor, which is used to teach an online Masters Module for teachers. In describing the design of MTutor, pedagogic issues of problem-based learning, situated cognition and ill-structured problems are discussed. MTutor presents teachers with complex real-life teaching problems, which they are required to solve online through collaboration with other teachers. In order to explore the influence of this online learning experience on the identity and practice of teachers, we present the results from a small-scale study in which six students were interviewed about their online experiences. We conclude that, within the sample, students' engagement with online problem-based learning within their community of practice positively influenced their professional practice styles, but that there is little evidence to suggest that online identity influences real-life practice.

    How Well Do Multiple Choice Tests Evaluate Student Understanding in Computer Programming Classes?

    Despite the wide diversity of formats with which to construct class examinations, there are many reasons why both university students and instructors prefer multiple-choice tests over other types of exam questions. The purpose of the present study was to examine this multiple-choice/constructed-response debate within the context of teaching computer programming classes. This paper reports the analysis of over 150 test scores of students who were given both multiple-choice and short-answer questions on the same midterm examination. We found that, while student performance on these different types of questions was statistically correlated, the scores on the coding questions explained less than half the variability in the scores on the multiple-choice questions. Gender, graduate status, and university major were not significant factors. This paper also provides some caveats in interpreting our results, suggests some extensions to the present work, and, perhaps most importantly in light of the uncovered weak statistical relationship, addresses the question of whether multiple-choice tests are “good enough.”
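
    The phrase “explained less than half the variability” refers to the squared Pearson correlation (r^2) between the two sets of scores. The sketch below illustrates that calculation with made-up placeholder numbers, not the study's data, chosen so that the scores are clearly correlated yet share less than half their variance.

```python
# Minimal sketch of the "variance explained" idea behind the finding:
# the squared Pearson correlation (r^2) between two sets of scores is
# the share of variability in one accounted for by the other. The
# numbers below are made-up placeholders, not the study's data.
from statistics import correlation  # available in Python 3.10+

mc_scores = [14, 18, 11, 20, 16, 9, 17, 13]       # hypothetical multiple-choice scores
coding_scores = [10, 15, 12, 17, 11, 9, 13, 16]   # hypothetical short-answer coding scores

r = correlation(mc_scores, coding_scores)
# With these placeholder numbers r is about 0.65, so r^2 is about 0.42:
# correlated, but less than half the variance explained.
print(f"r = {r:.2f}, r^2 = {r * r:.2f}")
```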