Using digital pens to expedite the marking procedure
This is the post-print version of the article. The official published version can be accessed from the link below. Copyright @ 2010 Inderscience Publishers.
Digital pens have been introduced over the last six years and have demonstrated that they can be used effectively for collecting, processing and storing data. These properties make them ideal for use in education, particularly in the marking of multiple-choice questions (MCQs). In this report, we present a system designed to expedite the marking of MCQs at any educational level. The main element of the system is a digital pen, which is given to the students prior to the examination. On return of the pen, the system immediately recognises the students' answers and produces their results. Four groups of students were studied and a variety of data were collected, concerning issues such as accuracy, the time saved by using the system and the impressions of the students. The pedagogic value of the system is also presented.
A comparison of human and computerized proctoring within Keller's personalized system of instruction.
Multiple Choice and Constructed Response Tests: Do Test Format and Scoring Matter?
Problem Statement: Nowadays, multiple choice (MC) tests are very common and replace many constructed response (CR) tests. However, the literature reveals no consensus on whether both test formats are equally suitable for measuring students' ability or knowledge. This may be because studies comparing test formats often mention neither the type of MC question nor the scoring rule used. Hence, educators have no guidelines as to which test format or scoring rule is appropriate.
Purpose of Study: The study focuses on the comparison of CR and MC tests. More precisely, short-answer questions are contrasted with equivalent MC questions with multiple responses, which are graded with three different scoring rules.
Research Methods: An experiment was conducted based on three instruments: a CR test and an MC test using similar stems, to ensure that the questions are of an equivalent level of difficulty and so enable comparison of the scores students gained in the two forms of examination; and a questionnaire, handed out for further insights into students' learning strategies, test preferences, motivation, and demographics. In contrast to previous studies, the present study applies the many-facet Rasch measurement approach to the analysis, which improves the reliability of an assessment and can be applied to small datasets.
Findings: Results indicate that CR tests are equivalent to MC tests with multiple responses if Number Correct (NC) scoring is used. The explanation seems straightforward: the grader of the CR tests did not penalize wrong answers and rewarded partially correct answers, which is the same logic as NC scoring. The other scoring methods, such as the All-or-Nothing or the University-Specific rule, neither reward partial knowledge nor penalize guessing. These methods are therefore stricter than NC scoring or CR tests and cannot be used interchangeably with them.
Conclusions: CR tests can be replaced by MC tests with multiple responses if NC scoring is used, since the multiple-response format measures more complex thinking skills than conventional MC questions. Hence, educators can take advantage of lower grading costs, consistent grading, the absence of scoring biases, and greater coverage of the syllabus, while students benefit from timely feedback. (authors' abstract)
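The difference between the scoring rules compared in the study can be made concrete with a small sketch. This is an illustrative interpretation, not the authors' exact rules: Number Correct scoring is taken as one point per correctly marked option, and All-or-Nothing as full credit only for a perfectly marked item.

```python
# Hypothetical sketch of two scoring rules for a multiple-response MC item.
# The rule names follow the abstract; the point values are assumptions.

def number_correct(marked, key):
    """Number Correct (NC): one point per option marked correctly,
    so partial knowledge is rewarded."""
    return sum(1 for m, k in zip(marked, key) if m == k)

def all_or_nothing(marked, key):
    """All-or-Nothing: full credit only if every option is marked correctly."""
    return len(key) if marked == key else 0

# A four-option item; True means the option should be ticked.
key     = [True, False, True, False]
student = [True, False, False, False]   # partial knowledge: one correct tick missed

print(number_correct(student, key))  # 3: three of four options handled correctly
print(all_or_nothing(student, key))  # 0: the stricter rule gives no partial credit
```

The sketch shows why the two rules cannot be used interchangeably: the same response sheet earns most of the marks under NC scoring and none under All-or-Nothing.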
E-assessment: Past, present and future
This review of e-assessment takes a broad definition, including any use of a computer in assessment, whilst focusing on computer-marked assessment. Drivers include increased variety of assessed tasks and the provision of instantaneous feedback, as well as increased objectivity and resource saving. From the early use of multiple-choice questions and machine-readable forms, computer-marked assessment has developed to encompass sophisticated online systems, which may incorporate interoperability and be used in students’ own homes. Systems have been developed by universities, by companies and as part of virtual learning environments. Some of the disadvantages of selected-response question types can be alleviated by techniques such as confidence-based marking. The use of electronic response systems (‘clickers’) in classrooms can be effective, especially when coupled with peer discussion. Student authoring of questions can also encourage dialogue around learning. More sophisticated computer-marked assessment systems have enabled mathematical questions to be broken down into steps and have provided targeted and increasing feedback. Systems that use computer algebra and provide answer matching for short-answer questions are discussed. Computer-adaptive tests use a student’s responses to previous questions to alter the subsequent form of the test. More generally, e-assessment includes the use of peer-assessment and assessed e-portfolios, blogs, wikis and forums. Predictions for the future include the use of e-assessment in MOOCs (massive open online courses); the use of learning analytics; a blurring of the boundaries between teaching, assessment and learning; and the use of e-assessment to free human markers to assess what they can assess more authentically.
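The confidence-based marking mentioned above asks students to rate how sure they are of each answer, with high-confidence errors penalised more heavily than low-confidence ones. A minimal sketch follows; the three confidence levels and the reward/penalty values are one commonly used choice, assumed here for illustration rather than taken from the review.

```python
# Illustrative confidence-based marking scheme: the mark for an answer
# depends on both correctness and the student's stated confidence (1..3).
# The asymmetric penalty values below are an assumption for the sketch.

MARKS = {1: (1, 0), 2: (2, -2), 3: (3, -6)}  # confidence -> (mark if correct, mark if wrong)

def cbm_score(correct: bool, confidence: int) -> int:
    """Return the mark for one answer given its 1..3 confidence rating."""
    reward, penalty = MARKS[confidence]
    return reward if correct else penalty

# Confident knowledge pays off; confident guessing is costly;
# an honest low rating limits the loss.
print(cbm_score(True, 3))   # 3
print(cbm_score(False, 3))  # -6
print(cbm_score(False, 1))  # 0
```

Under a scheme like this, the mark-maximising strategy is to report confidence honestly, which is what makes the technique useful against the guessing problem of selected-response questions.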
Psychometrics in Practice at RCEC
A broad range of topics is dealt with in this volume: from combining the psychometric generalizability and item response theories to ideas for an integrated formative use of data-driven decision making, assessment for learning and diagnostic testing. A number of chapters pay attention to computerized (adaptive) and classification testing. Other chapters treat the quality of testing in a general sense, while for topics such as maintaining standards or the testing of writing ability, the quality of testing is dealt with more specifically.
All authors are connected to RCEC as researchers. Each presents one of their current research topics and provides some insight into the focus of RCEC. The topics were selected and edited so that the book would be of special interest to educational researchers, psychometricians and practitioners in educational assessment.
Practice and Assessment of Reading Classes Using Moodle
This research paper details the extensive use of Computer Assisted Language Learning (CALL) for a content-based reading syllabus at Gunma University, through Moodle (Modular Object-Oriented Dynamic Learning Environment), a free and open-source learning management system used at the university.
The research basis of this paper lies within the sphere of Action Research as a valuable professional development tool (Nunan, 2001), grounded in this researcher's perceived valuation of the system: how it could help students perform better in, and be more motivated towards, their English language and reading studies; introduce new technological skills and abilities; and aid teachers in the preparation, teaching and assessment of reading classes. The Moodle documentation states that the Lesson Module ‘enables a teacher to deliver content and/or practice activities in interesting and flexible ways...teachers can choose to increase engagement and ensure understanding by including a variety of questions, such as multiple choice, matching and short answer’ (Moodle, 2016). This paper therefore ascertains whether the syllabus achieved greater engagement and enjoyment by the students and ensured better comprehension and understanding of key tasks and instructions. In addition, it details how teachers can improve course management by employing such technology within the classroom.
The influence of online problem-based learning on teachers' professional practice and identity
In this paper we describe the design of a managed learning environment called MTutor, which is used to teach an online Masters module for teachers. In describing the design of MTutor, pedagogic issues of problem-based learning, situated cognition and ill-structured problems are discussed. MTutor presents teachers with complex real-life teaching problems, which they are required to solve online through collaboration with other teachers. In order to explore the influence of this online learning experience on the identity and practice of teachers, we present the results of a small-scale study in which six students were interviewed about their online experiences. We conclude that, within the sample, students' engagement with online problem-based learning within their community of practice positively influenced their professional practice styles, but that there is little evidence to suggest that online identity influences real-life practice.
How Well Do Multiple Choice Tests Evaluate Student Understanding in Computer Programming Classes?
Despite the wide diversity of formats with which to construct class examinations, there are many reasons why both university students and instructors prefer multiple-choice tests over other types of exam questions. The purpose of the present study was to examine this multiple-choice/constructed-response debate within the context of teaching computer programming classes. This paper reports the analysis of over 150 test scores of students who were given both multiple-choice and short-answer questions on the same midterm examination. We found that, while student performance on these different types of questions was statistically correlated, the scores on the coding questions explained less than half the variability in the scores on the multiple-choice questions. Gender, graduate status, and university major were not significant. This paper also provides some caveats in interpreting our results, suggests some extensions to the present work, and, perhaps most importantly in light of the uncovered weak statistical relationship, addresses the question of whether multiple-choice tests are “good enough”.
E-assessment for learning? Exploring the potential of computer-marked assessment and computer-generated feedback, from short-answer questions to assessment analytics.
This submission draws on research from twelve publications, all addressing some aspect of the broad research question: “Can interactive computer-marked assessment improve the effectiveness of assessment for learning?”
The work starts from a consideration of the conditions under which assessment of any sort is predicted to best support learning, and reviews the broader literature of assessment and feedback before considering the potential of computer-based assessment, focusing on relatively sophisticated constructed-response questions, and on the impact of instantaneous, tailored and increasing feedback. A range of qualitative and quantitative research methodologies are used to investigate factors which influence the engagement of distance learners of science with computer-marked assessment and computer-generated feedback.
It is concluded that the strongest influence on engagement is the student’s understanding of what they are required to do, including their understanding of the wording of assessment tasks and feedback. Clarity of wording is thus important, as is an iterative design process that allows for improvements to be made. Factors such as cut-off dates can have considerable impact, pointing to the importance of good overall assessment design, and more generally to the power and responsibility that lie in the hands of remote developers of online assessment and teaching.
Four of the publications describe research into the marking accuracy and effectiveness of questions to which students give their answer as a short phrase or sentence. Relatively simple pattern-matching software has been shown to give marking accuracy at least as good as that of human markers and more sophisticated computer-marked systems, provided questions are developed on the basis of responses from students at a similar level. However, educators continue to use selected-response questions in preference to constructed-response questions, despite concerns over the validity and authenticity of selected-response questions. Factors contributing to the low take-up of more sophisticated computer-marked tasks are discussed.
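The "relatively simple pattern-matching software" described above can be sketched as follows. Everything here is an illustrative assumption rather than the system studied: the normalisation steps, the example question about evaporation, and the accepting patterns, which in practice would be authored from real responses given by students at a similar level, as the research recommends.

```python
# Minimal sketch of pattern-matching marking for short free-text answers.
# The patterns and normalisation choices are illustrative assumptions.
import re

def normalise(response: str) -> str:
    """Lower-case the response, strip punctuation and collapse whitespace."""
    response = re.sub(r"[^\w\s]", "", response.lower())
    return re.sub(r"\s+", " ", response).strip()

# Accepting patterns for a hypothetical question about how rain forms,
# written as regexes over the normalised answer text.
ACCEPT = [
    r"\bevaporat\w*\b.*\bcondens\w*\b",  # mentions evaporation then condensation
    r"\bwater cycle\b",
]

def mark(response: str) -> bool:
    """Return True if any accepting pattern matches the normalised answer."""
    text = normalise(response)
    return any(re.search(pattern, text) for pattern in ACCEPT)

print(mark("The water evaporates, then condenses as rain."))  # True
print(mark("It rains."))                                      # False
```

Even this simple approach illustrates why the research finds that question quality depends on developing patterns from authentic student responses: the accepting patterns only mark accurately for the phrasings their authors anticipated.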
E-assessment also has the potential to improve the learning experience indirectly by providing information to educators about student engagement and student errors, at either the cohort or the individual student level. The effectiveness of these “assessment analytics” is also considered, concluding that they have the potential to provide deep general insight and an early warning of at-risk students.