Will a Short Training Session Improve Multiple-Choice Item-Writing Quality by Dental School Faculty? A Pilot Study.
Faculty members are expected to write high-quality multiple-choice questions (MCQs) in order to accurately assess dental students' achievement. However, most dental school faculty members are not trained to write MCQs. Extensive faculty development programs have been used to help educators write better test items. The aim of this pilot study was to determine if a short workshop would result in improved MCQ item-writing by dental school faculty at one U.S. dental school. A total of 24 dental school faculty members who had previously written MCQs were randomized into a no-intervention group and an intervention group in 2015. Six previously written MCQs were randomly selected from each faculty member and given an item quality score. The intervention group participated in a one-hour training session focused on reviewing standard item-writing guidelines to improve in-house MCQs. The no-intervention group did not receive any training but did receive encouragement and an explanation of why good MCQ writing was important. The faculty members were then asked to revise their previously written questions, and the revisions were given an item quality score. The item quality scores for each faculty member were averaged, and the difference from pre-training to post-training scores was evaluated. The results showed a significant difference in pre-training to post-training MCQ difference scores for the intervention group (p=0.04). This pilot study provides evidence that a short training session was effective in improving the quality of in-house MCQs.
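The scoring procedure described above (average each member's six item quality scores, then take the post-training minus pre-training difference) can be sketched in a few lines. The scores and faculty labels below are illustrative, not the study's data:

```python
from statistics import mean

# Hypothetical item quality scores: one list of six MCQ scores per
# faculty member, before and after the one-hour training session.
pre_scores = {
    "faculty_A": [3, 4, 2, 3, 3, 4],
    "faculty_B": [2, 3, 3, 2, 4, 3],
}
post_scores = {
    "faculty_A": [4, 4, 3, 4, 4, 5],
    "faculty_B": [3, 4, 3, 3, 4, 4],
}

def difference_scores(pre, post):
    """Average each member's item scores, then take post - pre."""
    return {f: mean(post[f]) - mean(pre[f]) for f in pre}

diffs = difference_scores(pre_scores, post_scores)
```

The per-member difference scores would then be compared between the intervention and no-intervention groups with a suitable two-sample test.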
Dental Students' Self-Assessment of Preclinical Examinations
Accurate self-assessment is an important attribute for practicing dentists and, therefore, an important skill to develop in dental students. Our purpose was to examine the relationship between faculty and student assessments of preclinical prosthodontic procedures. Seventy-six second-year students completed two consecutive examinations and two self-assessments. The examinations involved setting maxillary denture teeth on a model to simulate the clinical procedure of a complete maxillary denture. Results indicated no significant increases in examination or student self-assessment mean scores; however, regression analysis indicated that changes in student self-assessment scores explained 16.3 percent of the variation in examination scores. In essence, improvement in student self-assessment predicted improvement in examination scores among dental students completing a preclinical dental procedure.
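The "explained 16.3 percent of the variation" figure is the R² of a simple linear regression of examination-score change on self-assessment-score change. A minimal sketch of that computation, with illustrative data rather than the study's:

```python
from statistics import mean

def r_squared(x, y):
    """Proportion of variation in y explained by x in simple
    linear regression: the squared Pearson correlation."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

# Hypothetical per-student changes in self-assessment and exam scores.
self_assessment_change = [1, 2, 3, 4]
exam_score_change = [1, 3, 2, 4]
r2 = r_squared(self_assessment_change, exam_score_change)
```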
Identifying student misconceptions in biomedical course assessments in dental education.
Dental student performance on examinations has traditionally been estimated by calculating the percentage of correct responses rather than by identifying student misconceptions. Although misconceptions can impede student learning and are refractory to change, they are seldom measured in biomedical courses in dental schools. Our purpose was to determine if scaling student confidence and the clinical impact of incorrect answers could be used on multiple-choice questions (MCQs) to identify potential student misconceptions. To provide a measure of student misconception, faculty members indicated the correct answer on twenty clinically relevant MCQs and noted whether the three distracters represented potentially benign, inappropriate, or harmful application of student knowledge to patient treatment. A group of 105 third-year dental students selected what they believed was the most appropriate answer and their level of sureness (1 to 4, representing very unsure, unsure, sure, and very sure) about their answer. Misconceptions were defined as sure or very sure incorrect responses that could result in inappropriate or harmful clinical treatment. In the results, 5.2 percent of the answers represented student misconceptions, and 74 percent of the misconceptions were from four case-based interpretation questions. The mean student sureness was 3.6 on a 4.0 scale. The students' sureness was higher with correct than with incorrect answers (p<0.001), yet there was no difference in sureness levels among their incorrect (benign, inappropriate, or harmful) responses (p>0.05). This study found that scaling student confidence and the clinical impact of incorrect answers provided helpful insights into student thinking in multiple-choice assessment.
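The misconception rule defined above (an incorrect answer, given with sureness 3 or 4, whose distracter was rated inappropriate or harmful) reduces to a simple predicate. A minimal sketch; the field names and example responses are illustrative:

```python
def is_misconception(correct, sureness, impact):
    """Flag a response as a misconception: incorrect, sure or very
    sure (sureness >= 3), and rated inappropriate or harmful."""
    return (not correct) and sureness >= 3 and impact in ("inappropriate", "harmful")

# Hypothetical student responses.
responses = [
    {"correct": False, "sureness": 4, "impact": "harmful"},       # misconception
    {"correct": False, "sureness": 2, "impact": "harmful"},       # unsure: not flagged
    {"correct": False, "sureness": 4, "impact": "benign"},        # benign: not flagged
    {"correct": True,  "sureness": 4, "impact": None},            # correct: not flagged
]
flags = [is_misconception(r["correct"], r["sureness"], r["impact"]) for r in responses]
```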
Does Student Confidence on Multiple-Choice Question Assessments Provide Useful Information?
Context: Feedback from multiple-choice question (MCQ) assessments is typically limited to a percentage correct score, from which estimates of student competence are inferred. The students' confidence in their answers and the potential impact of incorrect answers on clinical care are seldom recorded. Our purpose was to evaluate student confidence in incorrect responses and to establish how confidence was influenced by the potential clinical impact of answers, question type and gender.
Methods: This was an exploratory, cross-sectional study conducted using a convenience sample of 104 Year 3 dental students completing 20 MCQs on implant dentistry. Students were asked to select the most correct response and to indicate their confidence in it for each question. Identifying both correctness and confidence allowed the designation of uninformed (incorrect and not confident) or misinformed (incorrect but confident) responses. In addition to recording correct/incorrect responses and student confidence, faculty staff designated incorrect responses as benign, inappropriate or potentially harmful if applied to clinical care. Question type was identified as factual or complex. Logistic regression was used to evaluate relationships between student confidence, and question type and gender.
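The response designations defined in the methods (incorrect and not confident is "uninformed"; incorrect but confident is "misinformed") can be sketched as a small classifier. No label for correct answers is defined in the study, so correct responses are simply returned as "correct" here:

```python
def designate(correct, confident):
    """Designate a response per the scheme above. The 'correct'
    label for correct answers is illustrative, not from the study."""
    if correct:
        return "correct"
    return "misinformed" if confident else "uninformed"

labels = [
    designate(correct=False, confident=True),   # misinformed
    designate(correct=False, confident=False),  # uninformed
    designate(correct=True, confident=True),
]
```

Counting these labels per question type (factual versus complex) would reproduce the comparison reported in the results.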
Results: Students were misinformed more often than uninformed (22% versus 8%), and misinformed responses were more common with complex than factual questions (p < 0.05). Students were significantly more likely to be confident of correct than incorrect benign, incorrect inappropriate or incorrect harmful answers (p < 0.001), but, contrary to expectations, confidence did not decrease as answers became more harmful.
Conclusions: Recording student confidence was helpful in identifying uninformed versus misinformed responses, which may allow for targeted remediation strategies. Making errors of calibration (confidence and accuracy) more visible may be relevant in feedback for professional development.