    An assessment of functioning and non-functioning distractors in multiple-choice questions: a descriptive analysis

    Abstract

    Background: Four- or five-option multiple-choice questions (MCQs) are the standard in health-science disciplines, both on certification-level examinations and on in-house developed tests. Previous research has shown, however, that few MCQs have three or four functioning distractors. The purpose of this study was to investigate non-functioning distractors in teacher-developed tests in one nursing program at an English-language university in Hong Kong.

    Methods: Using item-analysis data, we assessed the proportion of non-functioning distractors on a sample of seven test papers administered to undergraduate nursing students. A total of 514 items were reviewed, comprising 2056 options (1542 distractors and 514 correct responses). Non-functioning options were defined as those chosen by fewer than 5% of examinees and those with a positive option-discrimination statistic.

    Results: The proportion of items containing 0, 1, 2, and 3 functioning distractors was 12.3%, 34.8%, 39.1%, and 13.8% respectively. Overall, items contained an average of 1.54 (SD = 0.88) functioning distractors. Only 52.2% (n = 805) of all distractors were functioning effectively, and 10.2% (n = 158) had a choice frequency of 0. Items with more functioning distractors were more difficult and more discriminating.

    Conclusion: The low frequency of items with three functioning distractors among the four-option items in this study suggests that teachers have difficulty developing plausible distractors for most MCQs. Test items should consist of as many options as is feasible given the item content and the number of plausible distractors; in most cases this would be three. Item-analysis results can be used to identify and remove non-functioning distractors from MCQs that have been used in previous tests.
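
    The non-functioning criterion in the Methods above is directly computable from raw response data. The following is a minimal sketch (not taken from the paper: the point-biserial form of the option-discrimination statistic, the function names, and the data layout are illustrative assumptions) that flags distractors chosen by fewer than 5% of examinees or showing positive discrimination.

        import numpy as np

        def option_discrimination(chose, total_scores):
            # Point-biserial correlation between choosing this option and the
            # examinee's total score; a functioning distractor should correlate
            # negatively (weaker examinees choose it more often).
            if chose.std() == 0:
                return 0.0
            return float(np.corrcoef(chose, total_scores)[0, 1])

        def nonfunctioning_distractors(responses, options, key, total_scores, min_freq=0.05):
            # Flag distractors chosen by fewer than min_freq of examinees, or
            # with a positive option-discrimination statistic (the definition
            # used in the abstract above).
            responses = np.asarray(responses)
            total_scores = np.asarray(total_scores, dtype=float)
            flagged = []
            for option in options:
                if option == key:
                    continue
                chose = (responses == option).astype(float)
                freq = chose.mean()
                disc = option_discrimination(chose, total_scores)
                if freq < min_freq or disc > 0:
                    flagged.append((option, freq, disc))
            return flagged

    For one item answered by a cohort with key "B", nonfunctioning_distractors(choices, ["A", "B", "C", "D"], "B", scores) would return the flagged distractors together with their choice frequencies and discrimination values, which is the information needed to revise or remove them.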

    An investigation into the optimal number of distractors in single-best answer exams

    In UK medical schools, five-option single-best-answer (SBA) questions are the most widely accepted format of summative knowledge assessment. However, writing SBA questions with four effective incorrect options is difficult and time-consuming, and consequently many SBAs contain a high proportion of implausible distractors. Previous research has suggested that fewer than five options could therefore be used for assessment without deterioration in quality. Despite an existing body of empirical research in this area, however, evidence from undergraduate medical education is sparse. This study investigated the frequency of non-functioning distractors in a sample of 480 summative SBA questions at Cardiff University. Distractor functionality was analysed, and various question models were then tested to investigate the impact of reducing the number of distractors per question on examination difficulty, reliability, discrimination, and pass rates. A questionnaire was additionally administered to 108 students (33% response rate) to gain insight into their perceptions of these models. The simulation of the various exam models revealed that pass rates, reliability, and mean item discrimination remained relatively constant under the four- and three-option SBA models. The average percentage mark, however, consistently increased by 1–3% under the four- and three-option models respectively. The questionnaire revealed that the student body held mixed views on the proposed format change. This study is one of the first to comprehensively investigate distractor performance in SBA examinations in undergraduate medical education. It provides evidence to suggest that using three-option SBA questions would maximise efficiency whilst maintaining, or possibly improving, psychometric quality, by allowing a greater number of questions per exam paper.
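
    The model-testing step described above lends itself to a small simulation. The sketch below is illustrative only: the study's rule for reallocating examinees whose chosen distractor is removed is not given here, so uniform redistribution over the remaining options is an assumption, and the data is synthetic. Each five-option item is reduced to four or three options by dropping the least-chosen distractors, after which the paper is rescored and the mean mark and KR-20 reliability are recomputed.

        import numpy as np

        rng = np.random.default_rng(0)

        def kr20(scored):
            # KR-20 reliability for a 0/1 item-score matrix (examinees x items).
            k = scored.shape[1]
            p = scored.mean(axis=0)
            return k / (k - 1) * (1 - (p * (1 - p)).sum() / scored.sum(axis=1).var(ddof=1))

        def reduce_options(responses, keys, n_options=5, keep=3):
            # Keep the key plus the (keep - 1) most frequently chosen distractors
            # for each item; examinees who chose a dropped distractor are
            # reassigned uniformly at random among the kept options (an
            # assumed reallocation rule, see the note above).
            new = responses.copy()
            for j in range(responses.shape[1]):
                counts = np.bincount(responses[:, j], minlength=n_options)
                distractors = sorted((o for o in range(n_options) if o != keys[j]),
                                     key=lambda o: counts[o], reverse=True)
                kept = [keys[j]] + distractors[:keep - 1]
                dropped = np.isin(responses[:, j], distractors[keep - 1:])
                new[dropped, j] = rng.choice(kept, size=int(dropped.sum()))
            return new

        # Synthetic data, purely illustrative: 200 examinees, 50 five-option items.
        responses = rng.integers(0, 5, size=(200, 50))
        keys = rng.integers(0, 5, size=50)
        for keep in (5, 4, 3):
            scored = (reduce_options(responses, keys, keep=keep) == keys).astype(float)
            print(f"{keep}-option model: mean mark {scored.mean():.3f}, KR-20 {kr20(scored):.3f}")

    On real item-analysis data, comparing the three printed lines is exactly the comparison the study reports: how mean mark, reliability, and (with an added discrimination statistic) item quality shift as the option count drops.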