2 research outputs found

    Assessment at UK medical schools varies substantially in volume, type and intensity and correlates with postgraduate attainment

    BACKGROUND: In the United Kingdom (UK), medical schools are free to develop local systems and policies that govern student assessment and progression. Successful completion of an undergraduate medical degree results in the automatic award of a provisional licence to practise medicine by the General Medical Council (GMC). Such a licensing process relies heavily on the assumption that individual schools develop similarly rigorous assessment policies. Little work has evaluated the variability of undergraduate medical assessment between medical schools. That absence is important in the light of the GMC's recent announcement of the introduction of the UKMLA (UK Medical Licensing Assessment) for all doctors who wish to practise in the UK. The present study aimed to quantify and compare the volume, type and intensity of summative assessment across medicine (A100) courses in the United Kingdom, and to assess whether intensity of assessment correlates with the postgraduate attainment of doctors from these schools.

    METHODS: Locally knowledgeable students in each school were approached to take part in guided-questionnaire interviews via telephone or Skype™. Their understanding of assessment at their medical school was probed, and later validated with the assessment department of the respective medical school. We gathered data for 25 of 27 A100 programmes in the UK and compared volume, type and intensity of assessment between schools. We then correlated these data with the mean first-attempt score of graduates sitting MRCGP and MRCP(UK), as well as with UKFPO selection measures.

    RESULTS: The median written assessment volume across all schools was 2000 min (mean = 2027, SD = 586, LQ = 1500, UQ = 2500, range = 1000-3200) and 1400 marks (mean = 1555, SD = 463, LQ = 1200, UQ = 1800, range = 1100-2800). The median practical assessment volume was 400 min (mean = 472, SD = 207, LQ = 400, UQ = 600, range = 200-1000). The median intensity (minutes-per-mark ratio) of summative written assessment was 1.24 min per mark (mean = 1.28, SD = 0.30, LQ = 1.11, UQ = 1.37, range = 0.85-2.08). An exploratory analysis suggested a significant correlation of total assessment time with mean first-attempt score on both the knowledge and the clinical assessments of MRCGP and of MRCP(UK).

    CONCLUSIONS: There are substantial differences in the volume, format and intensity of undergraduate assessment between UK medical schools. These findings suggest the potential for differences in the reliability of detecting poorly performing students, and for differences in identifying and stratifying academically equivalent students for ranking in the Foundation Programme Application System (FPAS). Furthermore, these differences appear to correlate directly with performance in postgraduate examinations. Taken together, our findings highlight highly variable local assessment procedures that warrant further investigation to establish their potential impact on students.
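    As a rough illustration of the quantities reported above, the sketch below shows how per-school summary statistics, the minutes-per-mark "intensity" ratio, and an exploratory correlation of assessment time with postgraduate scores could be computed. All figures in it are invented, and the use of Spearman's rank correlation is an assumption for illustration, not a method confirmed by the abstract.

```python
# Illustrative sketch only: the data below are made up and do not come from the study.
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-school totals of summative written assessment
written_minutes = np.array([1500, 1800, 2000, 2200, 2500, 3200, 1000, 2100])
written_marks = np.array([1200, 1400, 1550, 1800, 2000, 2800, 1100, 1600])

# Hypothetical mean first-attempt postgraduate knowledge scores per school
postgrad_scores = np.array([430, 455, 470, 480, 495, 520, 415, 475])

# Descriptive statistics of the kind reported in the abstract
print("median =", np.median(written_minutes))
print("mean   =", written_minutes.mean())
print("SD     =", written_minutes.std(ddof=1))
print("LQ, UQ =", np.percentile(written_minutes, [25, 75]))

# Intensity: minutes of written assessment per mark awarded
intensity = written_minutes / written_marks
print("median intensity (min/mark) =", np.median(intensity))

# Exploratory correlation of total assessment time with postgraduate attainment
rho, p = spearmanr(written_minutes, postgrad_scores)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```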

    The impact of large scale licensing examinations in highly developed countries: a systematic review

    BACKGROUND: To investigate the existing evidence base for the validity of large-scale licensing examinations, including their impact.

    METHODS: A systematic review against a validity framework, searching Embase (Ovid Medline), Medline (EBSCO), PubMed, Wiley Online, ScienceDirect and PsycINFO from 2005 to April 2015. Papers were included when they discussed national or large regional (state-level) examinations for clinical professionals, linked to examinations taken early in their careers or near the point of graduation, and where success was required in order to subsequently practise. Using a standardized data extraction form, two independent reviewers extracted study characteristics, with the rest of the team resolving any disagreement. A validity framework developed by the American Educational Research Association, the American Psychological Association and the National Council on Measurement in Education was used to evaluate each paper's evidence to support or refute the validity of national licensing examinations.

    RESULTS: 24 published articles provided evidence of validity across the five domains of the validity framework. Most papers (n = 22) provided evidence of national licensing examinations' relationships to other variables and their consequential validity. Overall there was evidence that those who do well on earlier or on subsequent examinations also do well on national testing. There is a correlation between national licensing examination (NLE) performance and some patient outcomes and rates of complaints, but no causal evidence has been established.

    CONCLUSIONS: The debate around licensure examinations is strong on opinion but weak on validity evidence. This is especially true of the wider claims that licensure examinations improve patient safety and practitioner competence.