
    The reliability of in-training assessment when performance improvement is taken into account

    During in-training assessment, students are assessed repeatedly over an extended period, so their performance can be expected to improve. We studied whether there is a measurable performance improvement when students are assessed over an extended period and how this improvement affects the reliability of the overall judgement. In-training assessment results were obtained from 104 students on rotation at our university hospital or at one of the six affiliated hospitals. Generalisability theory was used in combination with multilevel analysis to obtain reliability coefficients and to estimate the number of assessments needed for a reliable overall judgement, both including and excluding performance improvement. Students' clinical performance ratings improved significantly, from a mean of 7.6 at the start to a mean of 7.8 at the end of their clerkship. When performance improvement was taken into account, reliability coefficients were higher: the number of assessments needed to achieve a reliability of 0.80 or higher decreased from 17 to 11. Therefore, when studying the reliability of in-training assessment, performance improvement should be taken into account.
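    The link between a single assessment's reliability and the number of assessments needed can be sketched with the Spearman-Brown prophecy formula, a standard result consistent with generalisability theory. The single-assessment coefficients below (0.191 and 0.267) are illustrative back-calculations from the reported 17 and 11, not values taken from the study.

    ```python
    # Sketch (not the authors' code): Spearman-Brown prophecy formula,
    # showing how many assessments are needed to reach a target overall
    # reliability given the reliability of a single assessment.
    import math

    def assessments_needed(single_g: float, target: float = 0.80) -> int:
        """Smallest n such that n*g / (1 + (n-1)*g) >= target."""
        return math.ceil(target * (1 - single_g) / (single_g * (1 - target)))

    # Illustrative single-assessment coefficients (assumed, not reported):
    print(assessments_needed(0.191))  # ignoring improvement -> 17
    print(assessments_needed(0.267))  # modelling improvement -> 11
    ```

    A modest rise in the per-assessment coefficient, as obtained when performance improvement is modelled, cuts the required number of assessments substantially.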

    Evaluation of effectiveness of instruction and study habits in two consecutive clinical semesters of the Medical Curriculum Munich (MeCuM) reveals the need for more time for self-study and higher frequency of assessment

    BACKGROUND: Seven years after implementing a new curriculum, an evaluation was performed to explore possibilities for improvement. PURPOSES: To analyse students' study habits in relation to exam frequency and to evaluate the effectiveness of instruction. METHODS: Time spent on self-study (TSS) and the quantity of instruction (QI) were assessed during the internal medicine and the surgical semesters. Students and faculty members were asked about their study habits and their evaluation of the current curriculum. RESULTS: The TSS/QI ratio, a measure of the effectiveness of instruction, lies mainly below 1.0 and rises only prior to exams. Students and teachers prefer to have multiple smaller exams over the course of the semester. Furthermore, students wish to have more time for self-guided study. CONCLUSIONS: The TSS/QI ratio is predominantly below the aspired value of 1.0 and is positively related to test frequency. We therefore propose a reduction in compulsory lessons and an increase in test frequency.
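    The TSS/QI ratio is simply hours of self-study divided by hours of instruction, with 1.0 as the aspired threshold. A minimal sketch, using hypothetical weekly hours rather than data from the study:

    ```python
    # Sketch with invented numbers (not the study's data): TSS/QI compares
    # time spent on self-study with the quantity of instruction; values
    # below 1.0 mean students study less than they sit in class.
    weekly_hours = [
        # (label, time_spent_on_self_study, quantity_of_instruction)
        ("week 1", 10.0, 22.0),
        ("week 2", 12.0, 22.0),
        ("exam week", 30.0, 22.0),  # TSS typically rises only before exams
    ]

    for label, tss, qi in weekly_hours:
        ratio = tss / qi
        flag = "below aspired 1.0" if ratio < 1.0 else "at/above 1.0"
        print(f"{label}: TSS/QI = {ratio:.2f} ({flag})")
    ```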

    Introducing a reward system in assessment in histology: A comment on the learning strategies it might engender

    BACKGROUND: Assessment, as an inextricable component of the curriculum, is an important factor influencing student approaches to learning. If assessment is to drive learning, then it must assess the desired outcomes. In an effort to alleviate some of the anxiety associated with a traditional discipline-based second year of medical studies, a bonus system was introduced into the Histology assessment. Students obtaining a year mark of 70% were rewarded with full marks for some tests, resulting in many requiring only a few percentage points in the final examination to pass Histology. METHODS: In order to ascertain whether this bonus system might be impacting positively on student learning, thirty-two second-year medical students (non-randomly selected, representing four academic groups based on their mid-year results) were interviewed in 1997 and, in 1999, the entire second-year class completed a questionnaire (n = 189). Both groups were asked their opinions of the bonus system. RESULTS: Both groups overwhelmingly voted in favour of the bonus system, even though fewer than 45% of students achieved it. Students commented that it relieved some of the stress of the year-end examinations and was generally motivating with regard to their work commitment. CONCLUSIONS: Being satisfied with how and what we assess in Histology, we are of the opinion that this reward system may contribute to engendering appropriate learning approaches (i.e. for understanding) in students. As a result of its apparent positive influence on learning and attitudes towards learning, this bonus system will continue to operate until the traditional programme is phased out. It is hoped that other educators, believing that their assessment is a reflection of the intended outcomes, might recognise merit in rewarding students for consistent achievement.

    Joining the dots: Conditional pass and programmatic assessment enhances recognition of problems with professionalism and factors hampering student progress

    BACKGROUND: Programmatic assessment that looks across a whole year may contribute to better decisions than those made from isolated assessments alone. The aim of this study is to describe and evaluate a programmatic system for handling student assessment results that is aligned not only with learning and remediation but also with defensibility. The key components are standards-based assessments, use of "Conditional Pass", and regular progress meetings. METHODS: The new assessment system is described. The evaluation is based on years 4-6 of a 6-year medical course. The types of concern staff had about students were clustered into themes, alongside any interventions and outcomes for the students concerned. The likelihoods of passing the year according to type of problem were compared before and after the phasing in of the new assessment system. RESULTS: The new system was phased in over four years. In the fourth year of implementation, 701 students had 3539 assessment results, of which 4.1% were Conditional Pass. More in-depth analysis of the 1516 results available from 447 students revealed that the odds ratio (95% confidence interval) for failure was highest for students with problems identified in more than one part of the course (18.8 (7.7-46.2), p < 0.0001) or with problems with professionalism (17.2 (9.1-33.3), p < 0.0001). The odds ratio for failure was lowest for problems with assignments (0.7 (0.1-5.2), NS). Compared with the previous system, more students failed the year under the new system on the basis of performance during the year (20, or 4.5%, compared with four, or 1.1%, under the previous system; p < 0.01). CONCLUSIONS: The new system detects more students in difficulty and has resulted in less "failure to fail". The requirement to state the conditions required to pass has contributed to a paper trail that should improve defensibility. Most importantly, it has helped detect and act on some of the more difficult areas to assess, such as professionalism.
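    The odds ratios with 95% confidence intervals reported above are the standard statistic for a 2x2 outcome table. A minimal sketch using the common Woolf (log-based) interval and invented counts, not the study's data:

    ```python
    # Sketch (hypothetical counts, not the study's data): odds ratio for
    # failure given a flagged problem, with a Woolf 95% confidence interval.
    import math

    def odds_ratio_ci(a: int, b: int, c: int, d: int):
        """2x2 table: a = flagged & failed, b = flagged & passed,
        c = unflagged & failed, d = unflagged & passed."""
        or_ = (a * d) / (b * c)
        se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log odds ratio
        lo = math.exp(math.log(or_) - 1.96 * se)
        hi = math.exp(math.log(or_) + 1.96 * se)
        return or_, lo, hi

    # Invented example: 12/20 flagged vs 8/400 unflagged students failed.
    or_, lo, hi = odds_ratio_ci(12, 20, 8, 400)
    print(f"OR = {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")
    ```

    A sparse table (small cell counts) yields a wide interval, which is why the assignments odds ratio above, 0.7 (0.1-5.2), is not significant.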

    Social Media in the Emergency Medicine Residency Curriculum: Social Media Responses to the Residents’ Perspective Article

    In July to August 2014, Annals of Emergency Medicine continued a collaboration with an academic Web site, Academic Life in Emergency Medicine (ALiEM), to host an online discussion session featuring the 2014 Annals Residents' Perspective article "Integration of Social Media in Emergency Medicine Residency Curriculum" by Scott et al. The objective was to describe a 14-day worldwide clinician dialogue about evidence, opinions, and early relevant innovations revolving around the featured article, made possible by the immediacy of social media technologies. Six online facilitators hosted the multimodal discussion on the ALiEM Web site, Twitter, and YouTube, which featured 3 preselected questions. Engagement was tracked through various Web analytic tools, and themes were identified by content curation. The dialogue resulted in 1,222 unique page views from 325 cities in 32 countries on the ALiEM Web site, 569,403 Twitter impressions, and 120 views of the video interview with the authors. Five major themes identified in the discussion were curriculum design, pedagogy, and learning theory; digital curation skills of the 21st-century emergency medicine practitioner; engagement challenges; proposed solutions; and best practice examples. The immediacy of social media technologies provides clinicians the unique opportunity to engage a worldwide audience within a relatively short time frame.

    Modifying Hofstee standard setting for assessments that vary in difficulty, and to determine boundaries for different levels of achievement.

    BACKGROUND: Fixed-mark grade boundaries for non-linear assessment scales fail to account for variations in assessment difficulty. Where assessment difficulty varies more than the ability of successive cohorts or the quality of the teaching, anchoring grade boundaries to median cohort performance should provide an effective method for setting standards. METHODS: This study investigated the use of a modified Hofstee (MH) method for setting unsatisfactory/satisfactory and satisfactory/excellent grade boundaries for multiple-choice-question-style assessments, adjusted using the cohort median to obviate the effect of subjective judgements and the provision of grade quotas. RESULTS: Outcomes for the MH method were compared with formula scoring/correction for guessing (FS/CFG) for 11 assessments, indicating that there were no significant differences between MH and FS/CFG in either the effective unsatisfactory/satisfactory grade boundary or the proportion of unsatisfactory-graded candidates (p > 0.05). However, the boundary for excellent performance was significantly higher for MH (p < 0.01), and the proportion of candidates returned as excellent was significantly lower (p < 0.01). MH also generated performance profiles and pass marks that were not significantly different from those given by the Ebel method of criterion-referenced standard setting. CONCLUSIONS: This supports MH as an objective model for calculating variable grade boundaries, adjusted for test difficulty. Furthermore, it easily creates boundaries for unsatisfactory/satisfactory and satisfactory/excellent performance that are protected against grade inflation. It could be implemented as a stand-alone method of standard setting, or as part of the post-examination analysis of results for assessments for which pre-examination criterion-referenced standard setting is employed.
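    For context, the classic (unmodified) Hofstee compromise finds the cutoff where the observed fail-rate curve crosses a line between judge-supplied limits on acceptable cut scores and fail rates. The sketch below illustrates only that base method with invented judge limits and scores; the paper's modification (anchoring boundaries to the cohort median) is not specified here and is not implemented.

    ```python
    # Sketch of the classic Hofstee compromise (not the paper's modified
    # method). Judges supply a minimum/maximum acceptable cut score
    # (c_min, c_max) and fail rate (f_min, f_max); the cutoff is where the
    # cohort's fail-rate curve crosses the line joining (c_min, f_max)
    # and (c_max, f_min). All numbers below are invented for illustration.
    def hofstee_cutoff(scores, c_min, c_max, f_min, f_max):
        n = len(scores)
        best, best_gap = c_min, float("inf")
        for c in range(c_min, c_max + 1):
            fail_rate = 100.0 * sum(s < c for s in scores) / n
            # Judged acceptable fail rate at this cutoff (straight line).
            line = f_max - (f_max - f_min) * (c - c_min) / (c_max - c_min)
            gap = abs(fail_rate - line)
            if gap < best_gap:
                best, best_gap = c, gap
        return best

    scores = [48, 52, 55, 58, 60, 61, 63, 65, 68, 72, 75, 80]
    print(hofstee_cutoff(scores, c_min=50, c_max=65, f_min=0, f_max=20))
    ```

    The MH method described above would then adjust such boundaries relative to the cohort median so that an unusually hard paper does not inflate the fail rate.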