Clinical reasoning education at US medical schools: results from a national survey of internal medicine clerkship directors.
BACKGROUND: Recent reports, including the Institute of Medicine's Improving Diagnosis in Health Care, highlight the pervasiveness and underappreciated harm of diagnostic error, and recommend enhancing health care professional education in diagnostic reasoning. However, little is known about clinical reasoning curricula at US medical schools.
OBJECTIVE: To describe clinical reasoning curricula at US medical schools and to determine the attitudes of internal medicine clerkship directors toward teaching of clinical reasoning.
DESIGN: Cross-sectional multicenter study.
PARTICIPANTS: US institutional members of the Clerkship Directors in Internal Medicine (CDIM).
MAIN MEASURES: Responses to a survey emailed in May 2015 to CDIM institutional representatives, who reported on their medical school's clinical reasoning curriculum.
KEY RESULTS: The response rate was 74% (91/123). Most respondents reported that a structured curriculum in clinical reasoning should be taught in all phases of medical education, including the preclinical years (64/85; 75%), clinical clerkships (76/87; 87%), and the fourth year (75/88; 85%), and that more curricular time should be devoted to the topic. Respondents indicated that most students enter the clerkship with only poor (25/85; 29%) to fair (47/85; 55%) knowledge of key clinical reasoning concepts. Most institutions (52/91; 57%) surveyed lacked sessions dedicated to these topics. Lack of curricular time (59/67; 88%) and of faculty expertise in teaching these concepts (53/76; 69%) were identified as barriers.
CONCLUSIONS: Internal medicine clerkship directors believe that clinical reasoning should be taught throughout the 4 years of medical school, with the greatest emphasis in the clinical years. However, only a minority reported having teaching sessions devoted to clinical reasoning, citing a lack of curricular time and faculty expertise as the largest barriers. Our findings suggest that additional institutional and national resources should be dedicated to developing clinical reasoning curricula to improve diagnostic accuracy and reduce diagnostic error.
Evaluation of a novel assessment form for observing medical residents: a randomised, controlled trial.
CONTEXT: Teaching faculty cannot reliably distinguish between satisfactory and unsatisfactory resident performances and tend to give non-specific feedback.
OBJECTIVES: This study aimed to test whether a novel rating form can improve faculty accuracy in detecting unsatisfactory performances, generate more rater observations and improve feedback quality.
METHODS: Participants included two groups of 40 internal medicine residency faculty staff. Both groups received 1-hour training on how to rate trainees in the mini-clinical evaluation exercise (mini-CEX) format. The intervention group was given a new rating form structured with prompts, space for free-text comments, behavioural anchors and fewer scoring levels, whereas the control group used the current American Board of Internal Medicine Mini-CEX form. Participants watched and scored six scripted videotapes of resident performances 2-3 weeks after the training session.
RESULTS: Intervention group participants were more accurate in discriminating satisfactory from unsatisfactory performances (85% versus 73% correct; odds ratio [OR] 2.13, 95% confidence interval [CI] 1.16-3.14, P = 0.02) and yielded more correctly identified unsatisfactory performances (96% versus 52% correct; OR 25.35, 95% CI 9.12-70.46), but were less accurate in identifying satisfactory performances (73% versus 95% correct; OR 0.15, 95% CI 0.05-0.39). Intervention group participants averaged one fewer declared intended feedback item (4.7 versus 5.7) and showed no difference in the amount of feedback that was above minimal in quality. Intervention group participants generated more written evaluative observations (10.8 versus 5.7). Inter-rater agreement improved with the new form (Fleiss' kappa, 0.52 versus 0.30).
CONCLUSIONS: Modifying the currently used direct observation process may produce more recorded observations, increase inter-rater agreement and improve overall rater accuracy, but it may also increase severity error.