
    Principles and Practice of Case-based Clinical Reasoning Education: A Method for Preclinical Students

    This volume describes and explains the educational method of Case-Based Clinical Reasoning (CBCR) used successfully in medical schools to prepare students to think like doctors before they enter the clinical arena and become engaged in patient care. Although this approach poses the paradoxical problem of a lack of the clinical experience that is so essential for building proficiency in clinical reasoning, CBCR is built on the premise that solving clinical problems involves the ability to reason about disease processes. This requires knowledge of anatomy and of the working and pathology of organ systems, as well as the ability to regard patient problems as patterns and compare them with instances of illness scripts of patients the clinician has seen in the past and stored in memory. CBCR stimulates the development of early, rudimentary illness scripts through elaboration and systematic discussion of the course of action from the initial presentation of the patient to the final steps of clinical management. The book combines general background on clinical reasoning education and assessment with a detailed elaboration of the CBCR method for application in any medical curriculum, either as a mandatory or as an elective course. It consists of three parts: a general introduction to clinical reasoning education, application of the CBCR method, and cases that can be used by educators to try out this method.

    An Investigation of Professionalism Reflected by Student Comments on Formative Virtual Patient Encounters

    Background: This study explored the use of virtual patient generated data by investigating the association between students' unprofessional patient summary statements, entered during an online virtual patient case, and detection of their future unprofessional behavior. Method: At the USUHS, students complete a number of virtual patient encounters, including a patient summary, to meet the clerkship requirements of Internal Medicine, Family Medicine, and Pediatrics. We reviewed the summary statements of 343 students who graduated in 2012 and 2013. Each statement was rated on four features: Unprofessional, Professional, Equivocal (could be construed as unprofessional), and Unanswered (the student did not enter a statement). We also combined Unprofessional and Equivocal into a new category indicating a statement that received either rating. We then examined the associations between students' scores on these categories (i.e., whether they received a particular rating or not) and the Expertise and Professionalism scores reflected on a post-graduate year one (PGY-1) program director (PD) evaluation form. The PD forms contained 58 Likert-scale items designed to measure the two constructs (Expertise and Professionalism). Results: The inter-rater reliability of statement coding was high (Cohen's Kappa = .97). Receiving an Unprofessional or Equivocal rating was significantly correlated with a lower Expertise score (r = −.19, P < .05) as well as a lower Professionalism score (r = −.17, P < .05) during PGY-1. Conclusion: Most schools rely on incident reports and review of routine student evaluations to identify the majority of professionalism lapses. Unprofessionalism reflected in student entries may provide additional markers foreshadowing subsequent unprofessional behavior.
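
    As an illustrative aside, the two statistics reported above are standard and straightforward to compute. Below is a minimal Python sketch using hypothetical ratings and scores (none of the data or variable names come from the study): Cohen's kappa for the inter-rater reliability of statement coding, and a point-biserial correlation (Pearson's r with a binary variable) between receiving an Unprofessional/Equivocal rating and a continuous PGY-1 score.

        # Hypothetical sketch; labels, counts, and scores are illustrative only.
        from sklearn.metrics import cohen_kappa_score
        from scipy.stats import pearsonr

        # Two raters' codes for the same summary statements.
        rater_a = ["Professional", "Unprofessional", "Equivocal", "Professional", "Unanswered"]
        rater_b = ["Professional", "Unprofessional", "Equivocal", "Unanswered", "Unanswered"]
        kappa = cohen_kappa_score(rater_a, rater_b)  # inter-rater reliability

        # Point-biserial correlation: binary flag (1 = statement rated
        # Unprofessional or Equivocal) against a continuous PD evaluation score.
        flagged = [1, 1, 0, 0, 1, 0]
        expertise = [3.2, 2.8, 4.1, 4.5, 3.0, 4.2]
        r, p = pearsonr(flagged, expertise)
        print(f"kappa = {kappa:.2f}, r = {r:.2f}, p = {p:.3f}")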

    Are Commonly Used Resident Measurements Associated with Procedural Skills in Internal Medicine Residency Training?

    BACKGROUND: Acquisition of competence in performing a variety of procedures is essential during Internal Medicine (IM) residency training. PURPOSES: To determine the rate of procedural complications by IM residents; to determine whether having 1 or more complications correlated with institutional procedural certification status or with attending ratings of resident procedural skill competence on the American Board of Internal Medicine (ABIM) monthly evaluation form (ABIM-MEF); and to assess whether an association exists between procedural complications and in-training examination and ABIM board certification scores. METHODS: We retrospectively reviewed all procedure log sheets, procedural certification status, ABIM-MEF procedural skills ratings, and in-training and certifying examination (ABIM-CE) scores from the period 1990–1999 for graduates of one IM residency training program. RESULTS: Among 69 graduates, 2,212 monthly procedure log sheets and 2,475 ABIM-MEFs were reviewed. The overall complication rate was 2.3/1,000 procedures (95% CI: 1.4–3.1/1,000 procedures). With the exception of procedural certification status as judged by institutional faculty, there was no association between our resident measurements and procedural complications. CONCLUSIONS: Our findings support the need for a resident procedural competence certification system based on direct observation. Our data support the ABIM's action to remove resident procedural competence from the monthly ABIM-MEF ratings.
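
    As a side note, a rate with a confidence interval of this form typically follows the normal approximation for a binomial proportion. A minimal Python sketch with purely illustrative counts (the study's raw totals are not reproduced here):

        # Hypothetical counts; the study's raw numbers are not given here.
        from math import sqrt

        complications = 28      # illustrative number of complications
        procedures = 12_000     # illustrative total procedures performed
        p = complications / procedures
        half_width = 1.96 * sqrt(p * (1 - p) / procedures)  # 95% CI half-width
        lo, hi = p - half_width, p + half_width
        print(f"rate = {p*1000:.1f}/1,000 procedures "
              f"(95% CI: {lo*1000:.1f}-{hi*1000:.1f}/1,000)")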

    Even a little sleepiness influences neural activation and clinical reasoning in novices

    Funding: This study was funded by a grant from the Scottish Medical Education Research Consortium (SMERC). SMERC had no involvement in the study design; collection, analysis, and interpretation of data; writing of the report; or the decision to submit the report for publication. Acknowledgements: We thank the students who took part in this project, and the Institute of Education for Medical and Dental Sciences, University of Aberdeen, for supporting this project. We thank the American College of Physicians for the questions used in this study. We thank Professor Susan Jamieson, University of Glasgow, for her support at the stage of seeking funding for this work.

    Teaching Cognitive Biases in Clinical Decision Making: A Case-Based Discussion

    This resource consists of five case scenarios aimed at teaching and testing participants in identifying inherent cognitive biases as well as in considering alternative diagnoses ("thinking out of the box"). These cases are embedded with cognitive biases commonly encountered in the clinical setting. By using a blueprint to guide the creation of these cases, at least two aspects of patient care are tested (e.g., history taking, physical exam, data interpretation, diagnosis). Each of these cases is framed in such a way as to lead the participants toward an obvious diagnosis. But beyond the obvious diagnosis, there are subtle clinical cues that point to the likelihood of another, more urgent or life-threatening diagnosis that must be considered. Participants should be reminded that in real situations, failure to consider these life-threatening conditions may be detrimental to the patient. Undergirding the construction of these cases is the theoretical premise that if participants slow down and reflect on questions like "Is there any life or limb threat that I need to rule out in this patient?", "If I am wrong, what else could it be?", or "Do I have sufficient evidences to support or exclude this diagnosis?", they are more likely to avoid these cognitive biases and to pick up the second diagnosis.

    When will I get my paper back? A replication study of publication timelines for health professions education research.

    INTRODUCTION: Biomedical researchers have lamented the lengthy timelines from manuscript submission to publication and highlighted potential detrimental effects on scientific progress and scientists' careers. In 2015, Himmelstein identified the mean time from manuscript submission to acceptance in biomedicine as approximately 100 days. The length of publication timelines in health professions education (HPE) is currently unknown. METHODS: This study replicates Himmelstein's work with a sample of 14 HPE journals published between 2008 and 2018. Using PubMed, 19,182 article citations were retrieved. Open metadata for each were downloaded, including the date the article was received by the journal, the date the authors resubmitted revisions, the date the journal accepted the article, and the date of entry into PubMed. Journals without publication history metadata were excluded. RESULTS: Publication history data were available for 55% (n = 8) of the journals sampled. The publication histories of 4,735 (25%) articles were analyzed. Mean time from (1) author submission to journal acceptance was 180.93 days (SD = 103.89), (2) author submission to posting on PubMed was 263.55 days (SD = 157.61), and (3) journal acceptance to posting on PubMed was 83.15 days (SD = 135.72). DISCUSSION: This study presents publication metadata for the journals that openly provide it, a first step towards understanding publication timelines in HPE. The findings confirm the replicability of the original study, and the limited data suggest that, in comparison to biomedical scientists broadly, medical educators may experience longer wait times for article acceptance and publication. The reasons for these delays are currently unknown and deserve further study; such work would be facilitated by increased public access to journal metadata.
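
    For readers curious how such timelines can be extracted, PubMed's open metadata includes a history block with dates keyed by publication status (e.g. "received", "accepted", "pubmed"). The Python sketch below, a hedged illustration rather than the authors' actual code, pulls that block through the NCBI E-utilities efetch endpoint and computes the two intervals reported above; the PMID shown is a placeholder.

        # Illustrative sketch; the PMID below is a placeholder, not a study article.
        from datetime import date
        import urllib.request
        import xml.etree.ElementTree as ET

        EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

        def history_dates(pmid: str) -> dict:
            """Return {PubStatus: date} from an article's PubMed history block."""
            url = f"{EFETCH}?db=pubmed&id={pmid}&retmode=xml"
            with urllib.request.urlopen(url) as resp:
                root = ET.parse(resp).getroot()
            dates = {}
            for d in root.iter("PubMedPubDate"):
                status = d.get("PubStatus")  # e.g. "received", "accepted", "pubmed"
                y, m, day = (int(d.findtext(t, "1")) for t in ("Year", "Month", "Day"))
                dates[status] = date(y, m, day)
            return dates

        h = history_dates("12345678")  # placeholder PMID
        if "received" in h and "accepted" in h:
            print("submission to acceptance:", (h["accepted"] - h["received"]).days, "days")
        if "accepted" in h and "pubmed" in h:
            print("acceptance to PubMed:", (h["pubmed"] - h["accepted"]).days, "days")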

    A portable mnemonic to facilitate checking for cognitive errors

    Background: Although a clinician may intend to carry out strategies to reduce cognitive errors, this intention may not be realized, especially under heavy workload or following a period of interruptions. Implementing strategies to reduce cognitive errors in the clinical setting may be facilitated by a portable mnemonic in the form of a checklist. Methods: A two-stage approach using both qualitative and quantitative methods was used in the development and evaluation of a mnemonic checklist. In the development stage, a focus-driven literature search and a face-to-face discussion with a content expert in cognitive errors were carried out, and the categories of cognitive errors addressed and represented in the checklist were identified. In the judgment stage, the face and content validity of these categories were determined by coding the responses of a panel of experts in cognitive errors. Results: The development stage produced a preliminary version of the checklist in the form of four questions represented by four letters. The letter 'T' in the TWED checklist stands for 'Threat' ('Is there any life or limb threat that I need to rule out in this patient?'), 'W' for 'Wrong/What else' ('What if I am wrong? What else could it be?'), 'E' for 'Evidences' ('Do I have sufficient evidences to support or exclude this diagnosis?'), and 'D' for 'Dispositional factors' ('Is there any dispositional factor that influences my decision?'). In the judgment stage, the content validity of most categories of cognitive errors addressed in the checklist was rated highly for relevance and representativeness (modified kappa values ranging from 0.65 to 1.0). Based on the coded responses of seven experts, the checklist was shown to be sufficiently comprehensive to activate the implementation intention of checking for cognitive errors. Conclusion: The TWED checklist is a portable mnemonic checklist that can be used to activate implementation intentions for checking cognitive errors in clinical settings. Its mnemonic structure eases recall, and its brevity makes it quick to apply to every clinical case until its use becomes habitual in daily clinical practice.

    Clinical reasoning: What do nurses, physicians, and students reason about.

    Clinical reasoning is a core ability in the health professions, but the term is conceptualised in multiple ways within and across professions. For interprofessional teamwork it is indispensable to recognise the differences in understanding between professions. Therefore, our aim was to investigate how nurses, physicians, and medical and nursing students define clinical reasoning. We conducted 43 semi-structured interviews with an interprofessional group from six countries and qualitatively analysed their definitions of clinical reasoning based on a coding guide. Our results showed similarities across professions, such as an emphasis on clinical skills as part of clinical reasoning, but also revealed differences, such as a more patient-centered view and a broader understanding of the clinical reasoning concept among nurses and nursing students. The explicit sharing and discussion of differences in the understanding of clinical reasoning across the health professions can provide valuable insights into the perspectives of different team members on clinical practice and education. This understanding may lead to improved interprofessional collaboration, and our study's categories and themes can serve as a basis for such discussions.