
    Are Commonly Used Resident Measurements Associated with Procedural Skills in Internal Medicine Residency Training?

    BACKGROUND: Acquisition of competence in performing a variety of procedures is essential during Internal Medicine (IM) residency training. PURPOSES: To determine the rate of procedural complications among IM residents; to determine whether having one or more complications correlated with institutional procedural certification status or with attending ratings of resident procedural skill competence on the American Board of Internal Medicine (ABIM) monthly evaluation form (ABIM-MEF); and to assess whether an association exists between procedural complications and in-training examination and ABIM board certification scores. METHODS: We retrospectively reviewed all procedure log sheets, procedural certification status, ABIM-MEF procedural skills ratings, and in-training and certifying examination (ABIM-CE) scores for graduates of one IM residency training program over the period 1990–1999. RESULTS: Among 69 graduates, 2,212 monthly procedure log sheets and 2,475 ABIM-MEFs were reviewed. The overall complication rate was 2.3/1,000 procedures (95% CI: 1.4–3.1/1,000 procedures). With the exception of procedural certification status as judged by institutional faculty, there was no association between our resident measurements and procedural complications. CONCLUSIONS: Our findings support the need for a resident procedural competence certification system based on direct observation. Our data support the ABIM's action to remove resident procedural competence from the monthly ABIM-MEF ratings.
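The interval reported above is a standard large-sample confidence interval for an event rate. As a minimal sketch of the arithmetic, the snippet below computes a normal-approximation (Wald) 95% CI per 1,000 procedures; the event and procedure counts are hypothetical, chosen only for illustration, since the abstract reports the rate and CI but not the raw numerator and denominator.

```python
import math

def rate_ci(events, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for an event rate.

    events: number of complications observed
    n:      total number of procedures
    Returns (rate, lower, upper), each expressed per 1,000 procedures.
    """
    p = events / n
    se = math.sqrt(p * (1 - p) / n)          # standard error of the proportion
    lo = max(p - z * se, 0.0)                # clamp at zero: rates are non-negative
    hi = p + z * se
    return p * 1000, lo * 1000, hi * 1000

# Hypothetical counts (NOT from the study) sized to give a rate near 2.3/1,000
rate, lo, hi = rate_ci(events=28, n=12000)
print(f"{rate:.1f}/1,000 (95% CI: {lo:.1f}-{hi:.1f}/1,000)")
# → 2.3/1,000 (95% CI: 1.5-3.2/1,000)
```

With rare events, an exact Poisson or Wilson interval would be slightly wider than this Wald approximation, which may explain small differences from a published CI.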

    Assessing the Reliability and Validity of the Mini Clinical Evaluation Exercise for Internal Medicine Residency Training

    Abstract PURPOSE: The mini-clinical evaluation exercise, or mini-CEX, assesses residents' history and physical examination skills. To date, no study has assessed the validity of the mini-CEX (mCEX) evaluation format. The authors' objective was to determine the reliability and validity of the mCEX evaluation format. METHOD: Twenty-three first-year residents at Wright-Patterson Medical Center in Dayton, Ohio, were included in the study (academic years 1996-97, 1997-98, and 1998-99). Validity of the instrument was determined by comparing mCEX scores with scores from corresponding sections of a modified version of the standard American Board of Internal Medicine's (ABIM's) monthly evaluation form (MEF) and the American College of Physicians-American Society of Internal Medicine In-Training Examination (ITE). All ABIM MEFs were used without exclusionary criteria, including ABIM MEFs from months where a corresponding mCEX evaluation was not performed. RESULTS: Each resident in the study had an average of seven mCEX evaluations and 12 ABIM MEFs. Of the 168 required mCEX evaluations, 162 were studied. Internal consistency reliability was .90. Statistically significant correlations were found for the following: mCEX history with ABIM history; mCEX physical exam with ABIM physical exam; mCEX clinical judgment with ABIM clinical judgment, medical care, medical knowledge, and the ITE; mCEX humanistic attributes with ABIM humanistic attributes; and mCEX overall clinical competence with ABIM overall clinical competence, medical care, medical knowledge, and the ITE. Analysis of variance comparing sequential mean mCEX scores yielded no significant difference. CONCLUSIONS: This study suggests that the mCEX is a feasible and reliable evaluation tool. The validity of the mCEX is supported by the strong correlations between mCEX scores and corresponding ABIM MEF scores as well as the ITE.
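The internal consistency reliability of .90 reported above is the kind of figure conventionally obtained with Cronbach's alpha across an instrument's item ratings. As a minimal sketch of that computation (the abstract does not state the exact coefficient used or the raw scores, so the item names and ratings below are hypothetical):

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for an instrument.

    items: list of item-score columns (one inner list per item,
           all the same length -- one entry per examinee).
    """
    k = len(items)
    # Sum of the individual item variances (population variance)
    item_var_sum = sum(statistics.pvariance(col) for col in items)
    # Variance of each examinee's total score across all items
    totals = [sum(scores) for scores in zip(*items)]
    total_var = statistics.pvariance(totals)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical 1-9 ratings on three mCEX-style items for five residents
history  = [7, 8, 6, 9, 7]
physical = [6, 8, 6, 9, 7]
judgment = [7, 9, 6, 8, 7]
print(round(cronbach_alpha([history, physical, judgment]), 2))
# → 0.94
```

Alpha rises when items co-vary strongly relative to their individual spread, which is why highly correlated item ratings (as in the toy data) yield a value near 1.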
