
    Ready or Not, Here They Come: Acting Interns’ Experience and Perceived Competency Performing Basic Medical Procedures

    OBJECTIVE: To assess acting interns' (AIs') experience with and perceived level of competency performing 6 basic medical procedures. DESIGN: Fourth-year medical students at the University of Cincinnati College of Medicine (UCCOM) are required to complete 2 AI rotations in Internal Medicine. All AIs in 2003–2004 (n = 150) and 2004–2005 (n = 151) were asked to complete a survey about whether, during each of their rotations, they had performed and felt competent performing the following procedures: phlebotomy, intravenous (IV) catheter insertion, arterial blood gas (ABG) sampling, nasogastric (NG) tube insertion, lumbar puncture (LP), and Foley catheter insertion. RESULTS: Four hundred sixty-seven of 601 possible surveys (across both years and both rotations) were completed (78% response rate). During both rotations, relatively few students performed the procedures, ranging from 9% for Foley catheter insertion (24/208) to 50% for both ABG sampling and NG tube insertion (130/259). The two procedures performed most often were ABG sampling (range 46–50%) and NG tube insertion (range 42–50%). Feelings of competency varied from 12% (LP) to 82% (Foley catheter insertion). Except for LP, students who performed a procedure at least once reported feeling more competent (range 85% for ABG sampling to 96% for Foley catheter insertion). Among students who performed an LP during a rotation, many still did not feel competent performing LPs: 23 (74%) in rotation 1 and 20 (40%) in rotation 2. CONCLUSION: Many fourth-year students at UCCOM do not perform basic procedures during their acting internship rotations. Procedural performance correlates with feelings of competency. Lumbar puncture competency may be too ambitious a goal for medical students.

    A generalizability study of the medical judgment vignettes interview to assess students' noncognitive attributes for medical school

    BACKGROUND: Although the reliability of admission interviews has been improved through the use of objective and structured approaches, there remains the issue of identifying and measuring relevant attributes or noncognitive domains of interest. In the present study, we use generalizability theory to estimate the variance associated with participants, judges, and stations in a semi-structured Medical Judgment Vignettes interview, used as part of an initiative to improve the reliability and content validity of the interview process for selecting students for medical school. METHODS: A three-station Medical Judgment Vignettes interview was conducted with 29 participants and scored independently by two judges on a well-defined 5-point rubric. Generalizability theory provides a method for estimating the variability attributable to a number of facets. In the present study, each judge (j) rated each participant (p) on all three Medical Judgment Vignette stations (s). A two-facet crossed-design generalizability study was used to determine the optimal number of stations and judges needed to achieve a 0.80 reliability coefficient. RESULTS: The generalizability analysis showed that a three-station, two-judge Medical Judgment Vignettes interview yields a G coefficient of 0.70. As shown by the adjusted Eρ² scores, because interviewer variability is negligible, increasing the number of judges from two to three does not improve the generalizability coefficient. Increasing the number of stations, however, has a substantial influence on the overall dependability of the measurement. In a decision study analysis, increasing the number of stations to six, with a single judge at each station, results in a G coefficient of 0.81. CONCLUSION: The Medical Judgment Vignettes interview provides a reliable approach to assessing candidates' noncognitive attributes for medical school. The high inter-rater reliability is attributed to the greater objectivity achieved through the use of the semi-structured interview format and the clearly defined scoring rubric created for each judgment vignette. Despite the relatively high generalizability coefficient obtained with only three stations, future research should further explore the reliability, and equally importantly the validity, of the vignettes with a large group of candidates applying to medical school.
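    The station-versus-judge trade-off reported above follows from the standard relative generalizability coefficient for a fully crossed two-facet participant × station × judge (p × s × j) design. As a sketch in conventional G-theory notation (the variance components below are the standard ones, not values reported by the authors):

        E\rho^2 = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_{ps}/n_s + \sigma^2_{pj}/n_j + \sigma^2_{psj,e}/(n_s n_j)}

    When the judge-related components σ²_pj and σ²_psj,e contribute little, as the abstract reports for interviewer variability, the denominator is dominated by σ²_ps/n_s, so increasing the number of stations n_s raises Eρ² while adding judges barely moves it.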

    Comparison between Long-Menu and Open-Ended Questions in computerized medical assessments. A randomized controlled trial

    BACKGROUND: Long-menu questions (LMQs) are viewed as an alternative to open-ended questions (OEQs) in computerized assessment. So far, this question type and its influence on examination scores have not been studied sufficiently, yet the growing use of computerized assessments will also lead to increasing use of this question type. Using a summative online key feature (KF) examination, we evaluated whether LMQs are comparable to OEQs with regard to level of difficulty, performance, and response times. We also evaluated the content for its suitability for LMQs. METHODS: We randomized 146 fourth-year medical students into two groups. For the purpose of this study, we created 7 peer-reviewed KF cases with a total of 25 questions. All questions had the same content in both groups, but nine questions had a different answer type: group A answered these 9 questions in LM format, group B in OE format. In addition to the LM answer, group A could give an OE answer if the appropriate answer was not included in the list. RESULTS: The average number of correct answers for LMQs and OEQs showed no significant difference (p = 0.93). Among all 630 LM answers, only one correct term (0.32%) was not included in the list of answers. The response time for LMQs did not differ significantly from that of OEQs (p = 0.65). CONCLUSION: LMQs and OEQs do not differ significantly. Compared with standard multiple-choice questions (MCQs), the response time for LMQs and OEQs is longer, probably because they require active problem-solving skills and more practice. LMQs correspond more closely to short-answer questions (SAQs) than to OEQs and should be used only when the answers can be phrased clearly, using only a few precise synonyms. LMQs can decrease cueing effects and significantly simplify scoring in computerized assessment.
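    As a concrete illustration of why the long-menu format simplifies automated scoring, here is a minimal, hypothetical Python sketch (not the examination software used in the study; all names and menu entries are invented): the candidate types a few characters, the visible menu narrows to matching entries, and scoring reduces to checking the selected entry against a small set of accepted synonyms.

        # Hypothetical long-menu question (LMQ) scoring; illustrative only,
        # not the software used in the study.

        LONG_MENU = [
            "acute appendicitis",
            "acute cholecystitis",
            "acute pancreatitis",
            "peptic ulcer disease",
            # in practice, a long alphabetical list of candidate answers
        ]

        # Accepted answers for one question: a few precise synonyms.
        CORRECT = {"acute appendicitis", "appendicitis"}

        def narrow_menu(prefix: str, menu: list[str]) -> list[str]:
            """Return menu entries matching the typed text (case-insensitive)."""
            p = prefix.lower()
            return [entry for entry in menu if p in entry.lower()]

        def score(selected: str) -> int:
            """Score 1 if the selected entry is an accepted synonym, else 0."""
            return int(selected.lower() in CORRECT)

        if __name__ == "__main__":
            options = narrow_menu("append", LONG_MENU)
            print(options)            # ['acute appendicitis']
            print(score(options[0]))  # 1

    Because an unlisted correct term can fall back to free-text review, as group A's design allowed, the menu only needs to cover nearly all correct phrasings; the very low miss rate reported above suggests this is achievable.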

    Internet Resources for Curriculum Development in Medical Education: An Annotated Bibliography

    Curriculum development in medical education should be a methodical and scholarly, yet practical, process that addresses the needs of trainees, patients, and society. To be maximally efficient and effective, it should build on previous work and use existing resources. A conventional search of the literature is necessary but insufficient for this purpose. The internet provides a rich source of information and materials. This bibliography is a guide to internet resources of use to curriculum developers, organized into 1) medical accreditation bodies, 2) topic-oriented resources, 3) general educational resources within medicine, and 4) general education resources beyond medicine.