16 research outputs found

    Sources of Measurement Error in an ECG Examination: Implications for Performance-Based Assessments

    Objective: To assess the sources of measurement error in an electrocardiogram (ECG) interpretation examination given in a third-year internal medicine clerkship. Design: Three successive generalizability studies were conducted: 1) multiple faculty rated student responses to a previously administered exam; 2) the rating criteria were revised and study 1 was repeated; 3) the examination was converted into an extended matching format that included multiple cases with the same underlying cardiac problem. Results: The discrepancies among raters (main effects and interactions) were dwarfed by the error associated with case specificity. The largest source of the differences among raters lay in rating student errors of commission rather than errors of omission. Revisions to the rating criteria may have increased inter-rater reliability slightly; however, because of case specificity, they had little impact on the overall reliability of the exam. The third study indicated that the majority of the variability in student performance across cases occurred across cases within the same type of cardiac problem rather than between different types of cardiac problems. Conclusions: Case specificity was the overwhelming source of measurement error. The variation among cases came mainly from discrepancies in performance between examples of the same cardiac problem rather than from differences in performance across different types of cardiac problems. This suggests that it is necessary to include a large number of cases even if the goal is to assess performance on only a few types of cardiac problems.
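
    A minimal sketch of the kind of generalizability (G-study) arithmetic the abstract describes, for a fully crossed students x cases design. The scores, sample sizes, and variance magnitudes below are hypothetical stand-ins rather than the study's data; the sketch only illustrates how variance components and a generalizability coefficient are typically estimated from mean squares, and why case specificity demands many cases.

```python
# Hypothetical one-facet G-study (students x ECG cases); all numbers are simulated.
import numpy as np

rng = np.random.default_rng(0)
n_p, n_c = 40, 12                              # 40 students, 12 ECG cases (illustrative)
student = rng.normal(0, 5, size=(n_p, 1))      # true student ability
case    = rng.normal(0, 3, size=(1, n_c))      # case difficulty
noise   = rng.normal(0, 8, size=(n_p, n_c))    # student-by-case interaction + residual error
scores  = 70 + student + case + noise

grand  = scores.mean()
ms_p   = n_c * np.sum((scores.mean(axis=1) - grand) ** 2) / (n_p - 1)  # students
ms_c   = n_p * np.sum((scores.mean(axis=0) - grand) ** 2) / (n_c - 1)  # cases
ss_res = np.sum((scores - grand) ** 2) - ms_p * (n_p - 1) - ms_c * (n_c - 1)
ms_res = ss_res / ((n_p - 1) * (n_c - 1))                              # interaction + error

# Variance components from the expected mean squares of a crossed p x c design.
var_res = ms_res
var_p   = max((ms_p - ms_res) / n_c, 0.0)
var_c   = max((ms_c - ms_res) / n_p, 0.0)

# Relative generalizability coefficient for a test assembled from n_cases cases:
# adding cases shrinks the case-specificity error term.
def g_coefficient(n_cases):
    return var_p / (var_p + var_res / n_cases)

print({n: round(g_coefficient(n), 2) for n in (6, 12, 24, 48)})
```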

    Using cloud-based mobile technology for assessment of competencies among medical students

    Valid, direct observation of medical student competency in clinical settings remains challenging and limits the opportunity to promote performance-based student advancement. The rationale for direct observation is to ascertain that students have acquired the core clinical competencies needed to care for patients. Too often, student observation results in highly variable evaluations that are skewed by factors other than the student's actual performance. Among the barriers to effective direct observation and assessment is the lack of effective tools and strategies for assuring that transparent standards are used for judging clinical competency in authentic clinical settings. We developed a web-based content management system named Just in Time Medicine (JIT) to address many of these issues. The goals of JIT were fourfold: first, to create a self-service interface allowing faculty with average computing skills to author customizable content and criterion-based assessment tools displayable on internet-enabled devices, including mobile devices; second, to create an assessment and feedback tool capable of capturing learner progress related to hundreds of clinical skills; third, to enable easy access to and use of these tools by faculty for learner assessment in authentic clinical settings, as a means of just-in-time faculty development; and fourth, to create a permanent record of the trainees' observed skills useful for both learner and program evaluation. From July 2010 through October 2012, we implemented a JIT-enabled clinical evaluation exercise (CEX) among 367 third-year internal medicine students. Observers (attending physicians and residents) performed CEX assessments using JIT to guide and document their observations, to record the time they spent observing and giving feedback to the students, and to report their overall satisfaction. Inter-rater reliability and validity were assessed with 17 observers who viewed six videotaped student-patient encounters, and by measuring the correlation between student CEX scores and their scores on subsequent standardized-patient OSCE exams. A total of 3567 CEXs were completed by 516 observers. The average number of evaluations per student was 9.7 (±1.8 SD) and the average number of CEXs completed per observer was 6.9 (±15.8 SD). Observers spent less than 10 min on 43–50% of the CEX observations and on 68.6% of the feedback sessions. A majority of observers (92%) reported satisfaction with the CEX. Inter-rater reliability was 0.69 among all observers viewing the videotapes, and these ratings adequately discriminated competent from non-competent performance. The measured CEX grades correlated with subsequent student performance on an end-of-year OSCE. We conclude that the use of JIT is feasible in capturing discrete clinical performance data with a high degree of user satisfaction. Our embedded checklists had adequate inter-rater reliability and concurrent and predictive validity.
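
    The abstract reports an inter-rater reliability of 0.69 from the videotaped encounters and a correlation between CEX and OSCE scores. A hedged sketch of how such figures are commonly computed is shown below; the data frames, column names, and numbers are hypothetical, and the paper does not state which specific reliability coefficient was used (an intraclass correlation is assumed here).

```python
# Illustrative only: hypothetical ratings and scores, not the study data.
import pandas as pd
import pingouin as pg
from scipy.stats import pearsonr

# One row per (observer, videotaped encounter) rating.
ratings = pd.DataFrame({
    "encounter": sorted([1, 2, 3, 4, 5, 6] * 3),
    "observer":  ["A", "B", "C"] * 6,
    "score":     [7, 6, 7, 4, 5, 4, 8, 8, 7, 5, 6, 5, 9, 8, 9, 3, 4, 3],
})
icc = pg.intraclass_corr(data=ratings, targets="encounter",
                         raters="observer", ratings="score")
print(icc[icc["Type"] == "ICC2"][["ICC", "CI95%"]])   # two-way random, single rater

# One row per student: mean CEX grade and end-of-year OSCE score (hypothetical).
students = pd.DataFrame({
    "cex_mean": [6.1, 7.4, 5.2, 8.0, 6.8, 7.1],
    "osce":     [71, 83, 65, 88, 77, 80],
})
r, p = pearsonr(students["cex_mean"], students["osce"])
print(f"CEX-OSCE correlation: r={r:.2f}, p={p:.3f}")
```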

    A randomized trial comparing digital and live lecture formats [ISRCTN40455708]

    BACKGROUND: Medical education is increasingly being conducted in community-based teaching sites at diverse locations, making it difficult to provide a consistent curriculum. We conducted a randomized trial to assess whether students who viewed digital lectures would perform as well on a measure of cognitive knowledge as students who viewed live lectures. Students' perceptions of the digital lecture format and their opinions as to whether a digital lecture format could serve as an adequate replacement for live lectures were also assessed. METHODS: Students were randomized either to attend a lecture series at our main campus or to view digital versions of the same lectures at community-based teaching sites. Both groups completed the same examination based on the lectures, and the group viewing the digital lectures completed a feedback form on the digital format. RESULTS: There were no differences in performance as measured by means or average rank. Despite technical problems, the students who viewed the digital lectures overwhelmingly felt that the digital lectures could replace live lectures. CONCLUSIONS: This study provides preliminary evidence that digital lectures can be a viable alternative to live lectures as a means of delivering didactic presentations in a community-based setting.
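
    The comparison the abstract describes ("means or average rank") corresponds to a standard two-sample analysis. The sketch below shows one plausible version with made-up exam scores; it is not the authors' analysis code.

```python
# Hypothetical exam scores for the two randomized groups (illustrative only).
from scipy.stats import ttest_ind, mannwhitneyu

live_scores    = [78, 85, 69, 92, 74, 81, 88, 79]   # attended live lectures
digital_scores = [80, 83, 71, 90, 76, 79, 87, 82]   # viewed digital lectures

t_stat, t_p = ttest_ind(live_scores, digital_scores)     # difference in means
u_stat, u_p = mannwhitneyu(live_scores, digital_scores)  # difference in rank distributions

print(f"t-test: t={t_stat:.2f}, p={t_p:.3f}")
print(f"Mann-Whitney U: U={u_stat:.1f}, p={u_p:.3f}")
```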

    The implementation of a mobile problem-specific electronic CEX for assessing directly observed student—patient encounters

    Background: Facilitating direct observation of medical students' clinical competencies is a pressing need. Methods: We developed an electronic problem-specific Clinical Evaluation Exercise (eCEX) based on a national curriculum. We assessed its feasibility for monitoring and recording students' competencies and the impact of a grading incentive on the frequency of direct observations in an internal medicine clerkship. Students (n=56) at three clinical sites used the eCEX, and comparison students (n=56) at three other clinical sites did not. Students in the eCEX group were required to arrange 10 evaluations with faculty preceptors. Students in the second group were required to document a single, faculty-observed 'Full History and Physical' encounter with a patient. Students and preceptors were surveyed at the end of each rotation. Results: eCEX increased students' and evaluators' understanding of direct-observation objectives and had a positive impact on the evaluators' ability to provide feedback and assessments. The grading incentive increased the number of times a student reported direct observation by a resident preceptor. Conclusions: eCEX appears to be an effective means of enhancing student evaluation.

    Puzzle based teaching versus traditional instruction in electrocardiogram interpretation for medical students – a pilot study

    Background: Most medical professionals are expected to possess basic electrocardiogram (EKG) interpretation skills, but published data suggest that residents' and physicians' EKG interpretation skills are suboptimal. Learning styles differ among medical students; individualization of teaching methods has been shown to be viable and may result in improved learning. Puzzles have been shown to facilitate learning in a relaxed environment. The objective of this study was to assess the efficacy of a teaching puzzle for EKG interpretation skills among medical students. Methods: This was a reader-blinded crossover trial. Third-year medical students from the College of Human Medicine, Michigan State University participated in this study. Two groups (n = 9) received two traditional EKG interpretation skills lectures followed by a standardized exam, then two extra sessions with the teaching puzzle and a different exam. Two other groups (n = 6) received identical courses and exams with the puzzle session first, followed by the traditional teaching. EKG interpretation scores on the final test were used as the main outcome measure. Results: The average score after only traditional teaching was 4.07 ± 2.08, while after only the puzzle session it was 4.04 ± 2.36 (p = 0.97). The average improvement when the traditional session was followed by a puzzle session was 2.53 ± 1.94, while the average improvement when the puzzle session was followed by the traditional session was 2.08 ± 1.73 (p = 0.67). The final EKG exam score for this cohort (n = 15) was 84.1, compared with 86.6 (p = 0.22) for a comparable sample of medical students (n = 15) at a different campus. Conclusion: Teaching EKG interpretation with puzzles is comparable to traditional teaching and may be particularly useful for certain subgroups of students. Puzzle sessions are more interactive and relaxing, and warrant further investigation on a larger scale.

    Bronchopulmonary sequestration


    FEATURE ARTICLE

    The implementation of a mobile problem-specific electronic CEX for assessing directly observed student-patient encounters

    Are students ready for meaningful use?

    Background: The meaningful use (MU) of electronic medical records (EMRs) is being implemented in three stages. Key objectives of stage one include electronic analysis of data entered into structured fields, use of decision-support tools (e.g., checking drug–drug interactions [DDI]), and electronic information exchange. Objective: The authors assessed the performance of medical students on 10 stage-one MU tasks and measured the correlation between students' MU performance and subsequent end-of-clerkship professionalism assessments and their grades on an end-of-year objective structured clinical examination. Participants: Two hundred and twenty-two third-year medical students on the internal medicine (IM) clerkship. Design/main measures: From July 2010 to February 2012, all students viewed 15 online tutorials covering MU competencies. The authors measured student MU documentation and performance in the chart of a virtual patient using a fully functional training EMR. Specific MU measurements included adding a new problem, a new medication, an advance directive, smoking status, and the results of screening tests; performing a DDI check (in which a major interaction was probable); and communicating a plan for this interaction. Key results: A total of 130 MU errors were identified. Sixty-eight students (30.6%) had at least one error, and 30 (13.5%) had more than one (range 2–6). Of the 130 errors, 90 (69.2%) were errors in structured data entry. Errors occurred in medication dosing and instructions (18%), DDI identification (12%), documentation of smoking status (15%), and colonoscopy results (23%). Students with MU errors demonstrated poorer performance on end-of-clerkship professionalism assessments (r=−0.112, p=0.048) and lower objective structured clinical examination (OSCE) history-taking (r=−0.165, p=0.008) and communication scores (r=−0.173, p=0.006). Conclusions: MU errors among medical students are common and correlate with subsequent poor performance in multiple educational domains. These results indicate that without assessment and feedback, a substantial minority of students may not be ready to progress to more advanced MU tasks.
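
    The reported associations between MU errors and later clinical-skills scores could be computed in several ways, and the abstract does not spell out the procedure. Below is a hedged sketch assuming a point-biserial correlation between having any MU error and a continuous OSCE score, with hypothetical data.

```python
# Hypothetical data; the coefficient choice (point-biserial) is an assumption.
import numpy as np
from scipy.stats import pointbiserialr

had_mu_error = np.array([1, 0, 0, 1, 0, 1, 0, 0, 1, 0])            # 1 = at least one MU error
osce_history = np.array([74, 88, 91, 70, 86, 68, 90, 84, 72, 87])  # OSCE history-taking scores

r, p = pointbiserialr(had_mu_error, osce_history)
print(f"MU error vs. OSCE history-taking: r={r:.3f}, p={p:.3f}")
```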