12 research outputs found

    The Impact of Electronic Data to Capture Qualitative Comments in a Competency-Based Assessment System

    Introduction Digitalizing workplace-based assessments (WBAs) holds the potential to facilitate feedback and performance review, allowing data to be recorded, stored, and analyzed in real time. When digitizing assessment systems, however, it is unclear what is gained and lost in the message as a result of the change in medium. This study evaluates the quality of comments generated in paper versus electronic media and the influence of an assessor's seniority. Methods Using a realist evaluation framework, a retrospective database review was conducted with paper-based and electronic medium comments. A sample of assessments was examined to determine any influence of the medium on word count and the Quality of Assessment for Learning (QuAL) score. A correlation analysis evaluated the relationship between word count and QuAL score. Separate univariate analyses of variance (ANOVAs) examined the influence of the assessor's seniority and the medium on word count, QuAL score, and WBA scores. Results The analysis included a total of 1,825 records. The average word count for electronic comments (M=16) was significantly higher than for paper comments (M=12; p=0.01). Longer comments positively correlated with QuAL score (r=0.2). Paper-based comments received lower QuAL scores (0.41) than electronic comments (0.51; p<0.01). Years in practice was negatively correlated with QuAL score (r=-0.08; p<0.001), as was word count (r=-0.2; p<0.001). Conclusion Digitization of WBAs increased the length of comments and did not appear to jeopardize their quality; these results indicate higher-quality assessment data. True digital transformation may be possible by harnessing trainee data repositories and repurposing them to analyze faculty-relevant metrics.
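
    A minimal sketch of the analyses described above, assuming a tidy table with one row per WBA record; the column names (medium, years_in_practice, word_count, qual_score) are illustrative placeholders, not the study's actual fields.

    import pandas as pd
    from scipy.stats import pearsonr
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    def summarize_wba(df: pd.DataFrame) -> None:
        # Correlation between comment length and QuAL score.
        r, p = pearsonr(df["word_count"], df["qual_score"])
        print(f"word count vs. QuAL: r={r:.2f}, p={p:.3g}")

        # Univariate ANOVA with medium and assessor seniority as predictors;
        # repeat with qual_score and the WBA score as outcomes.
        model = smf.ols("word_count ~ C(medium) + years_in_practice", data=df).fit()
        print(anova_lm(model, typ=2))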

    Assessment of Entrustable Professional Activities Using a Web-Based Simulation Platform During Transition to Emergency Medicine Residency: Mixed Methods Pilot Study

    Background: The 13 core entrustable professional activities (EPAs) are key competency-based learning outcomes in the transition from undergraduate to graduate medical education in the United States. Five of these EPAs (EPA2: prioritizing differentials, EPA3: recommending and interpreting tests, EPA4: entering orders and prescriptions, EPA5: documenting clinical encounters, and EPA10: recognizing urgent and emergent conditions) are uniquely suited for web-based assessment. Objective: In this pilot study, we created cases on a web-based simulation platform for the diagnostic assessment of these EPAs and examined the feasibility and acceptability of the platform. Methods: Four simulation cases underwent 3 rounds of consensus panels and pilot testing. Incoming emergency medicine interns (N=15) completed all cases. A maximum of 4 "look for" statements, which encompassed specific EPAs, were generated for each participant: (1) performing harmful or missing actions, (2) narrowing differential or wrong final diagnosis, (3) errors in documentation, and (4) lack of recognition and stabilization of urgent diagnoses. Finally, we interviewed a sample of interns (n=5) and residency leadership (n=5) and analyzed the responses using thematic analysis. Results: All participants had at least one missing critical action, and 40% (6/15) of the participants performed at least one harmful action across all 4 cases. The final diagnosis was not included in the differential diagnosis in more than half of the assessments (8/15, 54%). Other errors included selecting incorrect documentation passages (6/15, 40%) and indiscriminately applying oxygen (9/15, 60%). The interview themes included psychological safety of the interface, ability to assess learning, and fidelity of cases. The most valuable feature cited was the ability to place orders in a realistic electronic medical record interface. Conclusions: This study demonstrates the feasibility and acceptability of a web-based platform for diagnostic assessment of specific EPAs. The approach rapidly identifies potential areas of concern for incoming interns using an asynchronous format, provides feedback in a manner appreciated by residency leadership, and informs individualized learning plans.
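
    The scoring step, tallying up to four "look for" categories per intern across the cases, might be summarized along these lines; the flag names and the intern-to-flags mapping are assumptions for illustration, not the platform's actual data model.

    from collections import Counter

    LOOK_FORS = [
        "harmful_or_missing_action",
        "narrowed_differential_or_wrong_diagnosis",
        "documentation_error",
        "missed_urgent_recognition",
    ]

    def cohort_summary(results: dict[str, set[str]]) -> dict[str, float]:
        """results maps intern id -> set of look-for flags raised across all cases."""
        # Because each intern contributes a set, a flag counts at most once per intern.
        counts = Counter(flag for flags in results.values() for flag in flags)
        n = len(results)
        return {flag: counts[flag] / n for flag in LOOK_FORS}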

    Defining and Adopting Clinical Performance Measures in Graduate Medical Education: Where Are We Now and Where Are We Going?

    Assessment and evaluation of trainees' clinical performance measures are needed to ensure safe, high-quality patient care. These measures also aid in the development of reflective, high-performing clinicians and hold graduate medical education (GME) accountable to the public. Although clinical performance measures hold great potential, the challenges of defining, extracting, and measuring clinical performance in this way hinder their use for educational and quality improvement purposes. This article provides a way forward by identifying and articulating how clinical performance measures can be used to enhance GME by linking educational objectives with relevant clinical outcomes. The authors explore four key challenges: defining clinical performance measures, measuring them, using electronic health record and clinical registry data to capture clinical performance, and bridging the silos of medical education and health care quality improvement. The authors also propose solutions to showcase the value of clinical performance measures and conclude with a research and implementation agenda. Developing a common taxonomy of uniform specialty-specific clinical performance measures, linking these measures to large-scale GME databases, and applying both quantitative and qualitative methods to create a rich understanding of how GME affects quality of care and patient outcomes are important, the authors argue. The focus of this article is primarily GME, yet similar challenges and solutions will be applicable to other areas of medical and health professions education as well.

    Effectiveness, safety, and efficiency of a drive‐through care model as a response to the COVID‐19 testing demand in the United States

    Objectives Here we report the clinical performance of COVID-19 curbside screening with triage to a drive-through care pathway versus main emergency department (ED) care for ambulatory COVID-19 testing during a pandemic. Patients were evaluated from their cars to prevent the demand for testing from spreading COVID-19 within the hospital. Methods We examined the effectiveness of curbside screening in identifying patients who would be tested during evaluation, patient flow from screening to care team evaluation and testing, and the safety of drive-through care as 7-day ED revisits and 14-day hospital admissions. We also compared the efficiency of the main ED versus drive-through care using ED length of stay (EDLOS). Standardized mean differences (SMDs) >0.20 were considered to indicate significant differences. Results Of 5931 ED patients seen, 2788 (47.0%) were walk-in patients. Of these patients, 1111 (39.8%) screened positive for potential COVID symptoms, of whom 708 (63.7%) were triaged to drive-through care (with 96.3% tested) and 403 (36.3%) to the main ED (with 90.5% tested). The 1677 (60.2%) patients who screened negative were seen in the main ED, with 440 (26.2%) tested. Curbside screening sensitivity and specificity for predicting who ultimately received testing were 70.3% and 94.5%, respectively. Compared to the main ED, drive-through patients had fewer 7-day ED revisits (3.8% vs 12.5%, SMD = 0.321), fewer 14-day hospital readmissions (4.5% vs 15.6%, SMD = 0.37), and a shorter EDLOS (0.56 vs 5.12 hours, SMD = 1.48). Conclusion Curbside screening had high sensitivity, permitting early respiratory isolation precautions for most patients tested. Low rates of ED revisits and hospital readmissions and a short EDLOS suggest that drive-through care, with appropriate screening, is safe and efficient for future respiratory illness pandemics.
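
    A sketch of the screening-performance and effect-size calculations named above. Sensitivity and specificity come from a 2x2 table of curbside screening result versus whether the patient was ultimately tested; the SMD formulas shown (a pooled-variance SMD for two proportions and Cohen's d for EDLOS) are common choices and not necessarily the exact formulas used in the paper.

    import math

    def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
        # Screening positive counts as "predicting" that the patient will be tested.
        return tp / (tp + fn), tn / (tn + fp)

    def smd_proportions(p1: float, p2: float) -> float:
        # Standardized difference of two proportions (e.g., 7-day ED revisit rates).
        return abs(p1 - p2) / math.sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / 2)

    def cohens_d(mean1: float, sd1: float, n1: int, mean2: float, sd2: float, n2: int) -> float:
        # SMD for a continuous outcome such as ED length of stay.
        pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
        return abs(mean1 - mean2) / pooled_sd

    # Reported 7-day revisit proportions (drive-through vs. main ED), compared
    # against the SMD > 0.20 threshold used in the study.
    print(smd_proportions(0.038, 0.125))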

    Quality Evaluation Scores are no more Reliable than Gestalt in Evaluating the Quality of Emergency Medicine Blogs: A METRIQ Study

    Construct: We investigated the quality of emergency medicine (EM) blogs as educational resources. Purpose: Online medical education resources such as blogs are increasingly used by EM trainees and clinicians. However, quality evaluations of these resources using gestalt are unreliable. We investigated the reliability of two previously derived quality evaluation instruments for blogs. Approach: Sixty English-language EM websites that published clinically oriented blog posts between January 1 and February 24, 2016, were identified. A random number generator selected 10 websites, and the 2 most recent clinically oriented blog posts from each site were evaluated using gestalt, the Academic Life in Emergency Medicine (ALiEM) Approved Instructional Resources (AIR) score, and the Medical Education Translational Resources: Impact and Quality (METRIQ-8) score by a sample of medical students, EM residents, and EM attendings. Each rater evaluated all 20 blog posts with gestalt and 15 of the 20 blog posts with the ALiEM AIR and METRIQ-8 scores. Pearson's correlations were calculated between the average scores for each metric. Single-measure intraclass correlation coefficients (ICCs) evaluated the reliability of each instrument. Results: Our study included 121 medical students, 88 EM residents, and 100 EM attendings who completed ratings. The average gestalt rating of each blog post correlated strongly with the average scores for ALiEM AIR (r = .94) and METRIQ-8 (r = .91). Single-measure ICCs were fair for gestalt (0.37, IQR 0.25-0.56), ALiEM AIR (0.41, IQR 0.29-0.60), and METRIQ-8 (0.40, IQR 0.28-0.59). Conclusion: The average scores of each blog post correlated strongly with gestalt ratings. However, neither ALiEM AIR nor METRIQ-8 showed higher reliability than gestalt. Improved reliability may be possible through rater training and instrument refinement.
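
    A single-measure intraclass correlation for a posts-by-raters matrix of ratings could be computed as below; this sketch uses a one-way random-effects ICC(1,1), since the abstract does not state which ICC model the investigators used, and it assumes a complete matrix with no missing ratings.

    import numpy as np

    def icc_single_measure(ratings: np.ndarray) -> float:
        """ratings: shape (n_posts, k_raters), no missing values."""
        n, k = ratings.shape
        grand_mean = ratings.mean()
        post_means = ratings.mean(axis=1)
        # Between-post and within-post mean squares from a one-way ANOVA.
        ms_between = k * ((post_means - grand_mean) ** 2).sum() / (n - 1)
        ms_within = ((ratings - post_means[:, None]) ** 2).sum() / (n * (k - 1))
        return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)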

    Using Resident-Sensitive Quality Measures Derived From Electronic Health Record Data to Assess Residents' Performance in Pediatric Emergency Medicine

    PURPOSE: Traditional quality metrics do not adequately represent the clinical work done by residents and, thus, cannot be used to link residency training to health care quality. This study aimed to determine whether electronic health record (EHR) data can be used to meaningfully assess residents' clinical performance in pediatric emergency medicine using resident-sensitive quality measures (RSQMs). METHOD: EHR data for asthma and bronchiolitis RSQMs from Cincinnati Children's Hospital Medical Center, a quaternary children's hospital, between July 1, 2017, and June 30, 2019, were analyzed by ranking residents based on composite scores calculated using raw, unadjusted latent, and case-mix adjusted latent score models, with lower percentiles indicating lower quality of care and performance. Reliability and associations between the scores produced by the 3 scoring models were compared. Resident and patient characteristics associated with performance in the highest and lowest tertiles, and changes in residents' ranks after case-mix adjustment, were also identified. RESULTS: The analysis included 274 residents with 1,963 individual encounters of bronchiolitis patients aged 0-1 years and 270 residents with 1,752 individual encounters of asthmatic patients aged 2-21 years. The minimum reliability requirement to create a composite score was met for the asthma data (α = 0.77) but not for bronchiolitis (α = 0.17). The asthma composite scores showed high correlations (r = 0.90-0.99) between the raw, latent, and adjusted models. After case-mix adjustment, residents' absolute percentile rank shifted by 10 percentiles on average. Residents who dropped by 10 or more percentiles were likely to be more junior, to have seen fewer patients, to have cared for less acute and younger patients, or to have had patients with a longer emergency department stay. CONCLUSIONS: For some clinical areas, it is possible to use EHR data, adjusted for patient complexity, to meaningfully assess residents' clinical performance and identify opportunities for quality improvement.
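
    A sketch of the raw (unadjusted) composite scoring step described above: Cronbach's alpha across RSQM items and a percentile rank of each resident's mean item score. The column layout is assumed, and the latent and case-mix adjusted models reported in the paper would require additional modeling not shown here.

    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        # items: one row per encounter, one column per RSQM item (scored 0/1 or graded).
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    def resident_percentiles(df: pd.DataFrame, item_cols: list[str]) -> pd.Series:
        # df: encounter-level data with a 'resident_id' column plus RSQM item columns.
        composite = df.groupby("resident_id")[item_cols].mean().mean(axis=1)
        # Lower percentiles indicate lower measured quality of care.
        return composite.rank(pct=True) * 100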