7 research outputs found

    Assessment of emergency medicine residents: a systematic review

    Background: Competency-based medical education is becoming the new standard for residency programs, including Emergency Medicine (EM). To inform programmatic restructuring, guide resources, and identify gaps in publication, we reviewed the published literature on the types and frequency of resident assessment.
    Methods: We searched MEDLINE, EMBASE, PsycInfo and ERIC from January 2005 to June 2014. MeSH terms included “assessment,” “residency,” and “emergency medicine.” We included studies on EM residents reporting either of two primary outcomes: 1) assessment type and 2) assessment frequency per resident. Two reviewers screened abstracts, reviewed full-text studies, and abstracted data. Reporting of assessment-related costs was a secondary outcome.
    Results: The search returned 879 articles; 137 were reviewed in full text, and 73 met inclusion criteria. Half of the studies (54.8%) were pilot projects and one quarter (26.0%) described fully implemented assessment tools or programs. Assessment tools (n=111) fell into 12 categories, most commonly simulation-based assessments (28.8%), written exams (28.8%), and direct observation (26.0%). Median assessment frequency (n=39 studies) was twice per month/rotation (range: daily to once in residency). No studies thoroughly reported costs.
    Conclusion: EM resident assessment most commonly uses simulation or direct observation, done once per rotation. Implemented assessment systems and assessment-associated costs are poorly reported. Moving forward, routine publication will facilitate the transition to competency-based medical education.

    Quality Evaluation Scores are no more Reliable than Gestalt in Evaluating the Quality of Emergency Medicine Blogs: A METRIQ Study

    Construct: We investigated the quality of emergency medicine (EM) blogs as educational resources.
    Purpose: Online medical education resources such as blogs are increasingly used by EM trainees and clinicians. However, quality evaluations of these resources using gestalt are unreliable. We investigated the reliability of two previously derived quality evaluation instruments for blogs.
    Approach: Sixty English-language EM websites that published clinically oriented blog posts between January 1 and February 24, 2016, were identified. A random number generator selected 10 websites, and the 2 most recent clinically oriented blog posts from each site were evaluated using gestalt, the Academic Life in Emergency Medicine (ALiEM) Approved Instructional Resources (AIR) score, and the Medical Education Translational Resources: Impact and Quality (METRIQ-8) score, by a sample of medical students, EM residents, and EM attendings. Each rater evaluated all 20 blog posts with gestalt and 15 of the 20 blog posts with the ALiEM AIR and METRIQ-8 scores. Pearson's correlations were calculated between the average scores for each metric. Single-measure intraclass correlation coefficients (ICCs) evaluated the reliability of each instrument.
    Results: Our study included 121 medical students, 88 EM residents, and 100 EM attendings who completed ratings. The average gestalt rating of each blog post correlated strongly with the average scores for ALiEM AIR (r = .94) and METRIQ-8 (r = .91). Single-measure ICCs were fair for gestalt (0.37, IQR 0.25–0.56), ALiEM AIR (0.41, IQR 0.29–0.60), and METRIQ-8 (0.40, IQR 0.28–0.59).
    Conclusion: The average scores of each blog post correlated strongly with gestalt ratings. However, neither ALiEM AIR nor METRIQ-8 showed higher reliability than gestalt. Improved reliability may be possible through rater training and instrument refinement.
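Several of these studies report single-measure intraclass correlation coefficients for rater agreement. As a rough illustration of what that statistic measures (not the authors' code, and the ratings below are hypothetical), a two-way random-effects single-measure ICC, often written ICC(2,1), can be computed from a posts × raters matrix via its ANOVA mean squares:

```python
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """Single-measure, two-way random-effects ICC(2,1).

    ratings: (n_targets, k_raters) matrix of scores.
    Illustrative sketch only -- not the METRIQ studies' code or data.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per blog post
    col_means = ratings.mean(axis=0)   # per rater
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_err = np.sum((ratings - grand) ** 2) - ss_rows - ss_cols
    msr = ss_rows / (n - 1)            # between-posts mean square
    msc = ss_cols / (k - 1)            # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1)) # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical 7-point gestalt ratings: 4 blog posts x 3 raters
demo = np.array([[5, 6, 5], [2, 3, 2], [7, 6, 6], [4, 4, 3]])
print(round(icc2_1(demo), 2))  # → 0.9
```

A high value here means raters rank and scale the posts consistently; the fair-to-poor single-measure ICCs reported above indicate that any one rater's gestalt score is a noisy estimate of a post's quality.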

    Individual Gestalt Is Unreliable for the Evaluation of Quality in Medical Education Blogs: A METRIQ Study

    Open educational resources such as blogs are increasingly used for medical education. Gestalt is generally the evaluation method used for these resources; however, little has been published on its reliability. We aimed to evaluate the reliability of gestalt in the assessment of emergency medicine blogs. We identified 60 English-language emergency medicine Web sites that posted clinically oriented blogs between January 1, 2016, and February 24, 2016. Ten Web sites were selected with a random-number generator. Medical students, emergency medicine residents, and emergency medicine attending physicians evaluated the 2 most recent clinical blog posts from each site for quality, using a 7-point Likert scale. The mean gestalt scores of each blog post were compared between groups with Pearson's correlations. Single- and average-measure intraclass correlation coefficients were calculated within groups. A generalizability study evaluated variance within gestalt, and a decision study calculated the number of raters required to reliably (>0.8) estimate quality. One hundred twenty-one medical students, 88 residents, and 100 attending physicians (93.6% of enrolled participants) evaluated all 20 blog posts. Single-measure intraclass correlation coefficients within groups were fair to poor (0.36 to 0.40). Average-measure intraclass correlation coefficients were more reliable (0.811 to 0.840). Mean gestalt ratings by attending physicians correlated strongly with those by medical students (r=0.92) and residents (r=0.99). The generalizability coefficient was 0.91 for the complete data set. The decision study found that 42 gestalt ratings were required to reliably evaluate quality (>0.8). The mean gestalt quality ratings of blog posts between medical students, residents, and attending physicians correlate strongly, but individual ratings are unreliable. With sufficient raters, mean gestalt ratings provide a community standard for assessment.

    The revised Approved Instructional Resources score: An improved quality evaluation tool for online educational resources

    BACKGROUND: Free Open-Access Medical education (FOAM) use among residents continues to rise. However, it often lacks quality assurance processes, and residents receive little guidance on quality assessment. The Academic Life in Emergency Medicine Approved Instructional Resources tool (AAT) was created for FOAM appraisal by and for expert educators and has demonstrated validity in this context. It has yet to be evaluated in other populations.
    OBJECTIVES: We assessed the AAT's usability in a diverse population of practicing emergency medicine (EM) physicians, residents, and medical students; solicited feedback; and developed a revised tool.
    METHODS: As part of the Medical Education Translational Resources: Impact and Quality (METRIQ) study, we recruited medical students, EM residents, and EM attendings to evaluate five FOAM posts with the AAT and provide quantitative and qualitative feedback via an online survey. Two independent analysts performed a qualitative thematic analysis, with discrepancies resolved through discussion and negotiated consensus. This analysis informed development of an initial revised AAT, which was then further refined after pilot testing among the author group. The final tool was reassessed for reliability.
    RESULTS: Of 330 recruited international participants, 309 completed all ratings. The Best Evidence in Emergency Medicine (BEEM) score was the component most frequently reported as difficult to use. Several themes emerged from the qualitative analysis. For ease of use: understandable, logically structured, concise, and aligned with educational value. Limitations included deviation from questionnaire best practices, validity concerns, and challenges assessing evidence-based medicine. Themes supporting its use included evaluative utility and usability. The author group pilot tested the initial revised AAT, revealing an average-measures total-score intraclass correlation coefficient (ICC) of moderate reliability (ICC = 0.68, 95% confidence interval [CI] = 0 to 0.962). The final AAT's average-measures ICC was 0.88 (95% CI = 0.77 to 0.95).
    CONCLUSIONS: We developed the final revised AAT from usability feedback. The new score has significantly increased usability but will need to be reassessed for reliability in a broad population.

    The Social Media Index as an Indicator of Quality for Emergency Medicine Blogs: A METRIQ Study

    Study objective: Online educational resources such as blogs are increasingly used for education by emergency medicine clinicians. The Social Media Index was developed to quantify their relative impact. The Medical Education Translational Resources: Indicators of Quality (METRIQ) study was conducted in part to determine the association between the Social Media Index score and quality as measured by gestalt and previously derived quality instruments.
    Methods: Ten blogs were randomly selected from a list of emergency medicine and critical care Web sites. The 2 most recent clinically oriented blog posts published on these blogs were evaluated with gestalt, the Academic Life in Emergency Medicine Approved Instructional Resources (ALiEM AIR) score, and the METRIQ-8 score. Volunteer raters (including medical students, emergency medicine residents, and emergency medicine attending physicians) were identified with a multimodal recruitment methodology. The Social Media Index was calculated in February 2016, November 2016, April 2017, and December 2017. Pearson's correlations were calculated between the Social Media Index and the average rater gestalt, ALiEM AIR score, and METRIQ-8 score.
    Results: A total of 309 of 330 raters completed all ratings (93.6%). The Social Media Index correlated moderately to strongly with the mean rater gestalt ratings (range 0.69 to 0.76) and moderately with the mean rater ALiEM AIR score (range 0.55 to 0.61) and METRIQ-8 score (range 0.53 to 0.57) during the month of the blog post's selection and for 2 years after.
    Conclusion: The Social Media Index's correlation with multiple quality evaluation instruments over time supports the hypothesis that it is associated with overall Web site quality. It can play a role in guiding individuals to high-quality resources that can then be reviewed with critical appraisal techniques.