2 research outputs found

    Using Twitter to increase content dissemination and control educational content with Presenter Initiated and Generated Live Educational Tweets (PIGLETs)

    No full text
    <p>Live-tweeting during educational presentations is typically learner-generated and can lead to misquoted information. Presenter curated tweets have not been well described. We created Presenter Initiated and Generated Live Educational Tweets (PIGLETs) with the goal to broaden the reach of educational conferences. We hypothesized that using PIGLETs would increase the reach and exposure of our material. We developed a prospective single-arm intervention study performed during the “Not Another Boring Lecture” workshops presented at two national conferences in 2015. Presenters tweeted PIGLETs linked to unique hashtags #NotAnotherBoringLecture and #InnovateMedEd. Analytic software was used to measure the following outcomes: (1) number of tweets published by presenters versus learners, (2) reach (users exposed to content containing the hashtag), and (3) exposure (total number of times content was delivered). One hundred and twenty-six participants attended the workshops. A total of 636 tweets (including retweets) were sent by presenters containing the study hashtags, compared with 162 sent by learners. #NotAnotherBoringLecture reached 47,200 users and generated 136,400 impressions; #InnovateMedEd reached 36,400 users and generated 79,100 impressions. PIGLETs allowed presenters to reach a significant number of learners, as well as control the content delivered through Twitter. PIGLETs can be used to augment educational sessions beyond the physical confines of the classroom.</p

    A scoping review of artificial intelligence in medical education: BEME Guide No. 84

    No full text
    Artificial Intelligence (AI) is rapidly transforming healthcare, and there is a critical need for a nuanced understanding of how AI is reshaping teaching, learning, and educational practice in medical education. This review aimed to map the literature on AI applications in medical education, identify core areas of findings and potential candidates for formal systematic review, and highlight gaps for future research. This rapid scoping review, conducted over 16 weeks, employed Arksey and O’Malley’s framework and adhered to STORIES and BEME guidelines. A systematic and comprehensive search across PubMed/MEDLINE, EMBASE, and MedEdPublish was conducted without date or language restrictions. Publications included in the review spanned undergraduate, graduate, and continuing medical education, encompassing both original studies and perspective pieces. Data were charted by multiple author pairs and synthesized into thematic maps and charts, ensuring a broad and detailed representation of the current landscape. The review synthesized 278 publications, the majority (68%) from North American and European regions. The studies covered diverse AI applications in medical education, such as AI for admissions, teaching, assessment, and clinical reasoning. The review highlighted AI's varied roles, from augmenting traditional educational methods to introducing innovative practices, and underscored the urgent need for ethical guidelines on AI's application in medical education. The current literature has been charted. The findings underscore the need for ongoing research to explore uncharted areas and address potential risks associated with AI use in medical education. This work serves as a foundational resource for educators, policymakers, and researchers navigating AI's evolving role in medical education. A framework to support future high-utility reporting, the FACETS framework, is proposed.