9 research outputs found

    First steps towards international competency goals for residency training: a qualitative comparison of 3 regional standards in anesthesiology

    No full text
    Abstract
    Background: Competency-based medical education (CBME) has revolutionized approaches to training by making expectations more concrete, visible, and relevant for trainees. Designing, applying, and updating CBME requirements challenges residency programs, which must address many aspects of training simultaneously. This challenge also exists for educational regulatory bodies, which must create and adjust national competencies to standardize training expectations. We propose that an international approach to mapping residency training requirements may provide a baseline for assessing commonalities and differences. This approach allows us to take our first steps towards creating international competency goals to enhance sharing of best practices in education and clinical work.
    Methods: We chose anesthesiology residency training as our example discipline. Using two rounds of content analysis, we qualitatively compared published anesthesiology residency competencies for the European Union (the European Training Requirement), the United States (ACGME Milestones), and Canada (CanMEDS Competence by Design), focusing on similarities and differences in representation (round one) and emphasis (round two) to generate hypotheses on practical solutions regarding international educational standards.
    Results: We mapped the similarities and discrepancies between the three repositories. Round one revealed that 93% of competencies were common to all three repositories. Major differences between the European Training Requirement, US Milestones, and Competence by Design involved critical emergency medicine. Round two showed that over 30% of competencies were emphasized equally, with notable exceptions: the European Training Requirement emphasized Anaesthesia Non-Technical Skills, Competence by Design highlighted more granular competencies within specific anesthesiology situations, and US Milestones emphasized professionalism and behavioral practices.
    Conclusions: This qualitative comparison identified commonalities and differences in anesthesiology training that may facilitate sharing broader perspectives on diverse, high-quality educational, clinical, and research practices and enhance innovative approaches. Determining these overlaps in residency training can prompt the international educational societies responsible for creating competencies to collaborate on designing future training programs. This approach may be a feasible method for building an international core of residency competency requirements for other disciplines.

    Strategy to Develop a Common Simulation Training Program: Illustration with Anesthesia and Intensive Care Residency in France.

    Full text link
    Peer reviewed
    Phenomenon: The urgency of having fair and trustworthy competency-based assessment in medical training is growing. Simulation is increasingly recognized as a potent method for building and assessing applied competencies, and its growing use in summative assessment calls for comprehensive, rigorously designed programs. Defining the current baseline of what is available and feasible is a crucial first step. This paper uses anesthesia and intensive care (AIC) residency in France as a case study in how to document this baseline.
    Approach: An IRB-approved, anonymous, closed online survey was sent to AIC residency program directors and AIC simulation program directors in France from January to February 2021. The researcher-developed survey consisted of 65 questions across five sections: center characteristics, curricular characteristics, course characteristics, instructor characteristics, and simulation perceptions and perspectives.
    Findings: The participation rate was 31/31 (100%), with 29 centers affiliated with a university hospital. All centers had AIC simulation activities, and resident training was structured in 94% of centers. Simulation was used for training (100%), research and development (61%), procedural or organizational testing (42%), and summative assessment (13%). Interprofessional full-scale simulation training existed in 90% of centers. Procedural training on simulators prior to clinical patient care was performed "always" in 16%, "most often" in 45%, "sometimes" in 29%, and "rarely" or "not" in 10% of centers. Simulated patients were used in 61% of centers. Main themes were identified for procedural-skills, full-scale, and simulated-patient simulation training. Simulation activity was perceived as increasing in 68% of centers. Centers expressed a desire to participate in developing and using a common national AIC simulation program.
    Insights: Based on our findings in AIC, we established a baseline description of nationwide simulation activities. We now have a clearer picture of a decentralized approach in which individual institutions or regional consortia conduct simulation for a discipline in a relatively homogeneous way, suggesting that national guidelines are feasible. This approach provides useful clues for AIC and other disciplines seeking to develop a comprehensive, meaningful program that matches existing expectations and closes the identified gaps.

    sj-docx-1-mde-10.1177_23821205241229778 - Supplemental material for Validating Parallel-Forms Tests for Assessing Anesthesia Resident Knowledge

    No full text
    Supplemental material, sj-docx-1-mde-10.1177_23821205241229778, for Validating Parallel-Forms Tests for Assessing Anesthesia Resident Knowledge by Allison J. Lee, Stephanie R. Goodman, Melissa E. B. Bauer, Rebecca D. Minehart, Shawn Banks, Yi Chen, Ruth L. Landau and Madhabi Chatterji in Journal of Medical Education and Curricular Development.

    sj-doc-2-mde-10.1177_23821205241229778 - Supplemental material for Validating Parallel-Forms Tests for Assessing Anesthesia Resident Knowledge

    No full text
    Supplemental material, sj-doc-2-mde-10.1177_23821205241229778, for Validating Parallel-Forms Tests for Assessing Anesthesia Resident Knowledge by Allison J. Lee, Stephanie R. Goodman, Melissa E. B. Bauer, Rebecca D. Minehart, Shawn Banks, Yi Chen, Ruth L. Landau and Madhabi Chatterji in Journal of Medical Education and Curricular Development.

    Simulation-based summative assessment in healthcare: an overview of key principles for practice.

    Full text link
    Peer reviewed
    BACKGROUND: Healthcare curricula need summative assessments that are relevant to and representative of clinical situations in order to best select and train learners. Simulation provides multiple benefits, with a growing literature base proving its utility for training in a formative context. Advancing to the next step, the use of simulation for summative assessment, requires rigorous and evidence-based development, because any summative assessment is high stakes for participants, trainers, and programs. The first step of this process is to identify the baseline from which we can start.
    METHODS: First, using a modified nominal group technique, a task force of 34 panelists defined topics to clarify the why, how, what, when, and who of using simulation-based summative assessment (SBSA). Second, each topic was explored by a group of panelists through state-of-the-art literature reviews, with a snowball method to identify further references. Our goal was to identify current knowledge and potential recommendations for future directions. Results were cross-checked among groups and reviewed by an independent expert committee.
    RESULTS: Seven topics were selected by the task force: "What can be assessed in simulation?", "Assessment tools for SBSA", "Consequences of undergoing the SBSA process", "Scenarios for SBSA", "Debriefing, video, and research for SBSA", "Trainers for SBSA", and "Implementation of SBSA in healthcare". Together, these seven explorations provide an overview of what is known and can be done with relative certainty, and of what is unknown and probably needs further investigation. Based on this work, we highlighted the trustworthiness of different summative assessment-related conclusions, the remaining important problems and questions, and their consequences for participants and institutions regarding how SBSA is conducted.
    CONCLUSION: Among the seven topics, our results identified one area with robust evidence in the literature ("What can be assessed in simulation?"), three areas where the evidence requires guidance by expert opinion ("Assessment tools for SBSA", "Scenarios for SBSA", "Implementation of SBSA in healthcare"), and three areas with weak or emerging evidence ("Consequences of undergoing the SBSA process", "Debriefing for SBSA", "Trainers for SBSA"). Using SBSA holds much promise, and demand for this application is increasing. Given the high stakes involved, it must be rigorously conducted and supervised, and guidelines for good practice should be formalized to help with conduct and implementation. We believe this baseline can direct future investigation and the development of guidelines.