
    Invited commentary: Simulation training at a large community hospital

    We recommend that those who train tomorrow’s providers increase their modeling of the attitudes displayed by the authors of The Initiation of Simulation Training at a Large Community Hospital (Article #9 of this issue of Proc Obstet Gynecol): challenging low-volume, high-risk clinical situations are identified; experienced senior providers are coached using scientifically informed educational methods, with support and resources from leaders in the organization; and an attempt is made to measure progress.

    Alternative Markers of Performance in Simulation: Where We Are and Where We Need To Go

    This article on alternative markers of performance in simulation is the product of a session held during the 2017 Academic Emergency Medicine Consensus Conference “Catalyzing System Change Through Health Care Simulation: Systems, Competency, and Outcomes.” There is a dearth of research on the use of performance markers other than checklists, holistic ratings, and behaviorally anchored rating scales in the simulation environment. Through literature review, group discussion, and consultation with experts prior to the conference, the working group defined five topics for discussion: 1) establishing a working definition for alternative markers of performance, 2) defining goals for using alternative performance markers, 3) implications for measurement when using alternative markers, 4) identifying practical concerns related to the use of alternative performance markers, and 5) identifying potential for alternative markers of performance to validate simulation scenarios. Five research propositions also emerged and are summarized.

    Healthcare Teams Neurodynamically Reorganize When Resolving Uncertainty

    Research on the microscale neural dynamics of social interactions has yet to be translated into improvements in the assembly, training, and evaluation of teams. This is partially due to the scale of neural involvement in team activities, spanning the millisecond oscillations in individual brains to the minutes-to-hours performance behaviors of the team. We have used intermediate neurodynamic representations to show that healthcare teams enter persistent (50–100 s) neurodynamic states when they encounter and resolve uncertainty while managing simulated patients. Each per-second symbol was developed by situating the electroencephalogram (EEG) power of each team member in the context of those of the other team members and the task. These representations were acquired from EEG headsets with 19 recording electrodes for each of the 1–40 Hz frequencies. Estimates of the information in each symbol stream were calculated from a 60 s moving window of Shannon entropy that was updated each second, providing a quantitative neurodynamic history of the team’s performance. Neurodynamic organization fluctuated with the task demands, with increased organization (i.e., lower entropy) occurring when the team needed to resolve uncertainty. These results show that intermediate neurodynamic representations can provide a quantitative bridge between the micro and macro scales of teamwork.
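    The moving-window entropy measure described in the abstract can be sketched directly: given a stream of per-second team symbols, compute the Shannon entropy of symbol frequencies over a 60 s sliding window, updated each second. This is a minimal illustrative sketch, not the authors' analysis code; the function names and window parameter are assumptions drawn from the abstract.

    ```python
    # Sketch of a windowed Shannon-entropy history over a per-second
    # symbol stream, as described in the abstract (not the authors' code).
    from collections import Counter
    from math import log2

    def shannon_entropy(symbols):
        """Shannon entropy (bits) of a sequence of discrete symbols."""
        counts = Counter(symbols)
        n = len(symbols)
        return -sum((c / n) * log2(c / n) for c in counts.values())

    def entropy_history(symbol_stream, window=60):
        """Entropy over a sliding window, one value per second once the
        window fills; lower entropy indicates greater neurodynamic
        organization under this measure."""
        return [shannon_entropy(symbol_stream[i - window:i])
                for i in range(window, len(symbol_stream) + 1)]
    ```

    A constant symbol stream yields zero entropy (maximal organization), while a stream that alternates uniformly between symbols yields maximal entropy for its alphabet size.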
