Human factors tools for improving simulation activities in continuing medical education
Human factors (HF) is a discipline often drawn upon when there is a need to train people to perform complex, high-stakes tasks and effectively assess their performance. Complex tasks often present unique challenges for training and assessment. HF has developed specialized techniques that have been effective in overcoming several of these challenges in work settings such as aviation, process control, and the military. Many HF techniques could be applied to simulation in continuing medical education to enhance effectiveness of simulation and training, yet these techniques are not widely known by medical educators. Three HF techniques are described that could benefit health care simulation in areas of training techniques, assessment, and task design: (1) bandwidth feedback techniques for designing better feedback and task guidance, (2) dual-task assessment techniques that can differentiate levels of expertise in tasks where performance is essentially perfect, and (3) task abstraction techniques for developing task-relevant fidelity for simulations. Examples of each technique are given from work settings in which these principles have been applied successfully. Application of these principles to medical simulation and medical education is discussed. Adapting these techniques to health care could improve training in medical education.
Peer Reviewed
http://deepblue.lib.umich.edu/bitstream/2027.42/95199/1/21154_ftp.pd
Evaluation of a simulation-based curriculum for implementing a new clinical protocol
Objective: To evaluate the implementation of a new clinical protocol utilizing on-unit simulation for team training.
Methods: A prospective observational study was performed at the obstetrics unit of Von Voightlander Women's Hospital, Michigan, USA, between October 1, 2012 and April 30, 2013. All members of the labor and delivery team were eligible for participation. Traditional education methods and in-situ multi-disciplinary simulations were used to educate labor and delivery staff. Following each simulation, participants responded to a survey regarding their experience. To evaluate the effect of the interventions, paging content was analyzed for mandated elements, and adherence to operating room entry-time tracking was examined.
Results: In total, 51 unique individuals participated in 12 simulations during a 6-month period. Simulation was perceived as a valuable activity, and paging content improved. Following the intervention, the inclusion of a goal time for reaching the operating room increased from 7% to 61% of pages, and the proportion of patients entering the operating room within 10 minutes of the stated goal increased from 67% to 85%.
Conclusion: The training program was well received, and both the accuracy of communication and adherence to the goal time for reaching the operating room improved.
Peer Reviewed
http://deepblue.lib.umich.edu/bitstream/2027.42/135397/1/ijgo333.pd
The Trojan horse of the operating room: alarms and the noise of anesthesia
Dr. Jessica Pindar is a 33-year-old M.D. anesthesiologist who has been practicing for 6 years in public hospitals. She now works at Burnham Memorial Hospital, a large public hospital in the inner Chicago area. Burnham Memorial has an extensive operating suite that exposes her to a wide variety of cases. Working at Burnham Memorial also exposes Jessica to quite a range of anesthesia equipment. Different components of Burnham Memorial's operating suite have different equipment: some of it was purchased at different times in the past and is still in use; other specific types of equipment were purchased to meet the needs of the different specialties. Senior anesthesiology consultants in a given specialty, such as cardiac surgery, learn about new equipment from colleagues, conference presentations, and sales representatives, then press Burnham Memorial's administrators to buy new equipment in their specialty. Quite often, rather than order the doctors' preferred make and model of a piece of equipment, the administrators substitute a more cost-effective make and model.
Residents' ability to interpret radiology images: Development and improvement of an assessment tool
Rationale and Objectives: Despite increasing radiology coverage, nonradiology residents continue to preliminarily interpret basic radiologic studies independently, yet their ability to do so accurately is not routinely assessed.
Materials and Methods: An online test of basic radiologic image interpretation was developed through an iterative process. Educational objectives were established, then questions and images were gathered to create an assessment. The test was administered online to first-year interns (postgraduate year [PGY] 1) from 14 different specialties, as well as a sample of third- and fourth-year radiology residents (PGY3/R2 and PGY4/R3).
Results: Over a 2-year period, 368 residents were assessed, including PGY1 (n=349), PGY3/R2 (n=14), and PGY4/R3 (n=5) residents. Overall, the test discriminated effectively between interns (average score=66%) and advanced residents (R2=86%, R3=89%; P<.05). Item analysis indicated discrimination indices ranging from -0.72 to 48.3 (mean=3.12, median 0.58) for individual questions, including four questions with negative discrimination indices. After removal of the negatively indexed questions, the overall predictive value of the instrument persisted and discrimination indices increased for all but one of the remaining questions (range 0.027-70.8, mean 5.76, median 0.94).
Conclusions: Validation of an initial iteration of an assessment of basic image-interpretation skills led to revisions that improved the test. The results offer a specific test of radiologic reading skills with validation evidence for residents. More generally, results demonstrate a principled approach to test development. © 2014 AUR
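The abstract above reports per-item discrimination indices and the removal of negatively indexed questions, but does not state which index formula the authors used (the reported range exceeds the -1 to 1 bounds of the classical index, so their formulation likely differs). As an illustration only, the following sketch computes the classical upper-lower discrimination index, a common choice in item analysis; the function name, data layout, and the 27% group fraction are assumptions, not details from the study.

```python
# Hypothetical sketch of classical item analysis (upper-lower method).
# NOT the exact index used in the study, which is not specified in the abstract.

def discrimination_index(responses, item, group_frac=0.27):
    """Proportion correct on one item in the top-scoring group minus the
    proportion correct in the bottom-scoring group.

    responses: list of per-examinee rows of 0/1 item scores.
    item: column index of the item being evaluated.
    group_frac: fraction of examinees in each extreme group (27% is customary).
    Returns a value in [-1.0, 1.0]; negative values flag items that
    high scorers miss more often than low scorers do.
    """
    ranked = sorted(responses, key=sum, reverse=True)  # best total scores first
    n = max(1, int(len(ranked) * group_frac))
    upper, lower = ranked[:n], ranked[-n:]
    p_upper = sum(row[item] for row in upper) / n
    p_lower = sum(row[item] for row in lower) / n
    return p_upper - p_lower
```

Under this formulation, the revision step described in the abstract amounts to dropping items whose index is negative and re-running the analysis on the remaining questions.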