
    An experimental study on the effects of a simulation game on students’ clinical cognitive skills and motivation

    Simulation games are becoming increasingly popular in education, but more insight into their critical design features is needed. This study investigated the effects of the fidelity of open patient cases, used as an adjunct to an instructional e-module, on students’ cognitive skills and motivation. We set up a three-group randomized post-test-only design: a control group working on an e-module; a cases group, combining the e-module with low-fidelity text-based patient cases; and a game group, combining the e-module with a high-fidelity simulation game presenting the same cases. Participants completed questionnaires on cognitive load and motivation. After a 4-week study period, blinded assessors rated students’ cognitive emergency care skills in two mannequin-based scenarios. In total, 61 students participated and were assessed: 16 control group students, 20 cases students and 25 game students. Learning time was 2 h longer for the cases and game groups than for the control group. Acquired cognitive skills did not differ between groups. The game group experienced higher intrinsic and germane cognitive load than the cases group (p = 0.03 and 0.01, respectively) and felt more engaged (p < 0.001). Students did not profit from working on open cases (as an adjunct to an e-module), although the cases did challenge them to study longer. The e-module appeared to be very effective, while the high-fidelity game, although engaging, probably distracted students and impeded learning. Medical educators designing motivating and effective skills training for novices should align case complexity and fidelity with students’ proficiency level. The relation between case fidelity, motivation and skills development is an important field for further study.
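    The abstract reports pairwise significance values (e.g., p = 0.03 for intrinsic cognitive load, game versus cases group) without naming the statistical test used. Purely as an illustration of how such a between-group comparison can be computed, the sketch below runs Welch’s t-test on synthetic cognitive-load ratings; the group means, spread and rating scale are invented for the example and are not the study’s data.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Synthetic 9-point cognitive-load ratings; the sample sizes match the
    # abstract (20 cases-group, 25 game-group students), the values do not.
    cases_group = rng.normal(loc=5.0, scale=1.3, size=20)
    game_group = rng.normal(loc=6.0, scale=1.3, size=25)

    # Welch's t-test (no equal-variance assumption) for one load measure
    t_stat, p_value = stats.ttest_ind(game_group, cases_group, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
    ```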

    The game-based learning evaluation model (GEM): measuring the effectiveness of serious games using a standardised method

    This article describes the background, design, and practical application of the game-based learning evaluation model (GEM). The aim of this evaluation model is to measure the effectiveness of serious games in a practical way. GEM contains the methodology and the indicators to be measured in validation research. Measuring generic learning and design indicators makes it possible to apply GEM to multiple games. The results provide insight into the reasons why serious games are effective. This evidence will help serious-game designers to improve their games. Three empirical studies, based on various serious games applied in different contexts, show how GEM can be used in practice and how these studies have contributed to the improvement of GEM.

    Comparative effectiveness of a serious game and an e-module to support patient safety knowledge and awareness

    BACKGROUND: Serious games have the potential to teach complex cognitive skills in an engaging way, at relatively low cost. Their flexibility in use and scalability make them an attractive learning tool, but more research is needed on the effectiveness of serious games compared to more traditional formats such as e-modules. We investigated whether undergraduate medical students developed better knowledge and awareness and were more motivated after learning about patient safety through a serious game than peers who studied the same topics using an e-module. METHODS: Fourth-year medical students were randomly assigned to either a serious game that included video lectures, biofeedback exercises and patient missions (n = 32) or an e-module that included text-based lectures on the same topics (n = 34). A third group acted as a historical control group without extra education (n = 37). After the intervention, which took place during the clinical introduction course, before the start of the first rotation, all students completed a knowledge test, a self-efficacy test and a motivation questionnaire. During the following 10-week clinical rotation they filled out weekly questionnaires on patient safety awareness and stress. RESULTS: Patient safety knowledge had improved equally in the game group and the e-module group compared to controls, who received no extra education. Average learning time was 3 h for the game group and 1 h for the e-module group. The serious game was evaluated as more engaging; the e-module as easier to use. During rotations, students in the three groups reported low and similar levels of patient safety awareness and stress. Students who had treated patients successfully during game missions experienced higher self-efficacy and less stress during their rotation than students who treated patients unsuccessfully. CONCLUSIONS: Video lectures (in a game) and text-based lectures (in an e-module) can be equally effective in developing knowledge on specific topics. Although serious games are strongly engaging for students and stimulate them to study longer, they do not necessarily result in better performance on patient safety issues.

    Preparing Residents Effectively in Emergency Skills Training With a Serious Game

    INTRODUCTION: Training emergency care skills is critical for patient safety but cost intensive. Serious games have been proposed as an engaging self-directed learning tool for complex skills. The objective of this study was to compare the cognitive skills and motivation of medical residents who only used a course manual as preparation for classroom training on emergency care with residents who used an additional serious game. METHODS: This was a quasi-experimental study with residents preparing for a rotation in the emergency department. The “reading” group received a course manual before classroom training; the “reading and game” group received this manual plus the game as preparation for the same training. Emergency skills were assessed before training (with residents who agreed to participate in an extra pretraining assessment), using validated competency scales and a global performance scale. We also measured motivation. RESULTS: The groups were comparable on important characteristics (e.g., experience with acute care). Before training, the reading and game group felt motivated to play the game and spent more self-study time (+2.5 hours) than the reading group. Game-playing residents showed higher scores on objectively measured and self-assessed clinical competencies, but equal scores on the global performance scale, and were equally motivated for training compared with the reading group. After the 2-week training, no differences between the groups remained. CONCLUSIONS: After preparing for training with an additional serious game, residents showed improved clinical competencies compared with residents who only studied the course material. After the 2-week training, this advantage had disappeared. Future research should study the retention of game effects in blended designs.

    Assessing the Assessment in Emergency Care Training

    OBJECTIVE: Each year over 1.5 million health care professionals attend emergency care courses. Despite the high stakes for patients and the extensive resources involved, little evidence exists on the quality of assessment. The aim of this study was to evaluate the validity and reliability of commonly used formats for assessing emergency care skills. METHODS: Residents were assessed at the end of a 2-week emergency course; a subgroup was videotaped. Psychometric analyses were conducted to assess the validity and inter-rater reliability of the assessment instrument, which included a checklist, a 9-item competency scale and a global performance scale. RESULTS: A group of 144 residents and 12 raters participated in the study; 22 residents were videotaped and re-assessed by 8 raters. The checklists showed limited validity and poor inter-rater reliability for the dimensions “correct” and “timely” (ICC = .30 and .39, respectively). The competency scale had good construct validity, consisting of a clinical and a communication subscale. The internal consistency of the (sub)scales was high (α = .93/.91/.86). The inter-rater reliability was moderate for the clinical competency subscale (.49) and the global performance scale (.50), but poor for the communication subscale (.27). A generalizability study showed that a reliable assessment requires 5–13 raters when using checklists, but only four when using the clinical competency scale or the global performance scale. CONCLUSIONS: This study shows poor validity and reliability for assessing emergency skills with checklists, but good validity and moderate reliability with clinical competency or global performance scales. Involving more raters can improve reliability substantially. Recommendations are made to improve this high-stakes skill assessment.
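    The psychometric quantities reported above (Cronbach’s α for internal consistency, ICC for inter-rater reliability) are computed from a subjects × items or subjects × raters score matrix. The sketch below is a minimal illustration in Python with NumPy; the data are synthetic, and the ICC form shown, ICC(2,1) (two-way random effects, absolute agreement, single rater), is an assumption on our part, since the abstract does not specify which ICC variant was used.

    ```python
    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_variances = scores.var(axis=0, ddof=1).sum()
        total_variance = scores.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_variances / total_variance)

    def icc_2_1(ratings: np.ndarray) -> float:
        """ICC(2,1): two-way random effects, absolute agreement, single rater.

        `ratings` is an (n_subjects, n_raters) matrix with no missing cells.
        """
        y = np.asarray(ratings, dtype=float)
        n, k = y.shape
        grand = y.mean()
        ss_rows = k * ((y.mean(axis=1) - grand) ** 2).sum()  # subjects
        ss_cols = n * ((y.mean(axis=0) - grand) ** 2).sum()  # raters
        ss_err = ((y - grand) ** 2).sum() - ss_rows - ss_cols
        msr = ss_rows / (n - 1)             # mean square, subjects
        msc = ss_cols / (k - 1)             # mean square, raters
        mse = ss_err / ((n - 1) * (k - 1))  # residual mean square
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    # Synthetic example sized like the videotaped subgroup: 22 residents,
    # 8 raters, a 9-item competency scale scored 1-9. Values are invented.
    rng = np.random.default_rng(7)
    true_ability = rng.normal(6, 1, size=(22, 1))
    item_scores = np.clip(true_ability + rng.normal(0, 1, size=(22, 9)), 1, 9)
    rater_scores = np.clip(true_ability + rng.normal(0, 1, size=(22, 8)), 1, 9)

    print(f"Cronbach's alpha: {cronbach_alpha(item_scores):.2f}")
    print(f"ICC(2,1):         {icc_2_1(rater_scores):.2f}")
    ```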