Does giving students feedback on their concept maps through an on-screen avatar or a humanoid robot make a difference?
Active or engaged learning is often seen as a way to improve students’ performance on STEM topics. When following such a form of self-directed learning, students often need to receive feedback on their progress. Giving real-time feedback on an individual basis is usually beyond the teacher’s capacity; in digital learning environments, this opens the door for exploring automated feedback. In the current study, a posttest-only design was used to investigate the effect of providing students with different forms of automated feedback while they were creating a concept map about photosynthesis in an online inquiry learning environment. Participants were high school students (N = 138), divided over two experimental groups. In one group, feedback was given by a humanoid robot and in the other group via an avatar. The effects of the different feedback forms were compared for the two groups in terms of the frequency with which students consulted the feedback, concept map quality, and students’ attitudes. Results showed that the robot group consulted feedback more often than the avatar group. Moreover, the robot group had higher scores on a scale measuring enjoyment than the avatar group. Both of these differences were statistically significant. However, the average quality of the concept maps created by the two groups was similar.
Data2Game: Towards an Integrated Demonstrator
The Data2Game project investigates how the efficacy of computerized training games can be enhanced by tailoring training scenarios to the individual player. The research is centered around three research innovations: (1) techniques for the automated modelling of players’ affective states, based on exhibited social signals, (2) techniques for the automated generation of in-game narratives tailored to the learning needs of the player, and (3) validated studies on the relation of player behavior and game properties to learning performance. This paper describes the integration of the main results into a joint prototype.
The effect of self-reflection on information usage and information literacy in a digital serious game
In crisis management decision-making, decision-makers have to combine (limited) situational information with their own experience. Whereas traditional, analog training of decision-making situations in crisis management costs considerable time and effort, digital serious games can be used as more accessible training environments that offer additional training moments. Another advantage is that digital games offer new didactic opportunities, such as inducing specific reflection in trainees. This study examines the effect of self-reflection through social comparison on the information usage and information literacy of players of a digital serious game for crisis management decision-making training. In an experiment, data were collected from 73 participants, of whom 47 were eligible for further analysis, split over two conditions. Participants played two gameplay scenarios in fixed order. Between the two scenarios, participants in the experimental condition saw a dashboard displaying their own in-game behavior up to that point, as well as that of previous players. Participants in the control condition received no intervention between the two scenarios. Overall, participants in the experimental condition used significantly more information from a wider range of sources, and compared to the control condition they continued to take significantly more time to decide in the second scenario. No significant between-condition differences in information literacy were found. The results indicate that in-game behavior can be (positively) influenced by letting players self-reflect on their own in-game behavior through social comparison. They also suggest that the dashboard should display more specific information about players’ in-game behavior, providing guidance on what to improve, rather than simply offering a broad overview.
Information Literacy Skills Assessment in Digital Crisis Management Training for the Safety Domain: Developing an Unobtrusive Method
This study aims to develop an unobtrusive assessment method for information literacy in the context of crisis management decision making in a digital serious game. The goal is to use only in-game indicators to assess the player’s skill level on different facets of information literacy. In crisis management decision making it is crucial to combine an intuitive approach to decision making, built up through experience, with an analytical approach that takes into account contextual information about the crisis situation. Such situations have to be trained frequently, for example by using serious games. Adaptivity can improve the effectiveness and efficiency of serious games, and unobtrusive assessment can enable game developers to make a game adapt to the player’s current skill level without breaking the flow of gameplay. Participants played a gameplay scenario in the Dilemma Game. Additionally, participants completed a questionnaire that was used as a validation measure for the in-game information literacy assessment. Using latent profile analyses, unobtrusive assessment models could be identified, most of which correlate significantly with the validation measure scores. Although inconsistencies were observed in the correlations between the information literacy standards, which call for broader testing of the identified unobtrusive assessment models, the results provide a good starting point for an unobtrusive assessment method and a first step in the development of an adaptive serious game for information literacy in crisis management decision making.
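As a rough illustration only, and not the study’s actual analysis pipeline, the sketch below shows how a latent-profile-style analysis of in-game indicators might look in Python: a Gaussian mixture model (a common way to approximate latent profile analysis) is fitted to hypothetical indicator data, the number of profiles is selected by BIC, and the result is correlated with a stand-in questionnaire score. All variable names, indicator choices, and data in the sketch are assumptions.

```python
# Illustrative sketch only: latent-profile-style analysis of hypothetical
# in-game indicators, validated against a stand-in questionnaire score.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical per-player indicators, e.g. sources consulted, reading time,
# decision revisions (rows = players, columns = indicators).
indicators = rng.normal(size=(100, 3))
questionnaire_score = rng.normal(size=100)  # stand-in validation measure

# Standardize indicators, fit mixtures with 2-5 profiles, keep the lowest-BIC model.
X = StandardScaler().fit_transform(indicators)
models = [GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(2, 6)]
best = min(models, key=lambda m: m.bic(X))
profiles = best.predict(X)  # profile membership per player

# Validate against the questionnaire, e.g. by correlating each player's mean
# standardized indicator level with the external score.
rho, p = spearmanr(X.mean(axis=1), questionnaire_score)
print(f"profiles: {best.n_components}, Spearman rho = {rho:.2f} (p = {p:.3f})")
```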
Determining the effect of stress on analytical skills performance in digital decision games: towards an unobtrusive measure of experienced stress in gameplay scenarios
This study aims to develop an unobtrusive measure of experienced stress in a digital serious gaming environment involving decision making in crisis management, using only in-game measures in a digital decision game called the Mayor Game. Research has shown that stress influences both a decision-maker's behavior and the learning experience in training scenarios. Being able to assess the level of experienced stress unobtrusively would allow the game to be manipulated so as to improve the learning experience. An experiment was conducted with two conditions, one paced and one non-paced. In the paced condition, participants were exposed to in-game changes that aimed to induce stress by creating information overload, uncertainty, and time pressure. While pacing caused differences between the conditions in in-game performance on analytical skills, several simple unobtrusive in-game measures were not consistent enough to serve as indicators of experienced stress. Furthermore, physiological measurements of stress did not show significant differences between the conditions, indicating that the employed methods to induce stress did not work sufficiently. These results call for testing more sophisticated methodologies to unobtrusively assess experienced stress in this type of serious game.