
    Multimodal Emotion Recognition for Assessment of Learning in a Game-Based Communication Skills Training

    This paper describes how our FILTWAM software artifacts for face and voice emotion recognition will be used for assessing learners' progress and providing adequate feedback in an online game-based communication skills training. This constitutes an example of in-game assessment for mainly formative purposes. During this training, learners are requested to mimic specific emotions via a webcam and a microphone, while the software artifacts determine the adequacy of the mimicked emotion from face and/or voice. Our previous studies have shown that these software artifacts can detect face and voice emotions in real time and with sufficient reliability. In our current work, we present a software system architecture that unobtrusively monitors learners' behaviors in an online game-based approach and offers timely and relevant feedback based upon learners' face and voice expressions. Whereas emotion detection is often used for adapting learning content or learning tasks, our approach focuses on using emotions for guiding learners towards improved communication skills. Learners need frequent guided practice in order to learn how to express the right emotion at the right time. We assume that this approach can address several issues with current training in this area. We sketch the research design of our planned study, which investigates the efficiency, effectiveness, and enjoyability of our approach, and conclude the paper by considering the challenges of this study.
    This research is sponsored by the Netherlands Laboratory for Lifelong Learning (NELLL) of the Open University of the Netherlands.
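
    As a concrete illustration of this in-game assessment idea, the Python sketch below compares a requested emotion with the emotions detected in face and voice and produces formative feedback. It is a minimal sketch under our own assumptions; the function and the feedback texts are illustrative, not FILTWAM's actual implementation.

    # Illustrative sketch of in-game formative assessment: the comparison
    # logic only; the face/voice recognizers themselves are not modeled.

    def assess_mimicry(requested, face_emotion, voice_emotion):
        """Compare the requested emotion with the emotions the
        recognizers detected and return formative feedback text."""
        face_ok = face_emotion == requested
        voice_ok = voice_emotion == requested
        if face_ok and voice_ok:
            return f"Well done: face and voice both expressed {requested}."
        if face_ok:
            return f"Your face showed {requested}, but your voice sounded like {voice_emotion}."
        if voice_ok:
            return f"Your voice conveyed {requested}, but your face looked like {face_emotion}."
        return f"Neither face ({face_emotion}) nor voice ({voice_emotion}) matched {requested}."

    print(assess_mimicry("happiness", "happiness", "surprise"))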

    Data Fusion for Real-time Multimodal Emotion Recognition through Webcams and Microphones in E-Learning

    The original article is available on the Taylor & Francis Online website at the following link: http://www.tandfonline.com/doi/abs/10.1080/10447318.2016.1159799?journalCode=hihc20
    This paper describes the validation study of our software that uses combined webcam and microphone data for real-time, continuous, unobtrusive emotion recognition as part of our FILTWAM framework. FILTWAM aims at deploying a real-time multimodal emotion recognition method for providing more adequate feedback to learners through an online communication skills training. Timely feedback is needed that reflects learners' shown intended emotions and that increases learners' awareness of their own behaviour. Such adequate feedback requires a reliable and valid software interpretation of performed face and voice emotions; this validation study therefore calibrates our software. The study uses a multimodal fusion method. Twelve test persons performed computer-based tasks in which they were asked to mimic specific facial and vocal emotions. All test persons' behaviour was recorded on video, and two raters independently scored the shown emotions, which were contrasted with the software recognition outcomes. The hybrid multimodal fusion method of our software achieves accuracies between 96.1% and 98.6% over the predicted emotions for the best-chosen WEKA classifiers. The software fulfils its requirements of real-time data interpretation and reliable results.
    This research is sponsored by the Netherlands Laboratory for Lifelong Learning (NELLL) of the Open University of the Netherlands.
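
    The abstract does not spell out the hybrid fusion scheme, so the Python sketch below shows one common decision-level alternative: a weighted average of per-modality class probabilities. The emotion set, weights, and probability vectors are our own illustrative assumptions, not the paper's WEKA pipeline.

    # Decision-level (late) fusion of face and voice classifier outputs.
    import numpy as np

    EMOTIONS = ["sadness", "anger", "disgust", "fear", "happiness", "surprise", "neutral"]

    def late_fusion(face_probs, voice_probs, w_face=0.6, w_voice=0.4):
        """Weighted average of two per-modality probability vectors
        over the same emotion classes; returns the fused label."""
        fused = w_face * np.asarray(face_probs) + w_voice * np.asarray(voice_probs)
        fused /= fused.sum()  # renormalize to a probability distribution
        return EMOTIONS[int(np.argmax(fused))], fused

    # Example: the face recognizer is confident about "happiness", while
    # the voice recognizer hesitates between "happiness" and "surprise".
    face = [0.01, 0.01, 0.01, 0.01, 0.90, 0.05, 0.01]
    voice = [0.02, 0.02, 0.02, 0.02, 0.45, 0.43, 0.04]
    label, dist = late_fusion(face, voice)
    print(label)  # -> "happiness"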

    Communication skills training exploiting multimodal emotion recognition

    The teaching of communication skills is a labour-intensive task because of the detailed feedback that should be given to learners during their prolonged practice. This study investigates to what extent our FILTWAM facial and vocal emotion recognition software can be used for improving a serious game (the Communication Advisor) that delivers a web-based training of communication skills. A test group of 25 participants played the game, in which they were requested to mimic specific facial and vocal emotions. Half of the assignments included direct feedback and the other half included no feedback, so that it could be investigated whether feedback on the mimicked emotions leads to better learning. Facial performance growth was positive and was significant in the feedback condition in particular; vocal performance growth was significant in both conditions. The results indicate that the automated feedback from the software improves learners' communication performance.
    This research is sponsored by the Netherlands Laboratory for Lifelong Learning (NELLL) of the Open University of the Netherlands.
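
    To illustrate how such performance growth could be compared between conditions, the sketch below runs a paired comparison on synthetic scores. All numbers and the choice of test are our own assumptions for illustration; the study's actual data and analysis are not given in the abstract.

    # Paired within-subject comparison of performance growth, assuming
    # each participant completed both feedback and no-feedback assignments.
    # All scores are synthetic, for illustration only.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    pre = rng.uniform(0.4, 0.6, size=25)                       # baseline score
    post_feedback = pre + rng.uniform(0.10, 0.30, size=25)     # with direct feedback
    post_no_feedback = pre + rng.uniform(0.00, 0.15, size=25)  # without feedback

    growth_fb = post_feedback - pre
    growth_nofb = post_no_feedback - pre

    t, p = stats.ttest_rel(growth_fb, growth_nofb)
    print(f"mean growth with feedback:    {growth_fb.mean():.3f}")
    print(f"mean growth without feedback: {growth_nofb.mean():.3f}")
    print(f"paired t = {t:.2f}, p = {p:.4f}")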

    FILTWAM - A Framework for Online Game-based Communication Skills Training - Using Webcams and Microphones for Enhancing Learner Support

    Bahreini, K., Nadolski, R., Qi, W., & Westera, W. (2012). FILTWAM - A Framework for Online Game-based Communication Skills Training - Using Webcams and Microphones for Enhancing Learner Support. In P. Felicia (Ed.), The 6th European Conference on Games Based Learning - ECGBL 2012 (pp. 39-48). Cork, Ireland: University College Cork and the Waterford Institute of Technology.
    This paper provides an overarching framework, embracing conceptual and technical frameworks, for improving the online communication skills of lifelong learners. This overarching framework is called FILTWAM (Framework for Improving Learning Through Webcams And Microphones). We propose a novel web-based communication training approach that incorporates relevant and timely feedback based upon learners' facial expressions and verbalizations. The data are collected as webcam images and microphone audio, which can be used to continuously and unobtrusively monitor learners' emotional behaviour and interpret it into emotional states. The feedback generated from the webcams is expected to enhance learners' awareness of their own behaviour as well as to improve the alignment between their expressed behaviour and intended behaviour. Our approach emphasizes communication behaviour rather than communication content, as people mostly have problems not with the "what" but with the "how" of expressing their message. For our design of online game-based communication skills training, we use insights from face-to-face training, game-based learning, lifelong learning, and affective computing. These areas constitute starting points for advancing the not yet well-established area of using emotional states for improved learning; our framework and research are situated within this latter area. A self-contained game-based training enhances flexibility and scalability, in contrast with face-to-face training, and better serves the interests of lifelong learners who prefer to study at their own pace, place, and time. In the future we may integrate the generated feedback with EMERGO, a game-based toolkit for the delivery of multimedia cases. Finally, we will report on a small-scale proof-of-concept study that exemplifies the practical application of our framework and provides first evaluation results. This study will guide further development of software and training materials and inform future research. Moreover, it will validate the use of webcam data for a real-time and adequate interpretation of facial expressions into emotional states (such as sadness, anger, disgust, fear, happiness, and surprise). For this purpose, participants' behaviour is also recorded on video, so that the videos can be replayed, rated, annotated, and evaluated by expert observers and contrasted with participants' own opinions.
    CELSTEC, Open University of the Netherlands; NeLL
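
    As a sketch of what continuous, unobtrusive webcam monitoring can look like in practice, the Python snippet below captures frames with OpenCV, detects a face, and hands it to a hypothetical classify_emotion() placeholder. FILTWAM's real interpreter is not described in this abstract, so everything beyond the standard OpenCV calls is an assumption.

    # Real-time webcam monitoring loop (OpenCV); classify_emotion is a
    # hypothetical stand-in for a trained facial-expression model.
    import cv2

    EMOTIONS = ("sadness", "anger", "disgust", "fear", "happiness", "surprise")

    def classify_emotion(face_image):
        """Placeholder: a real system would run a trained model here
        and return one of EMOTIONS for the cropped face region."""
        return "happiness"

    cap = cv2.VideoCapture(0)  # default webcam
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
            emotion = classify_emotion(gray[y:y + h, x:x + w])
            cv2.putText(frame, emotion, (x, y - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
        cv2.imshow("monitor", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to stop
            break
    cap.release()
    cv2.destroyAllWindows()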

    Improved Multimodal Emotion Recognition for Better Game-Based Learning: For the OULU Team from Finland, December 9, 2014, Heerlen, the Netherlands

    What is this research about? Who is the target group? Why this research? How is this research conducted? What has been done so far? Future directions

    Towards Real-time Speech Emotion Recognition for Affective E-Learning

    The original article is available as an open access file on the Springer website at the following link: http://link.springer.com/article/10.1007/s10639-015-9388-2
    This paper presents the voice emotion recognition part of the FILTWAM framework for real-time emotion recognition in affective e-learning settings. FILTWAM (Framework for Improving Learning Through Webcams And Microphones) intends to offer timely and appropriate online feedback based upon learners' vocal intonations and facial expressions in order to foster their learning. Whereas the facial emotion recognition part was successfully tested in a previous study, the study presented here describes the development and testing of FILTWAM's vocal emotion recognition software artefact. The main goal of this study was to show the valid use of computer microphone data for real-time and adequate interpretation of vocal intonations into extracted emotional states. The software was tested in a study with twelve participants. All participants individually received the same computer-based tasks, in which they were requested eighty times to mimic specific vocal expressions (960 occurrences in total). Each individual session was recorded on video. For the validation of the voice emotion recognition software artefact, two experts annotated and rated participants' recorded behaviours. Expert findings were then compared with the software recognition results, showing an overall agreement of kappa = 0.743. The overall accuracy of the voice emotion recognition software artefact is 67%, based on the requested emotions versus the recognized emotions. Our FILTWAM software continually and unobtrusively observes learners' behaviours and transforms these behaviours into emotional states. This paves the way for unobtrusive and real-time capturing of learners' emotional states for enhancing adaptive e-learning approaches.
    This research is sponsored by the Netherlands Laboratory for Lifelong Learning (NELLL) of the Open University of the Netherlands.
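
    As a worked illustration of the two reported measures, the snippet below computes accuracy (requested vs. recognized emotions) and Cohen's kappa (expert ratings vs. software output) with scikit-learn on made-up label lists; the real study used 960 mimicked vocal expressions.

    # Illustration of the two evaluation measures named in the abstract,
    # computed on made-up labels (not the study's data).
    from sklearn.metrics import accuracy_score, cohen_kappa_score

    requested  = ["anger", "fear", "happiness", "sadness", "surprise", "disgust"]
    recognized = ["anger", "fear", "happiness", "sadness", "happiness", "disgust"]
    expert     = ["anger", "fear", "happiness", "sadness", "surprise", "disgust"]

    # Accuracy: how often the software's label matches the requested emotion.
    print("accuracy:", accuracy_score(requested, recognized))

    # Cohen's kappa: chance-corrected agreement between the expert
    # ratings and the software's recognized labels.
    print("kappa:", cohen_kappa_score(expert, recognized))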

    Multimodal emotion recognition as assessment for learning in a game-based communication skills training

    This paper presentation describes how our FILTWAM software artifacts for face and voice emotion recognition will be used for assessing learners' progress and providing adequate feedback in an online game-based communication skills training. This constitutes an example of in-game assessment for mainly formative purposes. During this training, learners are requested to mimic specific emotions via a webcam and a microphone, while the software artifacts determine the adequacy of the mimicked emotion from face and/or voice. Our previous studies have shown that these software artifacts can detect face and voice emotions in real time and with sufficient reliability. In our current work, we present a software system architecture that unobtrusively monitors learners' behaviors in an online game-based approach and offers timely and relevant feedback based upon learners' face and voice expressions. Whereas emotion detection is often used for adapting learning content or learning tasks, our approach focuses on using emotions for guiding learners towards improved communication skills. Learners need frequent guided practice in order to learn how to express the right emotion at the right time. We assume that this approach can address several issues with current training in this area. We sketch the research design of our planned study, which investigates the efficiency, effectiveness, and enjoyability of our approach, and conclude the presentation by considering the challenges of this study.
    We would like to thank the Netherlands Laboratory for Lifelong Learning (NELLL) of the Open University of the Netherlands, which sponsors this research.

    Software Components for Serious Game Development

    The presentation explains the approach of the RAGE project and presents three examples of RAGE software components, showing how these can be easily reused for applied game development.
    This study is part of the RAGE project. The RAGE project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 644187. This publication reflects only the author's view; the European Commission is not responsible for any use that may be made of the information it contains.