Affective games: a multimodal classification system
Affective gaming is a relatively new field of research that exploits human emotions to influence gameplay for an enhanced player experience. Changes in players' psychology are reflected in their behaviour and physiology, so recognising such variation is a core element of affective games. Complementary sources of affect offer more reliable recognition, especially in contexts where one modality is partial or unavailable. As multimodal recognition systems, affect-aware games are subject to the practical difficulties faced by traditional trained classifiers. In addition, game-specific challenges in data collection and performance arise while attempting to sustain an acceptable level of immersion. Most existing scenarios employ sensors that offer limited freedom of movement, resulting in less realistic experiences. Recent advances now offer technology that allows players to communicate more freely and naturally with the game and, furthermore, to control it without the use of input devices. However, the affective game industry is still in its infancy and needs to catch up with the current life-like level of adaptation provided by graphics and animation.
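The idea of combining complementary affect sources so that recognition degrades gracefully when one modality is partial or unavailable can be illustrated with a late-fusion scheme. This is a minimal sketch, not any specific system's implementation; the modality names, class layout, and weights are hypothetical.

```python
import numpy as np

def late_fusion(scores, weights):
    """Combine per-modality affect-class probabilities by weighted
    averaging, skipping modalities whose signal is missing (None).
    Hypothetical sketch of late fusion with modality dropout."""
    fused, total_w = None, 0.0
    for name, probs in scores.items():
        if probs is None:  # modality unavailable (e.g. sensor dropout)
            continue
        w = weights.get(name, 1.0)
        p = np.asarray(probs, dtype=float)
        fused = w * p if fused is None else fused + w * p
        total_w += w
    if fused is None:
        raise ValueError("no modality available")
    return fused / total_w

# Example: the physiological channel is missing, so only face and
# voice contribute; the result is still a valid class distribution.
scores = {
    "face":   [0.7, 0.2, 0.1],
    "voice":  [0.5, 0.3, 0.2],
    "physio": None,
}
fused = late_fusion(scores, {"face": 2.0, "voice": 1.0})
```

Because each modality contributes an independent class distribution, dropping one simply renormalises the remaining weights rather than invalidating the classifier.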
Machine Understanding of Human Behavior
A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next-generation computing, which we will call human computing, should be about anticipatory user interfaces that are human-centred, built for humans based on human models. They should transcend the traditional keyboard and mouse to include natural, human-like interactive functions, including understanding and emulating certain human behaviors such as affective and social signaling. This article discusses a number of components of human behavior, how they might be integrated into computers, and how far we are from realizing the front end of human computing, that is, how far we are from enabling computers to understand human behavior.
Multi-modalities in classroom learning environments
This paper will present initial findings from the second phase of a Horizon 2020 funded project, Managing Affective-learning Through Intelligent Atoms and Smart Interactions (MaTHiSiS). The project focuses on the use of different multi-modalities in classrooms across Europe. The MaTHiSiS learning vision is to develop an integrated learning platform, with re-usable learning components, which will respond to the needs of future education in primary, secondary, and special education schools, vocational environments, and learning beyond the classroom. The system comprises learning graphs that attach individual learning goals to it. Each learning graph is developed from a set of smart learning atoms designed to support learners in achieving progression. Cutting-edge technologies are being used to identify the affect state of learners and ultimately improve learner engagement.
Much research identifies how learners engage with learning platforms (cf. [1], [2], [3]). Not only do e-learning platforms have the capability to engage learners, they provide a vehicle for authentic classroom and informal learning [4], enabling ubiquitous and seamless learning [5] within a non-linear environment. When experiencing more enjoyable interaction, learners become more confident and motivated to learn and less anxious, especially those with learning disabilities or at risk of social exclusion [6], [13].
[7] identified the importance of understanding the affect state of learners, who may experience emotions such as 'confusion, frustration, irritation, anger, rage, or even despair', resulting in disengagement from learning. The MaTHiSiS system will use a range of platform agents, such as NAO robots and Kinects, to measure multi-modalities that indicate the affect state: facial expression analysis and gaze estimation [8], mobile device-based emotion recognition [9], skeleton motion using depth sensors, and speech recognition.
Data have been collected using multimodal learning analytics developed for the project, including annotated multimodal recordings of learners interacting with the system, facial expression data, and the position of the learner. In addition, interviews with teachers and learners, from mainstream education as well as learners with profound multiple learning difficulties and autism, have been carried out to measure learner engagement and achievement. Findings from mainstream and special schools based in the United Kingdom will be presented and challenges shared.
Robust Modeling of Epistemic Mental States
This work identifies and advances several research challenges in the analysis of facial features and their temporal dynamics with respect to epistemic mental states in dyadic conversations. The epistemic states considered are: Agreement, Concentration, Thoughtful, Certain, and Interest. In this paper, we perform a number of statistical analyses and simulations to identify the relationship between facial features and epistemic states. Non-linear relations are found to be more prevalent, while temporal features derived from the original facial features demonstrate a strong correlation with intensity changes. We then propose a novel prediction framework that takes facial features and their non-linear relation scores as input and predicts different epistemic states in videos. The prediction of epistemic states is boosted when the classification of emotion-change regions, such as rising, falling, or steady-state, is incorporated with the temporal features. The proposed predictive models predict the epistemic states with significantly improved accuracy: correlation coefficients (CoERR) of 0.827 for Agreement, 0.901 for Concentration, 0.794 for Thoughtful, 0.854 for Certain, and 0.913 for Interest.
Comment: Accepted for publication in Multimedia Tools and Applications, Special Issue: Socio-Affective Technologies
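The observation that non-linear relations between facial features and epistemic states are more prevalent than linear ones can be illustrated by comparing a linear (Pearson) correlation with a rank-based (Spearman) one, which captures any monotone non-linear relation. This is a minimal sketch with synthetic data, not the paper's actual analysis or its CoERR metric.

```python
import numpy as np

def pearson(x, y):
    """Standard linear correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

def spearman(x, y):
    """Rank-transform both series, then take Pearson: this scores
    any monotone relation, linear or not (assumes no ties)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return pearson(rx, ry)

# Synthetic example: state intensity grows monotonically but
# non-linearly with a (hypothetical) facial feature value.
feature = np.linspace(0.0, 1.0, 50)
intensity = feature ** 3          # monotone, non-linear
lin = pearson(feature, intensity)     # < 1: linear fit is imperfect
mono = spearman(feature, intensity)   # 1.0: relation is perfectly monotone
```

A feature whose rank correlation clearly exceeds its linear correlation is a candidate for the kind of non-linear relation scoring the abstract describes.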