Affective Game Computing: A Survey
This paper surveys the current state of the art in affective computing
principles, methods and tools as applied to games. We review this emerging
field, namely affective game computing, through the lens of the four core
phases of the affective loop: game affect elicitation, game affect sensing,
game affect detection and game affect adaptation. In addition, we provide a
taxonomy of terms, methods and approaches used across the four phases of the
affective game loop and situate the field within this taxonomy. We continue
with a comprehensive review of available affect data collection methods with
regards to gaming interfaces, sensors, annotation protocols, and available
corpora. The paper concludes with a discussion on the current limitations of
affective game computing and our vision for the most promising future research
directions in the field.
Moment-to-moment Engagement Prediction through the Eyes of the Observer: PUBG Streaming on Twitch
Is it possible to predict moment-to-moment gameplay engagement based solely
on game telemetry? Can we reveal engaging moments of gameplay by observing the
way the viewers of the game behave? To address these questions in this paper,
we reframe the way gameplay engagement is defined and we view it, instead,
through the eyes of a game's live audience. We build prediction models for
viewers' engagement based on data collected from the popular battle royale game
PlayerUnknown's Battlegrounds as obtained from the Twitch streaming service. In
particular, we collect viewers' chat logs and in-game telemetry data from
several hundred matches of five popular streamers (containing over 100,000 game
events) and machine learn the mapping between gameplay and viewer chat
frequency during play, using small neural network architectures. Our key
findings showcase that engagement models trained solely on 40 gameplay features
can reach accuracies of up to 80% on average and 84% at best. Our models are
scalable and generalisable as they perform equally well within- and
across-streamers, as well as across streamer play styles.
Comment: Version accepted for the Conference on the Foundations of Digital Games 2020 - Malta
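The core signal behind this study, viewer chat frequency as a proxy for moment-to-moment engagement, can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the 10-second window length and the simple rise/fall labelling rule are assumptions.

```python
from collections import Counter

def chat_frequency(timestamps, window=10.0):
    """Bucket chat-message timestamps (in seconds) into fixed-length windows
    and return the message count per window."""
    counts = Counter(int(t // window) for t in timestamps)
    n = max(counts) + 1 if counts else 0
    return [counts.get(i, 0) for i in range(n)]

def engagement_labels(freq):
    """Label each window transition: 1 if chat frequency rises (taken here as
    a proxy for an engaging moment), 0 otherwise."""
    return [1 if b > a else 0 for a, b in zip(freq, freq[1:])]

msgs = [1.2, 3.4, 5.1, 12.0, 13.5, 14.1, 15.9, 31.0]
freq = chat_frequency(msgs)        # [3, 4, 0, 1] messages per 10 s window
labels = engagement_labels(freq)   # [1, 0, 1]
```

In the paper the mapping from gameplay features to this signal is learned by small neural networks; the sketch only shows how a raw chat log becomes a supervised target.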
A study on affect model validity: nominal vs ordinal labels
The question of representing emotion computationally remains largely unanswered: popular
approaches require annotators to assign a magnitude (or a class) of some emotional
dimension, while an alternative is to focus on the relationship between two or more options.
Recent evidence in affective computing suggests that following a methodology of ordinal
annotations and processing leads to better reliability and validity of the model. This paper
compares the generality of classification methods versus preference learning methods
in predicting the levels of arousal in two widely used affective datasets. Findings of this
initial study further validate the hypothesis that approaching affect labels as ordinal data
and building models via preference learning yields models of better validity.
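The shift from nominal to ordinal labels amounts to replacing per-item magnitudes with pairwise comparisons. A minimal sketch of that transformation follows; the function name and the tie-dropping rule are illustrative assumptions, not the paper's protocol.

```python
def to_preference_pairs(ratings):
    """Convert per-item affect ratings into ordinal preference pairs:
    (i, j) means item i was rated higher than item j. Ties yield no pair,
    since preference learning uses only clear orderings."""
    return [(i, j)
            for i, ri in enumerate(ratings)
            for j, rj in enumerate(ratings)
            if ri > rj]

# Three annotated clips with raw arousal ratings:
pairs = to_preference_pairs([0.2, 0.8, 0.5])   # [(1, 0), (1, 2), (2, 0)]
```

A preference learner then fits a ranking function from these pairs, instead of a classifier fitting the raw magnitudes.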
Your Gameplay Says It All: Modelling Motivation in Tom Clancy's The Division
Is it possible to predict the motivation of players just by observing their
gameplay data? Even if so, how should we measure motivation in the first place?
To address the above questions, on the one hand, we collect a large dataset of
gameplay data from players of the popular game Tom Clancy's The Division. On
the other hand, we ask them to report their levels of competence, autonomy,
relatedness and presence using the Ubisoft Perceived Experience Questionnaire.
After processing the survey responses in an ordinal fashion we employ
preference learning methods based on support vector machines to infer the
mapping between gameplay and the reported four motivation factors. Our key
findings suggest that gameplay features are strong predictors of player
motivation as the best obtained models reach accuracies of near certainty, from
92% up to 94% on unseen players.
Comment: Version accepted for the IEEE Conference on Games, 2019
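The support-vector preference learning used here reduces ranking to the classic pairwise trick behind RankSVM: train a linear scorer so that for every preferred pair (i, j) the score difference clears a margin. A hedged, pure-Python sketch of that reduction (the update rule is a simple hinge-loss subgradient step, not the authors' exact solver):

```python
def rank_fit(X, pairs, epochs=100, lr=0.1):
    """Fit a linear ranking function f(x) = w.x from preference pairs via the
    pairwise reduction behind RankSVM: for every preferred pair (i, j) push
    toward w.(x_i - x_j) >= 1 with hinge-loss subgradient updates."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for i, j in pairs:
            d = [a - b for a, b in zip(X[i], X[j])]
            if sum(wk * dk for wk, dk in zip(w, d)) < 1:   # margin violated
                w = [wk + lr * dk for wk, dk in zip(w, d)]
    return w

# Toy sessions with two gameplay features; the pairs say session 2 is more
# motivating than session 0, and session 0 more than session 1.
X = [[1.0, 0.0], [0.0, 1.0], [2.0, 0.0]]
w = rank_fit(X, [(2, 0), (0, 1)])
scores = [sum(wk * xk for wk, xk in zip(w, x)) for x in X]
```

After training, the learned scores reproduce the annotated ordering, which is exactly the criterion such models are evaluated on.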
Towards general models of player experience: a study within genres
This project has received funding from the EU’s Horizon 2020 programme
under grant agreement No 951911, and from the University of Malta internal
research grants programme Research Excellence Fund under grant agreement
No 202003.
To what degree can abstract gameplay metrics
capture the player experience in a general fashion within a game
genre? In this comprehensive study we address this question
across three different videogame genres: racing, shooter, and
platformer games. Using high-level gameplay features that feed
preference learning models we are able to predict arousal
accurately across different games of the same genre in a large-scale dataset of over 1,000 arousal-annotated play sessions. Our
genre models predict changes in arousal with up to 74% accuracy
on average across all genres and 86% in the best cases. We also
examine the feature importance during the modelling process
and find that time-related features largely contribute to the
performance of both game and genre models. The prominence of
these game-agnostic features shows the importance of the temporal
dynamics of the play experience in modelling, but also highlights
some of the challenges for the future of general affect modelling
in games and beyond.
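Because these genre models predict changes in arousal rather than absolute values, a continuous arousal trace must first be turned into ordinal rise/fall labels. A sketch of one way to do this; the window length, the one-value-per-step assumption, and the stability threshold eps are illustrative choices, not the paper's annotation protocol.

```python
def arousal_change_labels(trace, window=3, eps=0.05):
    """Turn a continuous arousal trace into ordinal change labels between
    consecutive windows: +1 for a rise, -1 for a fall, and 0 when the
    change stays within the stability threshold eps."""
    means = [sum(trace[i:i + window]) / window
             for i in range(0, len(trace) - window + 1, window)]
    return [1 if b - a > eps else (-1 if a - b > eps else 0)
            for a, b in zip(means, means[1:])]

trace = [0.1, 0.1, 0.1, 0.4, 0.5, 0.6, 0.5, 0.5, 0.5]
labels = arousal_change_labels(trace)   # [1, 0]
```

Models are then scored on how often they predict the correct direction of change, which is how accuracies such as 74% on average should be read.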
Investigating gaze interaction to support children’s gameplay
Gaze interaction has become an affordable option in the development of innovative interaction methods for user input. Gaze holds great promise as an input modality, offering increased immersion and opportunities for combined interactions (e.g., gaze and mouse, or gaze and touch). However, the use of gaze as an input modality to support children’s gameplay has not been examined to unveil those opportunities. To investigate the potential of gaze interaction to support children’s gameplay, we designed and developed a game that enables children to use gaze as an input modality. We then conducted a between-subjects study with children aged 8–14 years: 28 children used a mouse as the input mechanism and 29 used their gaze. During the study, we collected children’s attitudes (via a self-reported questionnaire) and actual usage behavior (using facial video, physiological data and computer logs). The results show no significant difference in children’s attitudes regarding the ease of use and enjoyment of the two conditions, nor in the scores achieved or the number of sessions played. Usage data from children’s facial video and physiological data show that sadness and stress are significantly higher in the mouse condition, while joy, surprise, physiological arousal and emotional arousal are significantly higher in the gaze condition. In addition, our findings highlight the benefits of using multimodal data to reveal children’s behavior while playing the game, complementing self-reported measures. Finally, we uncover a need for more studies examining gaze as an input mechanism.
Multiplayer tension in the wild: a Hearthstone case
Games are designed to elicit strong emotions during gameplay,
especially when players are competing against each other. Artificial
Intelligence applied to predict a player’s emotions has mainly been
tested on single-player experiences in low-stakes settings and short-term interactions. How do players experience and manifest affect in
high-stakes competitions, and which modalities can capture this?
This paper reports a first experiment in this line of research, using a
competition of the video game Hearthstone where both competing
players’ gameplay and facial expressions were recorded over the
course of the entire match, which could span up to 41 minutes. Using
two experts’ annotations of tension using a continuous video affect
annotation tool, we attempt to predict tension from the webcam
footage of the players alone. Treating both the input and the tension
output in a relative fashion, our best models reach 66.3% average
accuracy (up to 79.2% at the best fold) in the challenging leave-one-participant-out cross-validation task. This initial experiment
shows a way forward for affect annotation in games “in the wild”
in high-stakes, real-world competitive settings.
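The leave-one-participant-out protocol used for evaluation can be sketched generically: each fold trains on every participant except one and tests on the held-out person, so reported accuracy always reflects unseen players. The function below is an illustrative reconstruction, not the authors' code.

```python
def leave_one_participant_out(participants):
    """Yield (held_out, train_idx, test_idx) splits: each fold holds out all
    samples of one participant, so models are evaluated only on people whose
    data never appeared in training."""
    for p in sorted(set(participants)):
        test = [i for i, q in enumerate(participants) if q == p]
        train = [i for i, q in enumerate(participants) if q != p]
        yield p, train, test

# Five samples recorded from three participants:
folds = list(leave_one_participant_out(["a", "a", "b", "c", "c"]))
```

This grouping is stricter than a random split: correlated frames from the same player's webcam footage can never leak between train and test sets, which is what makes the 66.3% average accuracy a meaningful generalisation figure.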