    Affective Game Computing: A Survey

    This paper surveys the current state of the art in affective computing principles, methods, and tools as applied to games. We review this emerging field, namely affective game computing, through the lens of the four core phases of the affective loop: game affect elicitation, game affect sensing, game affect detection, and game affect adaptation. In addition, we provide a taxonomy of terms, methods, and approaches used across the four phases of the affective game loop and situate the field within this taxonomy. We continue with a comprehensive review of available affect data collection methods with regard to gaming interfaces, sensors, annotation protocols, and available corpora. The paper concludes with a discussion of the current limitations of affective game computing and our vision for the most promising future research directions in the field.

    Moment-to-moment Engagement Prediction through the Eyes of the Observer: PUBG Streaming on Twitch

    Is it possible to predict moment-to-moment gameplay engagement based solely on game telemetry? Can we reveal engaging moments of gameplay by observing the way the viewers of the game behave? To address these questions in this paper, we reframe the way gameplay engagement is defined and we view it, instead, through the eyes of a game's live audience. We build prediction models for viewers' engagement based on data collected from the popular battle royale game PlayerUnknown's Battlegrounds as obtained from the Twitch streaming service. In particular, we collect viewers' chat logs and in-game telemetry data from several hundred matches of five popular streamers (containing over 100,000 game events) and machine learn the mapping between gameplay and viewer chat frequency during play, using small neural network architectures. Our key findings showcase that engagement models trained solely on 40 gameplay features can reach accuracies of up to 80% on average and 84% at best. Our models are scalable and generalisable as they perform equally well within- and across-streamers, as well as across streamer play styles. (Version accepted for the Conference on the Foundations of Digital Games 2020, Malta.)
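The mapping described above — roughly 40 gameplay features per time window to a binary high/low engagement label derived from viewer chat frequency — can be sketched with a tiny feed-forward network. Everything below is a hypothetical stand-in (synthetic data, assumed layer sizes, assumed thresholding of chat frequency), not the paper's actual architecture or dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the paper's data: 40 gameplay features per time window,
# and a binary engagement label (e.g. chat rate above/below the median).
X = rng.normal(size=(400, 40))
w_true = rng.normal(size=40)
y = (X @ w_true > 0).astype(float)          # synthetic labels for the sketch

# One small hidden layer, mirroring the "small neural network
# architectures" the abstract mentions (sizes here are assumptions).
W1 = rng.normal(scale=0.1, size=(40, 8))
W2 = rng.normal(scale=0.1, size=(8, 1))

def forward(X):
    h = np.tanh(X @ W1)                      # hidden activations
    return 1.0 / (1.0 + np.exp(-(h @ W2))), h

for _ in range(300):                         # plain full-batch gradient descent
    p, h = forward(X)
    g = (p - y[:, None]) / len(X)            # d(cross-entropy)/d(logits)
    gW2 = h.T @ g
    gW1 = X.T @ ((g @ W2.T) * (1 - h ** 2))  # backprop through tanh
    W1 -= 1.0 * gW1
    W2 -= 1.0 * gW2

acc = ((forward(X)[0][:, 0] > 0.5) == y).mean()
```

On this synthetic, linearly generated data the small network fits the labels well; the paper's 80% average accuracy is on real, noisier telemetry.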

    A study on affect model validity: nominal vs ordinal labels

    The question of representing emotion computationally remains largely unanswered: popular approaches require annotators to assign a magnitude (or a class) of some emotional dimension, while an alternative is to focus on the relationship between two or more options. Recent evidence in affective computing suggests that following a methodology of ordinal annotations and processing leads to better reliability and validity of the model. This paper compares the generality of classification methods versus preference learning methods in predicting the levels of arousal in two widely used affective datasets. Findings of this initial study further validate the hypothesis that approaching affect labels as ordinal data and building models via preference learning yields models of better validity. Peer-reviewed.
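The ordinal treatment argued for above can be illustrated by the standard reduction of ranking to binary classification over pairwise differences — the idea behind RankSVM-style preference learning. The data, features, and the simple perceptron-style solver below are illustrative stand-ins, not the study's actual datasets or learner:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sessions with 5 features and ordinal arousal ranks
# (0 = lowest), generated from a linear latent arousal signal.
X = rng.normal(size=(60, 5))
w_true = rng.normal(size=5)
rank = np.argsort(np.argsort(X @ w_true))    # synthetic ordinal labels

# Preference learning turns ranking into binary classification on
# pairwise differences: for sessions i, j with rank[i] > rank[j],
# require w @ (x_i - x_j) > 0 (the RankSVM reduction).
pairs = [(i, j) for i in range(60) for j in range(60) if rank[i] > rank[j]]

w = np.zeros(5)
for _ in range(50):                          # perceptron-style updates
    for i, j in pairs:
        d = X[i] - X[j]
        if w @ d <= 0:                       # violated preference
            w += d

# Fraction of ordinal preferences the learned model reproduces.
agree = np.mean([w @ (X[i] - X[j]) > 0 for i, j in pairs])
```

Note that only the order of labels is used; the same code works whether annotators gave ranks or raw magnitudes that are then ordered.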

    Your Gameplay Says It All: Modelling Motivation in Tom Clancy's The Division

    Is it possible to predict the motivation of players just by observing their gameplay data? Even if so, how should we measure motivation in the first place? To address these questions, on the one hand we collect a large dataset of gameplay data from players of the popular game Tom Clancy's The Division. On the other hand, we ask them to report their levels of competence, autonomy, relatedness, and presence using the Ubisoft Perceived Experience Questionnaire. After processing the survey responses in an ordinal fashion, we employ preference learning methods based on support vector machines to infer the mapping between gameplay and the four reported motivation factors. Our key findings suggest that gameplay features are strong predictors of player motivation, as the best obtained models reach accuracies of near certainty, from 92% up to 94%, on unseen players. (Version accepted for the IEEE Conference on Games, 2019.)

    Towards general models of player experience: a study within genres

    To what degree can abstract gameplay metrics capture the player experience in a general fashion within a game genre? In this comprehensive study we address this question across three different videogame genres: racing, shooter, and platformer games. Using high-level gameplay features that feed preference learning models, we are able to predict arousal accurately across different games of the same genre in a large-scale dataset of over 1,000 arousal-annotated play sessions. Our genre models predict changes in arousal with up to 74% accuracy on average across all genres and 86% in the best cases. We also examine feature importance during the modelling process and find that time-related features largely contribute to the performance of both game and genre models. The prominence of these game-agnostic features shows the importance of the temporal dynamics of the play experience in modelling, but also highlights some of the challenges for the future of general affect modelling in games and beyond. Peer-reviewed. (This project has received funding from the EU's Horizon 2020 programme under grant agreement No 951911, and from the University of Malta internal research grants programme Research Excellence Fund under grant agreement No 202003.)
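The feature-importance analysis mentioned above can be approximated with permutation importance: shuffle one feature column and measure the accuracy drop. The model, features, and weights below are synthetic stand-ins; feature 0 plays the role of a dominant time-related feature purely by construction:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical fitted linear arousal model over 4 gameplay features,
# with feature 0 (say, elapsed session time) dominating by assumption.
X = rng.normal(size=(200, 4))
w = np.array([2.0, 0.1, 0.1, 0.1])
y = (X @ w > 0).astype(float)

def accuracy(Xe):
    return ((Xe @ w > 0).astype(float) == y).mean()

base = accuracy(X)
importance = []
for f in range(4):
    Xp = X.copy()
    Xp[:, f] = rng.permutation(Xp[:, f])     # break the feature-label link
    importance.append(base - accuracy(Xp))   # accuracy drop = importance
```

Permuting the dominant feature costs the model far more accuracy than permuting any other, which is the signal a feature-importance analysis reads off.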

    Investigating gaze interaction to support children’s gameplay

    Gaze interaction has become an affordable option in the development of innovative interaction methods for user input. Gaze holds great promise as an input modality, offering increased immersion and opportunities for combined interactions (e.g., gaze and mouse, or gaze and touch). However, the use of gaze as an input modality to support children’s gameplay has not been examined to unveil those opportunities. To investigate the potential of gaze interaction to support children’s gameplay, we designed and developed a game that enables children to use gaze as an input modality. We then performed a between-subjects study with children aged 8–14: 28 children used a mouse as the input mechanism and 29 used their gaze. During the study, we collected children’s attitudes (via a self-reported questionnaire) and actual usage behaviour (using facial video, physiological data, and computer logs). The results show no significant difference in children’s attitudes regarding the ease of use and enjoyment of the two conditions, nor in the scores achieved and the number of sessions played. Usage data from children’s facial video and physiological data show that sadness and stress are significantly higher in the mouse condition, while joy, surprise, physiological arousal, and emotional arousal are significantly higher in the gaze condition. In addition, our findings highlight the benefits of using multimodal data, complementing self-reported measures, to reveal children’s behaviour while playing the game. Finally, we uncover a need for more studies that examine gaze as an input mechanism. Peer-reviewed.
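Between-subjects comparisons like those reported (e.g. higher arousal in the gaze condition) are commonly checked with a non-parametric test; a permutation test on the difference of group means is a minimal numpy-only version. The scores below are simulated, not the study's data, and the study's actual statistical test may differ:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in scores, e.g. an "emotional arousal" measure for
# the mouse group (n=28) and the gaze group (n=29); real values would
# come from facial video and physiological sensors.
mouse = rng.normal(loc=0.4, scale=0.1, size=28)
gaze = rng.normal(loc=0.6, scale=0.1, size=29)

# Two-sided permutation test on the difference of group means:
# shuffle group assignments and count diffs at least as extreme.
observed = gaze.mean() - mouse.mean()
pooled = np.concatenate([mouse, gaze])
count = 0
for _ in range(2000):
    rng.shuffle(pooled)
    diff = pooled[28:].mean() - pooled[:28].mean()
    if abs(diff) >= abs(observed):
        count += 1
p_value = (count + 1) / 2001                 # add-one to avoid p = 0
```

A small p-value indicates the observed group difference is unlikely under random assignment, which is what "significantly higher" claims rest on.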

    RCEA: Real-time, Continuous Emotion Annotation for collecting precise mobile video ground truth labels

    Collecting accurate and precise emotion ground-truth labels for mobile video watching is essential for ensuring meaningful predictions. However, video-based emotion annotation techniques either rely on post-stimulus discrete self-reports, or allow real-time, continuous emotion annotation (RCEA) only in desktop settings. Following a user-centric approach, we designed an RCEA technique for mobile video watching and validated its usability and reliability in a controlled indoor study (N=12) and a later outdoor study (N=20). Drawing on physiological measures, interaction logs, and subjective workload reports, we show that (1) RCEA is perceived to be usable for annotating emotions while watching mobile videos, without increasing users’ mental workload, and (2) the resulting time-variant annotations are comparable with the intended emotion attributes of the video stimuli (classification error for valence: 8.3%; arousal: 25%). We contribute a validated annotation technique and an associated annotation-fusion method suitable for collecting fine-grained emotion annotations while users watch mobile videos.
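A minimal sketch of the validation step described above: fuse several annotators' continuous traces into one signal and check it against the clip's intended affect class. The paper's fusion method is more involved; plain averaging and thresholding below are stand-ins, and all numbers are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical traces: 12 annotators rating valence in [-1, 1] at 1 Hz
# for a 60 s clip whose intended valence is positive (mean 0.5 + noise).
traces = 0.5 + 0.3 * rng.normal(size=(12, 60))

fused = traces.mean(axis=0)                  # per-second fused annotation
predicted = "positive" if fused.mean() > 0 else "negative"
```

Repeating this over many clips and comparing `predicted` with each clip's intended label yields a classification error of the kind the abstract reports (8.3% for valence, 25% for arousal).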