
    Inferring player experiences using facial expressions analysis

    © 2014 ACM. Understanding player experiences is central to game design. Video capture of players is a common practice for obtaining rich, reviewable data for analysing these experiences. However, little has been done to investigate ways of preprocessing the video for a more efficient analysis process. This paper consolidates and extends prior work validating the feasibility of automated facial expression analysis as a natural quantitative method for evaluating player experiences. A study was performed on participants playing a first-person puzzle shooter game (Portal 2) and a social drawing trivia game (Draw My Thing), and the results exhibited rich details for inferring player experiences from facial expressions. Significant correlations were also observed between facial expression intensities and self-reports from the Game Experience Questionnaire. In particular, the challenge dimension consistently showed positive correlations with anger and joy. This paper ultimately presents a case for increasing the application of computer vision in video analyses of gameplay.
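
    The paper does not specify its analysis tooling; the following is a minimal sketch, assuming Pearson correlation between per-participant expression intensities and Game Experience Questionnaire (GEQ) scores. The column names and data are invented for illustration.

```python
# Hypothetical sketch: correlating per-player facial expression intensities
# with GEQ dimension scores. Pearson's r is an assumption; the paper only
# reports that significant correlations were observed.
import pandas as pd
from scipy.stats import pearsonr

# One row per participant: mean expression intensity over a session,
# plus a self-reported GEQ challenge score (all values illustrative).
data = pd.DataFrame({
    "anger":     [0.12, 0.30, 0.25, 0.08, 0.41],
    "joy":       [0.55, 0.40, 0.62, 0.20, 0.33],
    "challenge": [2.1,  3.4,  3.0,  1.5,  3.8],
})

for expression in ("anger", "joy"):
    r, p = pearsonr(data[expression], data["challenge"])
    print(f"challenge vs {expression}: r={r:.2f}, p={p:.3f}")
```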

    Initial perceptions of a casual game to crowdsource facial expressions in the wild

    The performance of affective computing systems often depends on the quality of the image databases they are trained on. However, creating good-quality training databases is a laborious activity. In this paper, we evaluate BeFaced, a tile-matching casual tablet game that enables massive crowdsourcing of facial expressions for the purpose of advancing facial expression analysis. The core aspect of BeFaced is game quality, as increased enjoyment and engagement translate into an increased quantity of varied facial expressions. Hence, a pilot user study was performed on 18 university students, in which observational and interview data were obtained during playtests. We found that most users enjoyed the game and were intrigued by the novelty of interacting with the facial expression gameplay mechanic, but we also uncovered problems with feedback provision and the dynamic difficulty adjustment mechanism. These findings provide valuable insights for other researchers and practitioners working on similar crowdsourcing games with a purpose, as well as for the further development of BeFaced.
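
    BeFaced's actual adjustment mechanism is not described in the abstract; below is a minimal sketch of a generic dynamic difficulty adjustment loop of the kind it refers to. The target success rate and step size are assumptions.

```python
# Minimal, hypothetical dynamic difficulty adjustment (DDA) loop. BeFaced's
# mechanism is not published here, so the 0.7 target and 0.05 step are
# illustrative choices, not the game's parameters.
def adjust_difficulty(difficulty: float, recent_success_rate: float,
                      target: float = 0.7, step: float = 0.05) -> float:
    """Nudge difficulty up when the player succeeds too often, down otherwise."""
    if recent_success_rate > target:
        difficulty += step
    elif recent_success_rate < target:
        difficulty -= step
    return max(0.0, min(1.0, difficulty))  # clamp to [0, 1]

# Example: a player clearing 90% of recent tile matches gets a harder board.
print(adjust_difficulty(0.5, 0.9))  # -> 0.55
```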

    Affective adaptation design for better gaming experiences

    Affective adaptation is a creative way for game designers to add an extra layer of engagement to their productions. When players’ emotions are an explicit factor in mechanics design, endless possibilities for imaginative gameplay emerge. While gaining popularity, existing affective game research mostly runs controlled experiments in restrictive settings and relies on one or more specialist devices for measuring the player’s emotional state. These conditions, albeit effective, are not necessarily realistic. Moreover, the simplified narratives and intrusive wearables may not be suitable for players. This exploratory study investigates delivering an immersive affective experience in the wild with minimal requirements, in an attempt to let the average developer reach the average player. A puzzle game was created with a rich narrative and creative mechanics. It employs both explicit and implicit adaptation and requires only a web camera. Participants played the game on their own machines in various settings. While the game was rated feasible, very engaging, and enjoyable, it remains questionable whether a fully immersive experience was delivered, due to the limited sample size.
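
    As a rough illustration of the explicit/implicit adaptation split described above, the following hypothetical sketch assumes a webcam-based emotion estimator; estimate_emotion is a stand-in, not the study's actual model, and the thresholds are invented.

```python
# Hedged sketch of explicit vs. implicit affective adaptation driven only by
# a webcam. The estimator below is a random placeholder for a real facial
# emotion model; the adaptation rules are illustrative assumptions.
import random

def estimate_emotion(frame) -> dict:
    """Placeholder for a webcam-based facial emotion estimator."""
    return {"joy": random.random(), "frustration": random.random()}

def adapt(game_state: dict, emotion: dict) -> dict:
    # Implicit adaptation: quietly tune assistance from inferred frustration.
    if emotion["frustration"] > 0.7:
        game_state["puzzle_hints"] += 1
    # Explicit adaptation: the narrative openly reacts to the player's mood.
    game_state["npc_dialogue"] = ("cheerful" if emotion["joy"] > 0.5
                                  else "reassuring")
    return game_state

state = {"puzzle_hints": 0, "npc_dialogue": ""}
state = adapt(state, estimate_emotion(frame=None))
print(state)
```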

    CGAMES'2009


    A Design Exploration of Affective Gaming

    Physiological sensing has been a prominent fixture in games user research (GUR) since the late 1990s, when researchers began to explore its potential to enhance and understand experience within digital game play. Since these early days, it has been widely argued that “affective gaming”, in which gameplay is influenced by a player’s emotional state, can enhance player experience by integrating physiological sensors into play. In this thesis, I conduct a design exploration of the field of affective gaming: first, by systematically reviewing the field and creating a framework (the affective game loop) to classify existing literature; and second, by presenting two design probes, In the Same Boat and Commons Sense, that explore the design space of affective games contextualized within the affective game loop. The systematic review explored this unique design space, opening up future avenues for exploration. The affective game loop classifies the physiological signals and sensors most commonly used in prior literature according to how they are mapped into gameplay itself. Findings suggest that physiological input mappings can be action-based (e.g., affecting mechanics in the game such as the movement of the character) or context-based (e.g., affecting environmental or difficulty variables in the game). Findings also suggested that although the field has existed for decades, there has yet to be a commercial success, raising the question: does physiological interaction really heighten player experience? This question instigated the design of the two probes, which explore ways to implement these mappings and effectively heighten player experience. In the Same Boat (Design Probe One) is an embodied mirroring game designed to promote intimate interaction, using players’ breathing rate and facial expressions to control the movement of a canoe down a river. Findings suggest that playing In the Same Boat fostered affiliation between the players, and that while the embodied controls were less intuitive, people enjoyed them more, indicating the potential of embodied controls to foster social closeness in synchronized play over a distance. Commons Sense (Design Probe Two) is a communication modality intended to heighten audience engagement and to capture and communicate the audience experience; it uses webcam-based heart rate detection software that takes the average of each spectator’s heart rate as input to affect in-game variables such as lighting, sound design, and game difficulty. Findings suggest that Commons Sense successfully facilitated the communication of audience response in an online entertainment context, where these social cues and signals are inherently diminished, and that it can both enhance a play experience and offer a novel way to communicate. Overall, the findings from this design exploration show that affective games offer a novel way to deliver a rich gameplay experience for the player.
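
    As a rough sketch of the context-based mapping that Commons Sense exemplifies, the snippet below averages spectators' heart rates into a normalised arousal value and maps it onto environment variables. The resting/peak bounds and the specific mappings are illustrative assumptions, not the thesis's implementation.

```python
# Hypothetical context-based physiological mapping: audience heart rates
# drive environmental variables (lighting, music, difficulty) rather than
# player actions. All constants are illustrative.
from statistics import mean

def audience_arousal(heart_rates_bpm: list[float],
                     resting: float = 70.0, peak: float = 120.0) -> float:
    """Map the mean audience heart rate to a normalised arousal in [0, 1]."""
    avg = mean(heart_rates_bpm)
    return max(0.0, min(1.0, (avg - resting) / (peak - resting)))

def apply_context_mapping(arousal: float) -> dict:
    # Context-based mapping: affect the game's environment, not its mechanics.
    return {
        "light_intensity": 0.3 + 0.7 * arousal,   # tenser audience, harsher light
        "music_tempo_scale": 1.0 + 0.5 * arousal,
        "enemy_spawn_rate": 1.0 + arousal,
    }

print(apply_context_mapping(audience_arousal([82, 95, 88, 101])))
```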

    Experience-driven procedural content generation (extended abstract)

    Procedural content generation is an increasingly important area of technology within modern human-computer interaction, with direct applications in digital games, the semantic web, and interface, media, and software design. The personalization of experience via the modeling of the user, coupled with the appropriate adjustment of content according to user needs and preferences, are important steps towards effective and meaningful content generation. This paper introduces a framework for procedural content generation driven by computational models of user experience, which we name Experience-Driven Procedural Content Generation. While the framework is generic and applicable to various subareas of human-computer interaction, we employ games as an indicative example of content-intensive software that enables rich forms of interaction. The research was supported, in part, by the FP7 ICT projects C2Learn (318480) and iLearnRW (318803).
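
    The experience-driven PCG idea can be illustrated as a toy generate-evaluate-select cycle; the generator and experience model below are invented stand-ins for the framework's components, not the paper's actual system.

```python
# Minimal sketch of an experience-driven PCG loop: generate candidate
# content, score it with a computational model of player experience, keep
# the best candidate. Both components here are toy assumptions.
import random

def generate_level() -> dict:
    return {"enemies": random.randint(0, 10), "powerups": random.randint(0, 5)}

def predicted_experience(level: dict, player_model: dict) -> float:
    """Toy experience model: prefer levels near the player's preferred challenge."""
    challenge = level["enemies"] - level["powerups"]
    return -abs(challenge - player_model["preferred_challenge"])

player_model = {"preferred_challenge": 4}
best = max((generate_level() for _ in range(100)),
           key=lambda lvl: predicted_experience(lvl, player_model))
print(best)
```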

    Virtual environments promoting interaction

    Virtual reality (VR) has been widely researched in academia and is now breaking into industry. Most companies do not have access to this technology as a collaboration tool because such solutions usually require specific devices that are not at hand for the common office user. Other collaboration platforms exist based on video, speech, and text, but VR allows users to share the same 3D space. In this 3D space, functionalities or information can be added that would not be possible in a real-world environment, something intrinsic to VR. This dissertation produced a 3D framework that promotes nonverbal communication, which plays a fundamental role in human interaction and is mostly based on emotion. In academia, confusion is known to influence learning gains if it is properly managed. We designed a study to evaluate how lexical, syntactic, and n-gram features influence perceived confusion, and found results (not statistically significant) suggesting that it is possible to build a machine learning model that predicts the level of confusion from these features. This model was used to manipulate the script of a given presentation, and user feedback shows a trend whereby manipulating these features, and theoretically lowering the level of confusion in the text, not only reduces reported confusion but also increases the reported sense of presence. Another contribution of this dissertation comes from the intrinsic features of a 3D environment, in which one can carry out actions that are not possible in the real world. We designed an automatic adaptive lighting system that reacts to the perceived engagement of the user. This hypothesis was partially rejected, as the results run contrary to what we hypothesized but lack statistical significance. Three lines of research may stem from this dissertation. First, more complex features, such as syntax trees, could be used to train the machine learning model. Also, in an Intelligent Tutoring System, this model could adjust the avatar’s speech in real time if fed by a real-time confusion detector. In a social scenario, the set of basic emotions is well suited and can enrich it; facial emotion recognition can extend this effect to the avatar’s body, fuelling this synchronization and increasing the sense of presence. Finally, this dissertation was based on the premise of using ubiquitous devices, but with the rapid evolution of technology, we should consider that new devices will be present in offices. This opens new possibilities for other modalities.
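
    As an illustration of the kind of text-feature model the dissertation describes, the sketch below predicts perceived confusion from unigram/bigram counts. The feature set, the Ridge regressor, and the data are assumptions, not the dissertation's actual pipeline.

```python
# Hypothetical sketch: predicting perceived confusion of presentation text
# from n-gram features. Model choice and training data are illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Ridge

sentences = [
    "The framework renders avatars in a shared 3D space.",
    "Notwithstanding orthogonal concerns, latent modality priors interleave.",
    "Users join the room and see each other's gestures.",
    "Hereinafter the aforementioned desiderata shall be operationalised.",
]
confusion = [0.1, 0.9, 0.2, 0.8]  # hypothetical perceived-confusion ratings

vectorizer = CountVectorizer(ngram_range=(1, 2))  # unigram + bigram features
X = vectorizer.fit_transform(sentences)
model = Ridge().fit(X, confusion)

# Score a new sentence; a real system would rewrite high-scoring spans.
print(model.predict(vectorizer.transform(["The avatar waves at the user."])))
```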