
    Inferring player experiences using facial expressions analysis

    © 2014 ACM. Understanding player experiences is central to game design. Video capture of players is a common practice for obtaining rich, reviewable data for analysing these experiences, yet little work has investigated ways of preprocessing the video for a more efficient analysis process. This paper consolidates and extends prior work validating the feasibility of automated facial expression analysis as a natural, quantitative method for evaluating player experiences. A study of participants playing a first-person puzzle shooter (Portal 2) and a social drawing trivia game (Draw My Thing) showed that facial expressions exhibit rich detail for inferring player experiences. Significant correlations were also observed between facial expression intensities and self-reports from the Game Experience Questionnaire; in particular, the challenge dimension consistently showed positive correlations with anger and joy. The paper concludes with a case for wider application of computer vision in video analyses of gameplay.
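The correlation analysis the abstract describes can be sketched as below. The intensity and questionnaire values are invented placeholders, not the study's data; only the method (a Pearson correlation between per-participant expression intensities and GEQ scores) follows the abstract.

```python
# Hypothetical sketch: correlating per-participant facial expression
# intensities with Game Experience Questionnaire (GEQ) self-reports.
# All numbers below are illustrative placeholders, not the study's data.
import numpy as np
from scipy.stats import pearsonr

# Mean "anger" intensity per participant (e.g. from an automated facial
# coding tool) and the matching GEQ challenge scores (0-4 scale assumed).
anger_intensity = np.array([0.12, 0.35, 0.20, 0.48, 0.31, 0.55, 0.27, 0.41])
challenge_score = np.array([1.0, 2.5, 1.5, 3.5, 2.0, 4.0, 2.0, 3.0])

# A significant positive r would mirror the reported challenge-anger link.
r, p = pearsonr(anger_intensity, challenge_score)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
```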

    Tune in to your emotions: a robust personalized affective music player

    The emotional power of music is exploited in a personalized affective music player (AMP) that selects music for mood enhancement. A biosignal approach measures listeners’ personal emotional reactions to their own music as input for affective user models. Regression and kernel density estimation are applied to model the physiological changes the music elicits. Using these models, personalized music selections can be made towards an affective goal state. The AMP was validated in real-world trials over the course of several weeks. Results show that the models cope with noisy situations and handle large inter-individual differences in the music domain. The AMP augments music listening by enabling automated affect guidance. The approach provides valuable insights for affective computing and user modeling, for which the AMP is a suitable carrier application.
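One way to read the kernel-density idea is the following sketch. The song names, signal values, and selection rule are assumptions for illustration, not the authors' implementation: each song's past physiological responses are modelled with a KDE, and the song whose modelled response is densest at the affective goal state is selected.

```python
# Illustrative sketch (not the AMP authors' code): kernel density
# estimation over a listener's past physiological responses per song,
# used to pick the song best matching an affective goal state.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Hypothetical normalised skin-conductance samples recorded while the
# listener previously played each song.
responses = {
    "song_a": rng.normal(0.2, 0.05, 50),   # historically relaxing
    "song_b": rng.normal(0.7, 0.05, 50),   # historically arousing
}

goal = 0.65  # desired arousal level for mood enhancement
# Score each song by the estimated density of its responses at the goal.
scores = {name: gaussian_kde(x)(goal)[0] for name, x in responses.items()}
best = max(scores, key=scores.get)
print(best)  # the song whose past responses cluster nearest the goal
```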

    Experience-driven procedural content generation (extended abstract)

    Procedural content generation is an increasingly important area of technology within modern human-computer interaction, with direct applications in digital games, the semantic web, and interface, media and software design. Personalizing experience by modeling the user, and adjusting content appropriately to user needs and preferences, are important steps towards effective and meaningful content generation. This paper introduces a framework for procedural content generation driven by computational models of user experience, which we name Experience-Driven Procedural Content Generation. While the framework is generic and applicable to various subareas of human-computer interaction, we employ games as an indicative example of content-intensive software that enables rich forms of interaction. The research was supported, in part, by the FP7 ICT projects C2Learn (318480) and iLearnRW (318803).
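The generate-model-adjust loop the framework describes can be sketched in miniature. The "content" (a single difficulty parameter), the experience model, and the player-skill target are all invented stand-ins; only the loop structure (generate candidates, score them with an experience model, serve the best) reflects the abstract.

```python
# Toy sketch of an experience-driven PCG loop: candidate content is
# generated, scored by a computational model of player experience, and
# the best candidate is served. Every concrete choice here (difficulty
# as the content parameter, the flow-style model) is an assumption.
import random

def experience_model(difficulty, player_skill):
    # Assumed model: predicted engagement peaks when difficulty
    # matches the player's skill (an inverted-U, flow-like shape).
    return 1.0 - abs(difficulty - player_skill)

def generate_content(rng, n_candidates=20):
    # Candidate levels, each described by a difficulty in [0, 1].
    return [rng.uniform(0.0, 1.0) for _ in range(n_candidates)]

def edpcg_step(player_skill, rng):
    candidates = generate_content(rng)
    return max(candidates, key=lambda d: experience_model(d, player_skill))

rng = random.Random(42)
chosen = edpcg_step(player_skill=0.6, rng=rng)
# The chosen difficulty should land near the player's modelled skill.
```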

    Player Modeling

    Player modeling is the study of computational models of players in games. This includes the detection, modeling, prediction and expression of human player characteristics which are manifested through cognitive, affective and behavioral patterns. This chapter introduces a holistic view of player modeling and provides a high-level taxonomy and discussion of the key components of a player’s model. The discussion focuses on a taxonomy of approaches for constructing a player model, the available types of data for the model’s input, and a proposed classification for the model’s output. The chapter also provides a brief overview of some promising applications and a discussion of the key challenges player modeling currently faces, which are linked to the input, the output and the computational model.
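One concrete instance of the input/output pairing the chapter taxonomises is a model mapping behavioral input (gameplay features) to an affective output label. The features, the labelling rule, and the choice of logistic regression below are illustrative assumptions, not the chapter's prescription.

```python
# Schematic sketch of one player-modeling pipeline: behavioral input
# features predicting an affective output class. Synthetic data only;
# the feature names and labelling rule are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 200
# Hypothetical per-session features: deaths per minute, retry rate.
X = rng.random((n, 2))
# Invented rule standing in for ground-truth affect annotations:
# many deaths plus many retries -> "frustrated" (1), else "engaged" (0).
y = (X.sum(axis=1) > 1.0).astype(int)

model = LogisticRegression().fit(X, y)
acc = model.score(X, y)  # training accuracy on the synthetic data
```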

    Identifying meaningful facial configurations during iterative prisoner’s dilemma games

    The contraction and relaxation of facial muscles in humans is widely assumed to fulfil communicative and adaptive functions. However, to date most work has focussed either on individual muscle movements (action units) in isolation or on a small set of configurations commonly assumed to express “basic emotions”. As such, it is as yet unclear what information is communicated between individuals during naturalistic social interactions and how contextual cues influence facial activity occurring in these exchanges. The present study investigated whether consistent patterns of facial action units occur during dyadic iterated prisoner’s dilemma games, and what these patterns of facial activity might mean. Using exploratory and confirmatory factor analyses, we identified three distinct and consistent configurations of facial musculature change across three different datasets. These configurations were associated with specific gameplay outcomes, suggesting that they perform psychologically meaningful, context-related functions. The first configuration communicated enjoyment and the second communicated affiliation and appeasement, both indicating cooperative intentions after cooperation or defection respectively. The third configuration communicated disapproval and encouraged social partners not to defect again. Future work should validate the occurrence and functionality of these facial configurations across other kinds of social interaction.
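The factor-analytic step can be sketched on synthetic data. The action unit codes and latent structure below (AU6+AU12 co-occurring as an "enjoyment" configuration, AU4 as "disapproval") are illustrative assumptions; the study's actual configurations and loadings differ.

```python
# Hedged sketch of exploratory factor analysis over facial action unit
# (AU) intensities, recovering co-occurring configurations. The data is
# synthetic with two latent patterns built in; AU labels are assumptions.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 300
enjoy = rng.random(n)       # latent "enjoyment" factor
disapprove = rng.random(n)  # latent "disapproval" factor

def noise():
    return rng.normal(0, 0.05, n)

X = np.column_stack([
    enjoy + noise(),        # AU6  (cheek raiser)
    enjoy + noise(),        # AU12 (lip corner puller)
    disapprove + noise(),   # AU4  (brow lowerer)
])

fa = FactorAnalysis(n_components=2, random_state=0).fit(X)
loadings = fa.components_  # shape (2 factors, 3 AUs)
# Expectation: AU6 and AU12 load together on one factor, AU4 on the other.
```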

    Influence of Selected Factors on a Counselor\u27s Attention Level to and Counseling Performance with a Virtual Human in a Virtual Counseling Session

    Virtual humans serve as role-players in social skills training environments simulating situational face-to-face conversations. Previous research indicates that virtual humans in instructional roles can increase a learner's engagement and motivation towards the training. Left unaddressed is whether the learner looks at the virtual human as one would in a human-to-human, face-to-face interaction. Using a modified version of the Emergent Leader Immersive Training Environment (ELITE-Lite), this study tracks visual attention and other behavior of 120 counselor trainees counseling a virtual human role-playing a counselee. Specific study elements include: (1) the counselor's level of visual attention toward the virtual counselee; (2) how changes to the counselor's viewpoint may influence the counselor's visual focus; and (3) how levels of the virtual human's behavior may influence the counselor's visual focus. Secondary considerations include aspects of learner performance, acceptance of the virtual human, and impacts of age and rank. Result highlights indicate that counselor visual attentional behavior could be separated into two phases: when the virtual human was speaking and when it was not. When the virtual human is speaking, the counselor's primary visual attention is on the counselee but is also split toward the pre-scripted responses required for the training session. During the non-speaking phase, the counselor's visual focus was on the pre-scripted responses required for training. Other findings included that participants did not consider this to be like a conversation with a human, but they indicated acceptance of the virtual human as a partner within the training environment and considered the simulation a useful experience. Additionally, the research indicates behavior may differ by age or rank. Future study and design considerations for enhancements to social skills training environments are provided.

    How accurately can other people infer your thoughts -- and does culture matter?

    This research investigated how accurately people infer what others are thinking after observing a brief sample of their behaviour, and whether culture/similarity is a relevant factor. Target participants (14 British and 14 Mediterranean) were cued to think about either positive or negative events they had experienced. Subsequently, perceiver participants (16 British and 16 Mediterranean) watched videos of the targets thinking about these things. Perceivers in both groups were significantly accurate in judging when targets had been cued to think of something positive versus something negative, indicating notable inferential ability. Additionally, Mediterranean perceivers were better than British perceivers at making such inferences, irrespective of the targets’ nationality, a difference statistically accounted for by corresponding group differences in levels of independently measured collectivism. The results point to the need for further research into the possibility that being reared in a collectivist culture fosters ability in interpreting others’ behaviour.

    Video summarisation: A conceptual framework and survey of the state of the art

    This is the post-print (final draft post-refereeing) version of the article. Copyright © 2007 Elsevier Inc. Video summaries provide condensed and succinct representations of the content of a video stream through a combination of still images, video segments, graphical representations and textual descriptors. This paper presents a conceptual framework for video summarisation derived from the research literature and used as a means for surveying the research literature. The framework distinguishes between video summarisation techniques (the methods used to process content from a source video stream to achieve a summarisation of that stream) and video summaries (outputs of video summarisation techniques). Video summarisation techniques are considered within three broad categories: internal (analyse information sourced directly from the video stream), external (analyse information not sourced directly from the video stream) and hybrid (analyse a combination of internal and external information). Video summaries are considered as a function of the type of content they are derived from (object, event, perception or feature based) and the functionality offered to the user for their consumption (interactive or static, personalised or generic). It is argued that video summarisation would benefit from greater incorporation of external information, particularly user-based information that is unobtrusively sourced, in order to overcome longstanding challenges such as the semantic gap and to provide video summaries that have greater relevance to individual users.
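A minimal example of an "internal" technique in the survey's sense is keyframe selection from the video stream itself. The per-frame feature vectors (standing in for colour histograms) and the distance threshold below are illustrative assumptions.

```python
# Minimal sketch of an internal video summarisation technique: keep a
# frame as a keyframe whenever its features differ enough from the last
# kept keyframe. Frames are stood in for by small feature vectors.
import numpy as np

def select_keyframes(features, threshold=0.5):
    """Return indices of frames that start a visually new segment."""
    keep = [0]  # always keep the first frame
    for i in range(1, len(features)):
        # L1 distance to the most recently kept keyframe
        if np.abs(features[i] - features[keep[-1]]).sum() > threshold:
            keep.append(i)
    return keep

# Three synthetic "shots": repeated feature vectors with jumps between.
frames = np.array([[0.1, 0.9]] * 4 + [[0.8, 0.2]] * 4 + [[0.4, 0.5]] * 4)
print(select_keyframes(frames))  # → [0, 4, 8]
```

A static summary would then be the keyframes themselves; external or hybrid techniques would fold in information not present in the stream, such as viewer behaviour.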