
    EMOTIONS RECOGNITION IN VIDEO GAME PLAYERS USING PHYSIOLOGICAL INFORMATION

    Video games are interactive software able to arouse different kinds of emotions in players. Usually, the game designer tries to define a set of game features intended to entertain, engage, and/or educate consumers. Through the gameplay, the narrative, and the game environment, a video game interacts with the players' intellect and emotions. Thanks to the technological developments of recent years, the gaming industry has grown to become one of the most important entertainment markets. The scientific community and private companies have put considerable effort into the technical aspects as well as into the interaction between players and the video game. Concerning game design, many theories have been proposed to define guidelines for designing games able to arouse specific emotions in consumers. These approaches mainly use interviews or observations to judge their effectiveness through qualitative data. Some works are based on empirical studies of the emotional states of the players themselves, using quantitative data. However, these studies usually treat the data analysis as a classification problem driven mainly by game events. Our goal is to understand how the feelings experienced by the players can be automatically deduced, and how these emotional states can be used to improve game quality. To pursue this purpose, we have measured mental states using physiological signals, obtaining a set of quantitative values used to identify the players' emotions. The most common ways to identify emotions are to use a discrete set of labels (e.g., joy, anger) or to assess them inside an n-dimensional vector space. Although the most natural way to describe emotions is to refer to them by name, the latter approach provides a quantitative result that can be used to define the new game status. In this thesis, we propose a framework for the automatic assessment, using physiological data, of emotions in a 2-dimensional space structured by valence and arousal vectors. The former may vary between pleasure and displeasure, while the latter defines the level of physiological activation. As a consequence, we have considered the following physiological data as the most effective for inferring the players' mental states: electrocardiography (ECG), electromyography on 5 facial muscles (facial EMG), galvanic skin response (GSR), and respiration intensity/rate. During a set of game sessions, we have recorded a video of the player's face and of her gameplay. To acquire the affective information, we have then shown the recorded video and audio to the player and asked her/him to self-assess the emotional state over the entire game on the valence and arousal vectors presented above.
    Starting from this framework, we have conducted two sets of experiments. In the first experiment, our aim was to validate the procedure. We have collected the data of 10 participants while they played 4 platform games. We have also analyzed the data to identify the emotional patterns of the players during the gaming sessions. The analysis has been conducted in two directions: individual analysis (to find the physiological pattern of an individual player) and collective analysis (to find the generic patterns of the sample population). The goal of the second experiment was to create a dataset of physiological information from 33 players and to extend the data analysis and the results provided by the pilot study. We have asked the participants to play 2 racing games in two different environments: on a standard monitor and using a head-mounted display for Virtual Reality. After collecting the information needed to build the dataset, we have analyzed the data, focusing on individual analysis. In both analyses, the self-assessment and the physiological data have been used to infer the emotional state of the players in each moment of the game sessions, and to build a prediction model of players' emotions using Machine Learning techniques. The main contributions of this thesis are: to design a novel framework for studying the emotions of video game players, to develop an open-source architecture and a set of software able to acquire the physiological signals and the affective states, to create an affective dataset using racing video games as stimuli, to understand which physiological conditions could be the most relevant for determining the players' emotions, and to propose a method for the real-time prediction of a player's mental state during a video game session. The results suggest that it is possible to design a model that fits a player's characteristics and predicts her emotions. This could be an effective tool for game designers, who can use it to introduce innovative features into their games.
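    A minimal Python sketch of the kind of emotion-prediction step described above is given below: per-window physiological features are mapped to a self-assessed valence value with an off-the-shelf regressor. The feature set, the number of windows, and the choice of random-forest regression are illustrative assumptions, not the pipeline actually implemented in the thesis.

        # Hedged sketch: map per-window physiological features to self-assessed
        # valence with a generic regressor. All data here are synthetic.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)

        # Hypothetical per-window features: mean heart rate (ECG), facial EMG
        # amplitude, GSR level, respiration rate.
        X = rng.normal(size=(300, 4))
        # Hypothetical self-assessed valence in [-1, 1] for each window.
        y_valence = rng.uniform(-1.0, 1.0, size=300)

        model = RandomForestRegressor(n_estimators=200, random_state=0)
        scores = cross_val_score(model, X, y_valence, cv=5, scoring="r2")
        print("cross-validated R^2:", scores.mean())

    The same setup would be repeated for the arousal dimension, and fitting one model per participant corresponds to the individual analysis mentioned above.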

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy logic based method to track user satisfaction without the need for devices that monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games provide a unique opportunity to offer a tailored environment that better suits each user's needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed. We show that it is possible to use a software-only method to estimate user emotion.
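    As a rough illustration of the fuzzy appraisal idea behind FLAME, the sketch below maps a game event's desirability onto emotion intensities using triangular membership functions. The membership shapes, rule set, and emotion labels are assumptions made for illustration; they are not El-Nasr's published parameters or the system actually built for Unreal Tournament 2004.

        # Hedged sketch of fuzzy appraisal: an event's desirability in [-1, 1]
        # is fuzzified and mapped to emotion intensities by simple rules.

        def tri(x, a, b, c):
            """Triangular membership rising from a, peaking at b, falling to c."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def appraise(desirability):
            """Return fuzzy emotion intensities for an event's desirability."""
            undesirable = tri(desirability, -1.5, -1.0, 0.0)
            neutral = tri(desirability, -1.0, 0.0, 1.0)
            desirable = tri(desirability, 0.0, 1.0, 1.5)
            # Illustrative rule base: desirable events raise joy,
            # undesirable events raise distress.
            return {"joy": desirable, "distress": undesirable, "calm": neutral}

        print(appraise(0.7))   # e.g. the player picks up a power-up
        print(appraise(-0.9))  # e.g. the player loses a life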

    Exploiting physiological changes during the flow experience for assessing virtual-reality game design.

    Immersive experiences are considered the principal attraction of video games. Achieving a healthy balance between the game's demands and the user's skills is a particularly challenging goal. However, it is a coveted outcome, as it gives rise to the flow experience – a mental state of deep concentration and game engagement. When this balance fractures, the player may experience considerable disinclination to continue playing, which may be a product of anxiety or boredom. Thus, being able to predict manifestations of these psychological states in video game players is essential for understanding player motivation and designing better games. To this end, we build on earlier work to evaluate flow dynamics from a physiological perspective using a custom video game. Although advancements in this area are growing, little consideration has been given to the interpersonal characteristics that may influence the expression of the flow experience. In this thesis, two angles are introduced that remain poorly understood. First, the investigation is contextualized in the virtual reality domain, a technology that putatively amplifies affective experiences, yet is still insufficiently addressed in the flow literature. Second, a novel analysis setup is proposed, whereby the recorded physiological responses and psychometric self-ratings are combined to assess the effectiveness of our game's design in a series of experiments. The analysis workflow employed heart rate and eye blink variability, as well as electroencephalography (EEG), as objective measures of the game's impact, and self-reports as subjective measures. These inputs were submitted to a clustering method, cross-referencing the membership of the observations with the self-report ratings of the players they originated from. Next, this information was used to inform specialized decoders of the flow state from the physiological responses. This approach enabled classifiers to operate at high accuracy rates in all our studies. Furthermore, we addressed the reduction of a medium-resolution EEG sensor montage to the minimal set required to decode flow. Overall, our findings suggest that the approaches employed in this thesis have wide applicability and potential for improving game design practices.
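    As a rough illustration of the cluster-then-classify setup described above, the Python sketch below groups physiological observations, keeps those whose self-report label agrees with the majority label of their cluster, and trains a decoder on the cleaned set. The features, label names, and random data are assumptions for illustration, not the recorded signals or exact workflow of the thesis.

        # Hedged sketch: cluster physiological observations, cross-reference the
        # clusters with self-report labels, then train a flow decoder.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        # Hypothetical observations: heart-rate, blink, and EEG band-power features.
        X = rng.normal(size=(240, 6))
        # Hypothetical self-report labels: 0 = boredom, 1 = flow, 2 = anxiety.
        labels = rng.integers(0, 3, size=240)

        # Step 1: unsupervised grouping of the physiological responses.
        clusters = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(X)

        # Step 2: keep observations whose label matches their cluster's majority
        # label, then train a specialized decoder on the cleaned set.
        keep = np.zeros(len(X), dtype=bool)
        for c in range(3):
            members = clusters == c
            majority = np.bincount(labels[members]).argmax()
            keep |= members & (labels == majority)

        clf = SVC(kernel="rbf")
        print("accuracy:", cross_val_score(clf, X[keep], labels[keep], cv=5).mean())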

    Simulation of the visuo-motor processes in the tracking and interception of a tennis ball in play

    In sports, one might wish to test new ideas regarding player movement, tactics, or strategy without subjecting the athletes to possibly wasteful or even harmful habit formation. If a method of simulating the athlete can be devised, experiments might reasonably be conducted to evaluate the ideas independently of actual training or trial in the field. Simulation of a complex system generally begins with a long period of analysis. During this time there may be mathematical and programming explorations and constructions to sharpen and examine different approaches. Meetings are usually held by the participants to try to define the task and explore alternatives. Ideas are amplified, possibly discarded as not feasible, or incorporated into the system package. Gradually there evolves a tighter and more acceptable formulation using logical and mathematical expressions. (Preface, p. vii)

    Aerospace Medicine and Biology: A continuing bibliography with indexes, supplement 267, January 1985

    This publication is a cumulative index to the abstracts contained in Supplements 255 through 266 of Aerospace Medicine and Biology: A Continuing Bibliography. It includes seven indexes: subject, personal author, corporate source, foreign technology, contract number, report number, and accession number.

    Temporal integration of loudness as a function of level


    The Impact of Sound on Virtual Landscape Perception: An Empirical Evaluation of Aural-Visual Interaction for 3D Visualization

    An understanding of quantitative and qualitative landscape characteristics is necessary to successfully articulate intervention or change in the landscape. In landscape planning and design, 3D visualizations have been used to successfully communicate various aspects of landscape to a diverse population, though they have been shown to lag behind real-world experience in perceptual experiments. There is evidence that engaging other senses can alter the perception of 3D visualizations, which this thesis used as a departure point for the research project. Three research questions guide the investigation. The first research question is: How do fundamental elements in visualizations (i.e. terrain, vegetation and built form) interact with fundamental sound types (i.e. anthropogenic, mechanical and natural) to affect perceived realism of, and preference for, 3D landscape visualization? The research used the empirical methods of a controlled experiment and statistical analysis of quantitative survey responses to examine perceptual responses to the interaction of aural and visual stimuli in St. James's Park, London, UK. The visualizations were sourced from Google Earth and the sounds recorded in situ, with Google Earth chosen because it is used increasingly in landscape planning and design processes, though it has received very little perceptual research attention. The second research question is: Do different user characteristics interact with combined aural-visual stimuli to alter perceived realism and preferences for 3D visualization? The final research question emerged out of the experimental design and concentrates on research methodology: How effective is the Internet for aural-visual data collection compared to the laboratory setting? The results of the quantitative analysis can be summarized as follows. For research question 1, the results show that sound alters 3D visualization perception both positively and negatively, with the effect varying by landscape element. For all visual conditions, mechanical sound significantly lowers preference. For visualizations showing terrain only, perceived realism and preference are significantly lowered by anthropogenic sound and significantly raised by natural sound. For visualizations showing a combination of terrain and built form, anthropogenic and mechanical sound significantly raise perceived realism. For visualizations showing a combination of terrain, vegetation and some built form, a more complicated interaction occurs for realism, moderated by the amount of built form in the scene: with no buildings in the scene, traffic and speech significantly lower realism ratings in similar ways, while with a small amount of built form visible, speech significantly raises realism ratings. Of the three visual conditions, preference in this last one was lowered the most by anthropogenic and mechanical sound. For research question 2, the results confirm that perceived realism can vary with gender and first language, and preference with age, first language, cultural and professional background, and 3D familiarity. Finally, for research question 3 and its implications for Internet-based multisensory experiments, there is strong evidence that audio hardware and experimental condition (laboratory vs. online) do not significantly alter realism and preference ratings, though larger display sizes can have a significant but very small effect on preference ratings (+/- 0.08 on a 5-point scale).
    The results indicate that sound significantly alters the perception of realism and preference for landscape simulated via 3D visualizations, with the congruence of aural and visual stimuli having a strong impact on both perceptual responses. The results provide important empirical evidence for future research to build upon, and raise important questions relating to the authenticity of landscape experience, particularly when relying solely on visual material, as visuals alone do not accurately simulate landscape experience. In addition, the research confirms the cross-sensory nature of perception in virtual environments. As a result, the inclusion of sound in landscape visualization and aesthetics research is concluded to be of critical importance. The results suggest that, when using sound with 3D visualizations, the sound content should match the visualized material, and that sounds containing human speech should be avoided unless there is a very strong reason to use them (e.g. there are humans in the visualization). The final chapter discusses opportunities for integrating sound with 3D visualizations in order to increase perceived realism and preference in landscape planning and design processes, and concludes with areas for future research.
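    As a rough illustration of the kind of sound-by-visual factorial analysis such an experiment calls for, the Python sketch below runs a two-way ANOVA on realism ratings. The data frame is randomly generated and the factor levels are assumptions for illustration; this is not the study's survey data or its exact statistical procedure.

        # Hedged sketch: two-way ANOVA of realism ratings by sound type and
        # visual condition, on synthetic data.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        from statsmodels.formula.api import ols

        rng = np.random.default_rng(2)
        sounds = ["anthropogenic", "mechanical", "natural"]
        visuals = ["terrain", "terrain_built", "terrain_veg_built"]
        rows = [
            {"sound": s, "visual": v, "realism": int(rng.integers(1, 6))}
            for s in sounds for v in visuals for _ in range(30)
        ]
        df = pd.DataFrame(rows)

        # Main effects and interaction of sound and visual condition.
        model = ols("realism ~ C(sound) * C(visual)", data=df).fit()
        print(sm.stats.anova_lm(model, typ=2))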

    Cognitive Foundations for Visual Analytics
