
    Learning emotions in virtual environments

    Get PDF
    A modular hybrid neural network architecture, called SHAME, for emotion learning is introduced. The system learns from annotated data how the emotional state is generated and changes due to internal and external stimuli. Part of the modular architecture is domain independent and part must be adapted to the domain under consideration. The generation and learning of emotions is based on the event appraisal model. The architecture is implemented in a prototype consisting of agents trying to survive in a virtual world. An evaluation of this prototype shows that the architecture is capable of generating natural emotions and, furthermore, that training of the neural network modules in the architecture is computationally feasible.
    Keywords: hybrid neural systems, emotions, learning, agents
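    The abstract gives no implementation details, so the following is only a minimal sketch of the general idea it describes: learning a mapping from event-appraisal features to emotion intensities from annotated examples. The appraisal features, emotion labels, toy data, and single sigmoid layer are all assumptions for illustration, not the SHAME architecture itself.

```python
# Minimal sketch (not the SHAME architecture): learn a mapping from
# event-appraisal features to emotion intensities from annotated data.
# Feature names, emotion labels, and the toy data are hypothetical.
import numpy as np

APPRAISALS = ["desirability", "expectedness", "agency"]   # inputs per event
EMOTIONS = ["joy", "distress", "surprise"]                 # outputs to learn

# Annotated examples: appraisal vector -> target emotion intensities in [0, 1]
X = np.array([[ 0.9, 0.8, 1.0],    # desirable, expected, self-caused
              [-0.8, 0.7, 0.0],    # undesirable, expected, other-caused
              [ 0.6, 0.1, 0.0]])   # desirable, unexpected
Y = np.array([[0.9, 0.0, 0.1],
              [0.0, 0.8, 0.2],
              [0.7, 0.0, 0.8]])

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(len(APPRAISALS), len(EMOTIONS)))
b = np.zeros(len(EMOTIONS))

def forward(x):
    """Map appraisal features to emotion intensities through a sigmoid layer."""
    return 1.0 / (1.0 + np.exp(-(x @ W + b)))

for _ in range(2000):                       # plain gradient descent on MSE
    P = forward(X)
    grad = (P - Y) * P * (1.0 - P)          # gradient w.r.t. the pre-activation
    W -= 0.5 * X.T @ grad / len(X)
    b -= 0.5 * grad.mean(axis=0)

# Query the learned mapping for a new, unseen event appraisal.
print(dict(zip(EMOTIONS, forward(np.array([0.8, 0.2, 1.0])).round(2))))
```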

    Virtual Reality Games for Motor Rehabilitation

    Get PDF
    This paper presents a fuzzy-logic-based method to track user satisfaction without the need for devices to monitor users' physiological conditions. User satisfaction is key to any product's acceptance; computer applications and video games provide a unique opportunity to offer a tailored environment that better suits each user's needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature that suggests physiological measurements are needed. We show that it is possible to use a software-only method to estimate user emotion.
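    FLAME defines its own rule base and learning components; the sketch below only illustrates the general fuzzy-appraisal idea the abstract refers to. The membership functions, rule set, and emotion labels are invented for illustration and are not the model used in the paper.

```python
# Toy fuzzy appraisal-to-emotion mapping in the spirit of FLAME-style models.
# Membership functions, rules, and labels are hypothetical.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_desirability(d):          # d in [-1, 1]
    return {"undesirable": tri(d, -1.5, -1.0, 0.0),
            "neutral":     tri(d, -1.0,  0.0, 1.0),
            "desirable":   tri(d,  0.0,  1.0, 1.5)}

def fuzzify_expectation(e):           # e in [0, 1]: how expected the event was
    return {"unexpected": tri(e, -0.5, 0.0, 1.0),
            "expected":   tri(e,  0.0, 1.0, 1.5)}

def estimate_emotions(desirability, expectation):
    """Fire each rule to the degree its antecedents hold (min operator)."""
    d, e = fuzzify_desirability(desirability), fuzzify_expectation(expectation)
    return {
        "joy":      min(d["desirable"],   e["expected"]),
        "delight":  min(d["desirable"],   e["unexpected"]),
        "distress": min(d["undesirable"], e["expected"]),
        "dismay":   min(d["undesirable"], e["unexpected"]),
    }

# A fairly desirable but unexpected in-game event.
print(estimate_emotions(desirability=0.7, expectation=0.2))
```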

    Narrative Generation in Entertainment: Using Artificial Intelligence Planning

    Get PDF
    From the field of artificial intelligence (AI) there is a growing stream of technology capable of being embedded in software that will reshape the way we interact with our environment in our everyday lives. This 'AI software' is often used to tackle tasks that are otherwise dangerous or too meticulous for a human to accomplish. One particular area, explored in this paper, is for AI software to assist in supporting the enjoyable aspects of people's lives. Entertainment is one of these aspects, and it often includes storytelling in some form regardless of the medium, including television, films, video games, etc. This paper aims to explore the ability of AI software to automate the story-creation and story-telling process. This is part of the field of Automatic Narrative Generation (ANG), which aims to produce intuitive interfaces that support people (without any previous programming experience) in using tools to generate stories based on their ideas of the kinds of characters, intentions, events and spaces they want in the story. The paper includes details of such AI software created by the author that can be downloaded and used by the reader for this purpose. Applications of this kind of technology include the automatic generation of story lines for 'soap operas'.
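    The abstract frames narrative generation as AI planning: story events become planner actions with preconditions and effects, and a plan from an initial state to a goal state reads as a story outline. The sketch below shows that framing with a tiny breadth-first STRIPS-style planner; the domain (characters, actions, goal facts) is invented for illustration and is not the author's downloadable system.

```python
# Toy illustration of narrative generation as planning: breadth-first search
# over STRIPS-like actions whose solution reads as a story outline.
from collections import deque

# Each action: (name, preconditions, add effects, delete effects) as sets of facts.
ACTIONS = [
    ("hero travels to the castle", {"hero at village"}, {"hero at castle"}, {"hero at village"}),
    ("hero finds the key",         {"hero at castle"},  {"hero has key"},   set()),
    ("hero frees the prisoner",    {"hero at castle", "hero has key"}, {"prisoner freed"}, set()),
]

def plan(initial, goal):
    """Return the shortest action sequence whose effects reach all goal facts."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, story = frontier.popleft()
        if goal <= state:
            return story
        for name, pre, add, delete in ACTIONS:
            if pre <= state:                       # action applicable in this state
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, story + [name]))
    return None

for step in plan({"hero at village"}, {"prisoner freed"}):
    print(step)
```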

    A Trip to the Moon: Personalized Animated Movies for Self-reflection

    Full text link
    Self-tracking physiological and psychological data poses the challenge of presentation and interpretation. Insightful narratives for self-tracking data can motivate the user towards constructive self-reflection. One powerful form of narrative that engages audiences across cultures and age groups is the animated movie. We collected a week of self-reported mood and behavior data from each user and used Unity to create a personalized animation based on their data. We evaluated the impact of each user's video in a randomized controlled trial with a non-personalized animated video as the control. We found that personalized videos tend to be more emotionally engaging, encouraging greater and lengthier writing indicative of self-reflection about moods and behaviors, compared to non-personalized control videos.
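    The abstract describes a pipeline from a week of self-reports to a personalized Unity animation. As a rough sketch of the data-to-animation step only, the snippet below maps hypothetical daily mood entries to per-scene parameters that an engine could consume as JSON; the field names and mapping are assumptions, not the paper's actual pipeline.

```python
# Sketch of mapping a week of self-reported mood data to per-day scene
# parameters (e.g. exported as JSON for a Unity scene to read).
# Field names and the mapping itself are hypothetical.
import json

week = [                       # one self-report per day, mood on a 1-5 scale
    {"day": "Mon", "mood": 4, "sleep_hours": 7.5},
    {"day": "Tue", "mood": 2, "sleep_hours": 5.0},
    {"day": "Wed", "mood": 3, "sleep_hours": 6.5},
]

def scene_parameters(entry):
    """Map one day's report to simple visual parameters for that day's scene."""
    brightness = (entry["mood"] - 1) / 4          # darker scenes on low-mood days
    tempo = 0.5 + entry["sleep_hours"] / 16       # slower pacing after short sleep
    return {"day": entry["day"],
            "sky_brightness": round(brightness, 2),
            "animation_tempo": round(tempo, 2)}

print(json.dumps([scene_parameters(e) for e in week], indent=2))
```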