Learning emotions in virtual environments
A modular hybrid neural network architecture, called SHAME, for emotion learning is introduced. The system learns from annotated data how the emotional state is generated and changes due to internal and external stimuli. Part of the modular architecture is domain independent and part must be adapted to the domain under consideration. The generation and learning of emotions is based on the event appraisal model. The architecture is implemented in a prototype consisting of agents trying to survive in a virtual world. An evaluation of this prototype shows that the architecture is capable of generating natural emotions and, furthermore, that training of the neural network modules in the architecture is computationally feasible.
Keywords: hybrid neural systems, emotions, learning, agents
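As a toy illustration of the event-appraisal update described in the abstract above (not SHAME's actual implementation), the split between a domain-specific appraisal and a domain-independent state update might be sketched as follows. The appraisal table, the emotion dimensions (joy, fear) and all numeric constants are invented for illustration; in SHAME the update is learned by neural network modules rather than given by a fixed rule:

```python
def appraise(event):
    """Domain-specific part: map a raw event to appraisal variables
    (desirability, unexpectedness), both in [-1, 1]. Values are illustrative."""
    table = {
        "found_food":   (0.8, 0.3),
        "enemy_nearby": (-0.9, 0.7),
        "nothing":      (0.0, 0.0),
    }
    return table.get(event, (0.0, 0.0))

def update_state(state, event, decay=0.9, gain=0.5):
    """Domain-independent part: decay the previous emotional state and add
    the appraised contribution of the new event. Here a fixed linear rule
    stands in for the learned neural mapping."""
    desirability, unexpectedness = appraise(event)
    joy, fear = state
    joy = decay * joy + gain * max(desirability, 0.0)
    fear = decay * fear + gain * max(-desirability, 0.0) * (0.5 + 0.5 * unexpectedness)
    return (joy, fear)

# An agent's emotional state evolves as events arrive from the virtual world.
state = (0.0, 0.0)
for event in ["found_food", "nothing", "enemy_nearby"]:
    state = update_state(state, event)
```

The point of the split is that `update_state` could be reused across domains, while `appraise` must be re-adapted (or re-trained) for each virtual world.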
Agents for educational games and simulations
This book consists mainly of revised papers that were presented at the Agents for Educational Games and Simulation (AEGS) workshop held on May 2, 2011, as part of the Autonomous Agents and MultiAgent Systems (AAMAS) conference in Taipei, Taiwan. The 12 full papers presented were carefully reviewed and selected from various submissions. The papers are organized in topical sections on middleware applications, dialogues and learning, adaptation and convergence, and agent applications.
Exploring the Affective Loop
Research in psychology and neurology shows that both body and mind are
involved when experiencing emotions (Damasio 1994, Davidson et al.
2003). People are also very physical when they try to communicate their
emotions. Somewhere in between being consciously and unconsciously
aware of it ourselves, we produce both verbal and physical signs to make
other people understand how we feel. Simultaneously, this production of
signs involves us in a stronger personal experience of the emotions we
express.
Emotions are also communicated in the digital world, but there is little
focus on users' personal as well as physical experience of emotions in
the available digital media. In order to explore whether and how we can
expand existing media, we have designed, implemented and evaluated
/eMoto/, a mobile service for sending affective messages to others. With
eMoto, we explicitly aim to address both cognitive and physical
experiences of human emotions. Through combining affective gestures for
input with affective expressions that make use of colors, shapes and
animations for the background of messages, the interaction "pulls" the
user into an /affective loop/. In this thesis we define what we mean by
affective loop and present a user-centered design approach expressed
through four design principles inspired by previous work within Human
Computer Interaction (HCI) but adjusted to our purposes; /embodiment/
(Dourish 2001) as a means to address how people communicate emotions in
real life, /flow/ (Csikszentmihalyi 1990) to reach a state of
involvement that goes further than the current context, /ambiguity/ of
the designed expressions (Gaver et al. 2003) to allow for open-ended
interpretation by the end-users instead of simplistic, one-emotion
one-expression pairs and /natural but designed expressions/ to address
people's natural couplings between cognitively and physically
experienced emotions. We also present results from an end-user study of
eMoto indicating that subjects became both physically and emotionally
involved in the interaction and that the designed "openness" and
ambiguity of the expressions were appreciated and understood by our
subjects. Through the user study, we identified four potential design
problems that have to be tackled in order to achieve an affective loop
effect: the extent to which users /feel in control/ of the interaction,
/harmony and coherence/ between cognitive and physical expressions,
/timing/ of expressions and feedback in a communicational setting, and
effects of users' /personality/ on their emotional expressions and
experiences of the interaction.
Towards Learning 'Self' and Emotional Knowledge in Social and Cultural Human-Agent Interactions
Original article can be found at: http://www.igi-global.com/articles/details.asp?ID=35052 Copyright IGI. Posted by permission of the publisher. This article presents research towards the development of a virtual learning environment (VLE) inhabited by intelligent virtual agents (IVAs) and modeling a scenario of inter-cultural interactions. The ultimate aim of this VLE is to allow users to reflect upon and learn about intercultural communication and collaboration. Rather than predefining the interactions among the virtual agents and scripting the possible interactions afforded by this environment, we pursue a bottom-up approach whereby inter-cultural communication emerges from interactions with and among autonomous agents and the user(s). The intelligent virtual agents that inhabit this environment are expected to be able to broaden their knowledge about the world and other agents, which may be of different cultural backgrounds, through interactions. This work is part of a collaborative effort within a European research project called eCIRCUS. Specifically, this article focuses on our continuing research concerned with emotional knowledge learning in autobiographic social agents. Peer reviewed.
Affective interactions between expressive characters
When people meet in virtual worlds they are represented by computer animated characters that lack a variety of expression and can seem stiff and robotic. By comparison, human bodies are highly expressive; a casual observation of a group of people will reveal a large diversity of behaviour, different postures, gestures and complex patterns of eye gaze. In order to make computer mediated communication between people more like real face-to-face communication, it is necessary to add an affective dimension. This paper presents Demeanour, an affective semi-autonomous system for the generation of realistic body language in avatars. Users control their avatars, which in turn interact autonomously with other avatars to produce expressive behaviour. This allows people to have affectively rich interactions via their avatars.
Designing gestures for affective input: an analysis of shape, effort and valence
We discuss a user-centered approach to incorporating affective expressions in interactive applications, and argue for a design that addresses both body and mind. In particular, we have studied the problem of finding a set of affective gestures. Based on previous work in movement analysis and emotion theory [Davies, Laban and Lawrence, Russell], and a study of an actor expressing emotional states in body movements, we have identified three underlying dimensions of movements and emotions: shape, effort and valence. From these dimensions we have created a new affective interaction model, which we name the affective gestural plane model. We applied this model to the design of gestural affective input to a mobile service for affective messages.
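To make the three dimensions concrete, a minimal sketch of how a gesture described by shape and effort values, together with a valence reading, might be mapped to an emotion region is shown below. The quadrant boundaries, the emotion labels, and the idea of combining shape and effort into a single arousal score are all assumptions for illustration, not the model as published:

```python
def classify(shape, effort, valence):
    """Map a gesture to an illustrative emotion label.
    shape:   -1 (closed/small movement) .. 1 (open/large movement)
    effort:  -1 (light/slow)            .. 1 (strong/fast)
    valence: -1 (negative)              .. 1 (positive)
    """
    # Hypothetical simplification: treat open, energetic movement as high arousal.
    arousal = 0.5 * (shape + effort)
    if valence >= 0.0:
        return "excited" if arousal > 0.0 else "content"
    else:
        return "angry" if arousal > 0.0 else "sad"
```

A large, fast gesture with positive valence would land in the "excited" region, while a small, slow gesture with negative valence would land in "sad"; the valence dimension is what separates, say, "excited" from "angry" at the same movement energy.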