
    Interactive Embodied Agents for Cultural Heritage and Archaeological presentations

    In this paper, Maxine, a powerful engine for developing applications with embodied animated agents, is presented. The engine, based on open source libraries, enables multimodal real-time interaction with the user via text, voice, images and gestures. Maxine virtual agents can establish emotional communication with the user through facial expressions, modulation of the voice, and answers adapted to the information gathered by the system: noise level in the room, the observer's position, the observer's emotional state, etc. Moreover, the user's emotions are captured through images and taken into account. So far, Maxine virtual agents have been used as virtual presenters for Cultural Heritage and Archaeological shows.
    This work has been partially financed by the Spanish Dirección General de Investigación (General Directorate of Research), contract number TIN2007-63025, and by the Regional Government of Aragon through the WALQA agreement.
    Seron, F.; Baldassarri, S.; Cerezo, E. (2010). Interactive Embodied Agents for Cultural Heritage and Archaeological presentations. Virtual Archaeology Review, 1(1):181-184. https://doi.org/10.4995/var.2010.5143
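
    As a rough illustration of the kind of adaptation the abstract describes, the sketch below maps sensed room and observer context to presentation parameters. The field names, thresholds, and output values are hypothetical assumptions, not Maxine's actual API.

```python
from dataclasses import dataclass

@dataclass
class SensedContext:
    """Inputs the engine is described as gathering (field names are hypothetical)."""
    noise_level_db: float        # ambient noise level in the room
    observer_distance_m: float   # distance of the observer from the display
    observer_emotion: str        # e.g. "neutral", "happy", "bored"

def modulate_presentation(ctx: SensedContext) -> dict:
    """Map the sensed context to output parameters for the virtual presenter."""
    params = {"speech_gain": 1.0, "facial_expression": "neutral", "answer_style": "standard"}
    if ctx.noise_level_db > 65:          # raise the voice in a noisy room
        params["speech_gain"] = 1.4
    if ctx.observer_distance_m > 3.0:    # exaggerate the face for distant observers
        params["facial_expression"] = "exaggerated"
    if ctx.observer_emotion == "bored":  # adapt the answer style to the observer's state
        params["answer_style"] = "short_and_lively"
    return params

print(modulate_presentation(SensedContext(70.0, 4.0, "bored")))
```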

    A Mimetic Strategy to Engage Voluntary Physical Activity In Interactive Entertainment

    We describe the design and implementation of a vision-based interactive entertainment system that makes use of both involuntary and voluntary control paradigms. Unintentional input to the system from a potential viewer is used to drive attention-getting output and encourage the transition to voluntary interactive behaviour. The iMime system consists of a character animation engine based on the interaction metaphor of a mime performer that simulates non-verbal communication strategies, without spoken dialogue, to capture and hold the attention of a viewer. The system was developed in the context of a project studying care of dementia sufferers. Care for a dementia sufferer can place unreasonable demands on the time and attentional resources of their caregivers or family members. Our study contributes to the eventual development of a system aimed at providing relief to dementia caregivers, while at the same time serving as a source of pleasant interactive entertainment for viewers. The work reported here is also aimed at a more general study of the design of interactive entertainment systems involving a mixture of voluntary and involuntary control.
    Comment: 6 pages, 7 figures, ECAG08 workshop.
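
    The transition from involuntary attention-getting output to voluntary interaction can be pictured as a small state machine. The sketch below is only an illustration, with hypothetical phase names and camera-derived cues; it is not the iMime implementation.

```python
from enum import Enum, auto

class Phase(Enum):
    IDLE = auto()     # no viewer detected by the camera
    ATTRACT = auto()  # involuntary viewer input drives attention-getting mime output
    ENGAGED = auto()  # the viewer has transitioned to voluntary interaction

def next_phase(phase: Phase, viewer_present: bool, deliberate_gesture: bool) -> Phase:
    """Advance the interaction phase from two (assumed) vision-derived cues."""
    if not viewer_present:
        return Phase.IDLE
    if deliberate_gesture:
        return Phase.ENGAGED
    # A newly detected viewer moves the system into attract mode; otherwise keep the phase.
    return Phase.ATTRACT if phase is Phase.IDLE else phase

print(next_phase(Phase.IDLE, viewer_present=True, deliberate_gesture=False))   # Phase.ATTRACT
print(next_phase(Phase.ATTRACT, viewer_present=True, deliberate_gesture=True)) # Phase.ENGAGED
```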

    Multimodal Adapted Robot Behavior Synthesis within a Narrative Human-Robot Interaction

    In human-human interaction, three modalities of communication (i.e., verbal, nonverbal, and paraverbal) are naturally coordinated so as to enhance the meaning of the conveyed message. In this paper, we try to create a similar coordination between these modalities of communication in order to make the robot behave as naturally as possible. The proposed system uses a group of videos in order to elicit specific target emotions in a human user, upon which interactive narratives will start (i.e., interactive discussions between the participant and the robot around each video's content). During each interaction experiment, the humanoid expressive ALICE robot engages in the discussion and generates multimodal behavior adapted to the emotional content of the projected video, using speech, head-arm metaphoric gestures, and/or facial expressions. The interactive speech of the robot is synthesized using Mary-TTS (a text-to-speech toolkit), which is used, in parallel, to generate adapted head-arm gestures [1]. This synthesized multimodal robot behavior is evaluated by the interacting human at the end of each emotion-eliciting experiment. The obtained results validate the positive effect of the multimodality of the generated robot behavior on the interaction.
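
    A hedged sketch of how the three channels might be bundled for one robot turn is shown below. The emotion labels, gesture and expression names, and speech-rate values are assumptions made for illustration, not the mapping used in the paper.

```python
# Hypothetical per-emotion channel parameters (verbal, nonverbal, paraverbal).
BEHAVIOUR_MAP = {
    "joy":     {"gesture": "open_arms", "face": "smile",   "speech_rate": 1.10},
    "sadness": {"gesture": "head_down", "face": "frown",   "speech_rate": 0.85},
    "neutral": {"gesture": "idle_sway", "face": "relaxed", "speech_rate": 1.00},
}

def synthesise_turn(text: str, emotion: str) -> dict:
    """Bundle verbal, nonverbal and paraverbal parameters for one robot turn."""
    channels = BEHAVIOUR_MAP.get(emotion, BEHAVIOUR_MAP["neutral"])
    return {
        "speech": {"text": text, "rate": channels["speech_rate"]},  # would be sent to the TTS engine
        "gesture": channels["gesture"],                             # head-arm metaphoric gesture
        "face": channels["face"],                                   # facial expression
    }

print(synthesise_turn("That scene was really moving.", "sadness"))
```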

    Individuality and Contextual Variation of Character Behaviour for Interactive Narrative.

    This paper presents a system for generating non-verbal communication behaviour suitable for characters in interactive narrative. It is possible to customise the behaviour of individual characters using a system of character profiles. This allows characters to have a strong individuality and personality. The same profiles also allow the characters' behaviour to be altered in different contexts, allowing for suitably changing behaviour as the story unfolds.

    Real Time Virtual Humans

    The last few years have seen great maturation in the computation speed and control methods needed to portray 3D virtual humans suitable for real interactive applications. Various dimensions of real-time virtual humans are considered, such as appearance and movement, autonomous action, and skills such as gesture, attention, and locomotion. A virtual human architecture includes low-level motor skills, a mid-level PaT-Net parallel finite-state machine controller, and a high-level conceptual action representation that can be used to drive virtual humans through complex tasks. This structure offers a deep connection between natural language instructions and animation control.
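
    The three-layer structure (low-level motor skills, a mid-level parallel finite-state controller, and a high-level conceptual action representation) can be pictured with a minimal sketch. The class and skill names below are hypothetical and do not reproduce the PaT-Net API.

```python
class MotorSkills:
    """Low-level motor skills; each call would drive the underlying animation system."""
    def walk_to(self, target): print(f"walking to {target}")
    def reach_for(self, obj):  print(f"reaching for {obj}")
    def grasp(self, obj):      print(f"grasping {obj}")

class PickUpController:
    """Mid-level controller: a small state machine sequencing the motor skills."""
    def __init__(self, skills: MotorSkills):
        self.skills = skills

    def run(self, obj: str, location: str) -> None:
        for state in ("approach", "reach", "grasp"):
            if state == "approach":
                self.skills.walk_to(location)
            elif state == "reach":
                self.skills.reach_for(obj)
            else:
                self.skills.grasp(obj)

# A high-level conceptual action (e.g. parsed from "pick up the cup on the table")
# is handed to the mid-level controller, which expands it into low-level skills.
PickUpController(MotorSkills()).run(obj="cup", location="table")
```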

    Advanced Content and Interface Personalization through Conversational Behavior and Affective Embodied Conversational Agents

    Conversation is becoming one of the key interaction modes in HMI. As a result, conversational agents (CAs) have become an important tool in various everyday scenarios. From Apple and Microsoft to Amazon, Google, and Facebook, all have adopted their own variations of CAs. The CAs range from chatbots and 2D, cartoon-like implementations of talking heads to fully articulated embodied conversational agents performing interaction in various contexts. Recent studies in the field of face-to-face conversation show that the most natural way to implement interaction is through synchronized verbal and co-verbal signals (gestures and expressions). Namely, co-verbal behavior represents a major source of discourse cohesion. It regulates communicative relationships and may support or even replace verbal counterparts. It effectively retains the semantics of the information and gives a certain degree of clarity to the discourse. In this chapter, we will present a model of the generation and realization of more natural machine-generated output.
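
    One way to picture the synchronisation of co-verbal signals with their verbal counterparts is to align gesture strokes with the onsets of the words they accompany. The sketch below assumes word timings are supplied externally (for example by a TTS engine) and uses hypothetical gesture names.

```python
def align_gestures(words, word_onsets_s, gesture_plan):
    """Build a simple behaviour timeline.

    gesture_plan maps a word index to a gesture name; the gesture stroke is
    scheduled to land on that word's onset time (in seconds).
    """
    timeline = []
    for idx, gesture in sorted(gesture_plan.items()):
        timeline.append({"time_s": word_onsets_s[idx], "gesture": gesture, "word": words[idx]})
    return timeline

words = ["this", "one", "is", "much", "bigger"]
onsets = [0.00, 0.22, 0.40, 0.55, 0.80]   # assumed to come from the speech synthesiser
print(align_gestures(words, onsets, {4: "wide_beat"}))
```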

    Look me in the eyes: A survey of eye and gaze animation for virtual agents and artificial systems

    A person's emotions and state of mind are apparent in their face and eyes. As a Latin proverb states: "The face is the portrait of the mind; the eyes, its informers." This presents a huge challenge for computer graphics researchers in the generation of artificial entities that aim to replicate the movement and appearance of the human eye, which is so important in human-human interactions. This State of the Art Report provides an overview of the efforts made to tackle this challenging task. As with many topics in Computer Graphics, a cross-disciplinary approach is required to fully understand the workings of the eye in the transmission of information to the user. We discuss the movement of the eyeballs, eyelids, and the head from a physiological perspective and how these movements can be modelled, rendered and animated in computer graphics applications. Further, we present recent research from psychology and sociology that seeks to understand higher-level behaviours, such as attention and eye-gaze, during the expression of emotion or during conversation, and how they are synthesised in Computer Graphics and Robotics.
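
    As one small, concrete example of the modelling work surveyed here, saccade duration is often approximated as a linear function of saccade amplitude (the "main sequence"). The coefficients below are commonly cited approximations that vary across studies, and the smoothstep velocity profile is an assumption for illustration, not a prescription from the survey.

```python
def saccade_duration_ms(amplitude_deg: float) -> float:
    """Approximate saccade duration from amplitude (coefficients vary across studies)."""
    return 2.2 * amplitude_deg + 21.0

def saccade_angle(t_s: float, duration_ms: float, start_deg: float, end_deg: float) -> float:
    """Interpolate gaze angle over the saccade with an ease-in/ease-out profile."""
    s = min(max(t_s / (duration_ms / 1000.0), 0.0), 1.0)
    ease = 3 * s**2 - 2 * s**3          # smoothstep: slow start, fast middle, slow end
    return start_deg + (end_deg - start_deg) * ease

d = saccade_duration_ms(15.0)           # roughly 54 ms for a 15 degree saccade
print(d, saccade_angle(0.027, d, 0.0, 15.0))
```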

    Customisation and Context for Expressive Behaviour in the Broadband World

    The introduction of consumer broadband makes it possible to have an emotionally much richer experience of the internet. One way of achieving this is the use of animated characters endowed with emotionally expressive behaviour. This paper describes Demeanour, a framework for generating expressive behaviour, developed collaboratively by University College London and BT plc. The focus of this paper is on two important aspects: the customisation of expressive behaviour, and how expressive behaviour can be made context dependent. Customisation is a very popular feature of internet software, particularly as it allows users to present a specific identity to other users; the ability to customise behaviour will increase this sense of identity. Demeanour supports a number of user-friendly methods for customising behaviour, all of which use a character profile that ultimately controls the behaviour of the character. What counts as appropriate behaviour is highly dependent on the context: where you are, who you are talking to, and whether you have a particular job or role. It is therefore very important that characters are able to exhibit different behaviours in different contexts. Demeanour allows characters to load different profiles in different contexts and therefore produce different behaviour.
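
    The profile mechanism can be pictured as a lookup from (character, context) to behaviour parameters, with the character reloading its profile whenever the context changes. The profile fields and values below are hypothetical and are not the Demeanour data model.

```python
# Hypothetical profiles: the same character behaves differently in different contexts.
PROFILES = {
    ("anna", "office_meeting"):    {"posture": "upright", "gesture_rate": 0.3, "gaze_aversion": 0.20},
    ("anna", "chat_with_friends"): {"posture": "relaxed", "gesture_rate": 0.8, "gaze_aversion": 0.05},
}

class Character:
    def __init__(self, name: str):
        self.name = name
        self.profile = {}

    def enter_context(self, context: str) -> None:
        # Load the profile associated with this character and the current context.
        self.profile = PROFILES[(self.name, context)]

    def express(self, utterance: str) -> dict:
        # The same utterance is realised with the currently loaded behaviour parameters.
        return {"utterance": utterance, **self.profile}

anna = Character("anna")
anna.enter_context("office_meeting")
print(anna.express("I think we should postpone the release."))
anna.enter_context("chat_with_friends")
print(anna.express("I think we should postpone the release."))
```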