1,231 research outputs found

    Interactive Embodied Agents for Cultural Heritage and Archaeological presentations

    This paper presents Maxine, a powerful engine for developing applications with embodied animated agents. The engine, built on open source libraries, enables multimodal real-time interaction with the user via text, voice, images and gestures. Maxine virtual agents can establish emotional communication with the user through their facial expressions and voice modulation, adapting their answers to the information gathered by the system: the noise level in the room, the observer's position, the observer's emotional state, etc. The user's emotions are captured through images and taken into account. So far, Maxine virtual agents have been used as virtual presenters for Cultural Heritage and Archaeological shows. This work has been partially financed by the Spanish "Dirección General de Investigación" (General Directorate of Research), contract number TIN2007-63025, and by the Regional Government of Aragon through the WALQA agreement. Seron, F.; Baldassarri, S.; Cerezo, E. (2010). Interactive Embodied Agents for Cultural Heritage and Archaeological presentations. Virtual Archaeology Review. 1(1):181-184. https://doi.org/10.4995/var.2010.5143
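    To illustrate the kind of context-driven adaptation described above, the following Python sketch maps sensed context (room noise, observer distance, detected emotion) to the agent's expressive output. The class names, fields and thresholds are illustrative assumptions, not the actual Maxine API.

        from dataclasses import dataclass

        # Context signals mentioned in the abstract: noise level, observer position
        # and the observer's detected emotional state. All names are hypothetical.
        @dataclass
        class SensedContext:
            noise_level_db: float        # ambient noise in the room
            observer_distance_m: float   # distance of the observer from the display
            observer_emotion: str        # e.g. "happy", "neutral", "sad" (from image analysis)

        @dataclass
        class AgentOutput:
            text: str
            facial_expression: str
            voice_volume: float          # 0.0 - 1.0, passed to the TTS engine
            speech_rate: float           # relative rate, 1.0 = default

        def render_response(answer: str, ctx: SensedContext) -> AgentOutput:
            """Adapt the agent's expressive channels to the sensed context."""
            # Mirror the observer's emotion with a matching facial expression.
            expression = {"happy": "smile", "sad": "concerned"}.get(ctx.observer_emotion, "neutral")
            # Speak louder in a noisy room, slower when the observer stands far away.
            volume = min(1.0, 0.5 + ctx.noise_level_db / 100.0)
            rate = 0.9 if ctx.observer_distance_m > 3.0 else 1.0
            return AgentOutput(answer, expression, volume, rate)

        print(render_response("Welcome to the exhibition!",
                              SensedContext(noise_level_db=65.0, observer_distance_m=4.0,
                                            observer_emotion="happy")))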

    A mobile fitness companion

    The paper introduces a Mobile Companion prototype, which helps users plan and keep track of their exercise activities via an interface based mainly on speech input and output. The Mobile Companion runs on a PDA and uses a stand-alone, speaker-independent speech recognition solution, making it fairly unique among mobile spoken dialogue systems, where the common approach is to run the ASR on a separate server or to restrict speech input to a specific set of users. The prototype uses a GPS receiver to collect position, distance and speed data while the user is exercising, and allows the data to be compared with previous exercises. It communicates over the mobile network with a stationary system placed in the user's home, which allows exercise plans to be downloaded from the stationary to the mobile system, and exercise result data to be uploaded once an exercise has been completed.
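    The data flow described above, collecting GPS fixes during an exercise, summarising them and exchanging data with the stationary home system, could look roughly like the Python sketch below. The endpoint URL and the JSON layout are assumptions; the abstract does not specify the prototype's actual protocol.

        import json
        import math
        import urllib.request
        from dataclasses import dataclass

        # Hypothetical address of the stationary system in the user's home.
        HOME_SYSTEM_URL = "http://home-system.example/api"

        @dataclass
        class GpsFix:
            lat: float
            lon: float
            timestamp: float  # seconds since the start of the exercise

        def haversine_m(a: GpsFix, b: GpsFix) -> float:
            """Great-circle distance between two fixes, in metres."""
            r = 6371000.0
            phi1, phi2 = math.radians(a.lat), math.radians(b.lat)
            dphi = math.radians(b.lat - a.lat)
            dlam = math.radians(b.lon - a.lon)
            h = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
            return 2 * r * math.asin(math.sqrt(h))

        def summarise(track: list) -> dict:
            """Reduce a GPS track to distance, duration and average speed."""
            distance = sum(haversine_m(p, q) for p, q in zip(track, track[1:]))
            duration = track[-1].timestamp - track[0].timestamp if len(track) > 1 else 0.0
            return {"distance_m": distance, "duration_s": duration,
                    "avg_speed_ms": distance / duration if duration else 0.0}

        def upload_result(summary: dict) -> None:
            """Send the finished exercise to the stationary system (hypothetical endpoint)."""
            req = urllib.request.Request(HOME_SYSTEM_URL + "/exercises",
                                         data=json.dumps(summary).encode(),
                                         headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req)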

    On the Development of Adaptive and User-Centred Interactive Multimodal Interfaces

    Multimodal systems have attracted increasing attention in recent years, which has made possible important improvements in the technologies for recognition, processing, and generation of multimodal information. However, many issues related to multimodality remain unclear, for example, the principles that would make it possible to approach human-human multimodal communication. This chapter focuses on some of the most important challenges that researchers have recently envisioned for future multimodal interfaces. It also describes current efforts to develop intelligent, adaptive, proactive, portable and affective multimodal interfaces.
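    One concrete ingredient of such interfaces is the fusion of information coming from several input modalities. The sketch below shows a minimal late-fusion step that combines intent hypotheses from speech and gesture by weighted confidence; the modalities, weights and intent labels are illustrative assumptions rather than a method from the chapter.

        # Combine intent hypotheses from two modalities by weighted confidence.
        def fuse(speech, gesture, w_speech=0.7, w_gesture=0.3):
            """Return the intent with the highest combined confidence score."""
            intents = set(speech) | set(gesture)
            return max(intents, key=lambda i: w_speech * speech.get(i, 0.0)
                                              + w_gesture * gesture.get(i, 0.0))

        # Speech favours "select", while the pointing gesture strongly suggests "zoom".
        print(fuse({"select": 0.6, "zoom": 0.4}, {"zoom": 0.9}))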

    Sistemas de diálogo: una revisión

    Spoken dialogue systems are computer programs developed to interact with users through speech in order to provide them with specific automated services. The interaction is carried out by means of dialogue turns which, in many studies in the literature, researchers aim to make as similar as possible to human-human dialogue in terms of naturalness, intelligence and affective content. In this paper we describe the fundamentals of these systems, including the main technologies employed for their development. We also present the evolution of this technology and discuss some current applications. Moreover, we discuss development paradigms, including scripting languages and the development of conversational interfaces for mobile apps. Correct modelling of the user is a key aspect of this technology, which is why we also describe affective, personality and contextual models. Finally, we address some current research trends in verbal communication, multimodal interaction and dialogue management.
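    The main technologies mentioned above typically form a pipeline of speech recognition, language understanding, dialogue management, response generation and speech synthesis. The minimal turn loop below sketches that chain; every component is a trivial stand-in added here for illustration, whereas real systems plug in dedicated recognisers, statistical dialogue managers and synthesisers.

        def asr(audio):
            """Speech recognition stub: a real recogniser decodes the audio."""
            return "book a table for two"

        def nlu(utterance):
            """Map the utterance to an intent and slots (keyword spotting here)."""
            if "book" in utterance:
                return {"intent": "book_table", "party_size": 2}
            return {"intent": "unknown"}

        def dialogue_manager(state, semantics):
            """Update the dialogue state and choose the next system act."""
            if semantics["intent"] == "book_table":
                state["pending_booking"] = semantics
                return state, "ask_time"
            return state, "clarify"

        def nlg(system_act):
            """Turn the chosen system act into a natural-language response."""
            return {"ask_time": "At what time would you like the table?",
                    "clarify": "Sorry, could you rephrase that?"}[system_act]

        def tts(text):
            """Speech synthesis stub: just print the response."""
            print("SYSTEM:", text)

        state = {}
        semantics = nlu(asr(b"<audio frame>"))
        state, act = dialogue_manager(state, semantics)
        tts(nlg(act))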