
    Neuroethology, Computational

    Over the past decade, a number of neural network researchers have used the term computational neuroethology to describe a specific approach to neuroethology. Neuroethology is the study of the neural mechanisms underlying the generation of behavior in animals, and hence it lies at the intersection of neuroscience (the study of nervous systems) and ethology (the study of animal behavior); for an introduction to neuroethology, see Simmons and Young (1999). The definition of computational neuroethology is very similar, but it is not restricted to the study of animals: animals just happen to be biological autonomous agents. There are also non-biological autonomous agents, such as certain robots and certain simulated embodied agents operating in virtual worlds. In this context, autonomous agents are self-governing entities capable of operating (i.e., coordinating perception and action) for extended periods of time in environments that are complex, uncertain, and dynamic. Thus, computational neuroethology can be characterised as the attempt to analyze the computational principles underlying the generation of behavior in animals and in artificial autonomous agents.
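    Because the abstract defines autonomous agents by their ability to coordinate perception and action over time, a minimal sense-act loop may make the idea concrete. The sketch below is purely illustrative (a Braitenberg-style toy written for this summary, not taken from the cited work); the class, the world dictionary, and all field names are assumptions.

        import random

        class BraitenbergAgent:
            """Toy vehicle: two light sensors cross-coupled to two wheels (illustrative only)."""

            def sense(self, world):
                # Hypothetical readings: light intensity at the left/right sensor.
                return world["light_left"], world["light_right"]

            def act(self, left, right):
                # Cross-coupled excitation turns the agent toward the brighter side.
                return {"wheel_left": right, "wheel_right": left}

        def run(agent, steps=3):
            # A trivial stand-in for a complex, uncertain, dynamic environment.
            for _ in range(steps):
                world = {"light_left": random.random(), "light_right": random.random()}
                print(agent.act(*agent.sense(world)))

        run(BraitenbergAgent())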

    Agent, autonomous

    The expression "autonomous agents", widely used in virtual reality, computer graphics, artificial intelligence and artificial life, refers to the simulation of autonomous creatures, either virtual (i.e., computed entirely by a program) or embodied in a physical envelope, as in autonomous robots.

    Symbol Emergence in Robotics: A Survey

    Humans can learn the use of language through physical interaction with their environment and semiotic communication with other people. It is therefore important to obtain a computational understanding of how humans form a symbol system and acquire semiotic skills through their autonomous mental development. Recently, many studies have been conducted on the construction of robotic systems and machine-learning methods that can learn the use of language through embodied multimodal interaction with their environment and with other systems. Understanding human social interactions and developing a robot that can smoothly communicate with human users over the long term both require an understanding of the dynamics of symbol systems. The embodied cognition and social interaction of participants gradually change a symbol system in a constructive manner. In this paper, we introduce a field of research called symbol emergence in robotics (SER). SER is a constructive approach towards an emergent symbol system. The emergent symbol system is socially self-organized through both semiotic communication and physical interaction among autonomous cognitive developmental agents, i.e., humans and developmental robots. Specifically, we describe some state-of-the-art research topics concerning SER, e.g., multimodal categorization, word discovery, and double articulation analysis, that enable a robot to obtain words and their embodied meanings from raw sensory-motor information, including visual information, haptic information, auditory information, and acoustic speech signals, in a totally unsupervised manner. Finally, we suggest future directions of research in SER.
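    The abstract mentions multimodal categorization as one way a robot can form categories from raw sensory data without supervision. The sketch below illustrates only the general idea, under the assumption that features from several modalities are concatenated and clustered; it is plain k-means on synthetic data, not any of the methods surveyed in the paper, and every feature name and size is invented for illustration.

        import numpy as np

        rng = np.random.default_rng(0)

        def multimodal_observation(category):
            """Fake sensory data: visual, haptic and auditory features whose
            statistics depend on a hidden object category."""
            visual = rng.normal(category, 0.3, size=4)
            haptic = rng.normal(category * 2.0, 0.3, size=2)
            audio = rng.normal(-category, 0.3, size=3)
            return np.concatenate([visual, haptic, audio])

        def kmeans(X, k, iters=20):
            """Plain k-means: the simplest stand-in for unsupervised category formation."""
            centers = X[rng.choice(len(X), k, replace=False)]
            for _ in range(iters):
                labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
                centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                    else centers[j] for j in range(k)])
            return labels

        X = np.array([multimodal_observation(c) for c in (0, 1, 2) for _ in range(30)])
        print(kmeans(X, k=3)[:10])  # emergent category indices; no labels were given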

    X Workshop of Physical Agents (WAF'2009)

    The Workshop of Physical Agents is intended as a forum for exchanging information and experience in different areas related to the concept of embodied agents, especially as applied to the control and coordination of autonomous systems: robots, mobile robots, industrial processes, or complex systems. This special issue is devoted to selected papers presented at WAF'2009, which took place from September 10th to 11th in the city of Cáceres (Spain).

    Representing and Parameterizing Agent Behaviors

    The last few years have seen great maturation in understanding how to use computer graphics technology to portray 3D embodied characters, or virtual humans. Unlike the off-line, animator-intensive methods used in the special effects industry, real-time embodied agents are expected to exist and interact with us live. They can represent other people or function as autonomous helpers, teammates, or tutors, enabling novel interactive educational and training applications. We should be able to interact and communicate with them through modalities we already use, such as language, facial expressions, and gesture. Various aspects of and issues in real-time virtual humans are discussed, including consistent parameterizations for gesture and facial actions using movement observation principles, and the representational basis for character believability, personality, and affect. We also describe a Parameterized Action Representation (PAR) that allows an agent to act, plan, and reason about its actions or the actions of others. Besides embodying the semantics of human action, the PAR is designed for building future behaviors into autonomous agents and for controlling the animation parameters that portray personality, mood, and affect in an embodied agent.
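    As a rough illustration of what a parameterized action representation could look like as a data structure, the sketch below defines a record with an agent, objects, preconditions, and manner parameters that could modulate animation. The field names and the example action are assumptions made for this summary; the published PAR schema is considerably richer and is not reproduced here.

        from dataclasses import dataclass, field
        from typing import Callable, Dict, List

        @dataclass
        class PAR:
            name: str                                            # the action, e.g. "wave"
            agent: str                                           # who performs it
            objects: List[str] = field(default_factory=list)     # what it acts on
            preconditions: List[Callable[[], bool]] = field(default_factory=list)
            manner: Dict[str, float] = field(default_factory=dict)  # animation parameters (mood, effort, ...)

            def applicable(self) -> bool:
                """An agent can reason about an action by checking its preconditions."""
                return all(p() for p in self.preconditions)

        # Hypothetical usage: a tutor agent considers waving enthusiastically.
        wave = PAR(
            name="wave",
            agent="tutor",
            preconditions=[lambda: True],  # stand-in for checks such as "hand is free"
            manner={"amplitude": 0.8, "speed": 1.2, "happiness": 0.9},
        )
        print(wave.name, wave.applicable(), wave.manner)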

    Special Issue on Advances on Physical Agents 2016

    The Workshop on Physical Agents is a forum for exchanging information and experience in different areas related to the concept of embodied agents, especially as applied to the control and coordination of autonomous systems: robots, mobile robots, domotics, agents, industrial applications, or complex systems. This special issue brings together a selection of revised and extended papers that were first presented at the XVII Workshop on Physical Agents (WAF'2016), held on June 16-17, 2016 at the School of Telecommunication Engineering and Information Technology of the University of Málaga (Spain).

    Interactive Embodied Agents for Cultural Heritage and Archaeological presentations

    In this paper we present Maxine, a powerful engine for developing applications with embodied animated agents. The engine, based on open source libraries, enables multimodal real-time interaction with the user via text, voice, images and gestures. Maxine virtual agents can establish emotional communication with the user through their facial expressions and the modulation of their voices, expressing their answers according to the information gathered by the system: the noise level in the room, the observer's position, the observer's emotional state, etc. Moreover, the user's emotions are considered and captured through images. So far, Maxine virtual agents have been used as virtual presenters for Cultural Heritage and Archaeological shows. This work has been partially financed by the Spanish Dirección General de Investigación (General Directorate of Research), contract number TIN2007-63025, and by the Regional Government of Aragón through the WALQA agreement.
    Seron, F.; Baldassarri, S.; Cerezo, E. (2010). Interactive Embodied Agents for Cultural Heritage and Archaeological presentations. Virtual Archaeology Review, 1(1), 181-184. https://doi.org/10.4995/var.2010.5143
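    To make concrete how an embodied presenter might adapt its delivery to the sensed context mentioned above (room noise, the observer's emotional state), the sketch below maps such inputs to illustrative voice and expression parameters. It is a hypothetical example written for this summary, not Maxine's actual interface; the function name, inputs, and parameter names are all assumptions.

        def adapt_presentation(noise_level: float, user_emotion: str) -> dict:
            """Map sensed context to illustrative rendering and voice parameters."""
            params = {"voice_volume": 0.5, "speech_rate": 1.0, "expression": "neutral"}
            if noise_level > 0.7:            # noisy room: speak louder and more slowly
                params["voice_volume"] = 0.9
                params["speech_rate"] = 0.85
            if user_emotion == "bored":      # disengaged visitor: liven up the delivery
                params["expression"] = "enthusiastic"
                params["speech_rate"] = 1.1
            elif user_emotion == "confused":
                params["expression"] = "reassuring"
                params["speech_rate"] = 0.9
            return params

        print(adapt_presentation(noise_level=0.8, user_emotion="confused"))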

    Embodied agents in virtual environments: The Aveiro project

    We present current and envisaged work in our research group's AVEIRO project, which concerns virtual environments inhabited by autonomous embodied agents. These environments are being built for researching issues in human-computer interaction and intelligent agent applications. We describe the various strands of research and development that we are focussing on. The undertaking involves the collaborative effort of researchers from different disciplines.