42,070 research outputs found

    Creating and Capturing Artificial Emotions in Autonomous Robots and Software Agents

    This paper presents ARTEMIS, a control system for autonomous robots and software agents. ARTEMIS creates and captures artificial emotions during interactions with its environment, and we describe the underlying mechanisms for this. The control system also captures knowledge about its past artificial emotions. A specific interpretation of a knowledge graph, called an Agent Knowledge Graph, represents these artificial emotions. For this, we devise a formalism that enriches the traditional factual knowledge in knowledge graphs with a representation of artificial emotions. As a proof of concept, we realize a concrete software agent based on the ARTEMIS control system. This software agent acts as a user assistant and executes the user's orders. The environment of this assistant consists of autonomous service agents, and executing the user's orders requires interaction with them. These interactions lead to artificial emotions within the assistant. First experiments show that it is possible to realize an autonomous agent with plausible artificial emotions using ARTEMIS and to record these emotions in its Agent Knowledge Graph. In this way, autonomous agents based on ARTEMIS can capture essential knowledge that supports successful planning and decision making in complex dynamic environments, surpassing emotionless agents.
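
    The Agent Knowledge Graph described above can be pictured as ordinary subject-predicate-object triples annotated with an emotion record. The following is a minimal sketch of such a structure, assuming a simple (valence, arousal) appraisal and hypothetical class and field names; it does not reproduce the paper's actual formalism.

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Emotion:
            """A hypothetical artificial-emotion record attached to an interaction."""
            label: str        # e.g. "frustration", "satisfaction"
            valence: float    # negative .. positive, in [-1, 1]
            arousal: float    # calm .. excited, in [0, 1]

        @dataclass
        class EmotionalTriple:
            """A factual triple enriched with the emotion felt when it was acquired."""
            subject: str
            predicate: str
            obj: str
            emotions: List[Emotion] = field(default_factory=list)

        class AgentKnowledgeGraph:
            """Stores facts together with the agent's past artificial emotions."""
            def __init__(self):
                self.triples: List[EmotionalTriple] = []

            def record(self, subject, predicate, obj, emotion=None):
                triple = EmotionalTriple(subject, predicate, obj)
                if emotion is not None:
                    triple.emotions.append(emotion)
                self.triples.append(triple)

            def recall(self, predicate):
                """Return past facts (and their emotions) matching a predicate."""
                return [t for t in self.triples if t.predicate == predicate]

        # Example: the assistant remembers that a service agent rejected an order,
        # together with the negative emotion this caused.
        akg = AgentKnowledgeGraph()
        akg.record("service_agent_7", "rejected", "order_42",
                   Emotion("frustration", valence=-0.6, arousal=0.7))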

    Issues of Emotion-Based Multi-Agent System

    Emotion plays a significant role in perceptual processes, as psychology and neuroscience research increasingly shows, and this knowledge is gradually being used in Artificial Intelligence and Artificial Life for simulation and the modeling of cognitive processes. Researchers are still not clear about how the mind generates emotion: different people have different emotions at the same time and in the same situation, so generating artificial emotions for agents is a very complex task. Each agent and its emotions are autonomous, but in a multi-agent system agents have to cooperate and coordinate with each other. In this paper we discuss the role of emotions in multi-agent systems during decision making, coordination, and cooperation with other agents. We also discuss some major issues related to Artificial Emotion (AE) that should be considered when any research on it is proposed.

    An Ethically-Guided Domain-Independent Model of Computational Emotions

    University of Technology Sydney. Faculty of Engineering and Information Technology. Advancement of artificial intelligence research has supported the development of intelligent autonomous agents. Such intelligent agents, like social robots, are already appearing in public places, homes and offices. Unlike robots intended for mechanical work in factories, social robots should not only be proficient in capabilities such as vision and speech, but also be endowed with other human skills in order to facilitate a sound relationship with human counterparts. The phenomenon of emotion is a distinguishing human feature that plays a significant role in human social communication, because the ability to express emotions enhances the social exchange between two individuals. As such, artificial agents employed in social settings should also exhibit adequate emotional and behavioural abilities to be easily adopted by people. A critical aspect to consider when developing models of artificial emotions for autonomous intelligent agents is the likely impact that the emotional interaction can have on the human counterparts. For example, an agent that shows an angry expression along with a loud voice may scare a young child more than one that only denies a request. Indeed, most modern societies consider a strong emotional reaction towards a young child to be unacceptable and even unethical. How can a robot select a socially acceptable emotional state to express while interacting with people? I answer this question by providing an association between emotion theories and ethical theories, which has largely been ignored in the existing literature. A regulatory mechanism for artificial agents inspired by ethical theories is a viable way to ensure that the emotional and behavioural responses of the agent are acceptable in a given social context. As such, an intelligent agent with emotion generation capability can establish social acceptance if its emotions are regulated by an ethical reasoning mechanism. In order to validate the above statement, in this work I provide a novel computational model of emotion for artificial agents, EEGS (short for Ethical Emotion Generation System), and evaluate it by comparing the emotional responses of the model with emotion data collected from human participants. Experimental results support that
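
    The central claim above, an emotion generation stage whose output is filtered by ethical reasoning before expression, can be sketched roughly as follows. The candidate emotions, the rule about children, and the thresholds are illustrative assumptions, not the EEGS model itself.

        # Illustrative sketch: candidate emotional responses are scored for social
        # acceptability before one is expressed. Rules and thresholds are assumptions.

        CANDIDATES = [
            {"emotion": "anger",       "intensity": 0.9},
            {"emotion": "displeasure", "intensity": 0.4},
            {"emotion": "neutral",     "intensity": 0.1},
        ]

        def ethically_acceptable(response, context):
            # Hypothetical rule: strong angry expressions are unacceptable
            # when the interaction partner is a young child.
            if context.get("partner") == "child" and response["emotion"] == "anger":
                return response["intensity"] < 0.3
            return True

        def select_expression(candidates, context):
            """Pick the most intense response that passes the ethical filter."""
            acceptable = [c for c in candidates if ethically_acceptable(c, context)]
            return max(acceptable, key=lambda c: c["intensity"],
                       default={"emotion": "neutral", "intensity": 0.0})

        print(select_expression(CANDIDATES, {"partner": "child"}))
        # -> displeasure at moderate intensity rather than full anger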

    Motivations, Values and Emotions: 3 sides of the same coin

    This position paper speaks to the interrelationships between the three concepts of motivations, values, and emotions. Motivations prime actions, values serve to choose between motivations, emotions provide a common currency for values, and emotions implement motivations. While conceptually distinct, the three are so pragmatically intertwined that they differ primarily in the point of view we take on them. To make these points more transparent, we briefly describe the three in the context of a cognitive architecture, the LIDA model, for software agents and robots that models human cognition, including a developmental period. We also compare the LIDA model with other models of cognition, some involving learning and emotions. Finally, we conclude that artificial emotions will prove most valuable as implementers of motivations in situations requiring learning and development.
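
    The claimed relationship, motivations priming actions while values choose between motivations using emotion as a common currency, can be pictured as a toy selection loop. The motivation names and numbers below are illustrative assumptions and are not part of the LIDA model.

        # Toy sketch: emotions provide a scalar "common currency" that values use
        # to rank competing motivations, which in turn prime actions.

        motivations = {
            "recharge_battery": {"action": "go_to_dock",    "emotion_value": 0.8},  # relief
            "greet_visitor":    {"action": "approach_door", "emotion_value": 0.5},  # pleasure
            "tidy_room":        {"action": "pick_up_toys",  "emotion_value": 0.2},
        }

        def choose_action(motivations):
            """Values select the motivation whose felt (emotional) value is highest."""
            name, best = max(motivations.items(), key=lambda kv: kv[1]["emotion_value"])
            return best["action"]

        print(choose_action(motivations))  # -> "go_to_dock"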

    Artificial morality: Making of the artificial moral agents

    Abstract: Artificial Morality is a new, emerging interdisciplinary field that centres around the idea of creating artificial moral agents, or AMAs, by implementing moral competence in artificial systems. AMAs are ought to be autonomous agents capable of socially correct judgements and ethically functional behaviour. This request for moral machines comes from the changes in everyday practice, where artificial systems are being frequently used in a variety of situations from home help and elderly care purposes to banking and court algorithms. It is therefore important to create reliable and responsible machines based on the same ethical principles that society demands from people. New challenges in creating such agents appear. There are philosophical questions about a machine’s potential to be an agent, or mora l agent, in the first place. Then comes the problem of social acceptance of such machines, regardless of their theoretic agency status. As a result of efforts to resolve this problem, there are insinuations of needed additional psychological (emotional and cogn itive) competence in cold moral machines. What makes this endeavour of developing AMAs even harder is the complexity of the technical, engineering aspect of their creation. Implementation approaches such as top- down, bottom-up and hybrid approach aim to find the best way of developing fully moral agents, but they encounter their own problems throughout this effort

    Embodied Robot Models for Interdisciplinary Emotion Research

    Due to their complex nature, emotions cannot be properly understood from the perspective of a single discipline. In this paper, I discuss how the use of robots as models is beneficial for interdisciplinary emotion research. Addressing this issue through the lens of my own research, I focus on a critical analysis of embodied robot models of different aspects of emotion, relate them to theories in psychology and neuroscience, and provide representative examples. I discuss concrete ways in which embodied robot models can be used to carry out interdisciplinary emotion research, assessing their contributions: as hypothetical models, and as operational models of specific emotional phenomena, of general emotion principles, and of specific emotion "dimensions". I conclude by discussing the advantages of using embodied robot models over other models.

    Affect and believability in game characters: a review of the use of affective computing in games

    Virtual agents are important in many digital environments. Designing a character that highly engages users in terms of interaction is an intricate task constrained by many requirements. One aspect that has gained more attention recently is the affective dimension of the agent. Several studies have addressed the possibility of developing an affect-aware system for a better user experience. Particularly in games, including emotional and social features in NPCs adds depth to the characters, enriches interaction possibilities, and, combined with a basic level of competence, creates a more appealing game. Design requirements for emotionally intelligent NPCs differ from those for general autonomous agents, with the main goal being a stronger player-agent relationship as opposed to problem solving and goal assessment. Nevertheless, deploying an affective module in NPCs adds to the complexity of the architecture and its constraints. In addition, using such composite NPCs in games seems beyond current technology, despite some brave attempts. However, a MARPO-type modular architecture would seem a useful starting point for adding emotions.
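
    The point about modularity, keeping the affective layer separate from the NPC's basic competence so it can be added without redesigning the character, can be sketched as below. The class names and the simple mood update are assumptions for illustration only; they do not reproduce the MARPO architecture.

        # Sketch of a modular NPC: a basic behaviour module plus an optional
        # affective module that biases responses without replacing core competence.

        class BehaviourModule:
            def act(self, player_event):
                return "offer_quest" if player_event == "greet" else "idle"

        class AffectiveModule:
            def __init__(self):
                self.mood = 0.0                      # -1 hostile .. +1 friendly

            def appraise(self, player_event):
                self.mood += {"help": 0.3, "attack": -0.6}.get(player_event, 0.0)
                self.mood = max(-1.0, min(1.0, self.mood))

        class NPC:
            def __init__(self, affect=None):
                self.behaviour = BehaviourModule()
                self.affect = affect                 # the affective layer is optional

            def respond(self, player_event):
                if self.affect:
                    self.affect.appraise(player_event)
                action = self.behaviour.act(player_event)
                tone = "warm" if self.affect and self.affect.mood > 0 else "flat"
                return action, tone

        npc = NPC(affect=AffectiveModule())
        print(npc.respond("help"))   # ('idle', 'warm')
        print(npc.respond("greet"))  # ('offer_quest', 'warm')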