
    Embodied Robot Models for Interdisciplinary Emotion Research

    Due to their complex nature, emotions cannot be properly understood from the perspective of a single discipline. In this paper, I discuss how the use of robots as models benefits interdisciplinary emotion research. Addressing this issue through the lens of my own research, I focus on a critical analysis of embodied robot models of different aspects of emotion, relate them to theories in psychology and neuroscience, and provide representative examples. I discuss concrete ways in which embodied robot models can be used to carry out interdisciplinary emotion research, assessing their contributions: as hypothetical models, and as operational models of specific emotional phenomena, of general emotion principles, and of specific emotion "dimensions". I conclude by discussing the advantages of using embodied robot models over other models. Peer reviewed.

    URBANO: A Tour-Guide Robot Learning to Make Better Speeches

    Thanks to the numerous ongoing attempts to develop autonomous robots, increasingly intelligent and cognitive skills are becoming possible. This paper proposes an automatic presentation generator for a robot guide, treated as one more cognitive skill. The presentations are made up of groups of paragraphs. The selection of the best paragraphs is based on a semantic understanding of the paragraphs' characteristics, on the restrictions defined for the presentation, and on the quality criteria appropriate for a public presentation. This work is part of the ROBONAUTA project of the Intelligent Control Research Group at the Universidad Politécnica de Madrid to create "awareness" in a robot guide. The software developed in the project has been verified on the tour-guide robot Urbano. The most important aspect of this proposal is that the design uses learning as the means to optimize the quality of the presentations. To achieve this goal, the system has to perform optimized decision making in different phases. The quality index of the presentation is modeled using fuzzy logic and represents the beliefs of the robot about what is good, bad, or indifferent about a presentation. This fuzzy system is used to select the most appropriate group of paragraphs for a presentation. The beliefs of the robot continue evolving in order to coincide with the opinions of the public, using a genetic algorithm for the evolution of the rules. With this tool, the tour-guide robot gives a presentation that satisfies the objectives and restrictions, automatically identifying the best paragraphs in order to find the most suitable set of contents for every public profile.
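    The abstract's combination of a fuzzy quality index with genetically evolved rules can be illustrated with a minimal sketch. All names, membership functions, and feature choices below are illustrative assumptions, not URBANO's actual implementation:

    ```python
    import random

    def fuzzy_quality(paragraph, weights):
        """Aggregate fuzzy memberships (good / bad / indifferent) into one score."""
        # Toy membership degrees derived from a single feature: paragraph length.
        length = len(paragraph.split())
        good = min(1.0, length / 40.0)        # longer paragraphs carry more content
        bad = max(0.0, (length - 60) / 60.0)  # but overly long ones lose the public
        indifferent = 1.0 - abs(good - bad)
        degrees = {"good": good, "bad": bad, "indifferent": indifferent}
        return sum(weights[label] * degrees[label] for label in weights)

    def evolve(weights, feedback, rate=0.1):
        """One GA-style step: mutate rule weights toward audience feedback."""
        return {label: w + rate * (feedback.get(label, 0.0) - w)
                       + random.uniform(-0.01, 0.01)
                for label, w in weights.items()}

    def select_paragraphs(paragraphs, weights, k=2):
        """Pick the k paragraphs with the highest fuzzy quality index."""
        return sorted(paragraphs, key=lambda p: fuzzy_quality(p, weights),
                      reverse=True)[:k]
    ```

    In the paper's scheme the weights would be full fuzzy rules evolved against audience opinions over many presentations; here a single mutation step stands in for that loop.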

    Behavior-Based Early Language Development on a Humanoid Robot

    We are exploring the idea that early language acquisition could be better modelled on an artificial creature by considering the pragmatic aspect of natural language and of its development in human infants. We have implemented a system of vocal behaviors on Kismet in which "words" or concepts are behaviors in a competitive hierarchy. This paper reports on the framework, the vocal system's architecture and algorithms, and some preliminary results from vocal label learning and concept formation.
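    The idea of words as behaviors in a competitive hierarchy can be sketched as winner-take-all selection over activation levels. This is an illustrative toy model only, not Kismet's actual architecture; class and method names are assumptions:

    ```python
    class VocalBehavior:
        """A 'word' modelled as a behavior that accumulates activation."""

        def __init__(self, label):
            self.label = label
            self.activation = 0.0

        def perceive(self, stimulus_strength):
            # Relevant stimuli (e.g. the referent being present) raise activation.
            self.activation += stimulus_strength

        def reinforce(self, reward=0.5):
            # Caregiver feedback strengthens the behavior after it speaks.
            self.activation += reward

    def compete(behaviors):
        """Winner-take-all: the most activated behavior utters its label."""
        winner = max(behaviors, key=lambda b: b.activation)
        for b in behaviors:
            b.activation *= 0.5  # activations decay after each competition
        return winner.label
    ```

    Label learning in such a scheme amounts to reinforcement shifting which behavior wins the competition for a given situation.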

    Outline of a sensory-motor perspective on intrinsically moral agents

    This is the accepted version of the following article: Christian Balkenius, Lola Cañamero, Philip Pärnamets, Birger Johansson, Martin V Butz, and Andreas Olson, 'Outline of a sensory-motor perspective on intrinsically moral agents', Adaptive Behaviour, Vol 24(5): 306-319, October 2016, published in final form at DOI: https://doi.org/10.1177/1059712316667203. Published by SAGE © The Author(s) 2016.

    We propose that moral behaviour of artificial agents could (and should) be intrinsically grounded in their own sensory-motor experiences. Such an ability depends critically on seven types of competencies. First, intrinsic morality should be grounded in the internal values of the robot arising from its physiology and embodiment. Second, the moral principles of robots should develop through their interactions with the environment and with other agents. Third, we claim that the dynamics of moral (or social) emotions closely follows that of other non-social emotions used in valuation and decision making. Fourth, we explain how moral emotions can be learned from the observation of others. Fifth, we argue that to assess social interaction, a robot should be able to learn about and understand responsibility and causation. Sixth, we explain how mechanisms that can learn the consequences of actions are necessary for a robot to make moral decisions. Seventh, we describe how the moral evaluation mechanisms outlined can be extended to situations where a robot should understand the goals of others. Finally, we argue that these competencies lay the foundation for robots that can feel guilt, shame and pride, that have compassion and that know how to assign responsibility and blame. Peer reviewed. Final Accepted Version.

    "Involving Interface": An Extended Mind Theoretical Approach to Roboethics

    In 2008 the authors held Involving Interface, a lively interdisciplinary event focusing on issues of biological, sociocultural, and technological interfacing (see Acknowledgments). Inspired by discussions at this event, in this article, we further discuss the value of input from neuroscience for developing robots and machine interfaces, and the value of philosophy, the humanities, and the arts for identifying persistent links between human interfacing and broader ethical concerns. The importance of ongoing interdisciplinary debate and public communication on scientific and technical advances is also highlighted. Throughout, the authors explore the implications of the extended mind hypothesis for notions of moral accountability and robotics

    No Grice: Computers that Lie, Deceive and Conceal

    In the future our daily life interactions with other people, with computers, robots and smart environments will be recorded and interpreted by computers or embedded intelligence in environments, furniture, robots, displays, and wearables. These sensors record our activities, our behavior, and our interactions. Fusion of such information and reasoning about it makes it possible, using computational models of human behavior and activities, to provide context- and person-aware interpretations of human behavior and activities, including determination of attitudes, moods, and emotions. Sensors include cameras, microphones, eye trackers, position and proximity sensors, tactile or smell sensors, et cetera. Sensors can be embedded in an environment, but they can also move around, for example, if they are part of a mobile social robot or if they are part of devices we carry around or that are embedded in our clothes or body.

    Our daily life behavior and daily life interactions are recorded and interpreted. How can we use such environments, and how can such environments use us? Do we always want to cooperate with these environments; do these environments always want to cooperate with us? In this paper we argue that there are many reasons that users, or rather human partners, of these environments do want to keep information about their intentions and their emotions hidden from these smart environments. On the other hand, their artificial interaction partners may have similar reasons not to give away all the information they have, or to treat their human partner as an opponent rather than as someone who has to be supported by smart technology.

    This will be elaborated in this paper. We survey examples of human-computer interactions where there is not necessarily a goal to be explicit about intentions and feelings. In subsequent sections we look at (1) the computer as a conversational partner, (2) the computer as a butler or diary companion, (3) the computer as a teacher or a trainer, acting in a virtual training environment (a serious game), (4) sports applications (which are not necessarily different from serious-game or education environments), and (5) games and entertainment applications.

    Cognitive Processes, Emotions and Intelligent Interfaces

    The study presents research from several scientific disciplines, such as artificial intelligence, neuroscience, psychology, linguistics, and philosophy, with the potential to create intelligent anthropomorphic agents and interactive technologies. It reviews symbolic and connectionist artificial-intelligence systems for modelling human cognitive processes: thinking, decision making, memory, and learning. It analyses models in artificial intelligence and robotics that use emotions as a control mechanism for achieving the robot's goals, as a reaction to particular situations, for sustaining the process of social interaction, and for creating more believable anthropomorphic agents. The interdisciplinary methodologies and concepts presented motivate the creation of animated agents that use speech, gestures, intonation, and other non-verbal modalities when conversing with users in intelligent interfaces.

    Explorations in engagement for humans and robots

    This paper explores the concept of engagement, the process by which individuals in an interaction start, maintain, and end their perceived connection to one another. The paper reports on one aspect of engagement among human interactors: the effect of tracking faces during an interaction. It also describes the architecture of a robot that can participate in conversational, collaborative interactions with engagement gestures. Finally, the paper reports findings of experiments with human participants who interacted with a robot that either performed or did not perform engagement gestures. Results of the human-robot studies indicate that people become engaged with robots: they direct their attention to the robot more often in interactions where engagement gestures are present, and they find interactions more appropriate when engagement gestures are present than when they are not. Comment: 31 pages, 5 figures, 3 tables