9 research outputs found

    An autonomous social robot in fear

    Artificial emotions are currently used extensively in robots. Most of these implementations are employed to display affective states; their use to drive the robot's behavior is far less common. The latter is the approach followed by the authors in this work. In this research, emotions are not treated in general but individually. Several emotions have been implemented in a real robot, but in this paper the authors focus on the use of the emotion of fear as an adaptive mechanism to avoid dangerous situations. In fact, fear is used as a motivation that guides behavior in specific circumstances. Appraisal of fear is one of the cornerstones of this work: a novel mechanism learns to identify the harmful circumstances that cause damage to the robot. These circumstances then elicit the fear emotion and are known as fear releasers. To demonstrate the advantages of considering fear in the decision-making system, the robot's performance with and without fear is compared and the resulting behaviors are analyzed. The behaviors the robot exhibits in relation to fear are natural, i.e., the same kinds of behaviors can be observed in animals. Moreover, they have not been preprogrammed but learned through real interactions in the real world. All these ideas have been implemented in a real robot living in a laboratory and interacting with several items and people. The funds have been provided by the Spanish Government through the project "A new approach to social robotics" (AROS), of MICINN (Ministry of Science and Innovation), and through the RoboCity2030-II-CM project (S2009/DPI-1559), funded by Programas de Actividades I+D en la Comunidad de Madrid and cofunded by Structural Funds of the EU.
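
    The abstract describes fear appraisal as learning which circumstances cause damage and then treating those circumstances as fear releasers. The sketch below illustrates that general idea; it is not the authors' implementation, and all names (FearAppraisal, damage_signal, FEAR_THRESHOLD, the example stimuli) are assumptions made here for illustration.

```python
# Minimal sketch of a fear-releaser appraisal loop, assuming a scalar damage
# signal and a simple running association per stimulus. Hypothetical names;
# not the mechanism described in the paper.

from collections import defaultdict

FEAR_THRESHOLD = 0.5   # association strength above which a stimulus elicits fear
LEARNING_RATE = 0.1    # how quickly the damage association is updated


class FearAppraisal:
    """Learns which perceived stimuli tend to co-occur with damage to the robot."""

    def __init__(self):
        self.damage_association = defaultdict(float)  # stimulus -> learned association

    def update(self, perceived_stimuli, damage_signal):
        """Move each stimulus' association toward the observed damage (0.0-1.0)."""
        for stimulus in perceived_stimuli:
            old = self.damage_association[stimulus]
            self.damage_association[stimulus] = old + LEARNING_RATE * (damage_signal - old)

    def fear_releasers(self, perceived_stimuli):
        """Stimuli whose learned association exceeds the threshold elicit fear."""
        return [s for s in perceived_stimuli
                if self.damage_association[s] > FEAR_THRESHOLD]


if __name__ == "__main__":
    appraisal = FearAppraisal()
    # Repeated experience: the "hot_object" co-occurs with damage, the "person" does not.
    for _ in range(20):
        appraisal.update(["hot_object"], damage_signal=1.0)
        appraisal.update(["person"], damage_signal=0.0)
    print(appraisal.fear_releasers(["hot_object", "person"]))  # -> ['hot_object']
```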

    A Bio-inspired Motivational Decision Making System for Social Robots Based on the Perception of the User

    Nowadays, many robotic applications require robots to make their own decisions and adapt to different conditions and users. This work presents a biologically inspired decision-making system, based on drives, motivations, wellbeing, and self-learning, that governs the behavior of the robot considering both internal and external circumstances. In this paper we describe the biological foundations that drove the design of the system, as well as how it has been implemented in a real robot. Following a homeostatic approach, the ultimate goal of the robot is to keep its wellbeing as high as possible. To achieve this goal, the decision-making system uses learning mechanisms to assess the best action to execute at any moment. Since the proposed system has been implemented in a real social robot, human-robot interaction is of paramount importance, and the learned behaviors of the robot are oriented toward fostering interaction with the user. The operation of the system is shown in a scenario where the robot Mini plays games with a user. In this context, we have included a robust user-detection mechanism tailored to short-distance interactions. After the learning phase, the robot has learned how to lead the user to interact with it in a natural way. The research leading to these results has received funding from the projects: Development of social robots to help seniors with cognitive impairment (ROBSEN), funded by the Ministerio de Economia y Competitividad; and RoboCity2030-III-CM, funded by Comunidad de Madrid and cofunded by Structural Funds of the EU.
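
    To make the homeostatic idea concrete, here is a minimal sketch of how drives, wellbeing, and a dominant motivation could be computed from internal variables. The drive names, ideal values, and weights below are assumptions for illustration, not the values used on the Mini robot.

```python
# Minimal homeostatic sketch: drives grow as internal variables deviate from
# their ideal values; wellbeing falls as (weighted) drives grow; the strongest
# weighted drive becomes the dominant motivation. Illustrative names only.

IDEAL = {"energy": 1.0, "social_interaction": 1.0, "entertainment": 1.0}
WEIGHTS = {"energy": 0.5, "social_interaction": 0.3, "entertainment": 0.2}


def drives(internal_state):
    """Each drive measures how far an internal variable is below its ideal value."""
    return {k: max(0.0, IDEAL[k] - internal_state[k]) for k in IDEAL}


def wellbeing(internal_state):
    """Wellbeing is highest when all drives are satisfied (1.0 minus weighted drives)."""
    d = drives(internal_state)
    return 1.0 - sum(WEIGHTS[k] * d[k] for k in d)


def dominant_motivation(internal_state):
    """The strongest weighted drive becomes the motivation biasing action selection."""
    d = drives(internal_state)
    return max(d, key=lambda k: WEIGHTS[k] * d[k])


state = {"energy": 0.9, "social_interaction": 0.2, "entertainment": 0.6}
print(wellbeing(state))            # 0.63
print(dominant_motivation(state))  # 'social_interaction'
```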

    Learning Behaviors by an Autonomous Social Robot with Motivations

    In this study, an autonomous social robot lives in a laboratory where it can interact with several items (people included). Its goal is to learn by itself the proper behaviors in order to keep its well-being as high as possible. Several experiments have been conducted to test the performance of the system. The Object Q-Learning algorithm has been implemented in the robot as the learning algorithm. This algorithm is a variation of traditional Q-Learning in that it considers a reduced state space and collateral effects. The performance of both algorithms is compared in the first part of the experiments. Moreover, two mechanisms intended to shorten the learning sessions have been included: Well-Balanced Exploration and Amplified Reward. Their advantages are justified by the results obtained in the second part of the experiments. Finally, the behaviors learned by the robot are analyzed. The resulting behaviors have not been preprogrammed; they have been learned by real interaction in the real world and are related to the motivations of the robot. These are natural behaviors in the sense that they can be easily understood by humans observing the robot. The authors gratefully acknowledge the funds provided by the Spanish Government through the project "Aplicaciones de los robots sociales", DPI2011-26980, from the Spanish Ministry of Economy and Competitiveness.
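
    For reference, the sketch below shows the classic tabular Q-Learning update that Object Q-Learning builds on. Per the abstract, the authors' variant additionally works with a reduced, per-object state space and accounts for collateral effects on other objects; those specifics are not reproduced here, and the parameter values and names are illustrative.

```python
# Baseline tabular Q-learning with epsilon-greedy exploration. This is the
# standard algorithm, not the Object Q-Learning variant described in the paper.

import random
from collections import defaultdict

ALPHA = 0.3    # learning rate
GAMMA = 0.9    # discount factor
EPSILON = 0.2  # exploration rate

Q = defaultdict(float)  # (state, action) -> estimated value


def choose_action(state, actions):
    """Epsilon-greedy action selection over the current Q estimates."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])


def q_update(state, action, reward, next_state, actions):
    """Move Q(state, action) toward reward plus discounted best next value."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])


# Illustrative usage with hypothetical states and actions:
actions = ["approach", "avoid"]
q_update("near_object", "approach", reward=1.0, next_state="near_object", actions=actions)
print(Q[("near_object", "approach")])  # 0.3
```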

    Reinforcement Learning Approaches in Social Robotics

    This article surveys reinforcement learning approaches in social robotics. Reinforcement learning is a framework for decision-making problems in which an agent learns an optimal behavior through trial-and-error interaction with its environment. Since interaction is a key component of both reinforcement learning and social robotics, it can be a well-suited approach for real-world interactions with physically embodied social robots. The scope of the paper is focused on studies that include physical social robots and real-world human-robot interactions with users. We present a thorough analysis of reinforcement learning approaches in social robotics. In addition to the survey, we categorize existing reinforcement learning approaches based on the method used and the design of the reward mechanisms. Moreover, since communication capability is a prominent feature of social robots, we discuss and group the papers based on the communication medium used for reward formulation. Given the importance of designing the reward function, we also provide a categorization of the papers based on the nature of the reward; this categorization includes three major themes: interactive reinforcement learning, intrinsically motivated methods, and task-performance-driven methods. The paper also covers the benefits and challenges of reinforcement learning in social robotics; the evaluation methods of the surveyed papers, i.e., whether they use subjective or algorithmic measures; a discussion of real-world reinforcement learning challenges and proposed solutions; and the points that remain to be explored, including approaches that have so far received less attention. Thus, this paper aims to be a starting point for researchers interested in using and applying reinforcement learning methods in this particular research field.
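
    The three reward themes the survey identifies (task performance, interactive human feedback, and intrinsic motivation) are often blended into a single reward signal. The sketch below shows one simple way such a blend could look; the weights, signal names, and the linear combination itself are assumptions for illustration, not taken from any surveyed system.

```python
# Illustrative blend of task-driven, interactive, and intrinsically motivated
# reward terms. Hypothetical weighting scheme, not a surveyed method.

def combined_reward(task_reward, human_feedback, novelty,
                    w_task=1.0, w_social=0.5, w_intrinsic=0.2):
    """Weighted sum of a task term, a human-delivered (interactive) term,
    and an intrinsic (novelty/curiosity) term."""
    return w_task * task_reward + w_social * human_feedback + w_intrinsic * novelty


# Example: the robot completed a game step (task_reward=1.0), the user reacted
# positively (human_feedback=1.0), and the state was moderately novel (novelty=0.3).
print(combined_reward(1.0, 1.0, 0.3))  # 1.56
```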

    Human-Robot Collaborations in Industrial Automation

    Technology is changing the manufacturing world. For example, sensors are being used to track inventories from the manufacturing floor up to a retail shelf or a customer’s door. These types of interconnected systems have been called the fourth industrial revolution, also known as Industry 4.0, and are projected to lower manufacturing costs. As industry moves toward these integrated technologies and lower costs, engineers will need to connect these systems via the Internet of Things (IoT). These engineers will also need to design how these connected systems interact with humans. The focus of this Special Issue is the smart sensors used in these human–robot collaborations.

    On the margins: personhood and moral status in marginal cases of human rights

    Most philosophical accounts of human rights accept that all persons have human rights. Typically, ‘personhood’ is understood as unitary and binary. It is unitary because there is generally supposed to be a single threshold property required for personhood (e.g. agency, rationality, etc.). It is binary because it is all-or-nothing: you are either a person or you are not. A difficulty with binary views is that there will typically be subjects, like children and those with dementia, who do not meet the threshold, and so who are not persons with human rights on these accounts. It is consequently unclear how we ought to treat these subjects. This is the problem of marginal cases. I argue that we cannot resolve the problem of marginal cases if we accept a unitary, binary view of personhood. Instead, I develop a new non-binary personhood account of human rights, and defend two main claims. First, there are many scalar properties, the having of which is conducive to personhood. Second, different subjects have different human rights depending on which of these properties they have, and what threats apply to them. On my view, and contra most existing accounts, most marginal cases have some degree of personhood and are entitled to some human rights.