4,619 research outputs found

    Human Perception of Intrinsically Motivated Autonomy in Human-Robot Interaction

    Get PDF
    Funding Information: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: MS and DP acknowledge support by the socSMCs FET Proactive project [grant number H2020-641321], and KD acknowledges funding from the Canada 150 Research Chairs Program. Publisher Copyright: © The Author(s) 2022.
    A challenge in using robots in human-inhabited environments is to design behavior that is engaging, yet robust to the perturbations induced by human interaction. Our idea is to imbue the robot with intrinsic motivation (IM) so that it can handle new situations, appear as a genuine social other, and thus be of more interest to a human interaction partner. Human-robot interaction (HRI) experiments mainly focus on scripted or teleoperated robots that mimic characteristics such as IM to control isolated behavior factors. This article presents a "robotologist" study design that allows comparing autonomously generated behaviors with each other and, for the first time, evaluates the human perception of IM-based generated behavior in robots. We conducted a within-subjects user study (N = 24) in which participants interacted with a fully autonomous Sphero BB8 robot under two behavioral regimes: one realizing an adaptive, intrinsically motivated behavior and the other being reactive, but not adaptive. The robot and its behaviors are intentionally kept minimal to concentrate on the effect induced by IM. A quantitative analysis of post-interaction questionnaires showed a significantly higher perception of the dimension "Warmth" for the intrinsically motivated robot compared to the reactive baseline behavior. Warmth is considered a primary dimension for social attitude formation in human social cognition. A human perceived as warm (friendly, trustworthy) experiences more positive social interactions.
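    The abstract does not spell out the IM formalism used; a common way to realize intrinsic motivation computationally is a curiosity-style reward derived from the prediction error of a learned forward model. The Python sketch below is only an illustration of that general idea under an assumed linear sensor model; the class and function names are hypothetical and not taken from the article.

```python
import numpy as np

class ForwardModel:
    """Linear forward model s_next ≈ W @ [s, a], trained online (illustrative assumption)."""
    def __init__(self, sensor_dim, action_dim, lr=0.05):
        self.W = np.zeros((sensor_dim, sensor_dim + action_dim))
        self.lr = lr

    def predict(self, s, a):
        return self.W @ np.concatenate([s, a])

    def update(self, s, a, s_next):
        x = np.concatenate([s, a])
        err = s_next - self.W @ x
        self.W += self.lr * np.outer(err, x)   # one online gradient step on the squared error
        return float(np.sum(err ** 2))

def intrinsic_reward(model, s, a, s_next):
    """Curiosity-style IM signal: reward how badly the current model predicted
    the observed transition, then let the model learn from that transition."""
    r = float(np.sum((model.predict(s, a) - s_next) ** 2))
    model.update(s, a, s_next)
    return r
```

    A behavior-generation loop would feed this signal back into whatever policy update the robot uses, so that well-predicted (boring) situations gradually stop being rewarding.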

    A perspective on lifelong open-ended learning autonomy for robotics through cognitive architectures

    Get PDF
    This paper addresses the problem of achieving lifelong open-ended learning autonomy in robotics, and how different cognitive architectures provide functionalities that support it. To this end, we analyze a set of well-known cognitive architectures from the literature, considering the different components they address and how they implement them. The main functionalities taken as relevant for lifelong open-ended learning autonomy include learning itself, and the availability of contextual memory systems, motivations, and attention. Additionally, we try to establish which of them have actually been applied to real robot scenarios. It transpires that, in their current form, none of them are completely ready to address this challenge, but some of them do provide indications of the paths to follow for the aspects they contemplate. It can be gleaned that for lifelong open-ended learning autonomy, motivational systems that allow finding domain-dependent goals from general internal drives, contextual long-term memory systems that allow for associative learning and retrieval of knowledge, and robust learning systems would be the main components required. Nevertheless, other components, such as attention mechanisms or representation management systems, would greatly facilitate operation in complex domains.
    Funding: Ministerio de Ciencia e Innovación (PID2021-126220OB-I00); Xunta de Galicia (EDC431C-2021/39); Consellería de Cultura, Educación, Formación Profesional e Universidades (ED431G 2019/0).
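    To make the required components more concrete, here is a minimal Python sketch of how a motivational system could derive domain-dependent goals from general internal drives via an associative contextual memory, in the spirit the paper describes. All names and data structures are illustrative assumptions and are not taken from any of the surveyed architectures.

```python
from dataclasses import dataclass, field

@dataclass
class Drive:
    name: str
    level: float = 0.0          # accumulated internal need; higher = more urgent

@dataclass
class ContextualMemory:
    # associates (drive, context) with a goal previously learned to satisfy that drive there
    associations: dict = field(default_factory=dict)

    def learn(self, drive_name: str, context: str, goal: str) -> None:
        self.associations[(drive_name, context)] = goal

    def recall_goal(self, drive_name: str, context: str):
        return self.associations.get((drive_name, context))

def select_goal(drives: list, memory: ContextualMemory, context: str):
    """Pick the goal associated with the most urgent drive in the current context;
    if no association is known yet, return None to trigger exploration."""
    for drive in sorted(drives, key=lambda d: d.level, reverse=True):
        goal = memory.recall_goal(drive.name, context)
        if goal is not None:
            return goal
    return None
```

    In a full architecture, the learning systems would populate the memory from experience, and drive levels would vary with the robot's internal state.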

    Reinforcement Learning Approaches in Social Robotics

    Full text link
    This article surveys reinforcement learning approaches in social robotics. Reinforcement learning is a framework for decision-making problems in which an agent interacts through trial and error with its environment to discover an optimal behavior. Since interaction is a key component of both reinforcement learning and social robotics, it can be a well-suited approach for real-world interactions with physically embodied social robots. The scope of the paper is focused particularly on studies that include physical social robots and real-world human-robot interactions with users. We present a thorough analysis of reinforcement learning approaches in social robotics. In addition to a survey, we categorize existing reinforcement learning approaches based on the method used and the design of the reward mechanisms. Moreover, since communication capability is a prominent feature of social robots, we discuss and group the papers based on the communication medium used for reward formulation. Considering the importance of designing the reward function, we also provide a categorization of the papers based on the nature of the reward. This categorization includes three major themes: interactive reinforcement learning, intrinsically motivated methods, and task performance-driven methods. The paper also discusses the benefits and challenges of reinforcement learning in social robotics, the evaluation methods of the surveyed papers with regard to whether they use subjective or algorithmic measures, real-world reinforcement learning challenges and proposed solutions, and the points that remain to be explored, including the approaches that have thus far received less attention. Thus, this paper aims to serve as a starting point for researchers interested in using and applying reinforcement learning methods in this particular research field.
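    As a rough illustration of the three reward-design themes named above, the following Python sketch combines a task-performance term, an interactive human-feedback term, and an intrinsic term into one scalar reward. The signals and weights are assumptions for illustration only, not a recipe taken from any surveyed system.

```python
def social_robot_reward(task_progress: float,
                        human_feedback: float,
                        prediction_error: float,
                        w_task: float = 1.0,
                        w_social: float = 1.0,
                        w_intrinsic: float = 0.1) -> float:
    """Combine the three reward sources into one scalar for a standard RL update.

    task_progress    -- task-performance-driven term (e.g. exercise repetitions completed)
    human_feedback   -- interactive RL term (e.g. speech or gesture mapped to [-1, 1])
    prediction_error -- intrinsically motivated term (e.g. curiosity about the interaction)
    """
    return (w_task * task_progress
            + w_social * human_feedback
            + w_intrinsic * prediction_error)
```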

    Machine Autonomy : Definition, Approaches, Challenges and Research Gaps

    Get PDF

    Comparing Robot and Human guided Personalization: Adaptive Exercise Robots are Perceived as more Competent and Trustworthy

    Get PDF
    Schneider S, Kummert F. Comparing Robot and Human guided Personalization: Adaptive Exercise Robots are Perceived as more Competent and Trustworthy. International Journal of Social Robotics. 2020.
    Learning and matching a user's preference is an essential aspect of achieving a productive collaboration in long-term Human-Robot Interaction (HRI). However, there are different techniques for matching the behavior of a robot to a user's preference. The robot can be adaptable, so that a user can change the robot's behavior to their needs, or the robot can be adaptive and autonomously try to match its behavior to the user's preference. Both types might decrease the gap between a user's preference and the actual system behavior. However, the Level of Automation (LoA) of the robot differs between the two methods: either the user controls the interaction, or the robot is in control. We present a study on the effects of different LoAs of a Socially Assistive Robot (SAR) on a user's evaluation of the system in an exercising scenario. We implemented an online preference learning system and a user-adaptable system. We conducted a between-subjects design study (adaptable robot vs. adaptive robot) with 40 subjects and report our quantitative and qualitative results. The results show that users evaluate the adaptive robot as more competent and warm, and report a higher alliance. Moreover, this increased alliance is significantly mediated by the perceived competence of the system. This result provides empirical evidence for the relation between the LoA of a system, the user's perceived competence of the system, and the perceived alliance with it. Additionally, we provide a proof-of-concept showing that the chosen preference learning method, Double Thompson Sampling (DTS), is suitable for online HRI.
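    For context, Double Thompson Sampling treats preference learning as a dueling bandit: the robot repeatedly presents two options, observes which one the user prefers, and maintains Beta posteriors over pairwise win probabilities. The Python sketch below is a simplified illustration of that idea (it omits the confidence-bound pruning of the full DTS algorithm) and is not the authors' implementation; class and method names are hypothetical.

```python
import numpy as np

class DuelingPreferenceLearner:
    def __init__(self, n_options: int, seed: int = 0):
        self.wins = np.zeros((n_options, n_options))   # wins[i, j]: option i beat option j
        self.rng = np.random.default_rng(seed)

    def _sample_win_probs(self) -> np.ndarray:
        # Thompson sample of the pairwise preference matrix from Beta posteriors.
        theta = self.rng.beta(self.wins + 1, self.wins.T + 1)
        np.fill_diagonal(theta, 0.0)                   # ignore self-duels
        return theta

    def propose_duel(self):
        # First sample: pick the option that beats the most others (sampled Copeland winner).
        first = int(np.argmax((self._sample_win_probs() > 0.5).sum(axis=1)))
        # Second, independent sample: pick the strongest challenger against that option.
        challengers = self._sample_win_probs()[:, first]
        challengers[first] = -np.inf
        second = int(np.argmax(challengers))
        return first, second

    def update(self, winner: int, loser: int) -> None:
        self.wins[winner, loser] += 1
```

    Each round then only needs the user's pairwise choice, for example which of two exercise variants they preferred, to refine the posterior online.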

    Developmental Learning for Social Robots in Real-World Interactions

    Get PDF
    This paper reports preliminary research work on applying developmental learning to social robotics to make human-robot interactions more instinctive and natural. Developmental learning is an unsupervised learning strategy that relies on an intrinsically motivated learning agent which incrementally builds its own representation of the world through its experiences of interacting with it. Our claim is that using developmental learning in social robots could dramatically change the way we envision human-robot interaction, notably by giving the robot an active role in the interaction-building process and, even more importantly, in the way it autonomously learns suitable behaviors over time. Developmental learning appears to be an appropriate approach for developing a form of "interactional intelligence" for social robots. In this work, our goal was to set up a common framework for implementing, experimenting with, and evaluating developmental learning algorithms on various social robots.

    Autonomous and Intrinsically Motivated Robots for Sustained Human-Robot Interaction

    Get PDF
    A challenge in using fully autonomous robots in human-robot interaction (HRI) is to design behavior that is engaging enough to encourage voluntary, long-term interaction, yet robust to the perturbations induced by human interaction. It has been repeatedly argued that intrinsic motivations (IMs) are crucial for human development, so it seems reasonable that this mechanism could produce an adaptive, developing robot that is interesting to interact with. This thesis evaluates whether an intrinsically motivated robot can lead to sustained HRI. Recent research showed that robots which 'appeared' intrinsically motivated raised interest in the human interaction partner. The displayed IMs resulted from 'unpredictably' asking a question or from a self-disclosing statement; they were designed with the help of pre-defined scripts or teleoperation. An issue here is that this practice renders the behavior less robust toward unexpected input, or requires a trained human in the loop. Instead, this thesis proposes a computational model of IM to realize fully autonomous and adaptive behavior generation in a robot. Previous work showed that predictive information maximization leads to playful, exploratory behavior in simulated robots that is robust to changes in the robot's morphology and environment. This thesis demonstrates how to deploy the formalism on a physical robot that interacts with humans. Three within-subjects studies were conducted, in which participants interacted with a fully autonomous Sphero BB8 robot under two behavioral regimes: one realizing an adaptive, intrinsically motivated behavior and the other being reactive, but not adaptive. The first study contributes to the idea of the overall proposed study design: the interaction needs to be designed in such a way that participants are given no indication of the robot's task. The second study implements this idea, letting participants focus on answering the question of whether the two robots differ at all. It further contributes ideas for a more 'challenging' baseline behavior, motivating the third and final study. Here, a systematically generated baseline is used, and participants perceive it as almost indistinguishable from, and similarly animated to, the intrinsically motivated robot. Despite the emphasis on designing similarly perceived baseline behaviors, quantitative analyses of post-interaction questionnaires after each study showed a significantly higher perception of the dimension 'Warmth' for the intrinsically motivated robot compared to the baseline behavior. Warmth is considered a primary dimension for social attitude formation in social cognition; a human perceived as warm (i.e. friendly and trustworthy) experiences more positive social interactions. The Robotic Social Attribute Scale (RoSAS) implements the scale dimension Warmth for the HRI domain and has been validated with a series of still images. Going beyond static images, this thesis provides support for the use and applicability of this scale dimension for comparing behaviors. It shows that participants prefer to continue interacting with the robot they perceive as highest in Warmth. This research opens new avenues, in particular with respect to different physical robots and longitudinal studies, which ought to be performed to corroborate the results presented here. However, this thesis shows that the general methods presented here, which do not require a human operator in the loop, can be used to imbue robots with behavior that leads to positive perception by their human interaction partners and can thereby yield sustained HRI.
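    For readers unfamiliar with the formalism, predictive information is the mutual information between consecutive sensor states, I(s_t; s_{t+1}): behavior that maximizes it is both varied (high entropy of the future sensor state) and predictable from the immediate past. The Python sketch below only estimates this quantity from a discretized 1-D sensor stream; it does not reproduce the controller-update rules used in the thesis, and the function name is hypothetical.

```python
import numpy as np

def predictive_information(sensor_stream, n_bins=8):
    """Estimate I(s_t; s_{t+1}) in bits for a 1-D sensor time series via histogram binning."""
    past, future = sensor_stream[:-1], sensor_stream[1:]
    joint, _, _ = np.histogram2d(past, future, bins=n_bins)
    p_joint = joint / joint.sum()
    p_past = p_joint.sum(axis=1, keepdims=True)     # marginal over s_t
    p_future = p_joint.sum(axis=0, keepdims=True)   # marginal over s_{t+1}
    nz = p_joint > 0
    return float(np.sum(p_joint[nz] * np.log2(p_joint[nz] / (p_past @ p_future)[nz])))

# A structured, periodic trajectory carries more predictive information than white noise:
t = np.linspace(0, 20 * np.pi, 5000)
print(predictive_information(np.sin(t)))                                     # high
print(predictive_information(np.random.default_rng(0).normal(size=5000)))    # near zero
```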