    The human in the loop: Perspectives and challenges for RoboCup 2050

    Get PDF
    Robotics researchers have been focusing on developing autonomous, human-like intelligent robots that are able to plan, navigate, manipulate objects, and interact with humans in both static and dynamic environments. These capabilities, however, are usually developed for direct interactions with people in controlled environments and evaluated primarily in terms of human safety. Consequently, human-robot interaction (HRI) in scenarios with no intervention by technical personnel is under-explored. In the future, however, robots will be deployed in unstructured and unsupervised environments, where they will be expected to work without supervision on tasks that require direct interaction with humans and may not necessarily be collaborative. Developing such robots requires comparing the effectiveness and efficiency of similar design approaches and techniques. Yet issues with the reproducibility of results, the comparison of different approaches across research groups, and the creation of challenging milestones to measure performance and development over time make this difficult. Here we discuss the international robotics competition RoboCup as a benchmark for progress and open challenges in AI and robotics development. The long-term goal of RoboCup is to develop a robot soccer team that can win against the world's best human soccer team by 2050. We selected RoboCup because it requires robots to play with and against humans in unstructured environments, such as uneven fields and natural lighting conditions, and because it challenges the accepted dynamics of HRI. Given the current state of robotics technology, RoboCup's goal raises several open research questions for roboticists. In this paper, we (a) summarise the current challenges in robotics, using RoboCup development as an evaluation metric, (b) discuss state-of-the-art approaches to these challenges and how they currently apply to RoboCup, and (c) present a path for future development in these areas to meet RoboCup's goal of having robots play soccer against and with humans by 2050.

    The Novelty in the Uncanny: Designing Interactions to Change First Impressions

    No full text
    In 1970, Japanese researcher Masahiro Mori published a seminal paper in which he hypothesized that robots that appear human-like but are still distinguishable from humans would not attract people towards them, but would instead cause an uncanny sensation. This phenomenon, known as the uncanny valley effect, has been widely studied within the social robotics community, and a multitude of experiments have since been conducted supporting Mori's hypothesis. The specifics of a robot's appearance and behavior that lead to such an uncanny sensation, however, remain an open research question and require further study. These gaps in the causal relationship between uncanny feelings and a robot's design have led to uncanniness being increasingly used to explain any lack of enthusiasm towards robots, both in the scientific community and among the general public. It is then often implicitly assumed that uncanny feelings towards a robot have damaging consequences for long-term human-robot interaction. Most empirical studies on the subject, however, focus on still images or short video clips of robots, and participants are only exposed to these stimuli for short periods of time. The current literature on the uncanny valley thus does not allow a conclusion to be drawn about the persistence of uncanny feelings. This thesis addresses this gap in the body of knowledge by implementing interactive scenarios and performing a series of empirical investigations to study how people's uncanny feelings towards social robots develop over the course of one or several such interactive encounters. The findings suggest that novelty plays an important role in the feeling of uncanniness: merely interacting with a robot for a brief period, and thus giving human observers access to the robot's full behavioral stream, lowers their ratings of uncanny feelings towards the robot compared to how they perceive it at first sight. Furthermore, repeated interactions with a robot can lower uncanny impressions even further. These results contribute to the field of human-robot interaction, as they posit that increased exposure may limit feelings of uncanniness. This, in turn, potentially reduces the impact of uncanny feelings on long-term interactive encounters with robots. Instead of focusing on reducing the elicitation of uncanny first impressions, it may thus be more sustainable to further study how interactions can help people efficiently get to know a robot and overcome their initial reluctance towards it.

    I Can See It in Your Eyes: Gaze as an Implicit Cue of Uncanniness and Task Performance in Repeated Interactions With Robots

    No full text
    Over the past years, extensive research has been dedicated to developing robust platforms and data-driven dialog models to support long-term human-robot interactions. However, little is known about how people's perception of robots and engagement with them develop over time, and how these can be accurately assessed through implicit and continuous measurement techniques. In this paper, we explore this by involving participants in three interaction sessions with multiple days of zero exposure in between. Each session consists of a joint task with a robot as well as two short social chats with it before and after the task. We measure participants' gaze patterns with a wearable eye-tracker and gauge their perception of the robot and their engagement with it and with the joint task using questionnaires. Results disclose that gaze aversion in a social chat is an indicator of a robot's uncanniness, and that the more people gaze at the robot in a joint task, the worse they perform. In contrast with most HRI literature, our results show that gaze toward an object of shared attention, rather than gaze toward a robotic partner, is the most meaningful predictor of engagement in a joint task. Furthermore, the analyses of gaze patterns in repeated interactions disclose that people's mutual gaze in a social chat develops congruently with their perceptions of the robot over time. These are key findings for the HRI community, as they entail that gaze behavior can be used as an implicit measure of people's perception of robots in a social chat and of their engagement and task performance in a joint task.
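
    How such gaze measures might be computed is sketched below as a minimal Python example: eye-tracker samples that have already been labelled with areas of interest (AOIs) are summarised into dwell-time proportions per interaction segment. The sample format and the AOI names (robot_face, shared_object, elsewhere) are illustrative assumptions, not the study's actual processing pipeline.

        # Minimal sketch: summarise AOI-labelled eye-tracking samples into dwell-time
        # proportions. Sample format and AOI names are assumptions for illustration.
        from collections import Counter

        def dwell_proportions(samples):
            """samples: list of (timestamp_s, aoi_label) tuples from a wearable eye-tracker."""
            counts = Counter(aoi for _, aoi in samples)
            total = sum(counts.values())
            return {aoi: n / total for aoi, n in counts.items()} if total else {}

        # Example: a short social-chat segment where the participant mostly averts gaze.
        chat = ([(t * 0.02, "elsewhere") for t in range(70)]
                + [(t * 0.02, "robot_face") for t in range(30)])
        print(dwell_proportions(chat))  # {'elsewhere': 0.7, 'robot_face': 0.3}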

    Conversational Programming for Collaborative Robots

    No full text
    39th edition of the IEEE International Conference on Robotics and Automation (ICRA 2022). In this position paper, we describe a novel approach to programming industrial robots via conversational dialogue. We believe that conversational programming, unlike other interfaces for reprogramming industrial robots, will enable novices to teach a robot complex new procedures without requiring any knowledge of programming. Using a sample conversation between a human user and an industrial robotic arm, we discuss how our approach differs from other (spoken) human-robot interfaces and why it has the potential to overcome the difficulties such interfaces face when it comes to learning to abstract from specific examples. We also describe the unique challenges conversational programming involves and how, once these are solved, it could be integrated into the industrial settings of the future.
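
    To make the idea of programming by conversation concrete, here is a toy Python sketch that maps a handful of user utterances to robot primitives and accumulates them into a procedure. The keyword matching, the primitives (pick, place, repeat), and the phrasing are hypothetical simplifications, far removed from the dialogue-based teaching the paper envisions.

        # Toy sketch only: keyword-based mapping from utterances to hypothetical robot
        # primitives; it does not reflect the paper's actual dialogue system.
        def parse_utterance(utterance, procedure):
            tokens = utterance.lower().split()
            if "pick" in tokens and "up" in tokens:
                procedure.append(("pick", tokens[-1]))    # e.g. "pick up the bolt"
            elif "place" in tokens or "put" in tokens:
                procedure.append(("place", tokens[-1]))   # e.g. "put it on the tray"
            elif tokens[:1] == ["repeat"]:
                procedure.extend(procedure[-2:])          # naive "repeat that" handling
            else:
                print(f"Robot: I did not understand '{utterance}'. Could you rephrase?")

        procedure = []
        for line in ["Pick up the bolt", "Put it on the tray", "Repeat that"]:
            parse_utterance(line, procedure)
        print(procedure)  # [('pick', 'bolt'), ('place', 'tray'), ('pick', 'bolt'), ('place', 'tray')]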

    Does the Goal Matter? Emotion Recognition Tasks Can Change the Social Value of Facial Mimicry towards Artificial Agents

    Get PDF
    In this paper, we present a study aimed at understanding whether the embodiment and humanlikeness of an artificial agent can affect people's spontaneous and instructed mimicry of its facial expressions. The study followed a mixed experimental design and revolved around an emotion recognition task. Participants were randomly assigned to one level of humanlikeness (between-subject variable: humanlike, characterlike, or morph facial texture of the artificial agents) and observed the facial expressions displayed by three artificial agents differing in embodiment (within-subject variable: video-recorded robot, physical robot, and virtual agent) and by a human (control). To study both spontaneous and instructed facial mimicry, we divided the experimental sessions into two phases. In the first phase, we asked participants to observe and recognize the emotions displayed by the agents. In the second phase, we asked them to look at the agents' facial expressions, replicate their dynamics as closely as possible, and then identify the observed emotions. In both cases, we assessed participants' facial expressions with an automated Action Unit (AU) intensity detector. Contrary to our hypotheses, our results disclose that the agent perceived as the least uncanny and the most anthropomorphic, likable, and co-present was the one spontaneously mimicked the least. Moreover, they show that instructed facial mimicry negatively predicts spontaneous facial mimicry. Further exploratory analyses revealed that spontaneous facial mimicry appeared when participants were less certain of the emotion they recognized. Hence, we postulate that an emotion recognition goal can flip the social value of facial mimicry, as it transforms a likable artificial agent into a distractor. Further work is needed to corroborate this hypothesis. Nevertheless, our findings shed light on the functioning of human-agent and human-robot mimicry in emotion recognition tasks and help us to unravel the relationship between facial mimicry, liking, and rapport.
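
    One way such mimicry could be quantified from automated AU intensity estimates is sketched below in Python: the agent's AU time series is correlated with the participant's lagged response. The choice of AU12 (lip corner puller), the fixed half-second lag, and the synthetic data are illustrative assumptions, not the analysis pipeline reported in the paper.

        # Hedged sketch: Pearson correlation between an agent's AU intensity and a
        # participant's lagged AU intensity as a simple mimicry score. AU choice,
        # lag, and data are illustrative assumptions.
        import numpy as np

        def mimicry_score(agent_au, participant_au, lag_frames=15):
            a = np.asarray(agent_au[:-lag_frames], dtype=float)
            p = np.asarray(participant_au[lag_frames:], dtype=float)
            if a.std() == 0 or p.std() == 0:
                return 0.0
            return float(np.corrcoef(a, p)[0, 1])

        # Synthetic AU12 intensities sampled at 30 fps: the participant imitates the
        # agent's smile half a second later and at reduced intensity.
        t = np.arange(300)
        agent_au12 = np.clip(np.sin(t / 30.0), 0, None)
        participant_au12 = np.roll(agent_au12, 15) * 0.6
        print(round(mimicry_score(agent_au12, participant_au12), 2))  # ~1.0: strong mimicry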