
    A new approach on human-robot collaboration with humanoid robot RH-2

    This paper was originally submitted under the auspices of the CLAWAR Association. It is an extension of work presented at CLAWAR 2009: The 12th International Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines, Istanbul, Turkey. This paper presents a novel control architecture for the humanoid robot RH-2. The main objective is for the robot to perform different tasks in collaboration with humans in working environments. To achieve this goal, two control loops are defined. The outer loop, called the collaborative control loop, is devoted to generating stable motion patterns for the robot, given a specific manipulation task. The inner loop, called the posture stability control loop, acts to guarantee the stability of the humanoid for the different poses determined by the motion patterns. A case study is presented to show the effectiveness of the proposed control architecture. This work has been supported by the CAM Project S2009/DPI-1559/ROBOCITY2030 II, the CYCIT Project PI2004-00325, and the European Project Robot@CWE FP6-2005-IST-5.

    A Whole-Body Pose Taxonomy for Loco-Manipulation Tasks

    Exploiting interaction with the environment is a promising and powerful way to enhance the stability and robustness of humanoid robots while they execute locomotion and manipulation tasks. Some recent works have shown advances in this direction by considering humanoid locomotion with multiple contacts, but to develop such abilities more autonomously, we first need to understand and classify the variety of poses a humanoid robot can use to balance. To this end, we propose adapting a successful idea widely used in the field of robot grasping to the field of humanoid balancing with multiple contacts: a whole-body pose taxonomy classifying the set of whole-body robot configurations that use the environment to enhance stability. We revised the classification criteria used to develop grasping taxonomies, focusing on structuring and simplifying the large number of possible poses the human body can adopt. We propose a taxonomy of 46 poses organized into three main categories, considering the number and type of supports as well as possible transitions between poses. The taxonomy induces a classification of motion primitives based on the pose used for support, and a set of rules to store and generate new motions. We present preliminary results that apply known segmentation techniques to motion data from the KIT whole-body motion database. Using motion-capture data with multiple contacts, we can identify support poses, providing a segmentation that distinguishes between the locomotion and manipulation parts of an action. Comment: 8 pages, 7 figures, 1 table with a full-page landscape figure; 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

    Exploiting the robot kinematic redundancy for emotion conveyance to humans as a lower priority task

    Current approaches do not allow robots to execute a task and simultaneously convey emotions to users through their body motions. This paper explores the capability of the Jacobian null space of a humanoid robot to convey emotions. A task-priority formulation has been implemented on a Pepper robot, which allows the specification of a primary task (a waving gesture, transportation of an object, etc.) and exploits the kinematic redundancy of the robot to convey emotions to humans as a lower-priority task. The emotions, defined by Mehrabian as points in the pleasure–arousal–dominance space, generate intermediate motion features (jerkiness, activity, and gaze) that carry the emotional information. A map from these features to the joints of the robot is presented. A user study was conducted in which emotional motions were shown to 30 participants. The results show that happiness and sadness are conveyed very well, calm is conveyed moderately well, and fear is not conveyed well. An analysis of the dependencies between the motion features and the emotions perceived by the participants shows that activity correlates positively with arousal, jerkiness is not perceived by the user, and gaze conveys dominance when activity is low. The results indicate a strong influence of the most energetic motions of the emotional task and point out new directions for further research. Overall, the results show that the null-space approach is a promising means of conveying emotions as a lower-priority task. Postprint (author's final draft).
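    The task-priority formulation described in this abstract follows a well-known redundancy-resolution pattern. As an illustrative sketch (not the paper's actual controller), a secondary joint motion, such as an emotion-conveying gesture, can be projected into the null space of the primary-task Jacobian so that it never disturbs the primary task:

```python
import numpy as np

def task_priority_velocities(J1, dx1, dq2):
    """Resolve joint velocities so the primary task velocity dx1 is met
    exactly, while the secondary joint motion dq2 acts only in the
    null space of the primary-task Jacobian J1."""
    J1_pinv = np.linalg.pinv(J1)             # Moore-Penrose pseudoinverse
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1  # null-space projector of J1
    return J1_pinv @ dx1 + N1 @ dq2          # primary + projected secondary

# Toy example: a 2-dimensional task on a 4-joint (redundant) mechanism.
J1 = np.array([[1.0, 0.0, 1.0, 0.0],
               [0.0, 1.0, 0.0, 1.0]])
dx1 = np.array([0.1, 0.0])                 # desired primary task velocity
dq2 = np.array([0.0, 0.0, 0.05, -0.05])    # secondary "expressive" motion
dq = task_priority_velocities(J1, dx1, dq2)
```

Because J1 @ N1 = 0, the projected secondary motion is invisible to the primary task: J1 @ dq equals dx1 exactly, which is the property that lets emotion conveyance run as a strictly lower-priority task.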

    TEO robot design powered by a fuel cell system

    This is an Author's Original Manuscript (non-peer-reviewed preprint) of an article published in Cybernetics and Systems: An International Journal (2012), 43(3), 163-180, available online: http://dx.doi.org/10.1080/01969722.2012.659977. This article deals with the design of the full-size humanoid robot TEO, an improved version of its predecessor, Rh-1. The whole platform is conceived under the premise of high efficiency in terms of energy consumption and optimization. We focus mainly on the electromechanical structure of the lower part of the prototype, which is the main component demanding energy during motion. The dimensions and weight of the robotic platform, together with its link configuration and rigidity, are optimized. Experimental results are presented to show the validity of the design. The research leading to these results has received funding from the RoboCity2030-II-CM project (S2009/DPI-1559), funded by Programas de Actividades I+D en la Comunidad de Madrid and co-funded by Structural Funds of the EU.

    Social Roles and Baseline Proxemic Preferences for a Domestic Service Robot

    © The Author(s) 2014. This article is published with open access at Springerlink.com and is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited. The work described in this paper was conducted within the EU Integrated Project LIREC (LIving with Robots and intEractive Companions), funded by the European Commission under contract number FP7 215554, and partly funded by the ACCOMPANY project, part of the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 287624. The goal of our research is to develop socially acceptable behavior for domestic robots in a setting where a user and the robot share the same physical space and interact with each other in close proximity. Specifically, our research focuses on approach distances and directions in the context of a robot handing over an object to a user. Peer reviewed.

    Motion for cooperation and vitality in Human-robot interaction

    In social interactions, human movement is a rich source of information for all those who take part in the collaboration. In fact, a variety of intuitive messages are communicated through motion and continuously inform the partners about the future unfolding of the actions. A similar exchange of implicit information could support movement coordination in the context of human-robot interaction. The style of an action, i.e. the way it is performed, also has a strong influence on interaction between humans. The same gesture has different consequences when it is performed aggressively or kindly, and humans are very sensitive to these subtle differences in others' behaviors. During the three years of my PhD, I focused on these two aspects of human motion. In a first study, we investigated how implicit signaling in an interaction with a humanoid robot can lead to emergent coordination in the form of automatic speed adaptation. In particular, we assessed whether different cultures, specifically Japanese and Italian, have a different impact on motor resonance and synchronization in HRI. Japanese people show a higher general acceptance of robots compared with Western cultures. Since acceptance, or better affiliation, is tightly connected to imitation and mimicry, we hypothesized a higher degree of speed imitation for Japanese participants compared to Italians. In the experimental studies undertaken both in Japan and Italy, we observed that cultural differences do not affect the natural predisposition of subjects to adapt to the robot. In a second study, we investigated how to endow a humanoid robot with behaviors expressing different vitality forms by modulating the robot's action kinematics and voice. Drawing inspiration from humans, we modified actions and voice commands performed by the robot to convey an aggressive or kind attitude.
In a series of experiments we demonstrated that the humanoid was consistently perceived as aggressive or kind. Human behavior changed in response to the different robot attitudes and matched the behavior of iCub: participants were faster when the robot was aggressive and slower when it was gentle. The ability of humanoid behavior to express vitality enriches the array of nonverbal communication that robots can exploit to foster seamless interaction. Such behavior might be crucial in emergencies and in authoritative situations in which the robot should instinctively be perceived as assertive and in charge, as in the case of police robots or teachers.

    Human-robot collaborative task planning using anticipatory brain responses

    Human-robot interaction (HRI) describes scenarios in which human and robot work as partners, sharing the same environment or complementing each other on a joint task. HRI is characterized by the need for high adaptability and flexibility of robotic systems toward their human interaction partners. One of the major challenges in HRI is task planning with dynamic subtask assignment, which is particularly difficult when the human's subtask choices are not readily accessible to the robot. In the present work, we explore the feasibility of using electroencephalogram (EEG) based neuro-cognitive measures for online robot learning of dynamic subtask assignment. To this end, we demonstrate in an experimental human-subject study, featuring a joint HRI task with a UR10 robotic manipulator, the presence of EEG measures indicative of a human partner anticipating a takeover situation from human to robot or vice versa. We further propose a reinforcement-learning algorithm that employs these measures as a neuronal feedback signal from the human to the robot for dynamic learning of subtask assignment. The efficacy of this algorithm is validated in a simulation-based study. The simulation results reveal that even with relatively low decoding accuracies, successful robot learning of subtask assignment is feasible, with around 80% choice accuracy among four subtasks within 17 minutes of collaboration. They further reveal that scaling to more subtasks is feasible and mainly accompanied by longer robot learning times. These findings demonstrate the usability of EEG-based neuro-cognitive measures to mediate the complex and largely unsolved problem of human-robot collaborative task planning.
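    The key claim here, that learning succeeds even with low decoding accuracy, can be illustrated with a minimal sketch (the algorithm and all parameters below are assumptions, not the paper's method): treat each of the four subtasks as a bandit arm whose binary human feedback is decoded correctly only with some probability, mirroring imperfect EEG decoding:

```python
import random

def learn_assignment(true_best, n_subtasks=4, accuracy=0.7,
                     steps=2000, eps=0.2, seed=0):
    """Epsilon-greedy bandit: the simulated 'EEG feedback' is a binary
    reward that is decoded correctly only with probability `accuracy`;
    otherwise the reward bit is flipped."""
    rng = random.Random(seed)
    q = [0.0] * n_subtasks   # running reward estimate per subtask
    n = [0] * n_subtasks     # pull counts
    for _ in range(steps):
        # Explore with probability eps, otherwise exploit current best.
        a = (rng.randrange(n_subtasks) if rng.random() < eps
             else max(range(n_subtasks), key=lambda i: q[i]))
        true_reward = 1.0 if a == true_best else 0.0
        # Noisy decoding: flip the feedback bit with prob. 1 - accuracy.
        reward = true_reward if rng.random() < accuracy else 1.0 - true_reward
        n[a] += 1
        q[a] += (reward - q[a]) / n[a]   # incremental mean update
    return max(range(n_subtasks), key=lambda i: q[i])

best = learn_assignment(true_best=2)
```

Even at 70% decoding accuracy, the expected reward of the preferred subtask (0.7) stays well separated from the others (0.3), so the noisy feedback averages out and the correct assignment is recovered, which is the intuition behind the paper's robustness result.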

    Becoming Human with Humanoid

    Nowadays, our expectations of robots have increased significantly. The robot, which initially performed only simple jobs, is now expected to be smarter and more dynamic. People want a robot that resembles a human (a humanoid) and has the emotional intelligence to engage in action-reaction interactions. This book consists of two sections. The first section focuses on emotional intelligence, while the second discusses the control of robots. The contents of the book present the outcomes of research conducted by scholars in robotics to accommodate the needs of society and industry.

    Representation and control of coordinated-motion tasks for human-robot systems

    It is challenging for robots to perform various tasks in a human environment, because many human-centered tasks require coordination of both hands and may involve cooperation with a human partner. Although human-centered tasks require different types of coordinated movements, most existing methodologies have focused only on specific types of coordination. This thesis addresses the description and control of coordinated-motion tasks for human-robot systems, i.e., humanoid robots as well as multi-robot and human-robot systems. First, for bimanual coordinated-motion tasks in dual-manipulator systems, we propose the Extended-Cooperative-Task-Space (ECTS) representation, which extends the existing Cooperative-Task-Space (CTS) representation based on kinematic models of human bimanual movements from biomechanics. The ECTS representation can express the whole spectrum of dual-arm motion/force coordination using two sets of ECTS motion/force variables in a unified manner. The type of coordination can be chosen by two meaningful coefficients, and during coordinated-motion tasks each set of variables directly describes a different aspect of the coordinated motion and force behavior. Thus, the operator can specify coordinated motion/force tasks more intuitively in high-level descriptions, and the specified tasks can be reused in other situations with greater flexibility. Moreover, we present consistent procedures for using the ECTS representation to specify tasks in the upper-body and lower-body subsystems of humanoid robots, in order to perform manipulation and locomotion tasks, respectively. In addition, we propose and discuss performance indices derived from the ECTS representation, which can be used to evaluate and optimize the performance of any type of dual-arm manipulation task.
We show that using the ECTS representation to specify both dual-arm manipulation and biped locomotion tasks can greatly simplify the motion-planning process, allowing the operator to focus on high-level descriptions of those tasks. Both upper-body and lower-body task specifications are demonstrated through whole-body task examples on a Hubo II+ robot carrying out dual-arm manipulation as well as biped locomotion tasks in a simulation environment. We also present results from experiments on a dual-arm robot (Baxter) teleoperating various types of coordinated-motion tasks using a single 6D mouse interface. The specified upper- and lower-body tasks can be considered coordinated motions with constraints. To express various constraints imposed across the whole body, we discuss the modeling of whole-body structure and the computations for robotic systems with multiple kinematic chains. We then present a whole-body controller formulated as a quadratic program, which can take different types of constraints into account in a prioritized manner. We validate the whole-body controller with simulation results on a Hubo II+ robot performing specified whole-body task examples with a number of motion and force constraints as well as actuation limits. Lastly, we discuss an extension of the ECTS representation, called the Hierarchical Extended-Cooperative-Task-Space (H-ECTS) framework, which uses tree-structured graphical representations for coordinated-motion tasks of multi-robot and human-robot systems. The H-ECTS framework is validated by experimental results on two Baxter robots cooperating with each other as well as with an additional human partner.
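    The cooperative-task-space idea that the ECTS representation builds on can be sketched in a few lines. This is a simplified, symmetric illustration under assumed blending coefficients, not the thesis's exact formulation: two end-effector poses are re-expressed as an "absolute" (shared) variable and a "relative" (between-arms) variable, and the mapping is invertible:

```python
import numpy as np

def cooperative_variables(x_left, x_right, a=0.5, b=0.5):
    """Map two end-effector positions to cooperative variables:
    a weighted 'absolute' position and a 'relative' offset.
    The coefficients a, b select the coordination type (illustrative)."""
    x_abs = a * x_left + b * x_right
    x_rel = x_right - x_left
    return x_abs, x_rel

def cooperative_inverse(x_abs, x_rel, a=0.5, b=0.5):
    """Recover the individual end-effector positions; this simple
    symmetric form assumes a + b = 1."""
    x_left = x_abs - b * x_rel
    x_right = x_abs + a * x_rel
    return x_left, x_right

# Example: specify a bimanual carry by commanding the absolute position
# (where the object goes) and the relative offset (the grasp width).
xl = np.array([0.2, 0.1, 0.0])
xr = np.array([0.6, -0.1, 0.3])
x_abs, x_rel = cooperative_variables(xl, xr)
xl2, xr2 = cooperative_inverse(x_abs, x_rel)
```

Commanding (x_abs, x_rel) instead of the two arm poses directly is what lets an operator describe a dual-arm task at a high level, e.g. "move the object" (change x_abs) versus "adjust the grasp" (change x_rel), which is the intuition the ECTS coefficients generalize.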