    Planning hand-arm grasping motions with human-like appearance

    Finalist for the Best Application Paper Award at the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). This paper addresses the problem of obtaining human-like motions on hand-arm robotic systems performing pick-and-place actions. The focus is set on the coordinated movements of the robotic arm and the anthropomorphic mechanical hand with which the arm is equipped. For this, human movements performing different grasps are captured and mapped to the robot in order to compute the human hand synergies. These synergies are used to reduce the complexity of the planning phase by reducing the dimension of the search space. In addition, the paper proposes a sampling-based planner, which guides the motion planning following the synergies. The introduced approach is tested in an application example and thoroughly compared with other state-of-the-art planning algorithms, obtaining better results.
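    The abstract leaves the synergy computation and the synergy-guided sampling unspecified. A common realisation, sketched below only as an assumption, is to extract synergies as the principal components of the captured hand configurations and to sample new hand postures in the reduced space; all function names and dimensions are illustrative.

        # Sketch: hand synergies via PCA, then sampling in the reduced space.
        import numpy as np

        def extract_synergies(grasp_configs, n_synergies=2):
            """grasp_configs: (n_samples, n_joints) hand joint angles captured
            from human grasps and mapped to the robot hand."""
            mean = grasp_configs.mean(axis=0)
            # Principal components of the joint-angle data act as synergies.
            _, _, vt = np.linalg.svd(grasp_configs - mean, full_matrices=False)
            return mean, vt[:n_synergies]  # (n_joints,), (n_synergies, n_joints)

        def sample_hand_config(mean, synergies, scale=1.0, rng=None):
            """Sample a posture in the low-dimensional synergy space and map it
            back to the full joint space, shrinking the planner's search space."""
            rng = rng or np.random.default_rng()
            z = rng.normal(0.0, scale, size=synergies.shape[0])
            return mean + z @ synergies

    A sampling-based planner would then draw hand configurations from sample_hand_config instead of sampling every finger joint independently, which is what reduces the dimension of the search space.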

    A social networking-enabled framework for autonomous robot skill development

    University of Technology Sydney. Faculty of Engineering and Information Technology.

    Intelligent service robots will need to adapt to unforeseen situations when performing tasks for humans. To do this, they will be expected to continuously develop new skills. Existing frameworks that address robot learning of new skills for a particular task often follow a task-specific design approach. Task-specific design cannot support robots in adapting skills to new tasks, largely because its skill specifications cannot be extended or easily changed. This dissertation provides an innovative task-independent framework that allows robots to develop new skills on their own.

    The idea is to create an online social network platform called Numbots that enables robots to learn new skills autonomously from their social circles. This platform integrates a state-of-the-art approach to learning from experience, called Constructing Skill Trees (CST), with a state-of-the-art framework for knowledge sharing, called RoboEarth. Based on this integration, a new logic model for online Robot-Robot Interaction (RRI) is developed. The principal focus of this dissertation is the analysis of, and solutions to, three underlying technical challenges required to achieve the RRI model: (i) skill representation; (ii) autonomous skill recognition and sharing; and (iii) skill transfer.

    We focus on motion skills required to interact with and manipulate objects, where a robot performs a series of motions to attain a goal given by humans. Skills formalise robot activities, which may involve an object (for example, kicking a ball, lifting a box, or passing a bottle of water to a person). Skills may also include robot activities that do not involve objects (for example, raising hands or walking forward).

    The first challenge concerns how to create a new skill representation that can represent robot skills independently of robot species, tasks and environments. We develop a generic robot skill representation, which characterises three key dimensions of a robot skill in the focused domain: the changing relationship, the spatial relationship and the temporal relationship between the robot and a possible object. The new representation takes a spatial-temporal perspective similar to that found in RoboEarth, and uses the concepts of “agent space” and “object space” from the CST approach.

    The second challenge concerns how to enable robots to autonomously recognise and share their experiences with other robots in their social network. We propose an effect-based skill recognition mechanism that enables robots to recognise skills based on the effects that result from their actions. We introduce two types of autonomous skill recognition: (i) recognition of a chain of existing skill primitives; and (ii) recognition of a chain of unknown skills. All recognised skills are generalised and packed into a JSON file to share across Numbots.

    The third challenge is how to enable shared generic robot skills to be interpreted by a robot learner for its own problem solving. We introduce an effect-based skill transfer mechanism, an algorithm that decomposes and customises a downloaded generic robot skill into a set of executable action commands for the robot learner's own problem solving.

    After introducing the three technical challenges of the RRI model and our solutions, a simulation is undertaken. It demonstrates that a skill recognised and shared by a PR2 robot can be reused and transferred by a NAO robot to solve a different problem. In addition, we provide a series of comparisons with RoboEarth, using a “ServeADrink” case study, to demonstrate the key advantages of the newly created generic robot skill representation over the more limited skill representation in RoboEarth. Although implementing Numbots and the RRI model on a real robot remains future work, the analysis and solutions proposed in this dissertation demonstrate the potential to enable robots to develop new skills on their own, in the absence of human or robot demonstrators, and to perform tasks for which they were not explicitly programmed.
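    The abstract describes recognised skills being generalised and packed into a JSON file for sharing across Numbots. As a rough illustration only, a payload built around the three relationship dimensions might look like the sketch below; the field names and schema are assumptions, not the dissertation's actual format.

        # Hypothetical sketch of a shareable generic-skill payload.
        # Schema and field names are illustrative assumptions.
        import json
        from dataclasses import dataclass, field, asdict

        @dataclass
        class GenericSkill:
            name: str                    # e.g. "lift_box"
            changing_relationship: dict  # effect on the object (e.g. pose change)
            spatial_relationship: dict   # robot/object frames ("agent space" /
                                         # "object space" in CST terms)
            temporal_relationship: list  # ordered skill primitives with timing
            metadata: dict = field(default_factory=dict)

            def to_json(self) -> str:
                return json.dumps(asdict(self), indent=2)

        skill = GenericSkill(
            name="lift_box",
            changing_relationship={"object_height_delta_m": 0.3},
            spatial_relationship={"grasp_frame": "object_top"},
            temporal_relationship=[{"primitive": "reach"},
                                   {"primitive": "grasp"},
                                   {"primitive": "lift"}],
        )
        print(skill.to_json())  # this payload would be uploaded for sharing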

    Human keypoint detection for close proximity human-robot interaction

    We study the performance of state-of-the-art human keypoint detectors in the context of close proximity human-robot interaction. The detection in this scenario is specific in that only a subset of body parts such as hands and torso are in the field of view. In particular, (i) we survey existing datasets with human pose annotation from the perspective of close proximity images and prepare and make publicly available a new Human in Close Proximity (HiCP) dataset; (ii) we quantitatively and qualitatively compare state-of-the-art human whole-body 2D keypoint detection methods (OpenPose, MMPose, AlphaPose, Detectron2) on this dataset; (iii) since accurate detection of hands and fingers is critical in applications with handovers, we evaluate the performance of the MediaPipe hand detector; (iv) we deploy the algorithms on a humanoid robot with an RGB-D camera on its head and evaluate the performance in 3D human keypoint detection. A motion capture system is used as reference. The best performing whole-body keypoint detectors in close proximity were MMPose and AlphaPose, but both had difficulty with finger detection. Thus, we propose a combination of MMPose or AlphaPose for the body and MediaPipe for the hands in a single framework providing the most accurate and robust detection. We also analyse the failure modes of individual detectors -- for example, to what extent the absence of the head of the person in the image degrades performance. Finally, we demonstrate the framework in a scenario where a humanoid robot interacting with a person uses the detected 3D keypoints for whole-body avoidance maneuvers.
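    The proposed combination (MMPose or AlphaPose for the body, MediaPipe for the hands) can be sketched as follows. The MediaPipe calls use the legacy solutions API; detect_body_keypoints is a hypothetical stand-in for the chosen whole-body detector, whose API is not reproduced here.

        # Sketch: whole-body detector for torso/limbs + MediaPipe for fingers.
        import cv2
        import mediapipe as mp

        mp_hands = mp.solutions.hands.Hands(static_image_mode=True,
                                            max_num_hands=2,
                                            min_detection_confidence=0.5)

        def detect_body_keypoints(image_bgr):
            """Hypothetical stand-in for an MMPose/AlphaPose inference call."""
            return None  # replace with the chosen whole-body detector

        def detect_keypoints(image_bgr):
            body = detect_body_keypoints(image_bgr)       # coarse body joints
            rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
            result = mp_hands.process(rgb)                # accurate finger joints
            hands = []
            if result.multi_hand_landmarks:
                h, w = image_bgr.shape[:2]
                for hand in result.multi_hand_landmarks:
                    hands.append([(lm.x * w, lm.y * h) for lm in hand.landmark])
            return body, hands  # merge: body for torso/arms, hands for fingers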

    Robots that can adapt like animals

    As robots leave the controlled environments of factories to autonomously function in more complex, natural environments, they will have to respond to the inevitable fact that they will become damaged. However, while animals can quickly adapt to a wide variety of injuries, current robots cannot "think outside the box" to find a compensatory behavior when damaged: they are limited to their pre-specified self-sensing abilities, can diagnose only anticipated failure modes, and require a pre-programmed contingency plan for every type of potential damage, an impracticality for complex robots. Here we introduce an intelligent trial-and-error algorithm that allows robots to adapt to damage in less than two minutes, without requiring self-diagnosis or pre-specified contingency plans. Before deployment, a robot exploits a novel algorithm to create a detailed map of the space of high-performing behaviors: this map represents the robot's intuitions about what behaviors it can perform and their value. If the robot is damaged, it uses these intuitions to guide a trial-and-error learning algorithm that conducts intelligent experiments to rapidly discover a compensatory behavior that works in spite of the damage. Experiments reveal successful adaptations for a legged robot injured in five different ways, including damaged, broken, and missing legs, and for a robotic arm with joints broken in 14 different ways. This new technique will enable more robust, effective, autonomous robots, and suggests principles that animals may use to adapt to injury.
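    A compact sketch of the adaptation phase under simplifying assumptions: the pre-computed behavior-performance map is taken to be a dict from behavior descriptor tuples to simulated performance, and the Gaussian-process machinery of the original algorithm is reduced to an RBF-weighted correction plus an optimism bonus that shrinks near already-tested behaviors.

        # Sketch: map-guided trial and error after damage (simplified).
        import numpy as np

        def adapt(behavior_map, evaluate_on_robot, kappa=0.05, sigma=0.2,
                  n_trials=20):
            descs = np.array(list(behavior_map.keys()), dtype=float)
            prior = np.array(list(behavior_map.values()), dtype=float)
            tried, errs = [], []
            best_desc, best_perf = None, -np.inf
            for _ in range(n_trials):
                if tried:
                    d = np.linalg.norm(descs[:, None] - np.array(tried)[None],
                                       axis=2)
                    w = np.exp(-d**2 / (2 * sigma**2))
                    # Shift the map's predictions by the observed sim-to-real
                    # errors of nearby trials; be less optimistic near them.
                    corr = (w @ np.array(errs)) / (w.sum(axis=1) + 1e-9)
                    bonus = kappa * (1.0 - w.max(axis=1))
                else:
                    corr = np.zeros_like(prior)
                    bonus = np.full_like(prior, kappa)
                ucb = prior + corr + bonus           # optimistic estimate
                i = int(np.argmax(ucb))
                perf = evaluate_on_robot(descs[i])   # one physical trial
                tried.append(descs[i]); errs.append(perf - prior[i])
                if perf > best_perf:
                    best_desc, best_perf = descs[i], perf
                if best_perf >= 0.9 * ucb.max():     # good-enough behavior found
                    break
            return best_desc, best_perf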

    Deep Kernels for Optimizing Locomotion Controllers

    Sample efficiency is important when optimizing parameters of locomotion controllers, since hardware experiments are time consuming and expensive. Bayesian Optimization, a sample-efficient optimization framework, has recently been widely applied to address this problem, but further improvements in sample efficiency are needed for practical applicability to real-world robots and high-dimensional controllers. To address this, prior work has proposed using domain expertise to construct custom distance metrics for locomotion. In this work we show how to learn such a distance metric automatically. We use a neural network to learn an informed distance metric from data obtained in high-fidelity simulations. We conduct experiments on two different controllers and robot architectures. First, we demonstrate improvement in sample efficiency when optimizing a 5-dimensional controller on the ATRIAS robot hardware. We then conduct simulation experiments to optimize a 16-dimensional controller for a 7-link robot model and obtain significant improvements even when optimizing in perturbed environments. This demonstrates that our approach is able to enhance sample efficiency for two different controllers, making it a fitting candidate for further experiments on hardware in the future. (Rika Antonova and Akshara Rai contributed equally.)
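    The learned distance metric amounts to a "deep kernel": a neural network, trained on high-fidelity simulation data, maps controller parameters to a feature space, and the GP kernel is a standard RBF on those features. The sketch below uses a tiny random-weight network as a stand-in for the trained feature map; all shapes and names are illustrative assumptions.

        # Sketch: RBF kernel computed on learned features (deep kernel).
        import numpy as np

        rng = np.random.default_rng(0)
        W1, W2 = rng.normal(size=(5, 32)), rng.normal(size=(32, 8))

        def phi(x):
            """Feature map; here an untrained 2-layer MLP for illustration."""
            return np.tanh(np.tanh(np.atleast_2d(x) @ W1) @ W2)

        def deep_kernel(X, Y, lengthscale=1.0):
            """k(x, y) = exp(-||phi(x) - phi(y)||^2 / (2 * lengthscale^2))"""
            fx, fy = phi(X), phi(Y)
            d2 = ((fx[:, None] - fy[None]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * lengthscale**2))

        # Toy GP posterior mean over candidate controller parameters.
        X_obs = rng.uniform(size=(4, 5)); y_obs = rng.uniform(size=4)
        K = deep_kernel(X_obs, X_obs) + 1e-6 * np.eye(4)
        X_new = rng.uniform(size=(3, 5))
        mu = deep_kernel(X_new, X_obs) @ np.linalg.solve(K, y_obs)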