2,467 research outputs found

    Robot Composite Learning and the Nunchaku Flipping Challenge

    Advanced motor skills are essential for robots to physically coexist with humans. Much research on robot dynamics and control has achieved success on hyper robot motor capabilities, but mostly through heavily case-specific engineering. Meanwhile, in terms of robots acquiring skills in a ubiquitous manner, robot learning from human demonstration (LfD) has achieved great progress, but still has limitations in handling dynamic skills and compound actions. In this paper, we present a composite learning scheme which goes beyond LfD and integrates robot learning from human definition, demonstration, and evaluation. The method tackles advanced motor skills that require dynamic, time-critical maneuvers, complex contact control, and handling of partly soft, partly rigid objects. We also introduce the "nunchaku flipping challenge", an extreme test that places hard requirements on all three of these aspects. Continuing from our previous presentations, this paper introduces the latest update of the composite learning scheme and the physical success of the nunchaku flipping challenge.
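    The abstract names three learning channels (human definition, demonstration, and evaluation) without detailing how they fit together. The following is a minimal, hypothetical Python sketch of one way such channels could be combined; the CompositeLearner class, the phase names, and the scoring scheme are illustrative assumptions, not the authors' implementation.

```python
import random

class CompositeLearner:
    """Hypothetical sketch of a composite learning loop combining human
    definition (task structure), demonstration (seed parameters), and
    evaluation (scores used to refine the parameters)."""

    def __init__(self, phases, demos):
        # Human definition: an ordered list of skill phases, e.g.
        # ["swing", "catch", "stabilize"] for a flipping maneuver (assumed names).
        self.phases = phases
        # Human demonstration: a parameter vector per phase, extracted
        # offline from recorded demonstrations.
        self.params = {p: list(demos[p]) for p in phases}

    def rollout(self):
        # Execute the phases in the order given by the human definition and
        # return the parameters used (stand-in for a real robot rollout).
        return {p: list(self.params[p]) for p in self.phases}

    def update_from_evaluation(self, scores, step=0.05):
        # Human evaluation: per-phase scores in [0, 1]; perturb the
        # parameters of poorly scored phases to explore alternatives.
        for phase, score in scores.items():
            if score < 0.5:
                self.params[phase] = [
                    w + random.uniform(-step, step) for w in self.params[phase]
                ]

# Usage: seed from demonstration, roll out, then refine from human scores.
learner = CompositeLearner(
    phases=["swing", "catch", "stabilize"],
    demos={"swing": [0.8, 0.1], "catch": [0.3, 0.7], "stabilize": [0.5, 0.5]},
)
trajectory = learner.rollout()
learner.update_from_evaluation({"swing": 0.9, "catch": 0.4, "stabilize": 0.8})
```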

    Exploring haptic interfacing with a mobile robot without visual feedback

    Search and rescue scenarios are often complicated by low- or no-visibility conditions. The lack of visual feedback hampers orientation and causes significant stress for human rescue workers. The Guardians project [1] pioneered a group of autonomous mobile robots assisting a human rescue worker operating within close range. Trials were held with fire fighters of South Yorkshire Fire and Rescue. It became clear that the subjects were by no means prepared to give up their procedural routines and the sense of security they provide: they simply ignored instructions that contradicted those routines.

    Combining Self-Supervised Learning and Imitation for Vision-Based Rope Manipulation

    Manipulation of deformable objects, such as ropes and cloth, is an important but challenging problem in robotics. We present a learning-based system where a robot takes as input a sequence of images of a human manipulating a rope from an initial to a goal configuration, and outputs a sequence of actions that can reproduce the human demonstration, using only monocular images as input. To perform this task, the robot learns a pixel-level inverse dynamics model of rope manipulation directly from images in a self-supervised manner, using about 60K interactions with the rope collected autonomously by the robot. The human demonstration provides a high-level plan of what to do, and the low-level inverse model is used to execute the plan. We show that by combining the high- and low-level plans, the robot can successfully manipulate a rope into a variety of target shapes using only a sequence of human-provided images for direction. Comment: 8 pages, accepted to the International Conference on Robotics and Automation (ICRA) 2017.
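    As a rough illustration of the architecture described above (a human-provided image sequence as the high-level plan, with a learned inverse dynamics model choosing each action), here is a hedged Python sketch; follow_demonstration, the dummy model, and the stand-in robot interface are assumptions for illustration, not the paper's code.

```python
import numpy as np

def follow_demonstration(current_image, goal_images, inverse_model, execute_action):
    """Hypothetical control loop: step through the human-provided image
    sequence and let a learned inverse dynamics model pick each action."""
    for goal in goal_images:
        # The inverse model maps (observed image, desired next image) to an action.
        action = inverse_model(current_image, goal)
        # Execute on the robot and observe the resulting image.
        current_image = execute_action(action)
    return current_image

# Stand-ins so the sketch runs: a random "inverse model" and a fake robot
# interface that ignores the action and returns a random camera frame.
rng = np.random.default_rng(0)
dummy_model = lambda obs, goal: rng.uniform(-1.0, 1.0, size=4)     # 4-DoF action
dummy_robot = lambda action: rng.uniform(0.0, 1.0, size=(64, 64))  # next frame

start = rng.uniform(0.0, 1.0, size=(64, 64))
plan = [rng.uniform(0.0, 1.0, size=(64, 64)) for _ in range(5)]
final_obs = follow_demonstration(start, plan, dummy_model, dummy_robot)
print(final_obs.shape)
```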

    Toward Dynamic Manipulation of Flexible Objects by High-Speed Robot System: From Static to Dynamic

    This chapter explains dynamic manipulation of flexible objects, where the target objects to be manipulated include rope, ribbon, cloth, pizza dough, and so on. Previously, flexible object manipulation has been performed in a static or quasi-static state; as a result, the manipulation time becomes long and the efficiency of the manipulation is not sufficient. To solve these problems, we propose a novel control strategy and motion planning for achieving flexible object manipulation at high speed. The proposed strategy simplifies the flexible object dynamics. Moreover, we implemented a high-speed vision system and high-speed image processing to improve the success rate by adjusting the robot trajectory. Using this strategy, motion planning, and high-speed visual feedback, we demonstrated several tasks, including dynamic manipulation and knotting of a rope, generating a ribbon shape, dynamic folding of cloth, rope insertion, and pizza dough rotation, and we show experimental results obtained with the high-speed robot system.
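    The control idea sketched in the abstract, a pre-planned trajectory corrected online by high-rate visual feedback, might be structured roughly as below; the loop rate, the proportional correction, and the stand-in functions are illustrative assumptions rather than the authors' system.

```python
import math

def high_speed_feedback_loop(capture_frame, estimate_state, nominal_trajectory,
                             send_command, rate_hz=500, duration_s=0.5):
    """Hypothetical high-rate visual feedback loop: track the object at each
    frame and correct the nominal (pre-planned) trajectory online."""
    dt = 1.0 / rate_hz
    n_steps = int(duration_s * rate_hz)
    for k in range(n_steps):
        t = k * dt
        frame = capture_frame()                          # high-speed camera image
        error = estimate_state(frame)                    # deviation from the plan
        command = nominal_trajectory(t) - 0.5 * error    # simple proportional correction
        send_command(command)

# Stand-ins so the sketch runs.
high_speed_feedback_loop(
    capture_frame=lambda: None,
    estimate_state=lambda frame: 0.0,
    nominal_trajectory=lambda t: math.sin(2 * math.pi * t),
    send_command=lambda u: None,
)
```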

    Becoming Human with Humanoid

    Nowadays, our expectations of robots have increased significantly. The robot, which initially only did simple jobs, is now expected to be smarter and more dynamic. People want a robot that resembles a human (a humanoid) and has the emotional intelligence to perform action-reaction interactions. This book consists of two sections. The first section focuses on emotional intelligence, while the second section discusses robot control. The contents of the book reveal the outcomes of research conducted by scholars in robotics fields to accommodate the needs of society and industry.

    Following a Robot using a Haptic Interface without Visual Feedback

    Search and rescue operations are often undertaken in dark and noisy environments in which rescue teams must rely on haptic feedback for navigation and safe exit. In this paper, we discuss the design and evaluation of a haptic interface that enables a human to follow a robot through an environment with no visibility. We first briefly analyse the task at hand and discuss the considerations that have led to our current interface design. The second part of the paper describes our testing procedure and the results of our first informal tests. Based on these results, we discuss future improvements to our design.
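    The abstract does not describe the cue mapping itself; one plausible way to render the robot's relative position as a guidance force is sketched below. The proportional force mapping, haptic_cue, and its parameters are assumptions for illustration, not the interface described in the paper.

```python
import math

def haptic_cue(robot_xy, human_xy, max_force=5.0, gain=2.0):
    """Hypothetical mapping from the human-robot offset to a guidance force:
    pull the hand toward the robot, saturating at max_force (newtons)."""
    dx = robot_xy[0] - human_xy[0]
    dy = robot_xy[1] - human_xy[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return (0.0, 0.0)
    magnitude = min(gain * dist, max_force)
    return (magnitude * dx / dist, magnitude * dy / dist)

# Example: the robot is 1 m ahead and 0.5 m to the left of the follower.
print(haptic_cue(robot_xy=(1.0, 0.5), human_xy=(0.0, 0.0)))
```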