Robot Composite Learning and the Nunchaku Flipping Challenge
Advanced motor skills are essential for robots to physically coexist with
humans. Much research on robot dynamics and control has achieved advanced
robot motor capabilities, but mostly through heavily case-specific
engineering. Meanwhile, in terms of robots acquiring skills in a generalizable
manner, robot learning from human demonstration (LfD) has made great
progress, but still has limitations in handling dynamic skills and compound
actions. In this paper, we present a composite learning scheme that goes
beyond LfD and integrates robot learning from human definition, demonstration,
and evaluation. The method tackles advanced motor skills that require dynamic,
time-critical maneuvers, complex contact control, and the handling of partly
soft, partly rigid objects. We also introduce the "nunchaku flipping
challenge", an extreme test that places hard requirements on all three of
these aspects. Continuing from our previous presentations, this paper
introduces the latest update of the composite learning scheme and the physical
success of the nunchaku flipping challenge.
Robotics 2010
Without a doubt, robotics has made incredible progress over the last decades. The vision of developing, designing, and creating technical systems that help humans achieve hard and complex tasks has led to an incredible variety of solutions. Few technical fields exhibit more interdisciplinary interconnections than robotics. This stems from the highly complex challenges posed by robotic systems, especially the requirement of intelligent and autonomous operation. This book tries to give insight into the evolutionary process taking place in robotics. It provides articles covering a wide range of this exciting area. The progress of technical challenges and concepts may illuminate the relationship between developments that seem completely different at first sight. Robotics remains an exciting scientific and engineering field. The community looks ahead optimistically and looks forward to future challenges and new developments.
Mechanical engineering challenges in humanoid robotics
Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 36-39). Humanoid robots are artificial constructs designed to emulate the human body in form and function. They are a unique class of robots whose anthropomorphic nature renders them particularly well-suited to interact with humans in a world designed for humans. The present work examines a subset of the plethora of engineering challenges that face modern developers of humanoid robots, with a focus on challenges that fall within the domain of mechanical engineering. The challenge of emulating human bipedal locomotion on a robotic platform is reviewed in the context of the evolutionary origins of human bipedalism and the biomechanics of walking and running. Precise joint-angle-control bipedal robots and passive-dynamic walkers, the two most prominent classes of modern bipedal robots, are found to have their own strengths and shortcomings. An integration of the strengths of both classes is likely to characterize the next generation of humanoid robots. The challenge of replicating human arm and hand dexterity with a robotic system is reviewed in the context of the evolutionary origins and kinematic structure of human forelimbs. Form-focused design and function-focused design, two distinct approaches to the design of modern robotic arms and hands, are found to have their own strengths and shortcomings. An integration of the strengths of both approaches is likely to characterize the next generation of humanoid robots. by Peter Guang Yi Lu. S.B.
Combining Self-Supervised Learning and Imitation for Vision-Based Rope Manipulation
Manipulation of deformable objects, such as ropes and cloth, is an important
but challenging problem in robotics. We present a learning-based system where a
robot takes as input a sequence of images of a human manipulating a rope from
an initial to goal configuration, and outputs a sequence of actions that can
reproduce the human demonstration, using only monocular images as input. To
perform this task, the robot learns a pixel-level inverse dynamics model of
rope manipulation directly from images in a self-supervised manner, using about
60K interactions with the rope collected autonomously by the robot. The human
demonstration provides a high-level plan of what to do and the low-level
inverse model is used to execute the plan. We show that by combining the high-
and low-level plans, the robot can successfully manipulate a rope into a
variety of target shapes using only a sequence of human-provided images for
direction.
Comment: 8 pages, accepted to International Conference on Robotics and Automation (ICRA) 201
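The self-supervised inverse-dynamics idea described in this abstract can be illustrated with a toy sketch. The 2-D point "rope" state, the trivial dynamics, and the linear least-squares model below are all hypothetical stand-ins for the paper's roughly 60K real rope interactions and learned pixel-level model; only the structure (collect transitions autonomously, regress action from state pairs, then execute a human-provided plan step by step) mirrors the approach:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for autonomous data collection: a "robot" perturbs a 2-D
# state with random actions (the paper collects ~60K real rope interactions).
def step(state, action):
    return state + action  # deliberately simple dynamics for illustration

states, actions, next_states = [], [], []
s = np.zeros(2)
for _ in range(5000):
    a = rng.uniform(-1.0, 1.0, size=2)
    s2 = step(s, a)
    states.append(s)
    actions.append(a)
    next_states.append(s2)
    s = s2

# Inverse dynamics model: regress the action from (state, next_state) pairs,
# here with plain least squares instead of a convolutional network.
X = np.hstack([np.array(states), np.array(next_states)])
Y = np.array(actions)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def follow_plan(start, goals):
    """Execute a human-provided plan: a sequence of goal states, each
    reached by querying the learned inverse model for the next action."""
    s = np.array(start, dtype=float)
    for g in goals:
        a = np.hstack([s, g]) @ W  # predicted action toward the next subgoal
        s = step(s, a)
    return s
```

In the paper the "plan" is a sequence of images of a human manipulating the rope, and the inverse model maps image pairs to pick-and-place actions; the skeleton above keeps only that division of labor between the high-level plan and the low-level model.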
Sensors for Robotic Hands: A Survey of State of the Art
Recent decades have seen significant progress in the field of artificial hands. Most of the
surveys that try to capture the latest developments in this field have focused on the actuation and control systems of these devices. In this paper, our goal is to provide a comprehensive survey of the sensors for artificial hands. In order to present the evolution of the field, we cover five-year periods starting at the turn of the millennium. For each period, we present the robot hands with a focus on their sensor systems, dividing them into categories such as prosthetics, research devices, and industrial end-effectors. We also cover the sensors developed for robot hand usage in each era. Finally, the period between 2010 and 2015 introduces the reader to the state of the art and also hints at future directions in sensor development for artificial hands.
Cable Manipulation with a Tactile-Reactive Gripper
Cables are complex, high dimensional, and dynamic objects. Standard
approaches to manipulating them often rely on conservative strategies that
involve long series of slow, incremental deformations, or various
mechanical fixtures such as clamps, pins, or rings. We are interested in
manipulating freely moving cables, in real time, with a pair of robotic
grippers, and with no added mechanical constraints. The main contribution of
this paper is a perception and control framework that moves in that direction,
and uses real-time tactile feedback to accomplish the task of following a
dangling cable. The approach relies on a vision-based tactile sensor, GelSight,
that estimates the pose of the cable in the grip, and the friction forces
during cable sliding. We achieve the behavior by combining two tactile-based
controllers: 1) Cable grip controller, where a PD controller combined with a
leaky integrator regulates the gripping force to maintain the frictional
sliding forces close to a suitable value; and 2) Cable pose controller, where
an LQR controller based on a learned linear model of the cable sliding dynamics
keeps the cable centered and aligned on the fingertips to prevent the cable
from falling from the grip. This behavior is made possible by a reactive
gripper fitted with GelSight-based high-resolution tactile sensors. The robot can
follow one meter of cable in random configurations within 2-3 hand regrasps,
adapting to cables of different materials and thicknesses. We demonstrate a
robot grasping a headphone cable, sliding the fingers to the jack connector,
and inserting it. To the best of our knowledge, this is the first
implementation of real-time cable following without the aid of mechanical
fixtures.
Comment: Accepted to RSS 202
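The two tactile controllers described in this abstract can be sketched as follows. All gains, the force setpoint, and the linear cable-sliding model below are illustrative assumptions; the paper learns its sliding model from data and does not publish these numbers:

```python
import numpy as np

# Assumed values for illustration only; not the paper's actual parameters.
F_TARGET = 1.0                       # desired frictional sliding force (N)
KP, KD, KI, LEAK = 0.8, 0.05, 0.2, 0.95

class GripForceController:
    """Controller 1: a PD term plus a leaky integrator on the friction-force
    error, producing a grip-force adjustment that keeps the frictional
    sliding force near its setpoint."""
    def __init__(self):
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, friction_force, dt=0.02):
        err = F_TARGET - friction_force
        # Leaky integrator: accumulated error decays, avoiding windup.
        self.integral = LEAK * self.integral + KI * err * dt
        d_err = (err - self.prev_err) / dt
        self.prev_err = err
        return KP * err + KD * d_err + self.integral

def lqr_gain(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via fixed-point iteration of the Riccati equation."""
    P = Q.copy()
    for _ in range(iters):
        S = R + B.T @ P @ B
        P = Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(S, B.T @ P @ A)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Controller 2: an assumed linear model of the cable pose in the grip,
# x = [lateral offset, angle], regulated toward zero with u = -K x.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
K = lqr_gain(A, B, Q=np.eye(2), R=np.array([[0.1]]))

def regulate(x0, steps=200):
    """Simulate the closed loop keeping the cable centered and aligned."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = A @ x + (B @ (-K @ x)).ravel()
    return x
```

In the paper both controllers run on pose and friction estimates from the GelSight sensor in real time; the sketch replaces the learned sliding-dynamics model with a hand-written double-integrator-like model to keep the example self-contained.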