Manipulating Highly Deformable Materials Using a Visual Feedback Dictionary
The complex physical properties of highly deformable materials such as
clothes pose significant challenges for autonomous robotic manipulation
systems. We present a novel visual feedback dictionary-based method for
manipulating deformable objects towards a desired configuration. Our approach is based
on visual servoing and we use an efficient technique to extract key features
from the RGB sensor stream in the form of a histogram of deformable model
features. These histogram features serve as high-level representations of the
state of the deformable material. Next, we collect manipulation data and use a
visual feedback dictionary that maps the velocity in the high-dimensional
feature space to the velocity of the robotic end-effectors for manipulation. We
have evaluated our approach on a set of complex manipulation tasks and
human-robot manipulation tasks on different cloth pieces with varying material
characteristics.
Comment: The video is available at goo.gl/mDSC4
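The core of the approach above is a mapping from velocities in the histogram-feature space to end-effector velocities. As a minimal illustrative sketch (not the authors' implementation), one simple instantiation of such a dictionary is nearest-neighbor lookup over collected (feature velocity, end-effector velocity) pairs; all names here are hypothetical:

```python
import numpy as np

class VisualFeedbackDictionary:
    """Hypothetical sketch of a visual feedback dictionary: stores
    (feature-space velocity, end-effector velocity) pairs gathered during
    a data-collection phase and answers queries by nearest-neighbor lookup."""

    def __init__(self):
        self.feature_vels = []   # velocities in histogram-feature space
        self.effector_vels = []  # corresponding end-effector velocities

    def add(self, feature_vel, effector_vel):
        self.feature_vels.append(np.asarray(feature_vel, dtype=float))
        self.effector_vels.append(np.asarray(effector_vel, dtype=float))

    def query(self, desired_feature_vel):
        """Return the stored end-effector velocity whose feature-space
        velocity is closest (in Euclidean distance) to the desired one."""
        q = np.asarray(desired_feature_vel, dtype=float)
        dists = [np.linalg.norm(f - q) for f in self.feature_vels]
        return self.effector_vels[int(np.argmin(dists))]
```

In practice the lookup could be replaced by interpolation over several neighbors; the point is that the dictionary inverts the observed feature dynamics without an analytic deformation model.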
DeformerNet: Learning Bimanual Manipulation of 3D Deformable Objects
Applications in fields ranging from home care to warehouse fulfillment to
surgical assistance require robots to reliably manipulate the shape of 3D
deformable objects. Analytic models of elastic, 3D deformable objects require
numerous parameters to describe the potentially infinite degrees of freedom
present in determining the object's shape. Previous attempts at performing 3D
shape control rely on hand-crafted features to represent the object shape and
require training of object-specific control models. We overcome these issues
through the use of our novel DeformerNet neural network architecture, which
operates on a partial-view point cloud of the manipulated object and a point
cloud of the goal shape to learn a low-dimensional representation of the object
shape. This shape embedding enables the robot to learn a visual servo
controller that computes the desired robot end-effector action to iteratively
deform the object toward the target shape. We demonstrate both in simulation
and on a physical robot that DeformerNet reliably generalizes to object shapes
and material stiffness not seen during training. Crucially, using DeformerNet,
the robot successfully accomplishes three surgical sub-tasks: retraction
(moving tissue aside to access a site underneath it), tissue wrapping (a
sub-task in procedures like aortic stent placements), and connecting two
tubular pieces of tissue (a sub-task in anastomosis).
Comment: Submitted to IEEE Transactions on Robotics (T-RO). 18 pages, 25
figures. arXiv admin note: substantial text overlap with arXiv:2110.0468
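The pipeline described above has two stages: encode partial-view and goal point clouds into low-dimensional shape embeddings, then drive the end-effector to shrink the gap between them. The following is a loose sketch of that structure only, with a PointNet-style max-pool encoder and a proportional step standing in for the learned components; the weight matrix and gain are illustrative assumptions, not DeformerNet's actual architecture:

```python
import numpy as np

def encode_point_cloud(points, W):
    """PointNet-style encoder sketch: per-point linear features with a ReLU,
    followed by a symmetric max-pool so the embedding is invariant to point
    ordering. `points` is (N, d_in); `W` is a hypothetical learned
    (d_in, d_embed) weight matrix."""
    feats = np.maximum(points @ W, 0.0)  # per-point ReLU features
    return feats.max(axis=0)             # order-invariant pooling

def servo_action(current_embed, goal_embed, gain=0.5):
    """Proportional step toward the goal in embedding space; the learned
    visual servo controller would map this difference to an end-effector
    action instead of applying a fixed gain."""
    return gain * (goal_embed - current_embed)
```

Iterating observe-encode-act in this loop is what lets the controller deform the object toward the target shape without object-specific models.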
Hierarchical Reinforcement Learning of Multiple Grasping Strategies with Human Instructions
Grasping is an essential component of robotic manipulation and has been investigated for decades. Prior work on grasping often assumes that a sufficient amount of training data is available for learning and planning robotic grasps. However, since constructing such an exhaustive training dataset is very challenging in practice, it is desirable for a robotic system to autonomously learn and improve its grasping strategy. In this paper, we address this problem using reinforcement learning. Although recent work has presented autonomous data collection through trial and error, such methods are often limited to a single grasp type, e.g., a vertical pinch grasp. We present a hierarchical policy search approach for learning multiple grasping strategies. Our framework autonomously constructs a database of grasping motions and point clouds of objects to learn multiple grasp types. We formulate the problem of selecting the grasp location and grasp policy as a bandit problem, which can be interpreted as a variant of active learning. We applied our reinforcement learning framework to grasping both rigid and deformable objects. The experimental results show that our framework autonomously learns and improves its performance through trial and error and can grasp previously unseen objects with high accuracy.
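The bandit formulation above treats each (grasp location, grasp policy) pair as an arm and learns which arms succeed through trial and error. One standard way to instantiate such a bandit is UCB1, sketched below; the discrete arm set and reward model are illustrative assumptions, not the paper's exact formulation:

```python
import math

class GraspBandit:
    """UCB1 selection over discrete (grasp location, grasp policy) arms.
    Each arm's value is its running mean grasp-success rate; the
    exploration bonus shrinks as an arm is tried more often."""

    def __init__(self, n_arms):
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms  # running mean grasp success
        self.total = 0

    def select(self):
        self.total += 1
        for arm, c in enumerate(self.counts):
            if c == 0:
                return arm  # try every arm at least once
        ucb = [v + math.sqrt(2.0 * math.log(self.total) / c)
               for v, c in zip(self.values, self.counts)]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def update(self, arm, reward):
        """Record a grasp attempt's outcome (e.g., 1.0 success, 0.0 failure)."""
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```

The exploration bonus gives this the active-learning flavor the abstract mentions: poorly sampled grasp strategies are retried until the evidence against them outweighs their uncertainty.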