More Than a Feeling: Learning to Grasp and Regrasp using Vision and Touch
For humans, the process of grasping an object relies heavily on rich tactile
feedback. Most recent robotic grasping work, however, has been based only on
visual input, and thus cannot easily benefit from feedback after initiating
contact. In this paper, we investigate how a robot can learn to use tactile
information to iteratively and efficiently adjust its grasp. To this end, we
propose an end-to-end action-conditional model that learns regrasping policies
from raw visuo-tactile data. This model -- a deep, multimodal convolutional
network -- predicts the outcome of a candidate grasp adjustment, and then
executes a grasp by iteratively selecting the most promising actions. Our
approach requires neither calibration of the tactile sensors, nor any
analytical modeling of contact forces, thus reducing the engineering effort
required to obtain efficient grasping policies. We train our model with data
from about 6,450 grasping trials on a two-finger gripper equipped with GelSight
high-resolution tactile sensors on each finger. Across extensive experiments,
our approach outperforms a variety of baselines at (i) estimating grasp
adjustment outcomes, (ii) selecting efficient grasp adjustments for quick
grasping, and (iii) reducing the amount of force applied at the fingers, while
maintaining competitive performance. Finally, we study the choices made by our
model and show that it has successfully acquired useful and interpretable
grasping behaviors.
Comment: 8 pages. Published in IEEE Robotics and Automation Letters (RAL).
Website: https://sites.google.com/view/more-than-a-feelin
A Certified-Complete Bimanual Manipulation Planner
Planning motions for two robot arms to move an object collaboratively is a
difficult problem, mainly because of the closed-chain constraint, which arises
whenever two robot hands simultaneously grasp a single rigid object. In this
paper, we propose a manipulation planning algorithm to bring an object from an
initial stable placement (position and orientation of the object on the support
surface) towards a goal stable placement. The key specificity of our algorithm
is that it is certified-complete: for a given object and a given environment,
we provide a certificate that the algorithm will find a solution to any
bimanual manipulation query in that environment whenever one exists. Moreover,
the certificate is constructive: at run-time, it can be used to quickly find a
solution to a given query. The algorithm is tested in software and hardware on
a number of large pieces of furniture.
Comment: 12 pages, 7 figures, 1 table
Dexterous Manipulation Graphs
We propose the Dexterous Manipulation Graph as a tool to address in-hand
manipulation, i.e., repositioning an object inside a robot's end-effector. This
graph is used to plan a sequence of manipulation primitives so as to bring the
object to the desired end pose. The sequence of primitives is then translated
into motions of the robot to move the object held by the end-effector. We use a
dual-arm robot with parallel grippers to test our method on a real system and
show successful planning and execution of in-hand manipulation.
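The graph-planning idea in this abstract — nodes are in-hand object configurations, edges are manipulation primitives, and a plan is a path from the current grasp to the goal — can be sketched with a plain breadth-first search. The graph below is entirely hypothetical (node and primitive names are invented); it only illustrates how a primitive sequence falls out of a shortest-path query.

```python
from collections import deque

# Hypothetical in-hand manipulation graph: nodes are object poses in the
# gripper, edges are (primitive, resulting pose) pairs. Illustrative only.
GRAPH = {
    "edge_grasp":   [("pivot", "corner_grasp")],
    "corner_grasp": [("slide", "face_grasp"), ("pivot", "edge_grasp")],
    "face_grasp":   [("regrasp", "goal_grasp")],
    "goal_grasp":   [],
}

def plan_primitives(start, goal):
    """Breadth-first search returning the shortest primitive sequence
    that moves the object from the start pose to the goal pose."""
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        node, plan = queue.popleft()
        if node == goal:
            return plan
        for primitive, nxt in GRAPH[node]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, plan + [primitive]))
    return None  # goal unreachable from start

print(plan_primitives("edge_grasp", "goal_grasp"))  # → ['pivot', 'slide', 'regrasp']
```

Each returned primitive would then be mapped to actual robot motion, which is the translation step the abstract refers to.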
Multiform Adaptive Robot Skill Learning from Humans
Object manipulation is a basic element in everyday human lives. Robotic
manipulation has progressed from maneuvering single-rigid-body objects with
firm grasping to maneuvering soft objects and handling contact-rich actions.
Meanwhile, technologies such as robot learning from demonstration have enabled
humans to intuitively train robots. This paper discusses a new level of robotic
learning-based manipulation. In contrast to the single form of learning from
demonstration, we propose a multiform learning approach that integrates
additional forms of skill acquisition, including adaptive learning from
definition and evaluation. Moreover, going beyond state-of-the-art technologies
of handling purely rigid or soft objects in a pseudo-static manner, our work
allows robots to learn to handle partly rigid, partly soft objects with
time-critical skills and sophisticated contact control. Such capability of
robotic manipulation offers a variety of new possibilities in human-robot
interaction.
Comment: Accepted to 2017 Dynamic Systems and Control Conference (DSCC),
Tysons Corner, VA, October 11-1
Pick and Place Without Geometric Object Models
We propose a novel formulation of robotic pick and place as a deep
reinforcement learning (RL) problem. Whereas most deep RL approaches to robotic
manipulation frame the problem in terms of low level states and actions, we
propose a more abstract formulation. In this formulation, actions are target
reach poses for the hand and states are a history of such reaches. We show this
approach can solve a challenging class of pick-place and regrasping problems
where the exact geometry of the objects to be handled is unknown. The only
information our method requires is: 1) the sensor perception available to the
robot at test time; 2) prior knowledge of the general class of objects for
which the system was trained. We evaluate our method using objects belonging to
two different categories, mugs and bottles, both in simulation and on real
hardware. Results show a major improvement relative to a shape primitives
baseline.
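The abstraction this abstract proposes — actions are target reach poses for the hand and the state is the history of such reaches, with no geometric object model — can be illustrated with a toy rollout. Everything below is invented for illustration (the discrete pose set, the reward rule, the random policy); it only shows the shape of the state and action spaces, not the paper's learned policy.

```python
import random

random.seed(0)

# Hypothetical discrete action space: each action is a target reach
# pose (x, y, z) for the hand; no object geometry appears anywhere.
REACH_POSES = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

def step(history, action):
    """Toy environment: the state is the history of reaches. Reward 1.0
    if the last two reaches look like pick-then-place (reach low, then
    reach high); 0 otherwise. A stand-in for the real task reward."""
    history = history + [action]
    picked_then_placed = len(history) >= 2 and history[-2][2] == 0 and action[2] == 1
    return history, 1.0 if picked_then_placed else 0.0

# Rollout of a random policy over the abstract action space.
history, total = [], 0.0
for _ in range(6):
    action = random.choice(REACH_POSES)
    history, reward = step(history, action)
    total += reward
print(len(history))
```

The point of the abstraction is visible even in this toy: the policy reasons only over reach poses and their history, so the same formulation applies to any object in the trained category, mugs or bottles alike.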