Learning To Grasp
Providing robots with the ability to grasp objects has, despite decades of research, remained a challenging problem. The problem is approachable in constrained environments where there is ample prior knowledge of the scene and the objects to be manipulated. The challenge is in building systems that scale beyond specific situational instances and operate gracefully in novel conditions. In the past, heuristic and simple rule-based strategies were used to accomplish tasks such as scene segmentation or reasoning about occlusion. These heuristic strategies work in constrained environments where a roboticist can make simplifying assumptions about everything from the geometries of the objects to be interacted with to the level of clutter, camera position, lighting, and a myriad of other relevant variables. With these assumptions in place, it becomes tractable for a roboticist to hardcode desired behavior and build a robotic system capable of completing repetitive tasks. These hardcoded behaviors quickly fail if the assumptions about the environment are invalidated. In this thesis we demonstrate how to build a robust grasping system that is capable of operating under a more variable set of conditions without requiring significant engineering of behavior by a roboticist.
This robustness is enabled by a newfound ability to supply novel machine learning techniques with massive amounts of synthetic training data. The ability of simulators to create realistic sensory data enables the generation of massive corpora of labeled training data for various grasping-related tasks. Simulation allows for the creation of a wide variety of environments and experiences, exposing the robotic system to a large number of scenarios before it ever operates in the real world. This thesis demonstrates that it is now possible to build systems that work in the real world yet are trained with deep learning on synthetic data. The sheer volume of data that can be produced via simulation enables the use of powerful deep learning techniques whose performance scales with the amount of data available. This thesis explores how deep learning and other techniques can encode these massive datasets for efficient runtime use. The ability to train and test on synthetic data allows quick, iterative development of new perception, planning, and grasp execution algorithms that work in a large number of environments. Creative applications of machine learning and massive synthetic datasets are allowing robotic systems to learn skills and move beyond repetitive hardcoded tasks.
Learning suction graspability considering grasp quality and robot reachability for bin-picking
Deep learning has been widely used for inferring robust grasps. Although human-labeled RGB-D datasets were initially used to learn grasp configurations, preparing such large datasets is expensive. To address this problem, images have been generated with physics simulators, and physically inspired models (e.g., a contact model between a suction vacuum cup and the object) have been used as grasp quality evaluation metrics to annotate the synthesized images. However, this kind of contact model is complicated and requires experimental parameter identification to ensure real-world performance. In addition, previous studies have not considered manipulator reachability, e.g., cases in which a grasp configuration with high grasp quality cannot be reached by the robot because of collisions or its physical limits. In this study, we propose an intuitive, geometric-analysis-based grasp quality evaluation metric, and we further incorporate a reachability evaluation metric. We annotate pixel-wise grasp quality and reachability with the proposed metrics on images synthesized in a simulator, and use them to train an auto-encoder-decoder called suction graspability U-Net++ (SG-U-Net++). Experimental results show that our intuitive grasp quality evaluation metric is competitive with a physically inspired metric. Learning reachability helps reduce motion planning computation time by removing obviously unreachable candidates. The system achieves an overall picking speed of 560 PPH (pieces per hour).
Comment: 18 pages, 2 tables, 7 figures
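As an illustration of what a geometric (rather than contact-model-based) suction score can look like, here is a minimal sketch that rates a candidate pixel by the flatness of the surface patch under the cup. The function name, the normals-image input, and the scoring rule are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def suction_score(normals, center, cup_radius_px=5):
    """Score a candidate suction point by local surface flatness.

    normals: (H, W, 3) array of unit surface normals from a depth image.
    center: (row, col) candidate pixel.
    Returns a score near 1 for a flat patch (good seal), lower otherwise.
    NOTE: a hypothetical sketch, not the SG-U-Net++ annotation metric.
    """
    r, c = center
    patch = normals[max(r - cup_radius_px, 0):r + cup_radius_px + 1,
                    max(c - cup_radius_px, 0):c + cup_radius_px + 1]
    patch = patch.reshape(-1, 3)
    mean_n = patch.mean(axis=0)
    mean_n /= np.linalg.norm(mean_n)
    # Average alignment of each normal with the mean normal: cosines close
    # to 1 everywhere mean a flat patch, i.e. a good suction seal.
    return float(np.clip(patch @ mean_n, -1.0, 1.0).mean())
```

Evaluating this score at every pixel yields exactly the kind of dense, pixel-wise annotation that the paper uses as a training target for its encoder-decoder network.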
Improving Robotic Manipulation via Reachability, Tactile, and Spatial Awareness
Robotic grasping and manipulation remain an active area of research despite significant progress over the past decades. Many existing solutions still struggle to robustly handle difficult situations that a robot might encounter even in non-contrived settings. For example, grasping systems struggle when the object is not centrally located in the robot's workspace. Grasping in dynamic environments also presents a unique set of challenges: a stable and feasible grasp can become infeasible as the object moves, and this problem becomes more pronounced when there are obstacles in the scene.
This research is inspired by the observation that object-manipulation tasks like grasping, pick-and-place, or insertion require different forms of awareness. These include reachability awareness, being aware of regions that can be reached without self-collision or collision with surrounding objects; tactile awareness, the ability to feel objects and grasp them just tightly enough to prevent slippage without crushing them; and 3D awareness, the ability to perceive size and depth in ways that make object manipulation possible. Humans use these capabilities to achieve the high level of coordination needed for object manipulation. In this work, we develop techniques that equip robots with similar sensitivities toward realizing a reliable and capable home-assistant robot.
In this thesis we demonstrate the importance of reasoning about the robot's workspace to enable grasping systems to handle more difficult settings, such as picking up moving objects while avoiding surrounding obstacles. Our method encodes the notion of reachability and uses it to generate not just stable grasps but ones that are also achievable by the robot. This reachability-aware formulation effectively expands the usable workspace of the robot, enabling it to pick up objects from difficult-to-reach locations. While recent vision-based grasping systems work reliably well, achieving pickup success rates higher than 90% in cluttered scenes, failure cases due to calibration error, slippage, and occlusion remain challenging. To address this, we develop a closed-loop, tactile-based improvement that uses additional tactile sensing to deal with self-occlusion (a limitation of vision-based systems) and to adaptively tighten the robot's grip on the object, making the grasping system tactile-aware and more reliable. This can be used as an add-on to existing grasping systems.
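The reachability-aware idea above, ranking grasp candidates by feasibility as well as stability and discarding unreachable ones before invoking a motion planner, can be sketched as follows. The scoring function, threshold, and weights are illustrative assumptions, not the thesis's actual formulation:

```python
import numpy as np

def rank_grasps(grasps, quality, reachability, reach_threshold=0.5, w=0.5):
    """Rank grasp candidates by a weighted mix of stability and reachability.

    grasps: list of candidate grasp poses (any representation).
    quality, reachability: per-candidate scores in [0, 1].
    Candidates below reach_threshold are discarded before motion planning,
    which is where the planning-time savings come from.
    NOTE: a hypothetical sketch of the general idea, not the thesis's method.
    """
    quality = np.asarray(quality, dtype=float)
    reachability = np.asarray(reachability, dtype=float)
    keep = reachability >= reach_threshold
    combined = w * quality + (1 - w) * reachability
    # Sort the surviving candidates by combined score, best first.
    order = np.argsort(-combined[keep])
    kept_idx = np.flatnonzero(keep)[order]
    return [grasps[i] for i in kept_idx]
```

In a full system the reachability scores would come from a precomputed or learned reachability map of the robot's workspace rather than being supplied by hand.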
This adaptive tactile-based approach demonstrates the effectiveness of closed-loop feedback in the final phase of the grasping process. To achieve closed-loop control throughout the entire manipulation process, we study the value of multi-view camera systems for improving learning-based manipulation systems.
Using a multi-view Q-learning formulation, we develop a learned closed-loop manipulation algorithm for precise manipulation tasks that integrates inputs from multiple static RGB cameras to overcome self-occlusion and improve 3D understanding.
To conclude, we discuss opportunities and directions for future work.
Precision Grasp Planning for Integrated Arm-Hand Systems
The demographic shift has caused labor shortages across the world, and it seems inevitable that we will rely on robots more than ever to fill the widening gap in the workforce. The robotic replacement of human workers necessitates autonomous grasping as a natural yet vital part of almost all activities. Among the different types of grasping, fingertip grasping attracts much attention because of its superior performance for dexterous manipulation. This thesis contributes to autonomous fingertip grasping in four areas: hand-eye calibration, grasp quality evaluation, inverse kinematics (IK) solution of robotic arm-hand systems, and simultaneous grasp planning and IK solution.
To initiate autonomous grasping, object perception is the first needed step. Stereo cameras are well embraced for obtaining an object's 3D model. However, the data acquired through a camera is expressed in the camera frame, while robots only accept commands encoded in the robot frame. This dilemma necessitates calibration between the robot (hand) and the camera (eye), with the main goal of estimating the camera's relative pose to the robot end-effector so that camera-acquired measurements can be converted into the robot frame. We first study the hand-eye calibration problem and achieve accurate results through a point set matching formulation.

With the object's 3D measurements expressed in the robot frame, the next step is finding an appropriate grasp configuration (contact points + contact normals) on the object's surface. To this end, we present an efficient grasp quality evaluation method that calculates a popular wrench-based quality metric, which measures the minimum distance from the origin of the wrench space to the boundary of the grasp wrench space (GWS). The proposed method mathematically expresses the exact boundary of the GWS, which allows the quality of a grasp to be evaluated at a speed that is desirable in most robotic applications.

Having obtained a suitable grasp configuration, an accurate IK solution of the arm-hand system is required to perform the planned grasp. Conventionally, the IK of the robotic hand and arm are solved sequentially, which often hurts the efficiency and accuracy of the IK solutions. To overcome this problem, we kinematically integrate the robotic arm and hand and propose a human-inspired Thumb-First strategy to narrow the search space of the IK solution. Based on the Thumb-First strategy, we propose two IK solutions. Our first solution follows a hierarchical IK strategy, while our second solution formulates the arm-hand system as a hybrid parallel-serial system to achieve a higher success rate.
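The wrench-based quality metric described above measures how far the wrench-space origin lies inside the grasp wrench space. As a rough sketch of that idea, in the spirit of the classic Ferrari-Canny epsilon metric computed from a convex hull of primitive contact wrenches, and not the thesis's exact boundary formulation:

```python
import numpy as np
from scipy.spatial import ConvexHull

def epsilon_quality(wrenches):
    """Epsilon-style quality: radius of the largest wrench-space ball,
    centred at the origin, that fits inside the grasp wrench space
    (the convex hull of the primitive contact wrenches).

    wrenches: (N, d) array of primitive contact wrenches.
    Returns 0.0 if the origin lies outside the hull (no force closure).
    NOTE: an illustrative sketch, not the thesis's exact GWS method.
    """
    hull = ConvexHull(wrenches)
    # Each facet satisfies a.x + b <= 0 for interior points, with |a| = 1,
    # so the origin's distance to the facet plane is -b; the origin is
    # inside the hull iff every offset b is <= 0.
    offsets = hull.equations[:, -1]
    if np.any(offsets > 1e-12):
        return 0.0
    return float(np.min(-offsets))
```

In practice the wrenches would be the friction-cone edge forces and their induced torques at each contact, assembled as 6D vectors; the thesis's contribution is computing the exact GWS boundary efficiently rather than relying on a generic hull routine.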
Using these results, we propose an approach that integrates grasp planning and IK solution through a specially designed coarse-to-fine strategy to improve the overall efficiency of our approach.