429 research outputs found

    Fast Object Pose Estimation Using Adaptive Threshold for Bin-Picking

    Robotic bin-picking is a common process in modern manufacturing, logistics, and warehousing that aims to pick up known or unknown objects with random poses out of a bin using a robot-camera system. Rapid and accurate object pose estimation has become a pressing issue for robotic picking in recent years. In this paper, a fast 6-DoF (degrees of freedom) pose estimation pipeline for random bin-picking is proposed. The pipeline can recognize different types of objects in various cluttered scenarios and uses an adaptive threshold segmentation strategy to accelerate estimation and matching for the picking task. In particular, the proposed method can be trained effectively with fewer samples by introducing geometric properties of the objects such as contour, normal distribution, and curvature. An experimental setup with a Kinova 6-DoF robot and an Ensenso industrial 3D camera is used to evaluate the proposed method on four different objects. The results indicate a 91.25% average success rate and a 0.265 s average estimation time, demonstrating that the approach provides competitive results for fast object pose estimation and can be applied to robotic random bin-picking tasks.

    Fast Object Learning and Dual-arm Coordination for Cluttered Stowing, Picking, and Packing

    Robotic picking from cluttered bins is a demanding task, for which Amazon Robotics holds challenges. The 2017 Amazon Robotics Challenge (ARC) required stowing items into a storage system, picking specific items, and packing them into boxes. In this paper, we describe the entry of team NimbRo Picking. Our deep object perception pipeline can be quickly and efficiently adapted to new items using a custom turntable capture system and transfer learning. It produces high-quality item segments, on which grasp poses are found. A planning component coordinates manipulation actions between the two robot arms, minimizing execution time. The system was demonstrated successfully at ARC, where our team reached second place in both the picking task and the final stow-and-pick task. We also evaluate individual components. Comment: In: Proceedings of the International Conference on Robotics and Automation (ICRA) 201

    Multimodal Grasp Planner for Hybrid Grippers in Cluttered Scenes

    Grasping a variety of objects is still an open problem in robotics, especially in cluttered scenarios. Multimodal grasping has been recognized as a promising strategy for improving the manipulation capabilities of a robotic system. This work presents a novel grasp planning algorithm for hybrid grippers that allows for multiple grasping modalities. In particular, the planner manages two-finger grasps, single or double suction grasps, and magnetic grasps. Grasps for the different modalities are computed geometrically from the cuboid fit and the material properties of the objects in the clutter. The presented framework is modular and can leverage any 6D pose estimation or material segmentation network, as long as it satisfies the required interface. Furthermore, the planner can be applied to any (hybrid) gripper, given the gripper clearance, finger width, and suction diameter. The approach is fast and has a low computational burden, as it uses geometric computations for grasp synthesis and selection. The performance of the system has been assessed in an experimental campaign across three manipulation scenarios of increasing difficulty, using objects from the YCB dataset and the DLR hybrid-compliant gripper.
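The abstract does not give the planner's exact selection rules. The sketch below is a hypothetical illustration of how grasp modalities could be gated geometrically by the fitted cuboid's dimensions, the object material, and the gripper parameters the paper names (clearance, finger width, suction diameter); all thresholds, names, and rules here are invented for illustration:

```python
# Hypothetical gripper parameters (not taken from the paper):
GRIPPER_CLEARANCE = 0.08   # maximum jaw opening [m]
FINGER_WIDTH      = 0.02   # finger thickness [m]
SUCTION_DIAMETER  = 0.03   # suction cup diameter [m]

def feasible_grasp_modes(cuboid_dims, material):
    """Illustrative modality gating for an object approximated by a cuboid.

    cuboid_dims: (dx, dy, dz) edge lengths of the fitted cuboid [m]
    material:    e.g. "plastic", "ferrous", ...
    """
    dx, dy, dz = sorted(cuboid_dims)
    modes = []
    # Two-finger grasp: the smallest cuboid extent must fit between the jaws.
    if dx <= GRIPPER_CLEARANCE:
        modes.append("two_finger")
    # Suction grasp: the largest face must accommodate the suction cup.
    if dy >= SUCTION_DIAMETER and dz >= SUCTION_DIAMETER:
        modes.append("suction")
    # Magnetic grasp: only applicable to ferrous materials.
    if material == "ferrous":
        modes.append("magnetic")
    return modes
```

For example, `feasible_grasp_modes((0.05, 0.10, 0.15), "plastic")` yields both a two-finger and a suction candidate, while a 0.2 m steel cube would be too wide for the jaws and fall back to suction and magnetic modes.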

    Part localization for robotic manipulation

    The new generation of collaborative robots allows small robot arms to work alongside human workers, e.g. the YuMi robot, a dual-arm robot with two 7-DOF arms designed for precise manipulation of small objects. For further acceptance of such robots in industry, methods and sensor systems have to be developed that allow them to perform tasks such as grasping a specific object. If the robot is to grasp an object, it has to localize the object relative to itself. This is a task of object recognition in computer vision: the art of localizing predefined objects in image sensor data. This master thesis presents a pipeline for recognizing a single isolated model in a point cloud. The system uses point cloud data generated from a 3D CAD model and describes its characteristics using local feature descriptors. These are then matched with the descriptors of the point cloud data from the scene to find the 6-DoF pose of the model in the robot coordinate frame. This initial pose estimate is then refined by a registration method such as ICP. A robot-camera calibration is also performed. The contributions of this thesis are as follows: the system uses FPFH (Fast Point Feature Histogram) descriptors to describe local regions and a hypothesize-and-test paradigm (RANSAC) in the matching process, in contrast to several approaches that rely on Point Pair Features as descriptors and geometric hashing (a voting scheme) for matching.
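The FPFH/RANSAC matching stage needs a point-cloud library such as PCL or Open3D, but the refinement step mentioned above, point-to-point ICP, can be sketched in a few lines of NumPy. This is an illustrative minimal version with brute-force nearest neighbours, not the thesis implementation:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform mapping src onto dst (Kabsch / SVD)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp(src, dst, iters=30):
    """Point-to-point ICP; returns the accumulated rotation and translation."""
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest neighbour in dst for every point of cur.
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

Given a coarse pose from feature matching, `icp(model_points, scene_points)` tightens the alignment; real pipelines use a k-d tree for the nearest-neighbour step instead of the O(n²) distance matrix used here for brevity.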