
    Robotic grasp detection based on image processing and random forest

    © 2019, The Author(s). Real-time grasp detection plays a key role in manipulation, and it is a complex task, especially for novel objects. This paper proposes a fast and accurate approach to detecting robotic grasps. The main idea is to grasp novel objects in a typical RGB-D scene view. Our goal is not to find the best grasp for every object but to obtain locally optimal grasps among candidate grasp rectangles. Our detection work makes three main contributions. First, an improved graph-segmentation approach is used for object detection; it separates objects from the background directly and quickly. Second, we develop a morphological image-processing method to generate a set of candidate grasp rectangles, which avoids a global search over grasp rectangles. Finally, we train a random forest model to predict grasps, achieving an accuracy of 94.26%. The model scores every element in the candidate grasp set, and the one with the highest score is converted into the final grasp configuration for the robot. For real-world experiments, we set up our system on a tabletop scene with multiple objects; when executing grasps, we control the Baxter robot with an inverse kinematics strategy different from the built-in one.
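    The scoring step described above (a random forest ranking candidate grasp rectangles and keeping the top-scoring one) can be sketched as follows. The feature set, function names, and synthetic training data here are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical feature extractor: each candidate grasp rectangle
# (x, y, width, height, angle) is summarized by a fixed-length vector
# of geometry and depth-patch statistics. Features are illustrative.
def rectangle_features(rect, depth_image):
    x, y, w, h, theta = rect
    patch = depth_image[y:y + h, x:x + w]
    return np.array([w, h, theta, patch.mean(), patch.std()])

def best_grasp(candidates, depth_image, model):
    # Score every candidate rectangle and keep the highest-scoring one.
    feats = np.stack([rectangle_features(r, depth_image) for r in candidates])
    scores = model.predict_proba(feats)[:, 1]  # probability of "graspable"
    return candidates[int(np.argmax(scores))]

# Train on labeled rectangles (synthetic stand-in data for the demo).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 3] > 0).astype(int)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

depth = rng.random((100, 100))
candidates = [(10, 10, 20, 12, 0.3), (40, 50, 18, 10, -0.2), (60, 20, 25, 14, 1.1)]
best = best_grasp(candidates, depth, model)
```

    In the paper, the winning rectangle would then be converted to the robot's final grasp configuration; here `best` is simply one of the input tuples.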

    Using Geometry to Detect Grasping Points on 3D Unknown Point Cloud

    In this paper, we focus on the task of computing a pair of points for grasping unknown objects, given a single point-cloud scene with a partial view of them. The main goal is to estimate the best pair of 3D-located points so that a gripper can perform a stable grasp over the objects in the scene with no prior knowledge of their shape. We propose a geometrical approach that finds those contact points by placing them near a cutting plane that is perpendicular to the object's main axis and passes through its centroid. During experimentation we found that this solution is fast enough and gives sufficiently stable grasps to be used on a real service robot. This work was funded by the Spanish Government Ministry of Economy, Industry and Competitiveness through the project DPI2015-68087-R and the predoctoral grant BES-2016-078290.
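    The geometric idea above (contacts near a cutting plane perpendicular to the object's main axis, through its centroid) can be sketched roughly as below. The plane tolerance and the farthest-pair selection rule are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def grasp_point_pair(cloud, plane_tol=0.005):
    """Cut the cloud with a plane perpendicular to the main axis through
    the centroid, then pick the two cut points farthest apart as
    candidate contacts. (Selection rule is illustrative.)"""
    centroid = cloud.mean(axis=0)
    # Main axis = first principal direction of the centred cloud.
    _, _, vt = np.linalg.svd(cloud - centroid, full_matrices=False)
    main_axis = vt[0]
    # Keep points lying near the perpendicular cutting plane.
    dist_along_axis = (cloud - centroid) @ main_axis
    ring = cloud[np.abs(dist_along_axis) < plane_tol]
    # The farthest-apart pair on the cut approximates opposing contacts.
    d = np.linalg.norm(ring[:, None, :] - ring[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    return ring[i], ring[j]

# Synthetic cylinder (radius 3 cm, axis along z): contacts should land
# on roughly opposite sides of the barrel, near mid-height.
rng = np.random.default_rng(1)
ang = rng.uniform(0, 2 * np.pi, 500)
h = rng.uniform(-0.1, 0.1, 500)
cloud = np.column_stack([0.03 * np.cos(ang), 0.03 * np.sin(ang), h])
p1, p2 = grasp_point_pair(cloud)
```

    A real partial view would be noisier and one-sided, which is why the papers from this group add further geometric rules on top of this basic cut.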

    GoNet: An Approach-Constrained Generative Grasp Sampling Network

    Constraining the approach direction of grasps is important when picking objects in confined spaces, such as when emptying a shelf. Yet such capabilities are not available in state-of-the-art data-driven grasp sampling methods, which sample grasps all around the object. In this work, we address the specific problem of training approach-constrained data-driven grasp samplers and of generating good grasping directions automatically. Our solution is GoNet: a generative grasp sampler that can constrain the grasp approach direction to lie close to a specified direction. This is achieved by discretizing SO(3) into bins and training GoNet to generate grasps from those bins. At run time, the bin aligning with the second-largest principal component of the observed point cloud is selected. GoNet is benchmarked against GraspNet, a state-of-the-art unconstrained grasp sampler, in an unconfined grasping experiment in simulation and in unconfined and confined grasping experiments in the real world. The results demonstrate that GoNet achieves higher success-over-coverage in simulation and a 12%-18% higher success rate in real-world table-picking and shelf-picking tasks than the baseline. Comment: IROS 2023 submission.
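    The run-time bin-selection heuristic (pick the approach-direction bin best aligned with the cloud's second-largest principal component) might be sketched like this. The six axis-aligned bins are a minimal stand-in: GoNet's actual discretization of SO(3) is finer, and these names are assumptions:

```python
import numpy as np

def second_principal_direction(cloud):
    """Direction of the second-largest variance of the observed cloud
    (the abstract's heuristic for a good approach direction)."""
    centred = cloud - cloud.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return vt[1]

def select_bin(bin_dirs, target):
    """Pick the bin whose representative direction best aligns with the
    target; abs() makes the choice sign-invariant."""
    return int(np.argmax(np.abs(bin_dirs @ target)))

# Illustrative bins: the six axis-aligned approach directions.
bins = np.array([[1, 0, 0], [-1, 0, 0],
                 [0, 1, 0], [0, -1, 0],
                 [0, 0, 1], [0, 0, -1]], dtype=float)

# Flat, box-like cloud: largest variance along x, second-largest along y,
# so the selected bin should point along +/- y.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(1000, 3)) * np.array([0.10, 0.05, 0.01])
idx = select_bin(bins, second_principal_direction(cloud))
```

    Conditioning the generative sampler on the chosen bin is the learned part of GoNet and is not reproduced here.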

    CAPGrasp: An ℝ³×SO(2)-Equivariant Continuous Approach-Constrained Generative Grasp Sampler

    We propose CAPGrasp, an ℝ³×SO(2)-equivariant 6-DoF continuous approach-constrained generative grasp sampler. It includes a novel learning strategy for training CAPGrasp that eliminates the need to curate massive conditionally labeled datasets, and a constrained grasp-refinement technique that improves grasp poses while respecting the grasp approach directional constraints. The experimental results demonstrate that CAPGrasp is more than three times as sample-efficient as unconstrained grasp samplers while achieving up to a 38% improvement in grasp success rate. CAPGrasp also achieves 4-10% higher grasp success rates than constrained but non-continuous grasp samplers. Overall, CAPGrasp is a sample-efficient solution when grasps must originate from specific directions, such as grasping in confined spaces. Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
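    One simple way to picture a refinement step that "respects the grasp approach directional constraints" is to project a drifting approach direction back onto an allowed cone around the constraint direction. This is a hypothetical illustration of the constraint idea, not CAPGrasp's actual refinement technique:

```python
import numpy as np

def project_to_cone(approach, constraint_dir, max_angle_rad):
    """If a refined approach direction drifts outside the allowed cone
    around the constraint direction, rotate it back onto the cone
    boundary. (Hypothetical stand-in, not the paper's algorithm.)"""
    a = approach / np.linalg.norm(approach)
    c = constraint_dir / np.linalg.norm(constraint_dir)
    cosang = np.clip(a @ c, -1.0, 1.0)
    if cosang >= np.cos(max_angle_rad):
        return a  # already inside the cone, leave it alone
    # Component of a perpendicular to c, renormalized.
    perp = a - cosang * c
    perp /= np.linalg.norm(perp)
    # Closest direction to a that lies on the cone boundary.
    return np.cos(max_angle_rad) * c + np.sin(max_angle_rad) * perp

# A direction at 90 degrees from the +z constraint is pulled back to the
# 30-degree cone boundary.
d = project_to_cone(np.array([1.0, 0.0, 0.0]),
                    np.array([0.0, 0.0, 1.0]),
                    np.deg2rad(30))
```

    A learned refinement would instead take gradient steps on grasp quality; the projection above only shows how a directional constraint can be enforced after each step.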

    Fast geometry-based computation of grasping points on three-dimensional point clouds

    Industrial and service robots deal with the complex task of grasping objects that have different shapes and are seen from diverse points of view. In order to perform grasps autonomously, the robot must calculate where to place its robotic hand to ensure that the grasp is stable. We propose a method to find the best pair of grasping points given a three-dimensional point cloud with a partial view of an unknown object. We use a set of straightforward geometric rules to explore the cloud and propose grasping points on the surface of the object. We then adapt the pair of contacts to the multi-fingered hand used in experimentation. We show that, after performing 500 grasps of different objects, our approach is fast, taking an average of 17.5 ms to propose contacts, while attaining a grasp success rate of 85.5%. Moreover, the method is sufficiently flexible and stable to work with objects in changing environments, such as those confronted by industrial or service robots. This work was funded by the Spanish Ministry of Economy, Industry and Competitiveness through the project DPI2015-68087-R (pre-doctoral grant BES-2016-078290) as well as the European Commission and FEDER funds through the COMMANDIA project (SOE2/P1/F0638), an action supported by Interreg-V Sudoe.

    Cutting Pose Prediction from Point Clouds
