
    Robotic Grasping of Unknown Objects


    “Less is more”: Simplifying point clouds to improve grasping performance

    Object grasping is a task that humans perform almost without conscious effort, an ability that results from self-learning and from observing other skilled humans perform the task. For a robot, however, grasping novel objects in unknown positions is a complex task that suffers from problems such as sub-optimal success rates and high time consumption. In this paper we present a method that complements state-of-the-art grasping algorithms with two segmentation steps: the first removes the largest planar surface from the point cloud of the world before it is passed to the grasp detector, and the second estimates where the object is located and crops the point cloud around it. The proposed method significantly improves the grasping success rate (a 100% improvement over the baseline approach) while reducing time consumption by 23%.
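The two segmentation steps described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the RANSAC plane removal, the centroid-based crop, and all thresholds (`dist_thresh`, `half_extent`) are assumptions chosen for the sketch.

```python
import numpy as np

def remove_dominant_plane(points, dist_thresh=0.01, n_iters=200, seed=0):
    """Step 1: RANSAC-fit the largest planar surface (e.g. the table)
    and drop its inlier points from the cloud."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # Hypothesise a plane from three random points.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        n = np.linalg.norm(normal)
        if n < 1e-9:          # degenerate (collinear) sample
            continue
        normal /= n
        inliers = np.abs((points - p0) @ normal) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[~best_inliers]

def crop_around_centroid(points, half_extent=0.15):
    """Step 2: estimate the object location as the centroid of the
    remaining points and crop an axis-aligned box around it."""
    centre = points.mean(axis=0)
    keep = np.all(np.abs(points - centre) <= half_extent, axis=1)
    return points[keep]
```

Feeding `crop_around_centroid(remove_dominant_plane(cloud))` to a grasp detector instead of the raw cloud is the "less is more" idea: the detector sees only the object, which is where both the success-rate and runtime gains come from.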

    Cumulative object categorization in clutter

    In this paper we present an approach based on scene- or part-graphs for geometrically categorizing touching and occluded objects. We use additive RGBD feature descriptors and hashing of graph configuration parameters to describe the spatial arrangement of constituent parts. The presented experiments show that this method outperforms our earlier part-voting and sliding-window classification. We evaluated our approach on cluttered scenes and on a 3D dataset containing over 15,000 Kinect scans of over 100 objects grouped into general geometric categories. Additionally, color, geometric, and combined features were compared for the categorization task.
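One plausible reading of "hashing of graph configuration parameters" is to quantise pairwise geometric relations between part centroids and hash the resulting multiset, giving a translation-invariant key for table lookup. The descriptor details below (distance/elevation quantisation, bin sizes) are assumptions for illustration, not the paper's exact formulation.

```python
import math

def part_graph_hash(parts, dist_step=0.05, ang_step=math.radians(15)):
    """Hash the spatial arrangement of parts: quantise each pairwise
    centroid distance and elevation angle, then hash the sorted multiset
    of (distance_bin, angle_bin) keys."""
    keys = []
    for i in range(len(parts)):
        for j in range(i + 1, len(parts)):
            (xi, yi, zi), (xj, yj, zj) = parts[i], parts[j]
            d = math.dist(parts[i], parts[j])
            # Elevation of the pair axis relative to the horizontal plane.
            elev = math.atan2(zj - zi, math.hypot(xj - xi, yj - yi))
            keys.append((round(d / dist_step), round(abs(elev) / ang_step)))
    # Sorting makes the hash independent of part ordering.
    return hash(tuple(sorted(keys)))
```

A category database then maps such hashes to geometric labels; because distances and elevation angles are unchanged by translation, the same arrangement hashes to the same bucket wherever it sits in the scene.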

    Progressive 3D reconstruction of unknown objects using one eye-in-hand camera

    Proceedings of: 2009 IEEE International Conference on Robotics and Biomimetics (ROBIO 2009), December 19-23, 2009, Guilin, China. This paper presents a complete 3D-reconstruction method optimized for online object modeling in the context of object grasping by a robot hand. The proposed solution is based on images captured by an eye-in-hand camera mounted on the robot arm and is an original combination of classical but simplified reconstruction methods. Together, the techniques used form a process that offers fast, progressive and reactive reconstruction of the object. The research leading to these results has been partially supported by the HANDLE project, which has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement ICT 23164

    Grasping Points Determination Using Visual Features

    This paper discusses some issues in generating points of contact using visual features. To address these issues, the paper is divided into two sections: visual feature extraction and grasp planning. In order to provide a suitable description of the object contour, a method for grouping visual features is proposed. A very important aspect of this method is the wa

    Grasp Stability Prediction for a Dexterous Robotic Hand Combining Depth Vision and Haptic Bayesian Exploration.

    Grasp stability prediction of unknown objects is crucial to enable autonomous robotic manipulation in an unstructured environment. Even if prior information about the object is available, real-time local exploration might be necessary to mitigate object modelling inaccuracies. This paper presents an approach to predict safe grasps of unknown objects using depth vision and a dexterous robot hand equipped with tactile feedback. Our approach does not assume any prior knowledge about the objects. First, an object pose estimate is obtained from RGB-D sensing; then, the object is explored haptically to maximise a given grasp metric. We compare two probabilistic methods (standard and unscented Bayesian Optimisation) against random exploration (uniform grid search). Our experimental results demonstrate that these probabilistic methods can provide confident predictions after a limited number of exploratory observations, and that unscented Bayesian Optimisation can find safer grasps, taking into account the uncertainty in robot sensing and grasp execution.
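The exploration loop above can be sketched with standard (not unscented) Bayesian optimisation over a one-dimensional grasp parameter: a Gaussian-process surrogate plus an expected-improvement acquisition picks the next probe. This is a self-contained NumPy sketch under assumptions of our own (RBF kernel, length scale, a synthetic grasp metric); the paper's actual metric, hand, and unscented variant are not reproduced here.

```python
import numpy as np
from math import erf, sqrt, pi

def rbf(a, b, length_scale=0.15):
    """Squared-exponential kernel with unit signal variance."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length_scale) ** 2)

def gp_posterior(X, y, Xq, noise=1e-4):
    """GP posterior mean and std at query points Xq (zero prior mean)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xq)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = np.clip(1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0), 1e-9, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    """EI acquisition: expected gain over the best observation so far."""
    z = (mu - best) / sigma
    Phi = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2)))
    phi = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)
    return (mu - best) * Phi + sigma * phi

def bayes_opt_grasp(metric, n_init=3, n_iters=10, seed=0):
    """Sequentially probe the grasp parameter that maximises EI."""
    rng = np.random.default_rng(seed)
    Xq = np.linspace(0.0, 1.0, 200)        # candidate grasp parameters
    X = rng.uniform(0.0, 1.0, n_init)      # initial random probes
    y = np.array([metric(x) for x in X])
    for _ in range(n_iters):
        mu, sigma = gp_posterior(X, y, Xq)
        x_next = Xq[np.argmax(expected_improvement(mu, sigma, y.max()))]
        X = np.append(X, x_next)
        y = np.append(y, metric(x_next))
    return X[np.argmax(y)], y.max()
```

The contrast with the paper's baseline is budget efficiency: a uniform grid spends its evaluations blindly, while the surrogate concentrates probes near promising grasps once a few observations are in, which is why confident predictions emerge after a limited number of exploratory touches.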