    Pointcloud-based Identification of Optimal Grasping Poses for Cloth-like Deformable Objects

    In this paper, the problem of identifying optimal grasping poses for cloth-like deformable objects is addressed by means of a four-step algorithm that processes data from a 3D camera. The first step segments the source pointcloud, while the second step computes a wrinkledness measure able to robustly detect graspable regions of a cloth. In the third step, each individual wrinkle is identified by fitting a piecewise curve. Finally, in the fourth step, a target grasping pose is estimated for each detected wrinkle. Compared to deep learning approaches, which require a good-quality dataset or trained model, our general algorithm can be employed in very different scenarios with minor parameter tweaking. Results showing the application of our method to the clothes bin-picking task are presented.
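    The wrinkle-detection idea can be sketched as follows. This is a minimal numpy-only illustration under assumed simplifications, not the paper's implementation: the wrinkledness measure here is a common surface-variation proxy (the smallest-eigenvalue share of the local neighbourhood covariance), and the "grasp target" is simply the highest-scoring point rather than a pose derived from a fitted piecewise curve.

    ```python
    import numpy as np

    def wrinkledness(points, k=8):
        # Proxy wrinkledness score per point: local surface variation, i.e. the
        # smallest eigenvalue of the k-NN covariance divided by the eigenvalue sum.
        # Flat regions score near 0; wrinkled regions score higher.
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        idx = np.argsort(d, axis=1)[:, 1:k + 1]   # k nearest neighbours, self excluded
        scores = np.empty(len(points))
        for i, nb in enumerate(idx):
            q = points[nb] - points[nb].mean(axis=0)
            ev = np.linalg.eigvalsh(q.T @ q)      # eigenvalues in ascending order
            scores[i] = ev[0] / max(ev.sum(), 1e-12)
        return scores

    def grasp_target(points, k=8):
        # Simplified stand-in for step 4: grasp at the most wrinkled point.
        # The paper instead fits a piecewise curve per wrinkle and estimates
        # a full grasping pose from it.
        return points[np.argmax(wrinkledness(points, k))]
    ```

    The brute-force pairwise-distance step is O(n²) and only suitable for tiny clouds; a practical pipeline would use a k-d tree for the neighbour queries.
    
    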

    Saliency-guided Adaptive Seeding for Supervoxel Segmentation

    We propose a new saliency-guided method for generating supervoxels in 3D space. Rather than using an evenly distributed spatial seeding procedure, our method uses visual saliency to guide the process of supervoxel generation. This results in densely distributed, small, and precise supervoxels in salient regions, which often contain objects, and larger supervoxels in less salient regions, which often correspond to background. Our approach largely improves the quality of the resulting supervoxel segmentation in terms of boundary recall and under-segmentation error on publicly available benchmarks. Comment: 6 pages, accepted to IROS201
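    The adaptive seeding step can be illustrated with a short sketch (a hypothetical numpy stand-in, not the authors' code): instead of placing seeds on an even grid, positions are drawn with probability proportional to a saliency value, so salient regions receive denser seeds and hence smaller supervoxels. A 2D map is used here for brevity; the paper operates in 3D.

    ```python
    import numpy as np

    def saliency_seeds(saliency, n_seeds, rng):
        # Draw distinct seed positions with probability proportional to saliency.
        p = saliency.ravel() / saliency.sum()
        flat = rng.choice(saliency.size, size=n_seeds, replace=False, p=p)
        # Convert flat indices back to grid coordinates (one row per seed).
        return np.stack(np.unravel_index(flat, saliency.shape), axis=-1)
    ```

    With a saliency map that is 0.9 on one half and 0.1 on the other, roughly nine times as many seeds land in the salient half, which is exactly the dense-where-salient behaviour described above.
    
    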

    Fast and Robust Detection of Fallen People from a Mobile Robot

    This paper deals with the problem of detecting fallen people lying on the floor by means of a mobile robot equipped with a 3D depth sensor. In the proposed algorithm, inspired by semantic segmentation techniques, the 3D scene is over-segmented into small patches. Fallen people are then detected by means of two SVM classifiers: the first labels each patch, while the second captures the spatial relations between them. This novel approach proves to be both robust and fast. Indeed, thanks to the use of small patches, fallen people are correctly detected even in real cluttered scenes with objects side by side. Moreover, the algorithm can be executed on a mobile robot fitted with a standard laptop, making it possible to exploit the 2D environmental map built by the robot and the multiple points of view obtained during navigation. Additionally, the algorithm is robust to illumination changes, since it relies on depth rather than RGB data. All the methods have been thoroughly validated on the IASLAB-RGBD Fallen Person Dataset, which is published online as a further contribution; it consists of several static and dynamic sequences with 15 different people and 2 different environments.
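    The two-stage structure of the detector can be sketched in a few lines. This is a heavily simplified, hypothetical numpy illustration: the "classifiers" here are hand-set height thresholds and a neighbour-support rule standing in for the paper's two trained SVMs, and patches are fixed grid cells of a depth image rather than an over-segmentation.

    ```python
    import numpy as np

    def patch_features(depth, patch=4):
        # Over-segmentation stand-in: split the depth image into patch x patch
        # cells; the feature per cell is its (mean, std) height.
        h, w = depth.shape
        cells = depth.reshape(h // patch, patch, w // patch, patch).swapaxes(1, 2)
        return np.stack([cells.mean(axis=(2, 3)), cells.std(axis=(2, 3))], axis=-1)

    def stage1(feats, low=0.1, high=0.5):
        # Stand-in for the first SVM: flag patches whose mean height is low but
        # nonzero -- candidate body parts lying on the floor.
        return (feats[..., 0] > low) & (feats[..., 0] < high)

    def stage2(mask):
        # Stand-in for the second, spatial-relations SVM: keep only candidates
        # supported by at least two flagged 4-neighbours, so isolated low
        # objects on the floor are rejected.
        padded = np.pad(mask, 1)
        support = (padded[:-2, 1:-1].astype(int) + padded[2:, 1:-1]
                   + padded[1:-1, :-2] + padded[1:-1, 2:])
        return mask & (support >= 2)
    ```

    The point of the second stage mirrors the abstract: a single low patch (e.g. a shoe) passes the per-patch test but lacks the spatial context of a body, while a connected group of low patches survives.
    
    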