
    A self-adaptive segmentation method for a point cloud

    The segmentation of a point cloud is one of the key technologies for three-dimensional reconstruction, and segmentation from three-dimensional views can facilitate reverse engineering. In this paper, we propose a self-adaptive segmentation algorithm that addresses challenges of the region-growing algorithm, such as inconsistent or excessive segmentation. Our algorithm consists of two main steps: automatic selection of seed points according to extracted features, and segmentation of the points using an improved region-growing algorithm. The benefits of our approach are the ability to select seed points without user intervention and a reduced influence of noise. We demonstrate the robustness and effectiveness of our algorithm on different point cloud models; the results show that the segmentation accuracy reaches 96%.
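    The abstract describes a two-step pipeline: automatic seed selection from extracted features, then growth under smoothness constraints. The sketch below illustrates that general idea in Python; the use of curvature for seeding, the neighborhood size, and both thresholds are assumptions for illustration, not the authors' implementation.

```python
# Minimal region-growing sketch: seed from low-curvature points,
# grow across smoothly varying normals. Thresholds are assumed values.
import numpy as np
from scipy.spatial import cKDTree

def region_grow(points, normals, curvature, k=30,
                angle_thresh_deg=10.0, curv_thresh=0.05):
    """Segment a point cloud into smooth regions (illustrative parameters)."""
    tree = cKDTree(points)
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    labels = np.full(len(points), -1, dtype=int)
    # Seeds are chosen automatically: visit points from flattest to sharpest,
    # so no user intervention is needed.
    order = np.argsort(curvature)
    region = 0
    for seed in order:
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = region
        while stack:
            p = stack.pop()
            _, nbrs = tree.query(points[p], k=k)
            for q in nbrs:
                if labels[q] != -1:
                    continue
                # Grow only across smoothly varying normals ...
                if abs(np.dot(normals[p], normals[q])) < cos_thresh:
                    continue
                labels[q] = region
                # ... and let only low-curvature points spawn further growth,
                # which damps the effect of noisy edge points.
                if curvature[q] < curv_thresh:
                    stack.append(q)
        region += 1
    return labels
```

    Visiting seeds in curvature order is one common way to make region growing self-starting; the paper's feature-based seed selection may differ.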

    Multimodal cue integration through Hypotheses Verification for RGB-D object recognition and 6DOF pose estimation

    This paper proposes an effective algorithm for recognizing objects and accurately estimating their 6DOF pose in scenes acquired by an RGB-D sensor. The proposed method combines different recognition pipelines, each exploiting the data in a diverse manner and generating object hypotheses that are ultimately fused together in a Hypothesis Verification stage that globally enforces geometric consistency between model hypotheses and the scene. Such a scheme boosts overall recognition performance, as it reinforces the strengths of the individual pipelines while diminishing the impact of their specific weaknesses. The proposed method outperforms the state of the art on two challenging benchmark datasets for object recognition comprising 35 object models and 176 and 353 scenes, respectively.
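    To make the fusion idea concrete, the sketch below pools (model, pose) hypotheses from any number of pipelines, scores each by geometric agreement with the scene, and greedily keeps a non-conflicting subset. The paper formulates Hypothesis Verification as a global optimization; this greedy variant, the inlier distance, and the overlap rule are simplifying assumptions.

```python
# Illustrative hypothesis-fusion sketch: score pooled hypotheses by how many
# transformed model points are explained by the scene, then greedily accept
# mutually consistent ones. Not the paper's global optimization.
import numpy as np
from scipy.spatial import cKDTree

def verify_hypotheses(scene_pts, hypotheses, inlier_dist=0.005):
    """hypotheses: list of (model_pts (N,3), pose (4,4)) from any pipeline."""
    scene_tree = cKDTree(scene_pts)
    scored = []
    for model_pts, pose in hypotheses:
        # Transform the model into the scene frame.
        pts_h = np.c_[model_pts, np.ones(len(model_pts))]
        aligned = (pose @ pts_h.T).T[:, :3]
        # Inliers: model points explained by a nearby scene point.
        d, idx = scene_tree.query(aligned)
        inliers = d < inlier_dist
        scored.append((inliers.mean(), aligned, inliers, idx))
    explained = np.zeros(len(scene_pts), dtype=bool)
    accepted = []
    for score, aligned, inliers, idx in sorted(scored, key=lambda s: -s[0]):
        claimed = idx[inliers]
        # Reject hypotheses that mostly re-explain scene points already
        # claimed by a stronger accepted hypothesis.
        if len(claimed) == 0 or explained[claimed].mean() > 0.5:
            continue
        explained[claimed] = True
        accepted.append((score, aligned))
    return accepted
```

    The key property the snippet preserves is that hypotheses from weak pipelines survive only if they explain scene geometry no accepted hypothesis already accounts for, which is what lets fusion suppress pipeline-specific false positives.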

    Model-Free segmentation and grasp selection of unknown stacked objects

    We present a novel grasping approach for unknown stacked objects using RGB-D images of highly complex real-world scenes. Specifically, we propose a novel 3D segmentation algorithm that generates an efficient representation of the scene as segmented surfaces (known as surfels) and objects. Based on this representation, we propose a novel grasp selection algorithm that generates potential grasp hypotheses and automatically selects the most appropriate grasp without requiring any prior information about the objects or the scene. We tested our algorithms in real-world scenarios using live video streams from a Kinect sensor and publicly available RGB-D object datasets. Our experimental results show that both our segmentation and grasp selection algorithms consistently outperform state-of-the-art methods.
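    The abstract's second step, model-free grasp selection over segmented surfaces, can be sketched as generating one candidate per segment and ranking candidates with simple geometric heuristics. The PCA-based flatness measure, the top-down approach direction, and the size/height scoring below are hypothetical stand-ins for the authors' unstated criteria.

```python
# Hedged sketch: one top-down grasp candidate per segmented surface,
# ranked by assumed heuristics (large, flat, high segments first).
import numpy as np

def rank_grasps(segments):
    """segments: list of (N,3) point arrays, one per surfel/object segment."""
    candidates = []
    for seg in segments:
        centroid = seg.mean(axis=0)
        # PCA of the patch: the smallest eigenvalue measures residual
        # thickness, so its eigenvector approximates the surface normal.
        cov = np.cov((seg - centroid).T)
        evals, evecs = np.linalg.eigh(cov)
        flatness = evals[0] / (evals.sum() + 1e-12)
        approach = evecs[:, 0]
        if approach[2] < 0:          # force an approach from above
            approach = -approach
        # Assumed ranking: prefer large, flat segments high in the stack,
        # since those are least likely to be occluded by other objects.
        score = len(seg) * (1.0 - flatness) * centroid[2]
        candidates.append((score, centroid, approach))
    # Execute the highest-scoring grasp first.
    return sorted(candidates, key=lambda c: -c[0])
```

    Ranking by height in the stack is one plausible way to handle stacked objects without object models; the paper's actual selection criteria may weigh different cues.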