    Fast and Robust Normal Estimation for Point Clouds with Sharp Features

    Proceedings of the 10th Symposium on Geometry Processing (SGP 2012), Tallinn, Estonia, July 2012.

    This paper presents a new method for estimating normals on unorganized point clouds that preserves sharp features. It is based on a robust version of the Randomized Hough Transform (RHT). We consider the filled Hough transform accumulator as an image of the discrete probability distribution of possible normals; the normals we estimate correspond to the maximum of this distribution. We use a fixed-size accumulator for speed, statistical exploration bounds for robustness, and randomized accumulators to prevent discretization effects. We also propose various sampling strategies to deal with anisotropy, as produced by laser scans due to differences in incidence angle. Our experiments show that our approach offers an ideal compromise between precision, speed, and robustness: it is at least as precise and noise-resistant as state-of-the-art methods that preserve sharp features, while being almost an order of magnitude faster. Moreover, it can handle anisotropy with only minor losses in speed and precision.
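    The core idea described in the abstract, voting triplet-plane normals into a discretized accumulator over the sphere and reading off the mode, can be sketched as follows. This is a minimal illustration, not the paper's method: the paper's fixed-size accumulator, statistical exploration bounds, randomized accumulators, and anisotropy-aware sampling are all omitted, and all names and parameters here are hypothetical.

```python
import numpy as np

def rht_normal(neighbors, n_bins=16, n_samples=200, rng=None):
    """Estimate a normal at a point from its neighborhood via a basic
    randomized Hough transform: sample point triplets, vote each
    triplet's plane normal into a binned sphere, and return the mean
    normal of the densest bin (the mode of the vote distribution)."""
    rng = np.random.default_rng(rng)
    votes = {}
    for _ in range(n_samples):
        i, j, k = rng.choice(len(neighbors), size=3, replace=False)
        n = np.cross(neighbors[j] - neighbors[i], neighbors[k] - neighbors[i])
        norm = np.linalg.norm(n)
        if norm < 1e-12:          # skip (near-)collinear triplets
            continue
        n /= norm
        if n[2] < 0:              # fold to one hemisphere so +n and -n agree
            n = -n
        # bin by spherical coordinates (theta, phi)
        theta = np.arccos(np.clip(n[2], -1.0, 1.0))
        phi = np.arctan2(n[1], n[0]) % (2 * np.pi)
        key = (int(theta / np.pi * n_bins), int(phi / (2 * np.pi) * n_bins))
        votes.setdefault(key, []).append(n)
    best = max(votes.values(), key=len)   # densest accumulator bin
    m = np.mean(best, axis=0)
    return m / np.linalg.norm(m)
```

    Because outlier triplets scatter their votes over many bins while inlier triplets concentrate in one, the mode is robust to a fraction of bad samples, which is the property the paper builds on.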

    Robust Geometry Estimation using the Generalized Voronoi Covariance Measure

    The Voronoi Covariance Measure (VCM) of a compact set K of R^d is a tensor-valued measure that encodes geometric information about K and is known to be resilient to Hausdorff noise but sensitive to outliers. In this article, we generalize this notion to any distance-like function delta and define the delta-VCM. We show that the delta-VCM is resilient both to Hausdorff noise and to outliers, thus providing a tool to robustly estimate normals from a point cloud approximation. We present experiments showing the robustness of our approach for normal estimation, curvature estimation, and sharp feature detection.
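    For context, the classical VCM that the delta-VCM generalizes is commonly written as follows; this is a hedged reconstruction from the standard VCM literature, and the article's notation and conventions may differ.

```latex
% VCM of a compact set K \subset \mathbb{R}^d at offset radius r,
% evaluated on a Borel set B:
\mathcal{V}_{K,r}(B) \;=\;
  \int_{K^r \,\cap\, p_K^{-1}(B)}
    \bigl(x - p_K(x)\bigr)\bigl(x - p_K(x)\bigr)^{\mathsf{T}} \, dx
```

    Here K^r is the r-offset of K and p_K the (almost-everywhere defined) projection onto K. Informally, the generalization replaces the distance function to K in this construction by an arbitrary distance-like function delta, which is what makes the resulting delta-VCM tolerant to outliers.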

    View synthesis for pose computation

    Geometrical registration of a query image with respect to a 3D model, or pose estimation, is the cornerstone of many computer vision applications. It is often based on the matching of local photometric descriptors invariant to limited viewpoint changes. However, when the query image has been acquired from a camera position not covered by the model images, pose estimation is often inaccurate and sometimes even fails, precisely because of the limited invariance of descriptors. In this paper, we propose to add descriptors to the model, obtained from synthesized views associated with virtual cameras that complete the covering of the scene by the real cameras. We propose an efficient strategy to localize the virtual cameras in the scene and generate valuable descriptors from synthetic views. We also discuss a guided sampling strategy for registration in this context. Experiments show that the accuracy of pose estimation is dramatically improved when large viewpoint changes make the matching of classic descriptors a challenging task.
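    The paper's guided sampling strategy is not detailed in the abstract. As background, the descriptor-matching step that precedes RANSAC-based pose estimation in pipelines of this kind is typically a nearest-neighbor search with a ratio test; a minimal sketch follows, with all names and the ratio threshold being illustrative assumptions rather than the paper's choices.

```python
import numpy as np

def match_ratio(desc_q, desc_m, ratio=0.8):
    """Match query descriptors against model descriptors with a
    nearest-neighbor ratio test: keep a match only if the best
    candidate is clearly closer than the second best.
    Returns (query_index, model_index) pairs that pass the test."""
    # pairwise squared distances, shape (n_query, n_model)
    d2 = ((desc_q[:, None, :] - desc_m[None, :, :]) ** 2).sum(-1)
    order = np.argsort(d2, axis=1)
    nn1, nn2 = order[:, 0], order[:, 1]        # best and second-best model index
    rows = np.arange(len(desc_q))
    keep = d2[rows, nn1] < ratio**2 * d2[rows, nn2]
    return [(int(q), int(nn1[q])) for q in np.flatnonzero(keep)]
```

    Adding descriptors from synthesized views, as the paper proposes, enlarges the model-descriptor pool so that a query taken from an uncovered viewpoint still finds unambiguous nearest neighbors in this step.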