
    Application of the Bilateral Filter for the Reconstruction of Spiral Bevel Gear Tooth Surfaces From Point Clouds

    Reconstruction of gear tooth surfaces from point clouds obtained by noncontact metrology machines constitutes a promising step forward not only for fast gear inspection but also for reverse engineering and for virtual testing and analysis of gear drives. In this article, a new methodology to reconstruct spiral bevel gear tooth surfaces from point clouds obtained by noncontact metrology machines is proposed. It was found that a filtering process must be applied to the point clouds before the gear tooth surfaces are reconstructed. Hence, the bilateral filter, commonly used for 3D object recognition, has been applied and integrated into the proposed methodology. The shape of the contact patterns and the level of the unloaded transmission error functions are considered as the criteria to select the appropriate settings of the bilateral filter. The results of the tooth contact analysis of the reconstructed gear tooth surfaces show good agreement with those of the design surfaces. However, stress analyses performed with the reconstructed gear tooth surfaces reveal that the maximum level of contact pressure is overestimated. A numerical example based on a spiral bevel gear drive is presented. The authors express their deep gratitude to the Spanish Ministry of Economy, Industry and Competitiveness (MINECO), the Spanish State Research Agency (AEI), and the European Fund for Regional Development (FEDER) for the financial support of research project DPI2017-84677-P.
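    The bilateral filter mentioned above, adapted from image processing to point sets, can be sketched as follows. This is a minimal illustration, not the article's actual implementation: the function name and the choice to displace each point along its normal by a spatial-and-range-weighted average are assumptions.

```python
import numpy as np

def bilateral_filter_points(points, normals, sigma_s=1.0, sigma_r=0.5, radius=2.0):
    """Denoise a point cloud by displacing each point along its normal.

    Neighbor weights combine a spatial Gaussian (Euclidean distance) and a
    range Gaussian (signed height above the local tangent plane), so sharp
    features are preserved while measurement noise is smoothed out.
    """
    filtered = points.copy()
    for i, (p, n) in enumerate(zip(points, normals)):
        d = points - p                       # offsets to all other points
        dist = np.linalg.norm(d, axis=1)     # Euclidean distances
        mask = (dist < radius) & (dist > 0)  # neighborhood, excluding p itself
        if not mask.any():
            continue
        h = d[mask] @ n                      # signed height above tangent plane
        w = (np.exp(-dist[mask] ** 2 / (2 * sigma_s ** 2))
             * np.exp(-h ** 2 / (2 * sigma_r ** 2)))
        filtered[i] = p + n * (w @ h) / w.sum()  # move p along its normal only
    return filtered
```

    The parameter `sigma_s` controls how far neighbors contribute, while `sigma_r` decides how strongly height outliers are discounted; shrinking `sigma_r` preserves sharper features at the cost of less smoothing.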

    Structured Light-Based 3D Reconstruction System for Plants

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size, and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size, and internode distance.

    Geometrically guided and confidence-based point cloud denoising

    The generation of photogrammetric point clouds from satellite images is often based on image correlation techniques. Correlation errors can arise for a wide variety of reasons: transient objects, homogeneous areas, shadows, and surface discontinuities. Therefore, a simple 3D Gaussian distribution at the point cloud level is not an appropriate noise model. In this paper, we propose a new point cloud denoising method integrated into the Multiview Stereo Pipeline CARS, dedicated to satellite imagery. Building upon bilateral filtering principles, our approach introduces a novel use of color information, confidence estimation, and geometric constraints alongside point positions and normals. While the use of point color increases the level of detail, the addition of geometric constraints and confidence awareness guides processing towards a realistic solution. We present an ablation study and compare our solution against a previously established bilateral filter, using LiDAR data as ground truth.
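    As an illustrative sketch (not the CARS implementation), the standard bilateral weight can be extended with color-similarity and per-point confidence terms in the spirit of the abstract. All names and parameter choices below are assumptions.

```python
import numpy as np

def weighted_bilateral(points, normals, colors, confidence,
                       sigma_s=1.0, sigma_r=0.5, sigma_c=0.2, radius=2.0):
    """Bilateral-style smoothing in which each neighbor's weight combines
    spatial distance, height above the tangent plane, color similarity,
    and an external per-point confidence score (e.g. from the image
    correlation step), so unreliable points contribute less."""
    filtered = points.copy()
    for i, (p, n, c) in enumerate(zip(points, normals, colors)):
        d = points - p
        dist = np.linalg.norm(d, axis=1)
        mask = (dist < radius) & (dist > 0)
        if not mask.any():
            continue
        h = d[mask] @ n                                # height above tangent plane
        dc = np.linalg.norm(colors[mask] - c, axis=1)  # color difference
        w = (np.exp(-dist[mask] ** 2 / (2 * sigma_s ** 2))
             * np.exp(-h ** 2 / (2 * sigma_r ** 2))
             * np.exp(-dc ** 2 / (2 * sigma_c ** 2))
             * confidence[mask])                       # trust reliable points more
        if w.sum() == 0:
            continue
        filtered[i] = p + n * (w @ h) / w.sum()
    return filtered
```

    Setting all confidences to 1 and all colors equal reduces this to a plain bilateral filter, which makes the extra terms easy to ablate.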

    3D scanning of cultural heritage with consumer depth cameras

    Three-dimensional reconstruction of cultural heritage objects is an expensive and time-consuming process. Recent consumer real-time depth acquisition devices, like the Microsoft Kinect, allow very fast and simple acquisition of 3D views. However, 3D scanning with such devices is a challenging task due to the limited accuracy and reliability of the acquired data. This paper introduces a 3D reconstruction pipeline suited to using consumer depth cameras as hand-held scanners for cultural heritage objects. Several new contributions have been made to achieve this result. They include an ad-hoc filtering scheme that exploits the model of the error on the acquired data and a novel algorithm for the extraction of salient points exploiting both depth and color data. The salient points are then used within a modified version of the ICP algorithm that exploits both geometry and color distances to precisely align the views even when geometry information is not sufficient to constrain the registration. The proposed method, although applicable to generic scenes, has been tuned to the acquisition of sculptures, and the experimental results show that its performance in this setting is quite promising.
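    The combined geometry-and-color alignment idea can be illustrated with a single ICP iteration: nearest neighbors are matched in a joint position-plus-color space, then a rigid transform is estimated via the Kabsch/SVD method. This brute-force sketch, with an assumed color weight `lam`, is not the paper's modified ICP.

```python
import numpy as np

def icp_step_geom_color(src_pts, src_col, dst_pts, dst_col, lam=0.5):
    """One ICP iteration using a combined geometric + color distance for
    matching; returns a rotation R and translation t mapping src onto dst."""
    # Correspondence: brute-force nearest neighbor in the joint feature space.
    feat_src = np.hstack([src_pts, lam * src_col])
    feat_dst = np.hstack([dst_pts, lam * dst_col])
    d2 = ((feat_src[:, None, :] - feat_dst[None, :, :]) ** 2).sum(-1)
    matched = dst_pts[d2.argmin(axis=1)]
    # Rigid transform (Kabsch/SVD) on the geometric coordinates only.
    mu_s, mu_d = src_pts.mean(0), matched.mean(0)
    H = (src_pts - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

    Increasing `lam` makes color dominate the matching, which is exactly what helps when the geometry alone (e.g. a flat or symmetric surface) cannot constrain the registration.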

    Total Denoising: Unsupervised Learning of 3D Point Cloud Cleaning

    We show that denoising of 3D point clouds can be learned unsupervised, directly from noisy 3D point cloud data only. This is achieved by extending recent ideas from the learning of unsupervised image denoisers to unstructured 3D point clouds. Unsupervised image denoisers operate under the assumption that a noisy pixel observation is a random realization of a distribution around a clean pixel value, which allows appropriate learning on this distribution to eventually converge to the correct value. Regrettably, this assumption is not valid for unstructured points: 3D point clouds are subject to total noise, i.e., deviations in all coordinates, with no reliable pixel grid. Thus, an observation can be the realization of an entire manifold of clean 3D points, which makes a naïve extension of unsupervised image denoisers to 3D point clouds impractical. To overcome this, we introduce a spatial prior term that steers convergence to the unique closest mode out of the many possible modes on the manifold. Our results demonstrate unsupervised denoising performance similar to that of supervised learning with clean data when given enough training examples, without requiring any pairs of noisy and clean training data. (Proceedings of ICCV 2019)
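    A toy NumPy sketch of the spatial-prior idea, not the paper's actual training objective: weighting noisy targets by their proximity to the prediction anchors the loss to the nearest mode, instead of letting it collapse toward the mean of the whole manifold. The function name and the Gaussian prior form are illustrative assumptions.

```python
import numpy as np

def prior_weighted_loss(pred, noisy_targets, sigma=0.1):
    """Loss for one predicted point: rather than averaging the squared error
    over every noisy target (which would pull the prediction to the mean of
    all modes), each target is weighted by a Gaussian spatial prior centred
    on the prediction, so distant modes contribute almost nothing."""
    d2 = ((noisy_targets - pred) ** 2).sum(-1)   # squared distances to targets
    w = np.exp(-d2 / (2 * sigma ** 2))           # spatial prior weights
    w = w / w.sum()                              # normalize to a distribution
    return float((w * d2).sum())                 # prior-weighted squared error
```

    A prediction already sitting on one cluster of targets incurs a near-zero loss, while a prediction stranded between two clusters is penalized, which is the behavior the spatial prior is meant to induce during training.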