    Polylidar3D -- Fast Polygon Extraction from 3D Data

    Flat surfaces captured by 3D point clouds are often used for localization, mapping, and modeling. Dense point cloud processing has high computation and memory costs, making low-dimensional representations of flat surfaces, such as polygons, desirable. We present Polylidar3D, a non-convex polygon extraction algorithm that takes as input unorganized 3D point clouds (e.g., LiDAR data), organized point clouds (e.g., range images), or user-provided meshes. Non-convex polygons represent flat surfaces in an environment, with interior cutouts representing obstacles or holes. The Polylidar3D front-end transforms input data into a half-edge triangular mesh. This representation provides a common level of input data abstraction for subsequent back-end processing. The Polylidar3D back-end is composed of four core algorithms: mesh smoothing, dominant plane normal estimation, planar segment extraction, and finally polygon extraction. Polylidar3D is shown to be quite fast, making use of CPU multi-threading and GPU acceleration when available. We demonstrate Polylidar3D's versatility and speed with real-world datasets including aerial LiDAR point clouds for rooftop mapping, autonomous driving LiDAR point clouds for road surface detection, and RGBD cameras for indoor floor/wall detection. We also evaluate Polylidar3D on a challenging planar segmentation benchmark dataset. Results consistently show excellent speed and accuracy.
    Comment: 40 pages
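    The back-end idea of pairing triangle normal estimation with dominant-plane grouping can be sketched in a few lines. This is a hedged illustration in plain NumPy, not Polylidar3D's actual API: the toy mesh, the `group_by_dominant_normals` helper, and the hard-coded dominant normals are assumptions made for the example.

```python
import numpy as np

# Toy mesh: two flat patches, one facing +Z (a floor) and one facing +X
# (a wall). Each triangle is a (3, 3) array of vertex coordinates.
triangles = np.array([
    [[0, 0, 0], [1, 0, 0], [0, 1, 0]],  # floor triangle
    [[1, 0, 0], [1, 1, 0], [0, 1, 0]],  # floor triangle
    [[0, 0, 0], [0, 1, 0], [0, 0, 1]],  # wall triangle
    [[0, 1, 0], [0, 1, 1], [0, 0, 1]],  # wall triangle
], dtype=float)

def triangle_normals(tris):
    """Unit normal of each triangle via the cross product of two edges."""
    e1 = tris[:, 1] - tris[:, 0]
    e2 = tris[:, 2] - tris[:, 0]
    n = np.cross(e1, e2)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def group_by_dominant_normals(normals, dominant, angle_deg=10.0):
    """Assign each triangle to the first dominant normal it aligns with."""
    cos_thresh = np.cos(np.radians(angle_deg))
    segments = {}
    for i, n in enumerate(normals):
        for j, d in enumerate(dominant):
            if abs(np.dot(n, d)) >= cos_thresh:
                segments.setdefault(j, []).append(i)
                break
    return segments

normals = triangle_normals(triangles)
# In Polylidar3D the dominant normals are estimated from the data;
# here they are simply assumed known for the toy mesh.
dominant = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
segments = group_by_dominant_normals(normals, dominant)
print(segments)  # → {0: [0, 1], 1: [2, 3]}
```

    Each planar segment (a set of coplanar triangles) would then be passed on to polygon extraction, which traces the segment's outer boundary and interior holes.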

    Constraint-based automated reconstruction of grape bunches from 3D range data for high-throughput phenotyping

    With an increasing global population, the resources required by agriculture to feed the growing number of people are becoming scarce. Estimates expect that by 2050, 60 % more food will be necessary. Nowadays, 70 % of fresh water is used by agriculture, and experts see no potential for new land to use for crop plants. This means that existing land has to be used efficiently and sustainably. To support this, plant breeders aim at improving yield, quality, disease resistance, and other important characteristics of the crops. Reports show that grapevine cultivation uses more than three times the amount of fungicides used in the cultivation of fruit trees or vegetables. This is because grapevine is prone to various fungal diseases and pests that quickly spread over fields. A loose grape bunch architecture is one of the most important physical barriers that make the establishment of a fungal infection less likely. The grape bunch architecture is mostly defined by the inner stem skeleton. The phenotyping of grape bunches refers to the measurement of the phenotypes, i.e., the observable traits of a plant, like the diameter of berries or the lengths of stems. Because of their perishable nature, grape bunches have to be processed in a relatively short time. On the other hand, genetic analyses require data from a large number of them. Manual phenotyping is error-prone and highly labor- and time-intensive, creating the need for automated, high-throughput methods. The objective of this thesis is to develop a completely automated pipeline that takes as input a 3D point cloud showing a grape bunch and computes a 3D reconstruction of the complete grape bunch, including the inner stem skeleton. The result is a 3D estimation of the grape bunch that represents not only dimensions (e.g., berry diameters) or statistics (e.g., the number of berries), but the geometry and topology as well. All architectural (i.e., geometrical and topological) traits can be derived from this complete 3D reconstruction. We aim at high-throughput phenotyping by automating all steps and removing any requirement for interaction with the user, while still providing an interface for a detailed visualization and possible adjustments of the parameters. There are several challenges to this task: ripe grape bunches are subject to a high amount of self-occlusion, rendering a direct reconstruction of the stem skeleton impossible. The stem skeleton structure is complex; thus, the manual creation of training data is hard. We aim at a cross-cultivar approach, and there is high variability between cultivars and even between grape bunches of the same cultivar. Thus, we cannot rely on statistical distributions for single plant organ dimensions. We employ geometrical and topological constraints to meet the challenge of cross-cultivar optimization and foster efficient sampling of infinitely large hypothesis spaces, resulting in Pearson correlation coefficients between 0.7 and 0.9 for established traits traditionally used by breeders. The active working time is reduced by a factor of 12. We evaluate the pipeline on scans taken in a lab environment and in the field.
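    The kind of validation the abstract reports (Pearson correlation between automated and reference measurements of a trait) can be sketched as follows. The measurement values below are invented for the example, not taken from the thesis.

```python
import numpy as np

# Hypothetical berry-diameter measurements (mm) for six berries:
# one set measured by hand, one derived from the 3D reconstruction.
manual    = np.array([11.2, 12.5, 10.8, 13.1, 12.0, 11.7])
automated = np.array([11.0, 12.8, 10.5, 13.4, 11.8, 11.9])

# Pearson correlation coefficient between the two measurement series.
r = np.corrcoef(manual, automated)[0, 1]
print(f"Pearson r = {r:.3f}")
```

    A correlation near 1 indicates that the automated pipeline ranks and scales the trait consistently with the manual reference, which is the criterion behind the reported 0.7–0.9 range.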

    Machine Vision Identification of Plants
