
    AgriColMap: Aerial-Ground Collaborative 3D Mapping for Precision Farming

    The combination of the aerial survey capabilities of Unmanned Aerial Vehicles with the targeted intervention abilities of agricultural Unmanned Ground Vehicles can significantly improve the effectiveness of robotic systems applied to precision agriculture. In this context, building and updating a common map of the field is an essential but challenging task. Maps built using robots of different types differ in size, resolution, and scale; the associated geolocation data may be inaccurate and biased; and the repetitiveness of both the visual appearance and the geometric structures found in agricultural contexts renders classical map-merging techniques ineffective. In this paper, we propose AgriColMap, a novel map registration pipeline that leverages a grid-based multimodal environment representation comprising a vegetation index map and a Digital Surface Model. We cast the data association problem between maps built from UAVs and UGVs as a multimodal, large-displacement dense optical flow estimation. The dominant, coherent flows, selected using a voting scheme, are used as point-to-point correspondences to infer a preliminary non-rigid alignment between the maps. A final refinement is then performed by exploiting only meaningful parts of the registered maps. We evaluate our system using real-world data for three fields with different crop species. The results show that our method outperforms several state-of-the-art map registration and matching techniques by a large margin and has a higher tolerance to large initial misalignments. We release an implementation of the proposed approach, along with the acquired datasets, with this paper. Comment: Published in IEEE Robotics and Automation Letters, 201
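    The voting scheme for selecting dominant, coherent flows can be illustrated with a minimal sketch: flow vectors vote in a polar (angle, magnitude) histogram, and only the vectors falling in the winning bin are kept as candidate point-to-point correspondences. This is an illustrative simplification, not the released AgriColMap implementation; the function name and bin parameters are hypothetical.

    ```python
    import numpy as np

    def dominant_flows(flows, angle_bins=16, mag_bins=8, max_mag=50.0):
        """Select the dominant, coherent subset of a flow field by voting in
        a polar (angle, magnitude) histogram and keeping only the vectors
        that fall into the winning bin."""
        flows = np.asarray(flows, dtype=float)
        ang = np.arctan2(flows[:, 1], flows[:, 0])        # direction in [-pi, pi]
        mag = np.linalg.norm(flows, axis=1)               # displacement magnitude
        a_idx = np.clip(((ang + np.pi) / (2 * np.pi) * angle_bins).astype(int),
                        0, angle_bins - 1)
        m_idx = np.clip((mag / max_mag * mag_bins).astype(int), 0, mag_bins - 1)
        votes = np.zeros((angle_bins, mag_bins), dtype=int)
        np.add.at(votes, (a_idx, m_idx), 1)               # each vector casts one vote
        best_a, best_m = np.unravel_index(votes.argmax(), votes.shape)
        return (a_idx == best_a) & (m_idx == best_m)      # boolean inlier mask
    ```

    A coarse histogram like this rejects spurious flow vectors cheaply; the surviving correspondences can then seed a non-rigid alignment.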

    Holistic Multi-View Building Analysis in the Wild with Projection Pooling

    We address six different classification tasks related to fine-grained building attributes: construction type, number of floors, pitch and geometry of the roof, facade material, and occupancy class. Tackling such a remote building analysis problem has become possible only recently, owing to growing large-scale datasets of urban scenes. To this end, we introduce a new benchmarking dataset consisting of 49,426 images (top-view and street-view) of 9,674 buildings, assembled together with geometric metadata. The dataset showcases various real-world challenges, such as occlusions, blur, partially visible objects, and a broad spectrum of buildings. We propose a new projection pooling layer, creating a unified, top-view representation of the top view and the side views in a high-dimensional space. It allows us to utilize the building and imagery metadata seamlessly. Introducing this layer improves classification accuracy compared to highly tuned baseline models, indicating its suitability for building analysis. Comment: Accepted for publication at the 35th AAAI Conference on Artificial Intelligence (AAAI 2021).
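    The idea behind pooling multiple views into a common top-view grid can be sketched in NumPy: features from each view are scattered into the grid cell they project to and pooled element-wise. This is a toy illustration under assumed inputs (precomputed feature vectors and their projected cells), not the paper's actual layer, which operates inside a neural network.

    ```python
    import numpy as np

    def projection_pool(view_feats, cells, grid_shape):
        """Pool per-view feature vectors into a unified top-view grid.
        view_feats: (N, C) features extracted from top- and street-view images;
        cells: (N, 2) row/col each feature projects to in the top-view grid;
        returns an (H, W, C) grid holding the element-wise max per cell."""
        view_feats = np.asarray(view_feats, dtype=float)
        H, W = grid_shape
        C = view_feats.shape[1]
        grid = np.full((H, W, C), -np.inf)                # -inf = "no feature yet"
        for f, (r, c) in zip(view_feats, cells):
            grid[r, c] = np.maximum(grid[r, c], f)        # max-pool overlapping views
        grid[np.isinf(grid)] = 0.0                        # empty cells become zeros
        return grid
    ```

    Max-pooling makes the result invariant to the order and number of views covering a cell, which is one plausible reason for pooling rather than concatenating.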

    Automated 3D model generation for urban environments [online]

    In this thesis, we present a fast approach to the automated generation of textured 3D city models with both high detail at ground level and complete coverage for bird's-eye views. A ground-based facade model is acquired by driving a vehicle equipped with two 2D laser scanners and a digital camera under normal traffic conditions on public roads. One scanner is mounted horizontally and is used to determine the approximate component of relative motion along the movement of the acquisition vehicle via scan matching; the obtained relative motion estimates are concatenated to form an initial path. Assuming that features such as buildings are visible from both the ground-based and the airborne view, this initial path is globally corrected by Monte Carlo Localization techniques using an aerial photograph or a Digital Surface Model as a global map. The second scanner is mounted vertically and is used to capture the 3D shape of the building facades. Applying a series of automated processing steps, a texture-mapped 3D facade model is reconstructed from the vertical laser scans and the camera images. In order to obtain an airborne model containing the roof and terrain shape complementary to the facade model, a Digital Surface Model is created from airborne laser scans, then triangulated, and finally texture-mapped with aerial imagery. Finally, the facade model and the airborne model are fused into a single model usable for both walk-throughs and fly-throughs. The developed algorithms are evaluated on a large dataset acquired in downtown Berkeley, and the results are shown and discussed.
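    The concatenation of relative motion estimates into an initial path amounts to chaining SE(2) increments by dead reckoning. The following is a minimal sketch of that step only, with hypothetical names; it is not the thesis's implementation, and the subsequent global correction is omitted.

    ```python
    import numpy as np

    def concatenate_path(increments):
        """Chain relative-motion estimates (dx, dy, dtheta), each expressed in
        the previous pose's frame, into absolute poses (x, y, theta)."""
        x = y = theta = 0.0
        path = [(x, y, theta)]
        for dx, dy, dth in increments:
            # rotate the body-frame increment into the world frame, then translate
            x += dx * np.cos(theta) - dy * np.sin(theta)
            y += dx * np.sin(theta) + dy * np.cos(theta)
            theta += dth
            path.append((x, y, theta))
        return path
    ```

    Because each increment carries scan-matching error, the chained path drifts; this is exactly why the thesis corrects it globally against an aerial photograph or Digital Surface Model.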

    Ash Tree Identification Based on the Integration of Hyperspectral Imagery and High-density Lidar Data

    Monitoring and management of ash trees have become particularly important in recent years due to the heightened risk of attack from the invasive pest, the emerald ash borer (EAB). However, distinguishing ash from other deciduous trees can be challenging. Hyperspectral imagery and Light Detection and Ranging (LiDAR) data are two valuable data sources often used for tree species classification. Hyperspectral imagery measures detailed spectral reflectance related to the biochemical properties of vegetation, while LiDAR data measure the three-dimensional structure of tree crowns related to morphological characteristics. Thus, the accuracy of vegetation classification may be improved by combining both techniques. The objective of this research is therefore to integrate hyperspectral imagery and LiDAR data to improve ash tree identification. Specifically, the research aims include: 1) using LiDAR data for individual tree crown segmentation; 2) using hyperspectral imagery to extract relatively pure crown spectra; and 3) fusing hyperspectral and LiDAR data for ash tree identification. It is expected that the classification accuracy of ash trees will be significantly improved by the integration of hyperspectral and LiDAR techniques. Analysis results suggest that, first, 3D crown structures of individual trees can be reconstructed using a set of generalized geometric models optimally matched to the LiDAR-derived raster image, and crown widths can be further estimated using tree height and shape-related parameters as independent variables and ground measurements of crown widths as dependent variables. Second, with a constrained linear spectral mixture analysis method, the fractions of all materials within a pixel can be extracted, and relatively pure crown-scale spectra can be further calculated using illuminated-leaf fractions as weighting factors for tree species classification. Third, both the crown shape index (SI) and the coefficient of variation (CV) can be extracted from LiDAR data as variables that are invariant over a tree's life cycle, and they improve ash tree identification when integrated with pixel-weighted crown spectra. This research therefore makes three major contributions to the field of tree species classification: 1) the automatic estimation of individual tree crown width from LiDAR data by combining a generalized geometric model and a regression model; 2) the computation of relatively pure crown-scale spectral reflectance using a pixel-weighting algorithm for tree species classification; and 3) the fusion of shape-related structural features and pixel-weighted crown-scale spectral features for improved ash tree identification.
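    The two spectral steps above can be sketched in simplified form: a sum-to-one-constrained linear unmixing (here approximated by appending a heavily weighted constraint row to an ordinary least-squares system) and a pixel-weighted average that uses illuminated-leaf fractions as weights. Both function names are hypothetical, and the sketch omits the non-negativity constraint of a fully constrained unmixing.

    ```python
    import numpy as np

    def unmix(pixel, endmembers):
        """Sum-to-one-constrained linear unmixing: solve pixel ~= E @ f with the
        constraint sum(f) == 1 appended as a heavily weighted extra equation.
        endmembers: (bands, m) matrix of pure spectra; returns fractions (m,)."""
        bands, m = endmembers.shape
        w = 1000.0                                   # weight enforcing sum(f) == 1
        A = np.vstack([endmembers, w * np.ones((1, m))])
        b = np.concatenate([np.asarray(pixel, dtype=float), [w]])
        f, *_ = np.linalg.lstsq(A, b, rcond=None)
        return f

    def crown_spectrum(pixel_spectra, leaf_fractions):
        """Pixel-weighted crown-scale spectrum: average the pixel spectra of one
        crown using each pixel's illuminated-leaf fraction as its weight."""
        w = np.asarray(leaf_fractions, dtype=float)
        s = np.asarray(pixel_spectra, dtype=float)
        return (w[:, None] * s).sum(axis=0) / w.sum()
    ```

    Weighting by the illuminated-leaf fraction down-weights shadowed and mixed pixels, so the averaged spectrum better approximates pure crown reflectance.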

    A review on deep learning techniques for 3D sensed data classification

    Over the past decade, deep learning has driven progress in 2D image understanding. Despite these advancements, techniques for the automatic understanding of 3D sensed data, such as point clouds, are comparatively immature. However, with a range of important applications from indoor robotic navigation to national-scale remote sensing, there is high demand for algorithms that can learn to automatically understand and classify 3D sensed data. In this paper, we review the current state-of-the-art deep learning architectures for processing unstructured Euclidean data. We begin by addressing the background concepts and traditional methodologies. We then review the current main approaches, including RGB-D, multi-view, volumetric, and fully end-to-end architecture designs. Datasets for each category are documented and explained. Finally, we give a detailed discussion of the future of deep learning for 3D sensed data, using the literature to justify the areas where future research would be most valuable. Comment: 25 pages, 9 figures. Review paper.
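    As a concrete example of the volumetric family of approaches mentioned above, an unstructured point cloud is typically discretized into an occupancy grid before being fed to a 3D convolutional network. The following minimal sketch (function name and parameters are illustrative, not from the review) shows that preprocessing step.

    ```python
    import numpy as np

    def voxelize(points, voxel_size, grid_dims):
        """Convert an unstructured (N, 3) point cloud into a binary occupancy
        grid, the input representation used by volumetric 3D deep-learning
        models. Points outside the grid extent are discarded."""
        grid = np.zeros(grid_dims, dtype=bool)
        idx = np.floor(np.asarray(points, dtype=float) / voxel_size).astype(int)
        inside = np.all((idx >= 0) & (idx < np.array(grid_dims)), axis=1)
        idx = idx[inside]
        grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True      # mark occupied voxels
        return grid
    ```

    The trade-off the review's volumetric category wrestles with is visible even here: memory grows cubically with resolution, which motivates the multi-view and end-to-end point-based alternatives.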

    Automatic tree parameter extraction by a Mobile LiDAR System in an urban context

    In an urban context, tree data are used in city planning, in locating hazardous trees, and in environmental monitoring. This study focuses on developing an innovative methodology to automatically estimate the most relevant individual structural parameters of urban trees sampled by a Mobile LiDAR System at city level. These parameters include the Diameter at Breast Height (DBH), which was estimated by circle fitting of the points belonging to different height bins using RANSAC. In the case of non-circular trunks, DBH is calculated from the maximum distance between extreme points. Tree sizes were extracted through a connectivity analysis. Crown Base Height, defined as the height from the ground to the bottom of the live crown, was calculated by voxelization techniques. For estimating Canopy Volume, procedures of mesh generation and α-shape methods were implemented. Tree location coordinates were obtained by means of Principal Component Analysis. The workflow was validated on 29 trees of different species sampled along a 750 m stretch of road in Delft (the Netherlands) and tested on a larger dataset containing 58 individual trees. The validation was done against field measurements. The DBH parameter had a correlation R² value of 0.92 for the 20 cm height bin, which provided the best results. Moreover, the influence of the number of points used for DBH estimation, considering different height bins, was investigated. The assessment of the other inventory parameters yielded correlation coefficients higher than 0.91. The quality of the results confirms the feasibility of the proposed methodology and its scalability to a comprehensive analysis of urban trees.
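    The RANSAC circle-fitting step for DBH estimation can be sketched as follows: repeatedly fit a circle through three random points of a 2D stem slice, keep the hypothesis with the most inliers, and report its diameter. Names and thresholds are hypothetical, and this is a generic RANSAC sketch rather than the authors' exact implementation.

    ```python
    import numpy as np

    def fit_circle(p1, p2, p3):
        """Circle through three 2D points; returns (center, radius) or None
        if the points are (nearly) collinear."""
        (ax, ay), (bx, by), (cx, cy) = p1, p2, p3
        d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
        if abs(d) < 1e-12:
            return None
        ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
              + (cx**2 + cy**2) * (ay - by)) / d
        uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
              + (cx**2 + cy**2) * (bx - ax)) / d
        return (ux, uy), np.hypot(ax - ux, ay - uy)

    def ransac_dbh(points, n_iters=200, tol=0.01, seed=0):
        """Estimate trunk diameter by RANSAC circle fitting on the (x, y)
        coordinates of a stem slice taken at breast height."""
        rng = np.random.default_rng(seed)
        pts = np.asarray(points, dtype=float)
        best_inliers, best_diameter = 0, None
        for _ in range(n_iters):
            i, j, k = rng.choice(len(pts), 3, replace=False)
            fit = fit_circle(pts[i], pts[j], pts[k])
            if fit is None:
                continue
            (cx, cy), r = fit
            resid = np.abs(np.hypot(pts[:, 0] - cx, pts[:, 1] - cy) - r)
            inliers = (resid < tol).sum()                 # points on this circle
            if inliers > best_inliers:
                best_inliers, best_diameter = inliers, 2 * r
        return best_diameter                              # same units as input
    ```

    Running this on points within a given height bin mirrors the study's per-bin DBH estimation; stray points from branches or nearby objects are rejected as outliers.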

    A Classification-Segmentation Framework for the Detection of Individual Trees in Dense MMS Point Cloud Data Acquired in Urban Areas

    In this paper, we present a novel framework for detecting individual trees in densely sampled 3D point cloud data acquired in urban areas. Given a 3D point cloud, the objective is to assign point-wise labels that are both class-aware and instance-aware, a task known as instance-level segmentation. To achieve this, our framework addresses two successive steps. The first step is a binary point-wise semantic classification based on geometric features, with the objective of assigning semantic class labels to irregularly distributed 3D points, whereby the labels are defined as "tree points" and "other points". The second step is a segmentation with the objective of separating individual trees within the "tree points". This is achieved by applying an efficient adaptation of the mean shift algorithm and a subsequent segment-based shape analysis relying on semantic rules to retain only plausible tree segments. We demonstrate the performance of our framework on a publicly available benchmark dataset acquired with a mobile mapping system in the city of Delft in the Netherlands. This dataset contains 10.13 M labeled 3D points, of which 17.6% are labeled as "tree points". The derived results clearly reveal a semantic classification of high accuracy (up to 90.77%) and an instance-level segmentation of high plausibility, while the simplicity, applicability, and efficiency of the involved methods even allow running the complete framework on a standard laptop computer with a reasonable processing time (less than 2.5 h).
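    The mean-shift step that separates individual trees within the "tree points" can be illustrated with a minimal flat-kernel variant: each point iteratively climbs to its local density peak, and points converging to the same peak form one cluster (one tree). The paper uses an efficient adaptation, whereas this sketch is deliberately naive (quadratic cost per iteration) and its names and tolerances are hypothetical.

    ```python
    import numpy as np

    def mean_shift(points, bandwidth, n_iters=30, merge_tol=None):
        """Flat-kernel mean shift clustering; returns an integer label per point."""
        pts = np.asarray(points, dtype=float)
        modes = pts.copy()
        for _ in range(n_iters):
            for i in range(len(modes)):
                d = np.linalg.norm(pts - modes[i], axis=1)
                modes[i] = pts[d < bandwidth].mean(axis=0)   # shift to local mean
        # merge modes that converged to (almost) the same peak into one label
        tol = merge_tol if merge_tol is not None else bandwidth / 2
        labels = -np.ones(len(pts), dtype=int)
        next_label = 0
        for i in range(len(pts)):
            for j in range(i):
                if np.linalg.norm(modes[i] - modes[j]) < tol:
                    labels[i] = labels[j]
                    break
            if labels[i] < 0:
                labels[i] = next_label
                next_label += 1
        return labels
    ```

    Applied to tree points projected to 2D (or kept in 3D), each resulting cluster is a candidate tree, which the framework then filters with segment-based shape rules.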