    Building polygon extraction from aerial images and digital surface models with a frame field learning framework

    Deep learning-based models for building delineation from remotely sensed images face the challenge of producing precise and regular building outlines. This study investigates the combination of normalized digital surface models (nDSMs) with aerial images to optimize the extraction of building polygons using the frame field learning method. Results are evaluated at the pixel, object, and polygon levels. In addition, the statistical deviations in the number of vertices of the predicted building polygons relative to the reference are analyzed. This vertex-count comparison identifies the output polygons that are easiest for human analysts to edit in operational applications, and can serve as guidance for reducing the post-processing workload needed to obtain high-accuracy building footprints. Experiments conducted in Enschede, the Netherlands, demonstrate that introducing the nDSM reduces false positives and prevents real buildings on the ground from being missed. Positional accuracy and shape similarity were improved, resulting in better-aligned building polygons. The method achieved a mean intersection over union (IoU) of 0.80 with the fused data (RGB + nDSM), against an IoU of 0.57 with the RGB-only baseline in the same area. A qualitative analysis of the results shows that the investigated model predicts more precise and regular polygons for large and complex structures.
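    The reported evaluation metric can be illustrated with a minimal sketch. The abstract does not specify how the mean is taken (over images, buildings, or classes), so the function below computes plain IoU between two binary masks; averaging it over an evaluation set would give the mean IoU the study reports.

    ```python
    import numpy as np

    def iou(pred, ref):
        """Intersection over union between two binary segmentation masks.

        Returns 1.0 when both masks are empty (no building pixels in
        either prediction or reference), a common convention.
        """
        pred = np.asarray(pred).astype(bool)
        ref = np.asarray(ref).astype(bool)
        intersection = np.logical_and(pred, ref).sum()
        union = np.logical_or(pred, ref).sum()
        return float(intersection / union) if union else 1.0
    ```

    For example, a prediction covering two pixels where the reference covers one of them yields an IoU of 0.5.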

    Investigating Sar-Optical Deep Learning Data Fusion to Map the Brazilian Cerrado Vegetation with Sentinel Data

    Despite its environmental and societal importance, accurately mapping the Brazilian Cerrado's vegetation remains an open challenge. Its diverse but spectrally similar physiognomies are difficult to identify and map with state-of-the-art methods from medium- to high-resolution optical images alone. This work investigates the fusion of Synthetic Aperture Radar (SAR) and optical data in convolutional neural network architectures to map the Cerrado according to a 2-level class hierarchy. Additionally, the proposed model is designed to handle the uncertainties introduced by the difference in resolution between the input images (at 10 m) and the reference data (at 30 m). We tested four data fusion strategies and showed that the position at which the data are combined is important for the network to learn better features.
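    The effect of fusion position can be sketched with a toy example. The abstract does not describe the four strategies, so the snippet below only contrasts the two extremes, early fusion (channel concatenation before any feature extraction) versus late fusion (separate per-sensor feature extraction, then concatenation); the band counts and the stand-in "encoder" (global average pooling) are illustrative assumptions, not the paper's architecture.

    ```python
    import numpy as np

    def encode(x):
        """Stand-in feature extractor: global average pool per channel.
        A real model would use convolutional encoder branches instead."""
        return x.mean(axis=(1, 2))

    rng = np.random.default_rng(0)
    optical = rng.random((4, 64, 64))  # e.g. 4 optical bands at 10 m
    sar = rng.random((2, 64, 64))      # e.g. 2 SAR polarizations at 10 m

    # Early fusion: stack sensors along the channel axis, then encode jointly.
    early = encode(np.concatenate([optical, sar], axis=0))  # 6 features

    # Late fusion: encode each sensor separately, then concatenate features.
    late = np.concatenate([encode(optical), encode(sar)])   # 6 features
    ```

    Both variants end with the same number of features here, but in a trained network the fusion point determines whether cross-sensor interactions are learned from the first layer or only after each modality has its own representation.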