    Buildings Detection in VHR SAR Images Using Fully Convolution Neural Networks

    This paper addresses the highly challenging problem of automatically detecting man-made structures, especially buildings, in very high resolution (VHR) synthetic aperture radar (SAR) images. In this context, the paper makes two major contributions. First, it presents a novel and generic workflow that initially classifies spaceborne TomoSAR point clouds (generated by processing VHR SAR image stacks with advanced interferometric techniques known as SAR tomography, TomoSAR) into buildings and non-buildings with the aid of auxiliary information (i.e., either openly available 2-D building footprints or an optical image classification scheme), and later back-projects the extracted building points onto the SAR imaging coordinates to automatically produce large-scale, benchmark-labelled (buildings/non-buildings) SAR datasets. Second, these labelled datasets (i.e., building masks) are utilized to construct and train state-of-the-art deep Fully Convolutional Networks with an additional Conditional Random Field represented as a Recurrent Neural Network to detect building regions in a single VHR SAR image. Such a cascaded formulation has been successfully employed in the computer vision and remote sensing fields for optical image classification but, to our knowledge, has not been applied to SAR images. The building detection results are illustrated and validated over a TerraSAR-X VHR spotlight SAR image covering approximately 39 km² (almost the whole city of Berlin) with mean pixel accuracies of around 93.84%. Comment: Accepted publication in IEEE TGR
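The footprint-based labelling step described above can be sketched minimally: each TomoSAR point is tagged building or non-building by testing its ground coordinates against 2-D footprint polygons. The ray-casting test, function names, and toy data below are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch: label TomoSAR points via a point-in-polygon test
# against 2-D building footprints. All names and data are hypothetical.

def point_in_polygon(x, y, polygon):
    """Ray-casting test; polygon is a list of (x, y) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # count edge crossings of a horizontal ray to the right of (x, y)
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def label_points(points, footprints):
    """Label each (x, y, z) point: 1 = building, 0 = non-building."""
    return [int(any(point_in_polygon(x, y, fp) for fp in footprints))
            for x, y, _z in points]

footprint = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]  # toy square
points = [(5.0, 5.0, 20.0), (15.0, 5.0, 1.0)]
print(label_points(points, [footprint]))  # -> [1, 0]
```

In the actual workflow these labels would then be back-projected into SAR image geometry to form the training masks.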

    Semi-Supervised Learning from Street-View Images and OpenStreetMap for Automatic Building Height Estimation

    Accurate building height estimation is key to the automatic derivation of 3D city models from emerging big geospatial data, including Volunteered Geographical Information (VGI). However, an automatic solution for large-scale building height estimation based on low-cost VGI data is currently missing. The fast development of VGI data platforms, especially OpenStreetMap (OSM) and crowdsourced street-view images (SVI), offers a stimulating opportunity to fill this research gap. In this work, we propose a semi-supervised learning (SSL) method for automatically estimating building height from Mapillary SVI and OSM data to generate low-cost, open-source 3D city models in LoD1. The proposed method consists of three parts: first, we propose an SSL schema with the option of setting different ratios of "pseudo labels" during the supervised regression; second, we extract multi-level morphometric features from OSM data (i.e., buildings and streets) for the purpose of inferring building height; last, we design a building floor estimation workflow with a pre-trained facade object detection network to generate "pseudo labels" from SVI and assign them to the corresponding OSM building footprints. In a case study, we validate the proposed SSL method in the city of Heidelberg, Germany, and evaluate the model performance against reference building height data. Based on three different regression models, namely Random Forest (RF), Support Vector Machine (SVM), and Convolutional Neural Network (CNN), the SSL method leads to a clear performance boost in estimating building heights, with a Mean Absolute Error (MAE) of around 2.1 meters, which is competitive with state-of-the-art approaches. This preliminary result is promising and motivates our future work on scaling up the proposed method based on low-cost VGI data, even in regions and areas with diverse data quality and availability.
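The pseudo-label idea above can be sketched in two steps: a detected floor count from street-view imagery is converted to an approximate height, and a chosen fraction of such pseudo-labelled samples is mixed into the reference training set. The 2.8 m storey height, the ground offset, and all data below are assumptions for illustration, not values from the paper.

```python
# Illustrative sketch of pseudo-label generation and SSL mixing.
# FLOOR_HEIGHT_M and all sample data are assumed, not from the paper.

FLOOR_HEIGHT_M = 2.8  # assumed average storey height in meters

def floors_to_height(n_floors, ground_offset_m=0.5):
    """Convert a detected floor count into an approximate height (m)."""
    return n_floors * FLOOR_HEIGHT_M + ground_offset_m

def build_training_set(labeled, pseudo, pseudo_ratio):
    """Combine reference samples with a fraction of pseudo-labelled ones.

    labeled / pseudo: lists of (features, height) pairs.
    pseudo_ratio: fraction of the pseudo pool to include (0.0 .. 1.0).
    """
    k = int(len(pseudo) * pseudo_ratio)
    return labeled + pseudo[:k]

# toy samples: morphometric feature vector -> height label
pseudo = [([0.2, 1.0], floors_to_height(f)) for f in (3, 5, 2, 4)]
train = build_training_set([([0.1, 0.9], 9.1)], pseudo, pseudo_ratio=0.5)
print(len(train), round(floors_to_height(3), 1))  # -> 2 8.9
```

The combined set would then feed any of the regressors mentioned in the abstract (RF, SVM, CNN); varying `pseudo_ratio` reproduces the schema's option of different pseudo-label ratios.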

    Advances for Urban Planning in Highly Dynamic Environments through Very High-Resolution Remote Sensing Approaches

    Progressing urbanization and climate change confront cities and urban planning with major challenges. Living space for residents and the infrastructure must be adapted to climate-protection requirements, and the resilience of urban areas to the consequences of climate change must be increased. Against this background, the goal of urban planning and urban infrastructure planning, on behalf of society, is to find solutions that meet these future demands and ensure liveable cities with all urban functions. Planners must also identify economically and ecologically suitable proposals for providing urban infrastructure in order to meet basic needs and push back slums. Good planning practice therefore requires the development of planning scenarios for appropriate, successful, and integrated solutions, with a data basis serving as the foundation for decisions whose consistency, quality, and currency allow it to be used as evidence for scenario development. This dissertation demonstrates, through three studies, that the discipline of remote sensing, using very high-resolution Earth observation data, can contribute to successful urban and urban infrastructure planning by increasing the information content of existing remote sensing approaches through application-friendly methods and by directly providing planning-relevant information as evidence for decision-making. This topic was addressed in the highly dynamic cities of Da Nang (VN) and Belmopan (BZ). In Da Nang, by differencing photogrammetrically derived height models at very high resolution, dynamics within the building stock were determined and evaluated instead of merely areal changes of urban areas.
As shown for Belmopan, the building type can serve as a suitable means of estimating socio-economic indicators that are related to specific consumption. Using drone data, the determination of building types was improved and, in addition, the relationship between building type and electricity consumption was demonstrated, enabling a photovoltaic energy balance at the level of individual buildings.

    A Review of Landcover Classification with Very-High Resolution Remotely Sensed Optical Images—Analysis Unit, Model Scalability and Transferability

    As an important application in remote sensing, landcover classification remains one of the most challenging tasks in very-high-resolution (VHR) image analysis. As a rapidly increasing number of Deep Learning (DL) based landcover methods and training strategies are claimed to be the state of the art, the already fragmented technical landscape of landcover mapping methods has been further complicated. Although a plethora of review articles attempts to guide researchers in making an informed choice of landcover mapping methods, these articles either focus on applications in a specific area or revolve around general deep learning models, and thus lack a systematic view of the ever-advancing landcover mapping methods. In addition, issues related to training samples and model transferability have become more critical than ever in an era dominated by data-driven approaches, yet these issues were addressed to a lesser extent in previous review articles on remote sensing classification. Therefore, in this paper, we present a systematic overview of existing methods, starting from learning methods and the varying basic analysis units of landcover mapping tasks, and proceeding to challenges and solutions on three aspects of scalability and transferability with a remote sensing classification focus: (1) sparsity and imbalance of data; (2) domain gaps across different geographical regions; and (3) multi-source and multi-view fusion. We discuss each of these categories of methods in detail, draw concluding remarks on these developments, and recommend potential directions for the continued endeavor.

    Uncertainties of Human Perception in Visual Image Interpretation in Complex Urban Environments

    Today, satellite images are mostly exploited automatically thanks to advances in image classification methods. Manual visual image interpretation (MVII), however, still plays a significant role, e.g., for generating training data for machine-learning algorithms or for validation purposes. In certain urban environments of highest density and structural complexity, textural and spectral complications in overlapping roof structures still demand a human interpreter if one aims to capture individual building structures; cognitive perception and real-world experience remain indispensable. Against this background, this article aims at quantifying and interpreting the uncertainties of mapping rooftop footprints in such areas. We focus on the agreement among interpreters and on which aspects of perception and elements of image interpretation affect mapping. Ten test persons digitized six complex built-up areas. From this, we obtain quantitative information about spatial variables of buildings to systematically check the consistency and congruence of results; an additional questionnaire reveals qualitative information about obstacles. Generally, we find large differences among interpreters' mapping results but high consistency of results for the same interpreter. We measure that deviations rise with increasing morphologic complexity. High degrees of individuality are expressed, e.g., in time consumption and in in-situ or geographic information system (GIS) pre-knowledge, whereas the data source mostly influences the mapping procedure. With this study we aim to fill a gap, as prior research using MVII often neither implements an uncertainty analysis nor quantifies mapping aberrations. We conclude that remote sensing studies should not rely unquestioningly on MVII for validation; furthermore, data and methods are needed to reduce this uncertainty.
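One common way to quantify the agreement among interpreters described above is the intersection-over-union (IoU) of rasterised building delineations, averaged over all interpreter pairs. The sketch below is an illustrative metric choice, not necessarily the exact procedure of the study.

```python
# Illustrative inter-interpreter agreement: pairwise IoU of binary masks
# represented as sets of (row, col) pixels.

def iou(mask_a, mask_b):
    """Intersection-over-union of two pixel sets."""
    union = mask_a | mask_b
    if not union:
        return 1.0  # two empty delineations agree trivially
    return len(mask_a & mask_b) / len(union)

def mean_pairwise_iou(masks):
    """Average agreement over all interpreter pairs for one building."""
    scores = [iou(masks[i], masks[j])
              for i in range(len(masks)) for j in range(i + 1, len(masks))]
    return sum(scores) / len(scores)

# toy example: two interpreters digitize a 4x4 roof, one shifted by a column
a = {(r, c) for r in range(4) for c in range(4)}
b = {(r, c) for r in range(4) for c in range(1, 5)}
print(round(iou(a, b), 2))  # -> 0.6
```

Tracking how this score drops with morphologic complexity would reproduce the study's central quantitative observation.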

    Review on Active and Passive Remote Sensing Techniques for Road Extraction

    Digital maps of road networks are a vital part of digital cities and intelligent transportation. In this paper, we provide a comprehensive review of road extraction based on various remote sensing data sources, including high-resolution images, hyperspectral images, synthetic aperture radar images, and light detection and ranging. This review is divided into three parts. Part 1 provides an overview of the existing data acquisition techniques for road extraction, including data acquisition methods, typical sensors, application status, and prospects. Part 2 outlines the main road extraction methods based on the four data sources; in this section, road extraction methods based on the different data sources are described and analysed in detail. Part 3 presents the combined application of multisource data for road extraction. Evidently, different data acquisition techniques have unique advantages, and the combination of multiple sources can improve the accuracy of road extraction. The main aim of this review is to provide a comprehensive reference for research on existing road extraction technologies. (Peer reviewed)

    Building Detection from Very High Resolution Remotely Sensed Imagery Using Deep Neural Networks

    The past decades have witnessed significant change in human societies, driven by fast-paced and rapid urbanization. The urbanization boom stems from the influx of people into urban areas and comes with building construction and deconstruction. Estimating both residential and industrial buildings is important to reveal and demonstrate the human activities of a region. As a result, it is essential to detect buildings in urban areas effectively and accurately for urban planning and population monitoring. Automatic building detection in remote sensing has always been a challenging task, because small targets cannot be identified in low-resolution images and because urban buildings vary widely in scale, structure, and colour. However, technical developments have improved the performance of the building detection task by taking advantage of the accessibility of very high-resolution (VHR) remotely sensed images and of innovations in object detection methods. The purpose of this study is to develop a framework for the automatic detection of urban buildings from VHR remotely sensed imagery at a large scale using a state-of-the-art deep learning network. The thesis addresses the research gaps and difficulties as well as the achievements in building detection. Conventional hand-crafted methods, machine learning methods, and deep learning methods are reviewed and discussed. The proposed method employs a deep convolutional neural network (CNN) for building detection. Two input datasets with different spatial resolutions were used to train and validate the CNN model, and a testing dataset was used to evaluate the performance of the proposed building detection method. The experimental results indicate that the proposed method performs well at both the building detection and the outline segmentation task, with a total precision of 0.92, a recall of 0.866, and an F1-score of 0.891.
In conclusion, this study proves the feasibility of CNNs for solving building detection challenges using VHR remotely sensed imagery.
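The reported F1-score is simply the harmonic mean of precision and recall; plugging in the quoted 0.92 and 0.866 reproduces roughly the reported 0.891 (small rounding differences aside), as this short check shows.

```python
# F1 as the harmonic mean of precision and recall.

def f1_score(precision, recall):
    """Harmonic mean of precision and recall; 0.0 if both are 0."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.92, 0.866), 3))  # -> 0.892
```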