71 research outputs found

    Building Detection From Monocular VHR Images by Integrated Urban Area Knowledge

    Automated Extraction of Buildings and Roads in a Graph Partitioning Framework

    Creating 3D city models from satellite imagery for integrated assessment and forecasting of solar energy

    Buildings are the most prominent component of the urban environment. The geometric identification of urban buildings plays an important role in a range of urban applications, including 3D representations of buildings, energy consumption analysis, sustainable development, urban planning, risk assessment, and change detection. In particular, 3D building models can provide a comprehensive assessment of surfaces exposed to solar radiation. However, identifying the available surfaces on urban structures, and the locations that receive enough sunlight to justify increased installed power capacity (e.g. photovoltaic systems), is crucial for solar energy supply efficiency. Although considerable research has been devoted to detecting building rooftops, less attention has been paid to creating and completing 3D models of urban buildings. There is therefore a need to better understand the solar energy potential of building envelope surfaces so that future adaptive energy policies can be formulated to improve the sustainability of cities. The goal of this thesis was to develop a new approach to automatically model existing buildings for the exploitation of solar energy potential within an urban environment. By investigating building footprints and heights based on shadow information derived from satellite images, 3D city models were generated. Footprints were detected using a two-level segmentation process: (1) the iterative graph cuts approach for determining building regions and (2) the active contour method and the adjusted-geometry parameters method for refining the edges and shapes of the extracted building footprints. Building heights were estimated by simulating artificial shadow regions from the identified building footprints and the solar information in the image metadata at pre-defined height increments. The difference between the actual and simulated shadow regions at every height increment was computed using the Jaccard similarity coefficient. The 3D models at the first level of detail were then obtained by extruding the building footprints to their estimated heights, creating image voxels and applying the marching cubes approach. In conclusion, 3D models of buildings can be generated solely from 2D data of the buildings' attributes in any selected urban area. The approach outperforms previous attempts, reducing mean error by at least 21%. Qualitative evaluations illustrate that 3D building models can be derived from satellite images with a mean error of less than 5 m. This comprehensive study allows 3D city models to be generated in the absence of elevation attributes and additional data. Experiments revealed that this novel, automated method can be useful in a number of spatial analyses and urban sustainability applications.
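The shadow-based height search described in this abstract can be sketched as follows. This is a minimal illustration, not the thesis implementation: the function names and parameters are invented for the example, and the shadow model (a single translated copy of the footprint along the sun direction, with an assumed azimuth convention) is a deliberate simplification of the full shadow-region simulation.

```python
import numpy as np

def jaccard(a, b):
    """Jaccard similarity between two binary masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def estimate_height(footprint, actual_shadow, sun_azimuth_deg,
                    sun_elevation_deg, gsd_m,
                    h_min=3.0, h_max=60.0, h_step=1.0):
    """Search candidate heights at fixed increments; keep the height whose
    simulated shadow mask best matches the observed shadow (max Jaccard)."""
    az = np.deg2rad(sun_azimuth_deg)
    rows, cols = np.nonzero(footprint)
    best_h, best_j = h_min, -1.0
    for h in np.arange(h_min, h_max + h_step, h_step):
        # Shadow length on the ground for this candidate height, in pixels.
        length_px = h / np.tan(np.deg2rad(sun_elevation_deg)) / gsd_m
        # Simplified cast: translate every footprint pixel once along the
        # sun direction instead of simulating the full shadow region.
        dr = int(round(length_px * np.cos(az)))
        dc = int(round(-length_px * np.sin(az)))
        sim = np.zeros_like(footprint, dtype=bool)
        r2 = np.clip(rows + dr, 0, footprint.shape[0] - 1)
        c2 = np.clip(cols + dc, 0, footprint.shape[1] - 1)
        sim[r2, c2] = True
        sim &= ~footprint.astype(bool)   # shadow lies outside the roof itself
        j = jaccard(sim, actual_shadow)
        if j > best_j:
            best_h, best_j = h, j
    return best_h, best_j
```

The search simply maximises the Jaccard coefficient over the height increments; in the thesis the simulated shadow region comes from the image metadata's solar geometry rather than this toy single-offset cast.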

    Few-shot Object Detection on Remote Sensing Images

    In this paper, we deal with the problem of object detection on remote sensing images. Previous works have developed numerous deep CNN-based methods for object detection on remote sensing images and report remarkable achievements in detection performance and efficiency. However, current CNN-based methods mostly require a large number of annotated samples to train deep neural networks and tend to have limited generalization ability for unseen object categories. In this paper, we introduce a few-shot learning-based method for object detection on remote sensing images where only a few annotated samples are provided for the unseen object categories. More specifically, our model contains three main components: a meta feature extractor that learns to extract feature representations from input images, a reweighting module that learns to adaptively assign different weights to each feature representation from the support images, and a bounding box prediction module that carries out object detection on the reweighted feature maps. We build our few-shot object detection model upon the YOLOv3 architecture and develop a multi-scale object detection framework. Experiments on two benchmark datasets demonstrate that, with only a few annotated samples, our model can still achieve satisfactory detection performance on remote sensing images, significantly better than well-established baseline models. Comment: 12 pages, 7 figures
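The channel-wise reweighting idea behind the abstract's second component can be illustrated with a small NumPy sketch. The function name is invented for the example, and the use of global average pooling plus a sigmoid is an assumption made for illustration; the paper's reweighting module is a learned network operating on support images, not this fixed computation.

```python
import numpy as np

def reweight_features(query_feats, support_feats):
    """Channel-wise feature reweighting: collapse each support feature map to
    one scalar per channel (global average pooling), squash the scalars with a
    sigmoid, then rescale the query feature map channel by channel."""
    # query_feats: (C, H, W) query-image features
    # support_feats: (C, Hs, Ws) support-image features for one class
    w = support_feats.mean(axis=(1, 2))       # (C,) class-specific weights
    w = 1.0 / (1.0 + np.exp(-w))              # sigmoid keeps weights in (0, 1)
    return query_feats * w[:, None, None]     # broadcast across H and W
```

The key design point this captures is that the support image contributes a per-channel scaling of the shared query features, so a new class can modulate detection without retraining the feature extractor.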

    Deep learning in remote sensing: a review

    Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, as a major breakthrough in the field, deep learning has proven to be an extremely powerful tool in many fields. Shall we embrace deep learning as the key to all? Or, should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we encourage remote sensing scientists to bring their expertise into deep learning, and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges such as climate change and urbanization. Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine

    Robust Modular Feature-Based Terrain-Aided Visual Navigation and Mapping

    The visual feature-based Terrain-Aided Navigation (TAN) system presented in this thesis addresses the problem of constraining the inertial drift introduced into the location estimate of Unmanned Aerial Vehicles (UAVs) in GPS-denied environments. The presented TAN system utilises salient visual features representing semantic or human-interpretable objects (roads, forest and water boundaries) from onboard aerial imagery and associates them with a database of reference features created a priori by applying the same feature detection algorithms to satellite imagery. Correlating the detected features with the reference features via a series of robust data association steps allows a localisation solution to be achieved with a finite absolute error bound defined by the certainty of the reference dataset. The feature-based Visual Navigation System (VNS) presented in this thesis was originally developed for a navigation application using simulated multi-year satellite image datasets. The extension of the system into the mapping domain, in turn, has been based on real (not simulated) flight data and imagery. The mapping study demonstrated the full potential of the system as a versatile tool for enhancing the accuracy of information derived from aerial imagery. Not only have visual features such as road networks, shorelines and water bodies been used to obtain a position 'fix'; they have also been used in reverse for accurate mapping of vehicles detected on the roads into inertial space with improved precision. Combined correction of geo-coding errors and improved aircraft localisation formed a robust solution for the defence mapping application. A system of the proposed design would provide a complete, independent navigation solution to an autonomous UAV and additionally give it object tracking capability.
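The association-then-fix pipeline in this abstract can be sketched in a few lines. This is a simplified illustration under stated assumptions: the function names and the gate parameter are invented, features are reduced to 2D point coordinates, and only the simplest gated nearest-neighbour step of the thesis's "series of robust data association steps" is shown, with the position fix taken as a mean translation.

```python
import numpy as np

def associate_features(detected, reference, gate_m=15.0):
    """Gated nearest-neighbour data association: match each detected feature
    to its closest reference feature, rejecting matches beyond gate_m metres
    as outliers."""
    matches = []
    for i, d in enumerate(detected):
        dists = np.linalg.norm(reference - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= gate_m:
            matches.append((i, j))
    return matches

def position_fix(detected, reference, matches):
    """Estimate the 2D translation correcting the drifted position estimate
    as the mean offset between matched detected and reference features."""
    if not matches:
        return np.zeros(2)
    offsets = [reference[j] - detected[i] for i, j in matches]
    return np.mean(offsets, axis=0)
```

The gate bounds the worst-case association error, which is what gives the localisation solution its finite absolute error bound relative to the certainty of the reference dataset.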