7,216 research outputs found

    Detecting Urban Road Changes using Segmentation and Vector Analysis

    Get PDF
    Rapid urbanization is driving increased road infrastructure development, and detecting and monitoring changes in urban road areas is challenging for city planners. This research proposes using semantic segmentation and vector analysis on high-resolution images to identify road network changes. A U-Net model, pre-trained on the Massachusetts roads dataset, performs the semantic segmentation and predicts road labels for a study area at two dates, with co-registration applied to reduce distortions. The predicted labels are converted to shapefiles so that vector analysis can pinpoint and characterize the changes. Satellite images from the Google Earth archive demonstrate the change detection process.
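    The vectorization step described above is generic enough to sketch: a binary road mask predicted by the segmentation model is polygonized and written to a shapefile, and masks from two dates are differenced as vector layers. The sketch below assumes rasterio, shapely, and geopandas are available; the dummy masks, georeferencing transform, CRS, and output filename are illustrative stand-ins, not the paper's actual data or code.

```python
# Hedged sketch: vectorize two predicted road masks and difference them as vector layers.
import numpy as np
import geopandas as gpd
from rasterio import features
from rasterio.transform import from_origin
from shapely.geometry import shape

def mask_to_gdf(mask, transform, crs="EPSG:32646"):    # CRS is an assumption
    """Convert a binary road mask (1 = road) into a GeoDataFrame of polygons."""
    geoms = [shape(geom)
             for geom, value in features.shapes(mask.astype(np.uint8), transform=transform)
             if value == 1]
    return gpd.GeoDataFrame(geometry=geoms, crs=crs)

# Dummy masks standing in for U-Net predictions at two dates (t0, t1).
transform = from_origin(500000, 4649776, 0.5, 0.5)     # 0.5 m pixels, made-up origin
t0 = np.zeros((64, 64), dtype=np.uint8); t0[30:34, :] = 1
t1 = t0.copy(); t1[:, 30:34] = 1                       # a new road appears at t1

roads_t0, roads_t1 = mask_to_gdf(t0, transform), mask_to_gdf(t1, transform)
new_roads = gpd.overlay(roads_t1, roads_t0, how="difference")   # vector change layer
new_roads.to_file("new_roads.shp")                     # shapefile for further GIS analysis
```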

    A review of technical factors to consider when designing neural networks for semantic segmentation of Earth Observation imagery

    Full text link
    Semantic segmentation (classification) of Earth Observation imagery is a crucial task in remote sensing. This paper presents a comprehensive review of technical factors to consider when designing neural networks for this purpose. The review focuses on Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), and transformer models, discussing prominent design patterns for these ANN families and their implications for semantic segmentation. Common pre-processing techniques for ensuring optimal data preparation are also covered, including methods for image normalization and chipping, strategies for addressing data imbalance in training samples, and techniques for overcoming limited data, such as data augmentation, transfer learning, and domain adaptation. By encompassing both the technical aspects of neural network design and the data-related considerations, this review provides researchers and practitioners with a comprehensive and up-to-date understanding of the factors involved in designing effective neural networks for semantic segmentation of Earth Observation imagery. Comment: 145 pages with 32 figures
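    Two of the pre-processing steps the review covers, per-band normalization and chipping, are simple enough to illustrate. The sketch below is a minimal, library-agnostic version in NumPy; the chip size, stride, and random stand-in scene are arbitrary choices, not recommendations from the review.

```python
# Hedged sketch of two common pre-processing steps: per-band normalization
# and chipping a large scene into fixed-size tiles.
import numpy as np

def normalize_per_band(img):
    """Scale each band to zero mean and unit variance (img: H x W x C float array)."""
    mean = img.mean(axis=(0, 1), keepdims=True)
    std = img.std(axis=(0, 1), keepdims=True) + 1e-8
    return (img - mean) / std

def chip(img, size=256, stride=256):
    """Yield fixed-size chips from a scene; a smaller stride gives overlapping chips."""
    h, w = img.shape[:2]
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            yield img[y:y + size, x:x + size]

scene = np.random.rand(1024, 1024, 4).astype(np.float32)   # stand-in 4-band scene
chips = [normalize_per_band(c) for c in chip(scene)]
print(len(chips), chips[0].shape)                           # 16 chips of 256 x 256 x 4
```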

    Multi-source hierarchical conditional random field model for feature fusion of remote sensing images and LiDAR data

    Get PDF
    Feature fusion of remote sensing images and LiDAR point cloud data, which are strongly complementary, can effectively exploit the advantages of multiple feature classes to provide more reliable information for remote sensing applications such as object classification and recognition. In this paper, we introduce a novel multi-source hierarchical conditional random field (MSHCRF) model to fuse features extracted from remote sensing images and LiDAR data for image classification. First, typical features are selected to obtain regions of interest from the multi-source data; the MSHCRF model is then constructed over these regions to exploit the features, the category compatibility within images, and the category consistency across the multi-source data, and its outputs represent the optimal image classification results. Competitive results demonstrate the precision and robustness of the proposed method.
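    The fusion idea, per-region unary costs from each data source plus a pairwise consistency term over adjacent regions, can be caricatured with a toy energy minimized by iterated conditional modes (ICM). This is only an illustration of the general CRF formulation, not the MSHCRF model itself; the region graph, costs, and weights below are all invented.

```python
# Toy sketch: combine image and LiDAR unary costs with a Potts smoothness term
# over a small region graph, and minimize the energy with ICM sweeps.
import numpy as np

n_regions, n_classes = 6, 3
rng = np.random.default_rng(0)
unary_img = rng.random((n_regions, n_classes))     # per-region cost from image features
unary_lidar = rng.random((n_regions, n_classes))   # per-region cost from LiDAR features
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]   # adjacency between regions
w_img, w_lidar, w_pair = 1.0, 1.0, 0.5             # made-up weights

def energy(lab):
    e = sum(w_img * unary_img[i, lab[i]] + w_lidar * unary_lidar[i, lab[i]]
            for i in range(n_regions))
    e += sum(w_pair for i, j in edges if lab[i] != lab[j])   # Potts pairwise term
    return e

labels = np.argmin(w_img * unary_img + w_lidar * unary_lidar, axis=1)
for _ in range(10):                                # ICM: greedily relabel one region at a time
    for i in range(n_regions):
        cand = labels.copy()
        costs = []
        for c in range(n_classes):
            cand[i] = c
            costs.append(energy(cand))
        labels[i] = int(np.argmin(costs))
print(labels, energy(labels))
```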

    Building Footprint Extraction in Dense Areas using Super Resolution and Frame Field Learning

    Full text link
    Despite notable results on standard aerial datasets, current state-of-the-art methods fail to produce accurate building footprints in dense areas because of the challenging properties of these areas and limited data availability. In this paper, we propose a framework to address these issues in polygonal building extraction. First, super resolution is employed to enhance the spatial resolution of the aerial image, allowing finer details to be captured. The enhanced imagery serves as input to a multitask learning module consisting of a segmentation head and a frame field learning head, which together handle irregular building structures. The model is supervised with adaptive loss weighting, enabling the extraction of sharp edges and fine-grained polygons, which is otherwise difficult due to overlapping buildings and low data quality. Extensive experiments on a slum area in India that mimics a dense area demonstrate that our approach outperforms the current state-of-the-art methods by a large margin. Comment: Accepted at The 12th International Conference on Awareness Science and Technology
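    A shared encoder feeding a segmentation head and a frame-field head, with learnable weights balancing the two losses, can be sketched compactly. The snippet below assumes PyTorch; the tiny encoder, the 4-channel frame-field output, and the log-variance weighting are illustrative choices, not necessarily the paper's exact architecture or loss scheme.

```python
# Hedged sketch: shared encoder with segmentation and frame-field heads,
# combined through learnable (uncertainty-style) loss weights.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.seg_head = nn.Conv2d(ch, 1, 1)           # building-mask logits
        self.frame_head = nn.Conv2d(ch, 4, 1)         # frame-field coefficients (assumed 4)
        self.log_vars = nn.Parameter(torch.zeros(2))  # adaptive loss weights

    def forward(self, x):
        f = self.encoder(x)
        return self.seg_head(f), self.frame_head(f)

    def loss(self, seg_pred, seg_gt, ff_pred, ff_gt):
        l_seg = nn.functional.binary_cross_entropy_with_logits(seg_pred, seg_gt)
        l_ff = nn.functional.mse_loss(ff_pred, ff_gt)
        # One common adaptive weighting: exp(-s) * L + s per task.
        return (torch.exp(-self.log_vars[0]) * l_seg + self.log_vars[0]
                + torch.exp(-self.log_vars[1]) * l_ff + self.log_vars[1])

model = MultiTaskNet()
x = torch.rand(2, 3, 128, 128)                        # stand-in super-resolved patches
seg_gt = torch.randint(0, 2, (2, 1, 128, 128)).float()
ff_gt = torch.rand(2, 4, 128, 128)
seg_pred, ff_pred = model(x)
print(model.loss(seg_pred, seg_gt, ff_pred, ff_gt).item())
```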

    Road Segmentation for Remote Sensing Images using Adversarial Spatial Pyramid Networks

    Full text link
    Road extraction from remote sensing images is of great importance for a wide range of applications. Because of complex backgrounds and high road density, most existing methods fail to accurately extract road networks that appear correct and complete, and they suffer from either insufficient training data or the high cost of manual annotation. To address these problems, we introduce a new model that applies structured domain adaptation for synthetic image generation and road segmentation. We incorporate a feature pyramid network into a generative adversarial network to minimize the difference between the source and target domains: a generator learns to produce high-quality synthetic images, and a discriminator attempts to distinguish them from real ones. The feature pyramid network improves performance by extracting effective features from all layers of the network to describe objects at different scales, and a novel scale-wise architecture learns from the multi-level feature maps to improve the semantics of the features. For optimization, the model is trained with a joint reconstruction loss that minimizes the difference between the fake images and the real ones. Experiments on three datasets demonstrate the superior accuracy and efficiency of the proposed approach. In particular, our model achieves a state-of-the-art 78.86 IoU on the Massachusetts dataset with 14.89M parameters and 86.78B FLOPs, which is 4x fewer FLOPs and higher accuracy (+3.47% IoU) than the best of the state-of-the-art approaches used in the evaluation.
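    The scale-wise idea of merging feature maps from all encoder levels can be illustrated with a minimal feature-pyramid module. The PyTorch sketch below is a generic FPN-style top-down fusion, not the paper's adversarial spatial pyramid network; channel widths, depth, and the single-channel road head are arbitrary.

```python
# Hedged sketch: FPN-style top-down fusion of multi-level features for road scoring.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFPN(nn.Module):
    def __init__(self, chs=(16, 32, 64), out_ch=32):
        super().__init__()
        self.stages, in_ch = nn.ModuleList(), 3
        for ch in chs:
            self.stages.append(nn.Sequential(
                nn.Conv2d(in_ch, ch, 3, stride=2, padding=1), nn.ReLU()))
            in_ch = ch
        self.lateral = nn.ModuleList([nn.Conv2d(ch, out_ch, 1) for ch in chs])
        self.head = nn.Conv2d(out_ch, 1, 1)   # road logits at the finest pyramid level

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        # Top-down path: upsample coarser maps and add them to finer laterals.
        p = self.lateral[-1](feats[-1])
        for feat, lat in zip(reversed(feats[:-1]), reversed(self.lateral[:-1])):
            p = lat(feat) + F.interpolate(p, size=feat.shape[-2:], mode="nearest")
        return self.head(p)

logits = TinyFPN()(torch.rand(1, 3, 256, 256))
print(logits.shape)   # torch.Size([1, 1, 128, 128]) road score map
```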

    Semi-supervised Road Updating Network (SRUNet): A Deep Learning Method for Road Updating from Remote Sensing Imagery and Historical Vector Maps

    Full text link
    A road is the skeleton of a city and a fundamental and important geographical component. Many countries have built geo-information databases and gathered large amounts of geographic data, but with the extensive construction of infrastructure and the rapid expansion of cities, automatic updating of road data is imperative to maintain high-quality basic geographic information. However, obtaining bi-phase images of the same area is difficult, and complex post-processing is required to update existing databases. To solve these problems, we propose a road detection method based on semi-supervised learning (SRUNet) specifically for road-updating applications; in this approach, historical road information is fused with the latest images to directly obtain the current state of the road network. Considering that road texture is complex, a multi-branch network with a Map Encoding Branch (MEB) is proposed for representation learning, in which a Boundary Enhancement Module (BEM) improves the accuracy of boundary prediction and a Residual Refinement Module (RRM) optimizes the prediction results. Further, to fully utilize the limited label information and enhance prediction accuracy on unlabeled images, we adopt the mean teacher framework as the basic semi-supervised learning framework and introduce Regional Contrast (ReCo) to improve the model's capacity for distinguishing roads from background elements. We applied our method to two datasets; the model effectively improves performance when fewer labels are available. Overall, the proposed SRUNet provides stable, up-to-date, and reliable predictions for a wide range of road renewal tasks. Comment: 22 pages, 8 figures
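    The mean teacher framework that SRUNet builds on is standard enough to sketch: the teacher is an exponential moving average (EMA) of the student, and unlabeled images contribute a consistency loss between the two. The PyTorch snippet below uses a stand-in network (4 input channels to mimic image plus historical road map), random tensors in place of real data, and an arbitrary consistency weight.

```python
# Hedged sketch of one mean-teacher training step with an EMA teacher update.
import copy
import torch
import torch.nn as nn

student = nn.Sequential(nn.Conv2d(4, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(8, 1, 1))            # 4 ch: image + historical road map
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)                            # teacher is never trained directly

opt = torch.optim.Adam(student.parameters(), lr=1e-3)

def ema_update(student, teacher, decay=0.99):
    with torch.no_grad():
        for ps, pt in zip(student.parameters(), teacher.parameters()):
            pt.mul_(decay).add_(ps, alpha=1 - decay)   # teacher <- EMA of student weights

labeled_x = torch.rand(2, 4, 64, 64)
labeled_y = torch.randint(0, 2, (2, 1, 64, 64)).float()
unlabeled_x = torch.rand(2, 4, 64, 64)

sup = nn.functional.binary_cross_entropy_with_logits(student(labeled_x), labeled_y)
with torch.no_grad():
    pseudo = torch.sigmoid(teacher(unlabeled_x))       # teacher targets on unlabeled data
cons = nn.functional.mse_loss(torch.sigmoid(student(unlabeled_x)), pseudo)
loss = sup + 0.5 * cons                                # 0.5 is an arbitrary consistency weight
opt.zero_grad(); loss.backward(); opt.step()
ema_update(student, teacher)
print(float(loss))
```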

    Coarse-to-fine classification of road infrastructure elements from mobile point clouds using symmetric ensemble point network and euclidean cluster extraction

    Get PDF
    Classifying point clouds obtained from mobile laser scanning of road environments is a fundamental yet challenging problem for road asset management and unmanned vehicle navigation. Deep learning networks need no prior knowledge to classify multiple objects, but they often generate a certain number of false predictions, while traditional clustering methods leverage a priori knowledge but may lack the generalisability of deep learning networks. This paper presents a classification method that coarsely classifies multiple road infrastructure objects with a symmetric ensemble point (SEP) network and then refines the results with a Euclidean cluster extraction (ECE) algorithm. The SEP network applies a symmetric function to capture relevant structural features at different scales and selects optimal sub-samples using an ensemble method; the ECE step subsequently corrects points that were predicted incorrectly in the first stage. The experimental results indicate that the method effectively extracts six types of road infrastructure elements: road surfaces, buildings, walls, traffic signs, trees, and streetlights. The overall accuracy of the SEP-ECE method improves by 3.97% with respect to PointNet, and the average classification accuracy reaches approximately 99.74%, which is suitable for practical use in transportation network management.
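    The refinement step can be imitated with generic tools: group points into Euclidean clusters and overwrite stray labels with each cluster's majority class. The sketch below uses scikit-learn's DBSCAN (with min_samples=1) as a stand-in for Euclidean cluster extraction; the random points, predicted labels, and eps threshold are illustrative only.

```python
# Hedged sketch: cluster-level majority voting to clean up per-point predictions.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
points = rng.random((200, 3)) * [50.0, 50.0, 5.0]       # toy mobile-mapping points (x, y, z)
pred = rng.integers(0, 6, size=200)                     # noisy per-point class predictions

clusters = DBSCAN(eps=1.5, min_samples=1).fit_predict(points)

refined = pred.copy()
for c in np.unique(clusters):
    idx = np.where(clusters == c)[0]
    majority = np.bincount(pred[idx]).argmax()          # dominant class within the cluster
    refined[idx] = majority                             # smooth away isolated errors
print((refined != pred).sum(), "labels changed by cluster-level refinement")
```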

    Semi-automated workflow for natural disaster assessment : a case study of Banda Aceh, Indonesia

    Get PDF
    Dissertation submitted in partial fulfilment of the requirements for the degree of Master of Science in Geospatial Technologies. The past decade has witnessed many natural disasters hitting highly populated areas, causing billions of dollars in damage as well as many human casualties. During natural disasters, when obtaining ground measurements is difficult, remote sensing and geographical information systems (GIS) are useful tools for in-depth analysis of the affected area. This report introduces a new semi-automatic workflow in which the road network is used to divide the area into "blocks" and zonal statistics are then applied to detect change over those blocks, rather than using the conventional pixel-by-pixel or object-oriented change detection methods. This hybrid approach takes advantage of the simplicity and ease of applying pixel-based change detection to fixed objects, or "blocks", to assess damage. The change detection results can then be used to map and quantify damage caused by natural disasters using pre- and post-event Landsat imagery of the affected area. Multi-criteria analysis is performed on the damage map, proximity to roads, proximity to waterbodies, and building size to find the most suitable locations for temporary housing sites. Image differencing of the mean NDWI produced the highest overall accuracy of 71.70% among eleven bands/indices, and the multi-criteria analysis selected fourteen temporary housing sites from a possible 114. When time is of the essence and GIS expertise in the field is limited, local authorities can greatly benefit from a rapid, generalized analysis that provides a "bird's-eye view" of the affected area, allowing emergency efforts to be allocated efficiently and effectively within a short time frame.
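    The block-based change score at the heart of the workflow reduces to zonal statistics over an index-difference image. The NumPy sketch below computes an NDWI difference and summarizes it per block; the regular grid of blocks stands in for the polygons cut out by the road network, and the bands, block layout, and change threshold are invented for illustration.

```python
# Hedged sketch: per-block (zonal) statistics of an NDWI difference image.
import numpy as np

def ndwi(green, nir):
    return (green - nir) / (green + nir + 1e-8)

rng = np.random.default_rng(2)
green_pre, nir_pre = rng.random((2, 100, 100))          # stand-in pre-event bands
green_post, nir_post = rng.random((2, 100, 100))        # stand-in post-event bands

# A 4 x 4 grid of blocks stands in for road-network-derived polygons.
blocks = (np.arange(100)[:, None] // 25) * 4 + np.arange(100)[None, :] // 25

diff = ndwi(green_post, nir_post) - ndwi(green_pre, nir_pre)
block_mean = np.array([diff[blocks == b].mean() for b in np.unique(blocks)])
flagged = np.unique(blocks)[np.abs(block_mean) > np.abs(block_mean).mean()]   # crude threshold
print("blocks flagged as changed:", flagged)
```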