10 research outputs found

    Segmenting Roads from Aerial Images: A Deep Learning Approach Using Multi-Scale Analysis

    Road maps require frequent updates because of irregular infrastructural changes. Updating a road map manually is a lengthy process, whereas aerial or remote sensing (RS) imagery allows much faster updates. However, road extraction is complicated by the similar texture appearance of building rooftops, shadows, and occlusion by trees; occluded roads appear as discontinuous patches in the segmented images of updated maps. In this paper, we propose a deep learning method that uses multi-scale analysis for road feature extraction. Dilated inception (DI) modules in the up- and down-sampling paths of the network extract the local and global texture patterns of the road. Furthermore, we utilize a pyramid pooling (PP) module, which combines average and max pooling, to capture global contextual information in shadowed regions. In the proposed architecture, the road in the aerial image is first segmented along with tiny non-road segments. Next, post-processing that exploits geometric shape features filters out these tiny non-road noises. The performance of the proposed network is validated on the publicly available Massachusetts roads dataset by comparison with other models in the literature.
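    A minimal sketch of what such a multi-scale dilated inception block might look like in PyTorch; the layer configuration, channel counts, and dilation rates below are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class DilatedInception(nn.Module):
    """Parallel 3x3 convolutions with increasing dilation rates to capture
    local and global road texture at multiple receptive-field sizes."""
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4, 8)):
        super().__init__()
        branch_ch = out_ch // len(dilations)
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, branch_ch, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(branch_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])

    def forward(self, x):
        # Concatenate the multi-scale responses along the channel axis.
        return torch.cat([b(x) for b in self.branches], dim=1)

# Example: one 256x256 RGB tile passed through a single module.
block = DilatedInception(in_ch=3, out_ch=64)
features = block(torch.randn(1, 3, 256, 256))  # -> (1, 64, 256, 256)
```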

    Road Extraction from High-Resolution Orthophoto Images Using Convolutional Neural Network

    Two major applications in the geospatial information system (GIS) and remote sensing fields are object detection and man-made feature extraction (e.g., road sections) from high-resolution remote sensing imagery. Extracting roads from high-resolution remotely sensed imagery plays a crucial role in multiple applications, such as navigation, emergency tasks, land cover change detection, and updating GIS maps. This study presents a deep learning technique based on a convolutional neural network (CNN) to classify and extract roads from orthophoto images. We applied the model to five orthophoto images to demonstrate its effectiveness for road extraction. First, we used principal component analysis and object-based image analysis for pre-processing, not only to obtain spectral information but also to add spatial and textural information that enhances classification accuracy. The results of this step were then used as input to the CNN model to classify the images into road and non-road parts, and simple morphological opening and closing operations were applied to extract connected road components and remove holes inside the road parts. For the accuracy assessment of the proposed method, we used precision, recall, F1 score, overall accuracy, and IoU; the average values of these metrics were 91.09%, 95.32%, 93.15%, 94.44%, and 87.21%, respectively. The results were also compared with those of other existing methods, and the comparison confirmed the reliability and superior performance of the suggested architecture for extracting road regions from orthophoto images.
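    A rough illustration of the kind of opening/closing post-processing described above, applied to a binary road/non-road mask with SciPy; the structuring-element size is an assumption, not the authors' setting.

```python
import numpy as np
from scipy import ndimage

def clean_road_mask(mask: np.ndarray, size: int = 3) -> np.ndarray:
    """Morphological opening (removes small non-road speckles) followed by
    closing (fills small holes inside road regions) on a binary mask."""
    structure = np.ones((size, size), dtype=bool)
    opened = ndimage.binary_opening(mask.astype(bool), structure=structure)
    closed = ndimage.binary_closing(opened, structure=structure)
    return closed
```

    Connected road components can then be obtained by labelling the cleaned mask, for example with `scipy.ndimage.label`.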

    Review on Active and Passive Remote Sensing Techniques for Road Extraction

    Digital maps of road networks are a vital part of digital cities and intelligent transportation. In this paper, we provide a comprehensive review of road extraction based on various remote sensing data sources, including high-resolution images, hyperspectral images, synthetic aperture radar (SAR) images, and light detection and ranging (LiDAR) data. The review is divided into three parts. Part 1 provides an overview of existing data acquisition techniques for road extraction, including data acquisition methods, typical sensors, application status, and prospects. Part 2 covers the main road extraction methods based on the four data sources; methods based on each source are described and analysed in detail. Part 3 presents the combined application of multisource data for road extraction. Evidently, different data acquisition techniques have unique advantages, and combining multiple sources can improve the accuracy of road extraction. The main aim of this review is to provide a comprehensive reference for research on existing road extraction technologies.

    Improving Road Surface Area Extraction via Semantic Segmentation with Conditional Generative Learning for Deep Inpainting Operations

    Road surface area extraction is generally carried out via semantic segmentation over remotely sensed imagery. However, this supervised learning task is costly, as it requires remote sensing images labelled at the pixel level, and the results are not always satisfactory (discontinuities, overlooked connection points, or isolated road segments). Unsupervised learning, on the other hand, does not require labelled data and can be employed to post-process the geometries of geospatial objects extracted via semantic segmentation. In this work, we implement a conditional generative adversarial network (GAN) to reconstruct road geometries via deep inpainting on a new dataset containing unlabelled road samples from challenging areas found in official Spanish cartography. The goal is to improve the initial road representations obtained with semantic segmentation models through generative learning. The model was evaluated on unseen data by means of a metric comparison, in which a maximum Intersection over Union (IoU) improvement of 1.3% was observed over the initial semantic segmentation result. We then assessed the appropriateness of applying unsupervised generative learning through a qualitative perceptual validation, identifying the strengths and weaknesses of the proposed method in very complex scenarios and gaining a better intuition of the model's behaviour when performing large-scale post-processing with generative learning and deep inpainting; important improvements were observed in the generated data.
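    For reference, the IoU comparison reported above can be computed per image along the following lines; this is a generic sketch, not the authors' evaluation code, and the mask names are placeholders.

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union between two binary road masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(intersection) / float(union) if union > 0 else 1.0

# Gain of the inpainted mask over the initial segmentation, against ground truth:
# iou_gain = iou(inpainted_mask, gt_mask) - iou(initial_mask, gt_mask)
```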

    Deep Learning Approaches Applied to Remote Sensing Datasets for Road Extraction: A State-Of-The-Art Review

    One of the most challenging research subjects in remote sensing is feature extraction, such as extracting road features from remote sensing images. Such extraction supports multiple applications, including map updating, traffic management, emergency tasks, road monitoring, and others. Therefore, this study presents a systematic review of deep learning techniques applied to common remote sensing benchmarks for road extraction. The review is organised around four main types of deep learning methods, namely generative adversarial network (GAN) models, deconvolutional networks, fully convolutional networks (FCNs), and patch-based CNN models. We also compare these deep learning models on remote sensing datasets to show which methods perform well in extracting road parts from high-resolution remote sensing images, and we describe future research directions and research gaps. Results indicate that the highest reported performance is achieved by deconvolutional networks applied to remote sensing images, while the F1 scores of the GAN model, the DenseNet method, and FCN-32 applied to UAV and Google Earth images are high: 96.08%, 95.72%, and 94.59%, respectively.
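    As a reminder of how the reported metric is derived, the F1 score cited in these comparisons is the harmonic mean of precision and recall; the numbers in the comment are illustrative only.

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall, as used in the comparison above."""
    return 2 * precision * recall / (precision + recall)

# e.g. f1_score(0.95, 0.93) -> ~0.94
```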

    Satellite and UAV Platforms, Remote Sensing for Geographic Information Systems

    This book contains ten articles illustrating the different possible uses of UAV and satellite remotely sensed data, integrated in geographic information systems (GIS), to model and predict changes in both the natural and the human environment. It illustrates the powerful tools provided by modern geostatistical methods, modelling, and visualization techniques. These methods are applied to Arctic, tropical, and mid-latitude environments; agriculture, forest, wetland, and aquatic environments; and further engineering-related problems. The Special Issue gives a balanced view of the present state of the field of geoinformatics.

    Deep learning for land cover and land use classification

    Recent advances in sensor technologies have led to a vast amount of very fine spatial resolution (VFSR) remotely sensed imagery being collected on a daily basis. These VFSR images present fine spatial detail that is spectrally and spatially complex, posing huge challenges for automatic land cover (LC) and land use (LU) classification. Deep learning has reignited the pursuit of artificial intelligence towards a general-purpose machine able to perform human-related tasks in an automated fashion, driven largely by its capacity to model high-level abstractions through hierarchical feature representations without human-designed features or rules; this demonstrates great potential for identifying and characterising LC and LU patterns from VFSR imagery. In this thesis, a set of novel deep learning methods is developed for LC and LU image classification based on deep convolutional neural networks (CNNs). Several difficulties, however, are encountered when applying a standard pixel-wise CNN to LC and LU classification using VFSR images, including geometric distortions, boundary uncertainties, and huge computational redundancy. For LC classification, these technical challenges were addressed either through rule-based decision fusion or through uncertainty modelling using rough set theory. For land use, an object-based CNN method was proposed, in which each segmented object (a group of homogeneous pixels) is sampled and predicted by a CNN using both within-object and between-object information; LU is thus classified with high accuracy and efficiency. LC and LU form a hierarchical ontology over the same geographical space, and these representations are modelled by their joint distribution, in which LC and LU are classified simultaneously through iteration. The developed deep learning techniques achieved by far the highest classification accuracy for both LC and LU, up to around 90%, about 5% higher than existing deep learning methods and 10% greater than traditional pixel-based and object-based approaches. This research makes a significant contribution to LC and LU classification through deep-learning-based innovations and has great potential utility in a wide range of geospatial applications.
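    A highly simplified sketch of the object-based CNN idea described above: segment the image into homogeneous objects, classify a patch sampled around each object's centroid with a CNN, and assign that label to every pixel of the object. The segmentation method (SLIC), patch size, and the `cnn` model passed in are illustrative assumptions, not the thesis implementation.

```python
import numpy as np
import torch
from skimage.segmentation import slic

def object_based_classify(image: np.ndarray, cnn: torch.nn.Module,
                          n_segments: int = 500, patch: int = 32) -> np.ndarray:
    """Assign a land-use label to each segmented object via patch-wise CNN prediction."""
    segments = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    label_map = np.zeros(segments.shape, dtype=np.int64)
    half = patch // 2
    # Reflect-pad so a full patch can be extracted near the image borders.
    padded = np.pad(image, ((half, half), (half, half), (0, 0)), mode="reflect")
    for obj_id in np.unique(segments):
        # Sample a patch centred on the object's centroid (offset by 'half' due to padding).
        rows, cols = np.nonzero(segments == obj_id)
        r, c = int(rows.mean()), int(cols.mean())
        window = padded[r:r + patch, c:c + patch]
        x = torch.from_numpy(np.ascontiguousarray(window)).permute(2, 0, 1).float().unsqueeze(0)
        with torch.no_grad():
            label = int(cnn(x).argmax(dim=1))
        label_map[segments == obj_id] = label
    return label_map
```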