
    Deep Learning Approaches Applied to Remote Sensing Datasets for Road Extraction: A State-Of-The-Art Review

    One of the most challenging research subjects in remote sensing is feature extraction, such as road features, from remote sensing images. Such extraction influences multiple applications, including map updating, traffic management, emergency tasks, road monitoring, and others. Therefore, a systematic review of deep learning techniques applied to common remote sensing benchmarks for road extraction is conducted in this study. The review covers four main types of deep learning methods, namely GAN models, deconvolutional networks, FCNs, and patch-based CNN models. We also compare these deep learning models as applied to remote sensing datasets to show which method performs best in extracting road parts from high-resolution remote sensing images, and we describe future research directions and research gaps. Results indicate that the highest reported performance is achieved by deconvolutional networks applied to remote sensing images, and the F1 scores of the generative adversarial network model, the DenseNet method, and FCN-32 applied to UAV and Google Earth images are high: 96.08%, 95.72%, and 94.59%, respectively.
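    The review above ranks models by their F1 scores on road masks. As a point of reference, the sketch below shows one way a pixel-wise F1 score could be computed for a binary road mask; the tiny arrays are purely illustrative and are not taken from the reviewed datasets.

import numpy as np

def f1_score(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Pixel-wise F1 for a binary road mask (1 = road, 0 = background)."""
    tp = np.logical_and(pred_mask == 1, true_mask == 1).sum()
    fp = np.logical_and(pred_mask == 1, true_mask == 0).sum()
    fn = np.logical_and(pred_mask == 0, true_mask == 1).sum()
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

# Hypothetical 4x4 masks, only to show the computation
pred = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 1], [0, 0, 1, 0]])
true = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]])
print(f"F1 = {f1_score(pred, true):.4f}")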

    Aplicações de modelos de deep learning para monitoramento ambiental e agrícola no Brasil

    Doctoral thesis, Universidade de Brasília, Instituto de Ciências Humanas, Departamento de Geografia, Programa de Pós-Graduação em Geografia, 2022. Algorithms from the new field of machine learning known as Deep Learning have gained popularity recently, showing results superior to traditional classification and regression methods. Their history of use in remote sensing is still short, yet they have shown similarly superior results in processes such as land use and land cover classification and change detection. The objective of this thesis was to develop methodologies using these algorithms, with a focus on monitoring critical targets in Brazil through satellite imagery, in order to find high-precision, high-accuracy models to replace methodologies currently in use. During its development, three articles were produced evaluating the use of these algorithms for the detection of three distinct targets: (a) burned areas in the Brazilian Cerrado, (b) deforested areas in the Amazon region, and (c) rice fields in the south of Brazil. Despite the similar objective across the articles, their methodologies were made sufficiently distinct in order to expand the known methodological space and to provide a theoretical basis that facilitates and encourages the adoption of these algorithms in the national context. The first article evaluated different sample dimensions for the classification of burned areas in Landsat-8 imagery. The second article evaluated the use of binary Landsat time series to detect new deforested areas between the years 2017, 2018 and 2019. The last article used a continuous Sentinel-1 radar (SAR) time series to delimit rice fields in Rio Grande do Sul. Similar models were used in all articles, but certain models were exclusive to each publication, producing different results. In general, the results show that Deep Learning algorithms are not only viable for detecting these targets but also outperform existing methods in the literature, representing a highly efficient alternative for classification and change detection of the evaluated targets.
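    As a rough illustration of the second article's idea of flagging new deforestation from binary forest/non-forest time series, the sketch below marks pixels that switch from forest to non-forest between consecutive years. The tiny arrays are hypothetical; real inputs would be classified Landsat scenes.

import numpy as np

# Hypothetical binary forest masks (1 = forest, 0 = non-forest) for three years
forest_2017 = np.array([[1, 1, 1], [1, 1, 0], [0, 0, 0]])
forest_2018 = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 0]])
forest_2019 = np.array([[1, 0, 0], [1, 0, 0], [0, 0, 0]])

def new_deforestation(prev_mask: np.ndarray, curr_mask: np.ndarray) -> np.ndarray:
    """Pixels that were forest in the previous year and are no longer forest."""
    return np.logical_and(prev_mask == 1, curr_mask == 0)

print("2017->2018:\n", new_deforestation(forest_2017, forest_2018).astype(int))
print("2018->2019:\n", new_deforestation(forest_2018, forest_2019).astype(int))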

    Interpretability of Deep Neural Networks for Image Segmentation

    Since the expansion of space exploration, especially in the commercial sector, there has been an enormous supply of satellite imagery. The amount of data supplied by various satellites raises the demand for interpretation of these data in order to obtain valuable information. One example of such data is the SpaceNet dataset. The aim of this work is to design and evaluate a deep neural network, based on state-of-the-art published architectures, as a solution to the SpaceNet Road Network Detection challenge. Due to the complex nature of the SpaceNet dataset, various methods of neural network interpretability are explored and implemented.
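    One simple family of interpretability methods used for work of this kind is gradient-based saliency, which attributes a prediction to input pixels. The sketch below is a generic, hedged PyTorch example, assuming a segmentation model that outputs per-class logits and treating class 1 as the road class; the stand-in convolution and random tile exist only to make the snippet self-contained and are not the thesis's model.

import torch

def gradient_saliency(model: torch.nn.Module, image: torch.Tensor) -> torch.Tensor:
    """Saliency map: gradient of the summed road-class score w.r.t. the input pixels."""
    image = image.clone().requires_grad_(True)        # (1, C, H, W)
    logits = model(image)                             # assumed shape (1, num_classes, H, W)
    road_score = logits[:, 1].sum()                   # assumption: class 1 = road
    road_score.backward()
    return image.grad.abs().max(dim=1)[0].squeeze(0)  # (H, W) per-pixel importance

# Illustrative usage with a stand-in model; a real run would use a trained
# SpaceNet segmentation network and a preprocessed satellite tile.
model = torch.nn.Conv2d(3, 2, kernel_size=3, padding=1)
tile = torch.rand(1, 3, 64, 64)
saliency = gradient_saliency(model, tile)
print(saliency.shape)  # torch.Size([64, 64])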

    Synthetic Aperture Radar (SAR) Meets Deep Learning

    This reprint focuses on the application of the combination of synthetic aperture radar and deep learning technology. It aims to further promote the development of SAR image intelligent interpretation technology. A synthetic aperture radar (SAR) is an important active microwave imaging sensor, whose all-day and all-weather working capacity gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in the remote sensing community, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, autonomous driving, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations with multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address these challenges and present their innovative and cutting-edge research results when applying deep learning to SAR in various manuscript types, e.g., articles, letters, reviews and technical reports.

    SEMANTIC IMAGE SEGMENTATION VIA A DENSE PARALLEL NETWORK

    Image segmentation has been an important area of study in computer vision. Image segmentation is a challenging task, since it involves pixel-wise annotation, i.e. labeling each pixel according to the class to which it belongs. In the image classification task, the goal is to predict to which class an entire image belongs. Thus, there is more focus on the abstract features extracted by Convolutional Neural Networks (CNNs), with less emphasis on the spatial information. In the image segmentation task, on the other hand, abstract information and spatial information are needed at the same time. One class of work in image segmentation focuses on "recovering" the high-resolution features from the low-resolution ones. This type of network has an encoder-decoder structure, and spatial information is recovered by feeding the decoder part of the model with earlier high-resolution features through skip connections. Overall, these strategies involving skip connections try to propagate features to deeper layers. The second class of work, on the other hand, focuses on "maintaining" high-resolution features throughout the process. In this thesis, we first review the related work on image segmentation and then introduce two new models, namely Unet-Laplacian and Dense Parallel Network (DensePN). The Unet-Laplacian is a series CNN model incorporating a Laplacian filter branch. This new branch performs a Laplacian filter operation on the input RGB image and feeds the output to the decoder. Experimental results show that the output of the Unet-Laplacian captures more of the ground truth mask and eliminates some of the false positives. We then describe the proposed DensePN, which was designed to find a good balance between extracting features through multiple layers and keeping spatial information. DensePN allows not only keeping high-resolution feature maps but also reusing features at deeper layers to solve the image segmentation problem. We designed the Dense Parallel Network based on three main observations gained from our initial trials and preliminary studies. First, maintaining a high-resolution feature map provides good performance. Second, feature reuse is very efficient and allows having deeper networks. Third, having a parallel structure can provide better information flow. Experimental results on the CamVid dataset show that the proposed DensePN (with 1.1M parameters) provides better performance than FCDense56 (with 1.5M parameters) while having fewer parameters.
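    The Laplacian filter branch described for Unet-Laplacian can be pictured as a fixed edge filter applied to the RGB input whose response is concatenated with decoder features. The PyTorch sketch below is an assumption-laden illustration (the kernel choice, depthwise application and exact fusion point are not specified in the abstract), not the authors' implementation.

import torch
import torch.nn.functional as F

# 3x3 Laplacian kernel applied per channel (depthwise) to the RGB input
LAPLACIAN = torch.tensor([[0., 1., 0.],
                          [1., -4., 1.],
                          [0., 1., 0.]])

def laplacian_branch(rgb: torch.Tensor) -> torch.Tensor:
    """Edge response of the input image, to be fused with decoder features."""
    kernel = LAPLACIAN.view(1, 1, 3, 3).repeat(3, 1, 1, 1)  # one kernel per RGB channel
    return F.conv2d(rgb, kernel, padding=1, groups=3)

# Illustrative usage: fuse the edge map with hypothetical decoder features
rgb = torch.rand(1, 3, 128, 128)
edges = laplacian_branch(rgb)                     # (1, 3, 128, 128)
decoder_features = torch.rand(1, 32, 128, 128)    # stand-in decoder output
fused = torch.cat([decoder_features, edges], dim=1)
print(fused.shape)                                # torch.Size([1, 35, 128, 128])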

    An efficient decision support system for flood inundation management using intermittent remote-sensing data

    Timely acquisition of spatial flood distribution is an essential basis for flood-disaster monitoring and management. Remote-sensing data have been widely used in water-body surveys. However, due to cloudy weather and complex geomorphic environments, the inability to receive remote-sensing images throughout the day results in missing data and prevents a dynamic, continuous record of the flood inundation process. To use remote-sensing data fully and effectively, we developed a new decision support system for integrated flood inundation management based on limited and intermittent remote-sensing data. Firstly, we established a new multi-scale water-extraction convolutional neural network named DEU-Net to extract water from remote-sensing images automatically. A region-specific dataset training method was created for typical region types to separate the water body from confusing surface features more accurately. Secondly, we built a waterfront contour active tracking model to implicitly describe the flood movement interface. In this way, the flooding process was converted into the numerical solution of a partial differential equation for the boundary function, solved with an upwind difference scheme in space and an explicit Euler scheme in time. Finally, we established seven indicators that consider regional characteristics and flood-inundation attributes to evaluate flood-disaster losses. A cloud model using the entropy weight method was introduced to account for uncertainties in the various parameters. In the end, a decision support system providing visualization of flood loss risk was developed using the ArcGIS application programming interface (API). To verify the effectiveness of the model constructed in this paper, we conducted comparative numerical experiments on the model's performance at both laboratory and real-world scales. The results were as follows: (1) The DEU-Net method was better able to accurately extract various water bodies, such as urban water bodies, open-air ponds, plateau lakes, etc., than the other comparison methods. (2) The simulation results of the active tracking model showed good temporal and spatial consistency with the image extraction results and actual statistical data when compared with the synthetic observation data. (3) The application results showed that the system has high computational efficiency and noticeable visualization effects. The research results may provide a scientific basis for emergency-response decision-making in flood disasters, especially in data-sparse regions.
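    The contour-tracking step above couples an upwind difference in space with an explicit Euler step in time. The fragment below shows those two named schemes on a generic one-dimensional advection equation phi_t + u * phi_x = 0; the actual boundary function, velocity field, and two-dimensional formulation used in the paper are not reproduced here.

import numpy as np

# First-order upwind in space, explicit Euler in time, for phi_t + u * phi_x = 0
nx, dx, dt, u = 100, 1.0, 0.4, 1.0            # grid size, spacing, time step, speed
phi = np.where(np.arange(nx) < 20, 1.0, 0.0)  # initial "flooded" indicator

def step(phi: np.ndarray) -> np.ndarray:
    new = phi.copy()
    if u > 0:   # upwind difference uses the neighbour the flow comes from
        new[1:] = phi[1:] - u * dt / dx * (phi[1:] - phi[:-1])
    else:
        new[:-1] = phi[:-1] - u * dt / dx * (phi[1:] - phi[:-1])
    return new

for _ in range(50):                            # CFL number u*dt/dx = 0.4, stable
    phi = step(phi)
print("front position ~", np.argmax(phi < 0.5))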

    Road condition assessment from aerial imagery using deep learning

    Terrestrial sensors are commonly used to inspect and document the condition of roads at regular intervals and according to defined rules. In Germany, for example, extensive data and information are obtained, stored in the Federal Road Information System, and made available in particular for deriving necessary decisions. Transverse and longitudinal evenness, for example, are recorded by vehicles using laser techniques. To detect damage to the road surface, images are captured and recorded using area or line scan cameras. All these methods provide very accurate information about the condition of the road, but are time-consuming and costly. Aerial imagery (e.g. multi- or hyperspectral, SAR) provides an additional possibility for acquiring the specific parameters describing the condition of roads, yet a direct transfer from objects extractable from aerial imagery to the required objects or parameters that determine the condition of the road is difficult and in some cases impossible. In this work, we investigate the transferability of objects commonly used for terrestrial-based assessment of road surfaces to an aerial image-based assessment. In addition, we generated a suitable dataset and developed a deep learning based image segmentation method capable of extracting two relevant road condition parameters from high-resolution multispectral aerial imagery, namely cracks and working seams. The obtained results show that our models are able to extract these thin features from aerial images, indicating the possibility of using more automated approaches for road surface condition assessment in the future.

    Pipeline Detection Using Uncertainty-Driven Machine Learning

    The pipeline detection model developed in (Dasenbrock et al., 2021) has proven capable of generalizing well from training data originating in Great Britain to Northern Germany. The thesis at hand used a similar model but applied it to a region that differs more strongly from the training data and is more heterogeneous, in order to test its generalizability: Spain. Insufficient IoU scores showed that the model is not able to satisfactorily detect pipeline pathways in Spain. When applying the model to new regions, it will therefore be of great importance that the model is also specifically trained on those regions. As the model is continually applied to new regions and more training data are consequently added, the need for new training data will diminish with time, because the knowledge of the model will become broader and the differences between new regions and the regions already shown to the model will likely decrease. To speed up this process and to train in a more sample-efficient manner, the potential of an active learning approach was investigated.
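    The assessment above hinges on the IoU between predicted and reference pipeline masks. A minimal sketch of that metric for binary masks is given below; the small arrays are hypothetical and only show the computation, not actual model output.

import numpy as np

def iou(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Intersection over Union for binary masks (1 = pipeline pathway)."""
    intersection = np.logical_and(pred_mask == 1, true_mask == 1).sum()
    union = np.logical_or(pred_mask == 1, true_mask == 1).sum()
    return intersection / union if union else 0.0

# Hypothetical masks purely to show the computation
pred = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
true = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 0]])
print(f"IoU = {iou(pred, true):.2f}")  # 2 overlapping pixels / 4 in the union = 0.50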

    Understanding cities with machine eyes: A review of deep computer vision in urban analytics

    Modelling urban systems has interested planners and modellers for decades. Different models have been built relying on mathematics, cellular automata, complexity, and scaling. While most of these models tend to be a simplification of reality, today, within the paradigm shifts of artificial intelligence across the different fields of science, the applications of computer vision show promising potential in understanding the realistic dynamics of cities. While cities are complex by nature, computer vision shows progress in tackling a variety of complex physical and non-physical visual tasks. In this article, we review the tasks and algorithms of computer vision and their applications in understanding cities. We subdivide computer vision algorithms into tasks, and cities into layers, to show evidence of where computer vision is intensively applied and where further research is needed. We focus on highlighting the potential role of computer vision in understanding urban systems related to the built environment, natural environment, human interaction, transportation, and infrastructure. After showing the diversity of computer vision algorithms and applications, we discuss the challenges that remain in understanding the integration between these different layers of cities and their interactions with one another using deep learning and computer vision. We also provide recommendations for practice and policy-making on the path towards AI-generated urban policies.

    Satellite and UAV Platforms, Remote Sensing for Geographic Information Systems

    The present book contains ten articles illustrating the different possible uses of integrating UAV and satellite remotely sensed data in Geographical Information Systems to model and predict changes in both the natural and the human environment. It illustrates the powerful instruments provided by modern geo-statistical methods, modeling, and visualization techniques. These methods are applied to Arctic, tropical and mid-latitude environments, agriculture, forest, wetlands, and aquatic environments, as well as further engineering-related problems. The present Special Issue gives a balanced view of the current state of the field of geoinformatics.