
    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, natural language processing, etc. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, inevitably RS draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing the DL. Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing.
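    Of the challenges listed above, (i), (v) and (vi) are most often addressed in practice by reusing pretrained networks. The snippet below is a minimal, hypothetical sketch of that transfer-learning pattern, fine-tuning an ImageNet-pretrained ResNet-18 on a small labelled remote-sensing scene dataset; the backbone choice, class count and hyperparameters are illustrative assumptions (torchvision >= 0.13 weights API), not taken from the survey.

```python
# Hypothetical sketch: fine-tuning an ImageNet-pretrained CNN on a small
# remote-sensing scene dataset (survey challenges (i), (v) and (vi)).
import torch
import torch.nn as nn
from torchvision import models

def build_finetune_model(num_classes: int) -> nn.Module:
    # Load a ResNet-18 backbone pretrained on ImageNet (torchvision >= 0.13 API).
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    # Freeze the convolutional backbone so only the new head is trained,
    # which helps when the labelled RS dataset is small.
    for param in model.parameters():
        param.requires_grad = False
    # Replace the ImageNet classifier with a head sized for the RS classes.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

model = build_finetune_model(num_classes=10)   # class count is a placeholder
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 3-band image chips.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```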

    Deep learning in remote sensing: a review

    Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, as a major breakthrough in the field, deep learning has proven to be an extremely powerful tool in many fields. Shall we embrace deep learning as the key to all? Or should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate that remote sensing scientists bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges, such as climate change and urbanization. Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine.

    Remote Sensing for Land Use / Land Cover Mapping in Almada

    Monitoring land use and land cover is an extremely important task which, if properly carried out, can assist in decision making about urban and territorial planning, thus providing an improvement in the citizens' quality of life. In Portugal, and more specifically in the Almada municipality, the main tool used in this task is the Carta de Uso e Ocupação do Solo (COS), a map which represents 83 classes of land use and land cover. Despite its usefulness, COS has certain limitations, such as low spatial resolution, due to the minimum mapping unit of 1 hectare, and low temporal resolution, as it is developed through the analysis of orthophotos and released every 3 to 5 years. These constraints lead to a map which is not adequate to continuously track land-use and land-cover changes, especially with the increasingly fast pace of urbanization. This research work investigated the application of machine learning classification algorithms with Sentinel-1 and Sentinel-2 imagery, and derived products, to LULC mapping in Almada. As such, maps were developed for 2018 using the two most common approaches to LULC classification: pixel-based (PBIA) and object-based (OBIA). Multiple combinations of satellite data and derived products, as well as two classifiers, were tested for each approach. A comparison of two methods of collecting ground truth data, manual and semi-automatic, was also produced. The best results were obtained in the PBIA approach, using the manually collected ground truth and the Extreme Gradient Boosting (XGBoost) classifier with the combination of Sentinel-1 and Sentinel-2 imagery and textural features obtained through Sentinel-2 data. The classification model obtained a kappa score of 0.994 and produced an accurate LULC map, which has some limitations in separating Agriculture and Other Vegetation, but is able to identify with great precision Artificial Territories, Forests, and Bare and sparsely vegetated areas.
    The monitoring of land use and land cover (LULC) is an extremely important task which, when properly carried out, can support territorial planning decisions, thus providing an improvement in the citizens' quality of life. In Portugal, and more specifically in the municipality of Almada, the main tool used in this task is the Carta de Uso e Ocupação do Solo (COS), a map that divides the land into 83 classes. Although remarkably useful, COS has certain limitations, among them low spatial resolution, due to the minimum mapping unit of 1 hectare, and low temporal resolution, as it is produced through the analysis of orthophotos and released every 3 to 5 years. These limitations mean that this map is not adequate for continuously monitoring changes in land use and land cover, especially with the increasingly fast pace of urban growth. This research work studied the application of machine learning classification algorithms with Sentinel-1 and Sentinel-2 imagery and derived products for LULC mapping in Almada. Maps were thus developed for the year 2018 using two methodologies frequently applied to LULC classification problems: pixel-based (PBIA) and object-based (OBIA). For each approach, several combinations of satellite imagery and derived products were tested, as well as two automatic classifiers. A comparison between two types of ground truth, collected manually and semi-automatically, was also produced. The best results were obtained with the pixel-based approach, using the manual ground truth and the Extreme Gradient Boosting (XGBoost) classifier with the combination of Sentinel-1 imagery, Sentinel-2 imagery, and texture attributes computed from Sentinel-2 imagery. This classification model obtained a kappa coefficient of 0.994 and produced a LULC map with good precision which, although it has some limitations in separating the classes 2. Agriculture and 3. Other vegetation, accurately identifies the classes Artificial Territories, Forests, and Bare and sparsely vegetated areas.
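    As a rough illustration of the pixel-based workflow described above (stacking Sentinel-1 backscatter, Sentinel-2 bands and texture features, training XGBoost on labelled pixels, and scoring with Cohen's kappa), here is a hedged sketch using synthetic arrays in place of the real imagery; the band counts, class labels and hyperparameters are placeholders, not the thesis' actual configuration.

```python
# Illustrative PBIA sketch: stack Sentinel-1, Sentinel-2 and texture bands,
# train XGBoost on labelled pixels, report Cohen's kappa. Synthetic data only.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Dummy stand-ins for a (rows, cols, bands) feature stack and a label raster.
rows, cols = 200, 200
s1 = rng.normal(size=(rows, cols, 2))            # VV, VH backscatter
s2 = rng.normal(size=(rows, cols, 10))           # Sentinel-2 reflectance bands
texture = rng.normal(size=(rows, cols, 4))       # GLCM-style texture features
labels = rng.integers(0, 5, size=(rows, cols))   # 5 placeholder LULC classes

# Flatten to a pixel-by-feature matrix for pixel-based classification.
X = np.concatenate([s1, s2, texture], axis=-1).reshape(-1, 16)
y = labels.reshape(-1)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
clf.fit(X_train, y_train)

print("kappa:", cohen_kappa_score(y_test, clf.predict(X_test)))
```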

    Green Infrastructure Mapping in Urban Areas Using Sentinel-1 Imagery

    High temporal resolution of synthetic aperture radar (SAR) imagery (e.g., Sentinel-1 (S1) imagery) creates new possibilities for monitoring green vegetation in urban areas and generating land-cover classification (LCC) maps. This research evaluates how different pre-processing steps of SAR imagery affect classification accuracy. Three machine learning (ML) methods were applied in three different study areas: random forest (RF), support vector machine (SVM), and extreme gradient boosting (XGB). Since the presence of speckle noise in radar imagery is inevitable, different adaptive filters were examined. Using the backscattering values of the S1 imagery, the SVM classifier achieved a mean overall accuracy (OA) of 63.14% and a Kappa coefficient (Kappa) of 0.50. Using the SVM classifier with a Lee filter with a window size of 5×5 (Lee5) for speckle reduction, mean values of 73.86% and 0.64 for OA and Kappa were achieved, respectively. An additional increase in LCC accuracy was obtained with texture features calculated from a grey-level co-occurrence matrix (GLCM). The highest classification accuracy for the extracted GLCM texture features was obtained using the SVM classifier and the Lee5 filter, with mean OA and Kappa values of 78.32% and 0.69, respectively. This study improved LCC through an evaluation of various radiometric and texture features and confirmed the applicability of the SVM classifier. For supervised classification, the SVM method outperformed the RF and XGB methods, although the SVM required the highest computational time, whereas XGB performed the fastest. These results suggest suitable pre-processing steps for SAR imagery for green infrastructure mapping in urban areas. Future research should address the use of multitemporal SAR data along with the pre-processing steps and ML algorithms described in this research.
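    A condensed sketch of that pre-processing chain is given below: a classic Lee filter implemented from local statistics, GLCM contrast and homogeneity as texture features (assuming scikit-image's graycomatrix/graycoprops, the post-0.19 spelling), and an RBF SVM scored with Cohen's kappa. The data, window sizes and class count are synthetic placeholders, not the study's settings.

```python
# Sketch: Lee speckle filtering, GLCM texture extraction, SVM classification.
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score

def lee_filter(img: np.ndarray, size: int = 5) -> np.ndarray:
    """Classic Lee filter: adaptive weighting of local mean and observation."""
    local_mean = uniform_filter(img, size)
    local_sqr_mean = uniform_filter(img ** 2, size)
    local_var = np.maximum(local_sqr_mean - local_mean ** 2, 0.0)
    noise_var = np.var(img)
    weights = local_var / (local_var + noise_var + 1e-12)
    return local_mean + weights * (img - local_mean)

def glcm_texture(patch: np.ndarray, levels: int = 32) -> np.ndarray:
    """Contrast and homogeneity from a grey-level co-occurrence matrix."""
    quantised = np.digitize(patch, np.linspace(patch.min(), patch.max(), levels)) - 1
    glcm = graycomatrix(quantised.astype(np.uint8), distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, "contrast")[0, 0],
                     graycoprops(glcm, "homogeneity")[0, 0]])

rng = np.random.default_rng(0)
vv = lee_filter(rng.normal(size=(64, 64)), size=5)   # filtered VV band
vh = lee_filter(rng.normal(size=(64, 64)), size=5)   # filtered VH band
labels = rng.integers(0, 3, size=(64, 64))           # 3 placeholder classes

# Per-pixel features: filtered backscatter plus texture of the local 9x9 patch.
feats, y = [], []
for r in range(4, 60):
    for c in range(4, 60):
        patch = vv[r - 4:r + 5, c - 4:c + 5]
        feats.append(np.concatenate([[vv[r, c], vh[r, c]], glcm_texture(patch)]))
        y.append(labels[r, c])

X_tr, X_te, y_tr, y_te = train_test_split(
    np.array(feats), np.array(y), test_size=0.3, random_state=0)
svm = SVC(kernel="rbf", C=10).fit(X_tr, y_tr)
print("kappa:", cohen_kappa_score(y_te, svm.predict(X_te)))
```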

    Learning Aerial Image Segmentation from Online Maps

    This study deals with semantic segmentation of high-resolution (aerial) images, where a semantic class label is assigned to each pixel via supervised classification as a basis for automatic map generation. Recently, deep convolutional neural networks (CNNs) have shown impressive performance and have quickly become the de-facto standard for semantic segmentation, with the added benefit that task-specific feature design is no longer necessary. However, a major downside of deep learning methods is that they are extremely data-hungry, thus aggravating the perennial bottleneck of supervised classification: obtaining enough annotated training data. On the other hand, it has been observed that they are rather robust against noise in the training labels. This opens up the intriguing possibility of avoiding the annotation of huge amounts of training data and instead training the classifier from existing legacy data or crowd-sourced maps, which can exhibit high levels of noise. The question addressed in this paper is: can training with large-scale, publicly available labels replace a substantial part of the manual labeling effort and still achieve sufficient performance? Such data will inevitably contain a significant portion of errors, but in return virtually unlimited quantities of it are available in larger parts of the world. We adapt a state-of-the-art CNN architecture for semantic segmentation of buildings and roads in aerial images, and compare its performance when using different training data sets, ranging from manually labeled, pixel-accurate ground truth of the same city to automatic training data derived from OpenStreetMap data from distant locations. Our results indicate that satisfying performance can be obtained with significantly less manual annotation effort by exploiting noisy large-scale training data. Comment: Published in IEEE Transactions on Geoscience and Remote Sensing.
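    To make the setup concrete, the sketch below shows the general pattern in miniature: a small fully convolutional network producing per-pixel logits for {background, building, road}, trained with cross-entropy on label rasters that would, in the paper's setting, be rasterised from OpenStreetMap. The tiny architecture and dummy tensors are stand-ins, not the authors' network or data.

```python
# Hypothetical miniature of per-pixel segmentation training on noisy OSM labels.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # downsample
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),    # upsample
            nn.Conv2d(32, num_classes, 1),                         # per-pixel logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyFCN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()  # tolerates a moderate amount of label noise

# Dummy batch: aerial RGB tiles and noisy OSM-derived label rasters.
tiles = torch.randn(4, 3, 128, 128)
osm_labels = torch.randint(0, 3, (4, 128, 128))

optimizer.zero_grad()
loss = loss_fn(model(tiles), osm_labels)
loss.backward()
optimizer.step()
print("training loss:", float(loss))
```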

    A Proof-of-Concept of Integrating Machine Learning, Remote Sensing, and Survey Data in Evaluations: The Measurement of Disaster Resilience in the Philippines

    Disaster resilience is a topic of increasing importance for policy makers in the context of climate change. However, measuring disaster resilience remains a challenge as it requires information on both the physical environment and socio-economic dimensions. In this study we developed and tested a method to use remote sensing (RS) data to construct proxy indicators of socio-economic change. We employed machine-learning algorithms to generate land-cover and land-use classifications from very high-resolution satellite imagery to appraise disaster damage and recovery processes in the Philippines following the devastation of typhoon Haiyan in November 2013. We constructed RS-based proxy indicators for N=20 barangays (villages) in the region surrounding Tacloban City in the central east of the Philippines. We then combined the RS-based proxy indicators with detailed socio-economic information collected during a rigorous impact evaluation by DEval in 2016. Results from a statistical analysis demonstrated that the fastest post-disaster recovery occurred in urban barangays that received sufficient government support (subsidies) and had no prior disaster experience. In general, socio-demographic factors had stronger effects on the early recovery phase (0-2 years) compared to the late recovery phase (2-3 years). German development support was related to recovery performance only to some extent. Rather than providing an in-depth statistical analysis, this study is intended as a proof-of-concept. We have been able to demonstrate that high-resolution RS data and machine-learning techniques can be used within a mixed-methods design as an effective tool to evaluate disaster impacts and recovery processes. While RS data have distinct limitations (e.g., cost, labour intensity), they offer unique opportunities to objectively measure physical, and by extension socio-economic, changes over large areas and long time-scales.
    Increasing weather extremes and natural disasters are consequences of climate change. Because of these growing risks, the resilience of the population in the event of a disaster is moving to the fore as a central topic and is of increasing importance for political decision-makers. Nevertheless, measuring the multidimensional concept of disaster resilience remains a challenge, since it requires information on both the physical environment and socio-economic factors. This study develops a method for deriving indicators from remote sensing (RS) data that approximate, and thereby make measurable, aspects of socio-economic change (proxy indicators). Machine-learning algorithms were employed for this purpose: land-cover and land-use classifications were constructed from high-resolution satellite imagery in order to measure disaster damage and reconstruction processes in the Philippines following the devastation caused by typhoon Haiyan in November 2013. From the RS data, the indicators were computed for N=20 barangays (villages) in the region around Tacloban City in the central east of the Philippines. These RS-based indicators were combined with detailed socio-economic information collected for a DEval evaluation in 2016.
    The results of the statistical analysis show that the fastest post-disaster reconstruction was observed in urban barangays that received sufficient government support (subsidies) and had no prior disaster experience. In general, socio-demographic factors had stronger effects on the early reconstruction phase (0-2 years) than on the later phase (2-3 years). Only a limited relationship between German development cooperation and reconstruction success could be established. This study is intended as a proof of feasibility rather than a detailed statistical analysis. It demonstrates that high-resolution RS data and machine-learning techniques can be used within an integrated methods design as an effective tool for assessing disaster impacts and recovery processes. Despite specific limitations (high cost, labour intensity, etc.), RS data offer unique opportunities to objectively measure both environmental conditions and socio-economic changes over large areas and long time periods.
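    A schematic illustration of the mixed-methods linkage described above is sketched below: per-village proxy indicators are derived from classified land-cover rasters and then joined onto survey records for statistical analysis. The village names, class codes and survey fields are invented for the example, not the study's data.

```python
# Hypothetical sketch: RS-based proxy indicators merged with survey data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
BUILT_UP = 1  # placeholder class code for built-up land in the classification

def builtup_fraction(class_raster: np.ndarray) -> float:
    """Share of pixels classified as built-up within a village boundary."""
    return float(np.mean(class_raster == BUILT_UP))

records = []
for village in ["Barangay A", "Barangay B", "Barangay C"]:
    pre = rng.integers(0, 4, size=(100, 100))    # pre-disaster classification
    post = rng.integers(0, 4, size=(100, 100))   # post-disaster classification
    records.append({
        "barangay": village,
        "recovery_proxy": builtup_fraction(post) - builtup_fraction(pre),
    })
proxies = pd.DataFrame(records)

# Survey data collected on the ground (placeholder values).
survey = pd.DataFrame({
    "barangay": ["Barangay A", "Barangay B", "Barangay C"],
    "received_subsidy": [1, 0, 1],
    "prior_disaster_experience": [0, 1, 0],
})

merged = proxies.merge(survey, on="barangay")
print(merged)
print("correlation with subsidy:",
      merged["recovery_proxy"].corr(merged["received_subsidy"]))
```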

    Use of multiple LIDAR-derived digital terrain indices and machine learning for high-resolution national-scale soil moisture mapping of the Swedish forest landscape

    Spatially extensive high-resolution soil moisture mapping is valuable in practical forestry and land management, but challenging. Here we present a novel technique involving use of LIDAR-derived terrain indices and machine learning (ML) algorithms capable of accurately modeling soil moisture at 2 m spatial resolution across the entire Swedish forest landscape. We used field data from about 20,000 sites across Sweden to train and evaluate multiple ML models. The predictor features (variables) included a suite of terrain indices generated from a national LIDAR digital elevation model and ancillary environmental features, including surficial geology, climate and land use, enabling adjustment of soil moisture class maps to regional or local conditions. Extreme gradient boosting (XGBoost) provided better performance for a 2-class model than the other tested ML methods (artificial neural network, random forest, support vector machine, and naive Bayes classification), with Cohen's Kappa and Matthews Correlation Coefficient (MCC) values of 0.69 and 0.68, respectively. The depth to water index, topographic wetness index, and 'wetland' categorization derived from Swedish property maps were the most important predictors for all models. The presented technique enabled generation of a 3-class model with Cohen's Kappa and MCC values of 0.58. In addition to the classified moisture maps, we investigated the technique's potential for producing continuous soil moisture maps. We argue that the probability of a pixel being classified as wet from a 2-class model can be used as a 0-100% index (dry to wet) of soil moisture, and the resulting maps could provide more valuable information for practical forest management than classified maps.
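    The modelling step lends itself to a short sketch: an XGBoost classifier on terrain-index features, evaluated with Cohen's kappa and MCC, with the predicted wet-class probability rescaled to the 0-100% moisture index proposed above. The synthetic features and labels below are placeholders for the LIDAR-derived indices and field observations, not the study's data.

```python
# Sketch: 2-class soil moisture model on terrain indices, plus a continuous
# moisture index from the wet-class probability. Synthetic placeholders only.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score, matthews_corrcoef

rng = np.random.default_rng(0)
n = 20_000  # roughly the number of field sites mentioned above

# Placeholder predictors: depth-to-water, topographic wetness index,
# and a binary 'wetland' flag from property maps.
X = np.column_stack([
    rng.gamma(2.0, 2.0, n),        # depth-to-water index
    rng.normal(8.0, 2.0, n),       # topographic wetness index
    rng.integers(0, 2, n),         # mapped wetland (0/1)
])
y = (X[:, 1] + rng.normal(0, 1, n) > 9).astype(int)  # synthetic wet/dry label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
clf.fit(X_tr, y_tr)

pred = clf.predict(X_te)
print("kappa:", cohen_kappa_score(y_te, pred))
print("MCC:  ", matthews_corrcoef(y_te, pred))

# Probability of the 'wet' class, rescaled to a 0-100% soil moisture index.
moisture_index = 100 * clf.predict_proba(X_te)[:, 1]
```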