5,178 research outputs found

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    Full text link
    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, including computer vision (CV), speech recognition, and natural language processing. While remote sensing (RS) poses a number of unique challenges, primarily related to sensors and applications, it inevitably draws on many of the same theories as CV, e.g., statistics, fusion, and machine learning. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL models. (Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing.)

    Advances in Radar Remote Sensing of Agricultural Crops: A Review

    Get PDF
    A review article offers enormous advantages in an emerging technology field such as radar remote sensing applications in agriculture. This paper reports select recent advancements in the field of Synthetic Aperture Radar (SAR) remote sensing of crops. To make the paper comprehensive and more meaningful for readers, it also discusses the various SAR technologies used for remote sensing of agricultural crops, viz. basic SAR, SAR interferometry (InSAR), SAR polarimetry (PolSAR) and polarimetric SAR interferometry (PolInSAR). The paper covers the methodologies used for various agricultural applications, namely empirically based models, machine-learning-based models and radiative-transfer-theory-based models. A thorough literature review of more than 100 research papers indicates that SAR polarimetry can be used effectively for crop inventory and for estimating biophysical parameters such as leaf area index, plant water content and biomass, but shows less sensitivity to plant height than SAR interferometry. Polarimetric SAR interferometry is preferable for taking advantage of both SAR polarimetry and SAR interferometry. Numerous studies based on multi-parametric SAR indicate that optimum selection of SAR sensor parameters enhances overall SAR sensitivity for various agricultural applications. Researchers widely use three types of models: empirical, machine learning and radiative transfer theory based models. Machine-learning-based models are identified as a better approach for crop monitoring using radar remote sensing data. It is expected that this review will not only generate interest amongst readers to explore and exploit radar remote sensing for various agricultural applications, but also provide a ready reference for researchers working in this field.
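
    As an illustration of the machine-learning-based modelling approach highlighted in this review, the sketch below fits a random forest regressor to estimate leaf area index (LAI) from dual-polarization backscatter. The data are synthetic and the feature set (VV, VH and the cross-polarization ratio in dB) is an assumption for illustration, not a configuration prescribed by the review.

```python
# Minimal sketch of a machine-learning retrieval model for a crop biophysical
# parameter (here LAI) from SAR backscatter. Synthetic data only; the chosen
# features (VV, VH, VH-VV ratio in dB) are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 500
lai = rng.uniform(0.0, 6.0, n)                      # "true" leaf area index
vv = -12.0 + 1.2 * lai + rng.normal(0, 0.8, n)      # VV backscatter (dB), toy relationship
vh = -20.0 + 2.0 * lai + rng.normal(0, 0.8, n)      # VH backscatter (dB), toy relationship
X = np.column_stack([vv, vh, vh - vv])              # features: VV, VH, cross-pol ratio (dB)

X_tr, X_te, y_tr, y_te = train_test_split(X, lai, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out samples:", round(r2_score(y_te, model.predict(X_te)), 3))
```

    In practice, the features would come from calibrated, terrain-corrected SAR products and the targets from field-measured LAI rather than the toy relationship used here.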

    Post-failure evolution analysis of a rainfall-triggered landslide by multi-temporal interferometry SAR approaches integrated with geotechnical analysis

    Get PDF
    Persistent Scatterer Interferometry (PSI) is one of the most powerful techniques for monitoring Earth's surface deformation processes, especially long-term evolution phenomena. In this work, a dataset of 34 TerraSAR-X StripMap images (October 2013–October 2014) has been processed with two PSI techniques - Coherent Pixel Technique-Temporal Sublook Coherence (CPT-TSC) and Small Baseline Subset (SBAS) - in order to study the evolution of a slow-moving landslide that occurred on February 23, 2012 in the Papanice hamlet (Crotone municipality, southern Italy) and was induced by a significant rainfall event (185 mm in three days). The mass movement caused structural damage (building collapses) and destroyed utility lines (gas, water and electricity) and roads. The results show displacement rates (30–40 mm/yr along the satellite Line of Sight, LOS) analogous to those of the pre-failure phase (2008–2010) analyzed in previous works. Both approaches allowed detection of the landslide-affected area; however, the higher density of targets identified by CPT-TSC enabled a detailed analysis of the slope behavior in order to design possible mitigation interventions. To this end, a slope stability analysis has been carried out, comparing groundwater oscillations with the displacement time series. Hence, the crucial role of the interaction between rainfall and groundwater level in triggering the landslide has been inferred. In conclusion, we show that the integration of geotechnical and remote sensing approaches is a best practice to support stakeholders in designing remedial works.
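
    A minimal sketch of the kind of comparison described above: correlating a PSI-derived LOS displacement time series with groundwater-level observations at a range of time lags. The series below are synthetic placeholders, and the lagged Pearson correlation is only one of several ways such an analysis could be set up; it is not presented as the paper's actual workflow.

```python
# Toy comparison between a PSI LOS displacement time series and groundwater
# levels, using lagged Pearson correlation. All values are synthetic.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 365, 11)                                  # acquisition days (11-day repeat, assumed)
groundwater = 2.0 + 1.5 * np.sin(2 * np.pi * t / 365.0)    # groundwater level proxy (m)
# Displacement responds to groundwater with a delay plus a slow trend (mm, LOS).
lag_days = 33
displacement = -0.09 * t - 3.0 * np.interp(t - lag_days, t, groundwater) + rng.normal(0, 0.5, t.size)

def lagged_corr(x, y, shift):
    """Pearson correlation of x against y shifted earlier by `shift` samples."""
    if shift > 0:
        x, y = x[shift:], y[:-shift]
    return np.corrcoef(x, y)[0, 1]

best = max(range(0, 10), key=lambda s: abs(lagged_corr(displacement, groundwater, s)))
print(f"strongest (anti)correlation at a lag of {best} acquisitions "
      f"(r = {lagged_corr(displacement, groundwater, best):.2f})")
```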

    Fine-Grained Object Recognition and Zero-Shot Learning in Remote Sensing Imagery

    Full text link
    Fine-grained object recognition, which aims to identify the type of an object among a large number of subcategories, is an emerging application as increasing image resolution exposes new details in image data. Traditional fully supervised algorithms fail to handle this problem, where there is low between-class variance and high within-class variance for the classes of interest, and sample sizes are small. We study an even more extreme scenario, zero-shot learning (ZSL), in which no training example exists for some of the classes. ZSL aims to build a recognition model for new unseen categories by relating them to seen classes that were previously learned. We establish this relation by learning a compatibility function between image features extracted via a convolutional neural network and auxiliary information that describes the semantics of the classes of interest, using training samples from the seen classes. Then, we show how knowledge transfer can be performed for the unseen classes by maximizing this function during inference. We introduce a new data set that contains 40 different types of street trees in 1-ft spatial resolution aerial data, and evaluate the performance of this model with manually annotated attributes, a natural language model, and a scientific taxonomy as auxiliary information. The experiments show that the proposed model achieves 14.3% recognition accuracy for the classes with no training examples, which is significantly better than a random-guess accuracy of 6.3% for 16 test classes, and outperforms three other ZSL algorithms. (Comment: G. Sumbul, R. G. Cinbis, S. Aksoy, "Fine-Grained Object Recognition and Zero-Shot Learning in Remote Sensing Imagery", IEEE Transactions on Geoscience and Remote Sensing (TGRS), in press, 2018.)
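
    The compatibility-function idea can be sketched compactly. The snippet below assumes a bilinear form F(x, c) = xᵀ W a_c between an image feature x and a class attribute vector a_c (the paper's exact formulation and dimensions may differ) and shows only the inference step: knowledge transfer to unseen classes by taking the argmax of the compatibility scores.

```python
# Sketch of zero-shot inference with a learned bilinear compatibility function.
# W is assumed to have been learned on seen classes; features, attributes and
# dimensions below are illustrative placeholders, not the paper's setup.
import numpy as np

rng = np.random.default_rng(2)
d_feat, d_attr = 512, 25                              # CNN feature dim, attribute dim (assumed)
n_unseen = 16                                         # number of unseen (test) classes

W = rng.normal(size=(d_feat, d_attr))                 # learned compatibility matrix (placeholder)
unseen_attrs = rng.normal(size=(n_unseen, d_attr))    # auxiliary information per unseen class
x = rng.normal(size=d_feat)                           # CNN feature of a test image

scores = x @ W @ unseen_attrs.T                       # F(x, c) for every unseen class c
predicted_class = int(np.argmax(scores))              # pick the best-matching unseen class
print("predicted unseen class index:", predicted_class)
```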

    Radar satellite imagery for humanitarian response. Bridging the gap between technology and application

    Get PDF
    This work deals with radar satellite imagery and its potential to assist humanitarian operations. As the number of displaced people increases annually, both hosting countries and relief organizations face new challenges, often related to unclear situations and a lack of information on the number and location of people in need, as well as on their environments. Numerous studies have demonstrated that methods of earth observation can deliver this important information for the management of crises, the organization of refugee camps, and the mapping of environmental resources and natural hazards. However, most of these studies make use of high-resolution optical imagery, while the role of radar satellites is widely neglected. At the same time, radar sensors have characteristics that make them highly suitable for humanitarian response, first and foremost their ability to capture images through cloud cover and at night. Consequently, they potentially allow a quicker response in emergencies than optical imagery. This work demonstrates the currently unused potential of radar imagery for the assistance of humanitarian operations through case studies that cover the information needs of specific emergency situations. They are thematically grouped into topics related to population, natural hazards and the environment. Furthermore, the case studies address different levels of scientific objectives: the main intention is the development of innovative techniques of digital image processing and geospatial analysis in answer to the identified research gaps. For this reason, novel approaches are presented for the mapping of refugee camps and urban areas, the estimation of biomass, and environmental impact assessment. Secondly, existing methods developed for radar imagery are applied, refined, or adapted to specifically demonstrate their benefit in a humanitarian context. This is done for the monitoring of camp growth, the assessment of damages in cities affected by civil war, and the derivation of areas vulnerable to flooding or sea-surface changes. Lastly, to foster the integration of radar images into existing operational workflows of humanitarian data analysis, technically simple and easily adaptable approaches are suggested for the mapping of rural areas for vaccination campaigns, the identification of changes within and around refugee camps, and the assessment of suitable locations for groundwater drillings. While the studies differ in technical complexity and novelty, they all show that radar imagery can contribute substantially to the provision of the variety of information required to make solid decisions and to effectively provide help in humanitarian operations. This work furthermore demonstrates that radar images are more than just an alternative image source for areas heavily affected by cloud cover. In fact, what makes them valuable is their information content regarding the characteristics of surfaces, such as shape, orientation, roughness, size, height, moisture, or conductivity. All of these give decisive insights about man-made and natural environments in emergency situations and cannot be provided by optical images. Finally, the findings of the case studies are put into a larger context, discussing the observed potential and limitations of the presented approaches. The major challenges that need to be addressed to make radar imagery more useful in humanitarian operations, in the context of upcoming technical developments, are summarized.
New radar satellites and technological progress in the fields of machine learning and cloud computing will bring new opportunities. At the same time, this work demonstrates the large need for further research, as well as for collaboration and the transfer of knowledge and experience between scientists, users and relief workers in the field. It is the first extensive scientific compilation of this topic and a first step towards a sustainable integration of radar imagery into operational frameworks to assist humanitarian work and to contribute to a more efficient provision of help to those in need.
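
    One of the technically simple tasks mentioned above, identifying changes within and around refugee camps from two SAR acquisitions, is commonly approached with a log-ratio of the two amplitude images followed by thresholding. The sketch below uses synthetic arrays, a fixed multilook window and a fixed threshold, all of which are assumptions for illustration rather than the workflow used in this thesis.

```python
# Log-ratio change detection between two co-registered SAR amplitude images.
# Synthetic data; the multilook window size and threshold are illustrative assumptions.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(3)
before = rng.gamma(shape=4.0, scale=0.05, size=(256, 256))   # speckled amplitude, epoch 1
after = before.copy()
after[100:140, 100:160] *= 3.0                               # simulate new structures (brighter)
after *= rng.gamma(shape=4.0, scale=0.25, size=after.shape)  # fresh speckle realisation (mean 1)

# Multilook (spatial averaging) to suppress speckle before comparing epochs.
b_ml = uniform_filter(before, size=5)
a_ml = uniform_filter(after, size=5)

log_ratio = np.log10(a_ml / b_ml)           # positive where backscatter increased
change_mask = np.abs(log_ratio) > 0.3       # fixed threshold, assumed for illustration
print("changed pixels:", int(change_mask.sum()), "of", change_mask.size)
```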

    Artificial Intelligence in Geoscience and Remote Sensing

    Get PDF