
    A Deep Learning Framework in Selected Remote Sensing Applications

    Get PDF
    The main research topic is the design and implementation of a deep learning framework applied to remote sensing. Remote sensing techniques and applications play a crucial role in observing the Earth's evolution, especially nowadays, when the effects of climate change on our lives are more and more evident. A considerable amount of data is acquired daily all over the Earth, and effective exploitation of this information requires the robustness, speed and accuracy of deep learning. This emerging need inspired the choice of this topic. The conducted studies mainly focus on two European Space Agency (ESA) missions: Sentinel-1 and Sentinel-2. Images provided by the ESA Sentinel-2 mission are rapidly becoming the main source of information for the entire remote sensing community, thanks to their unprecedented combination of spatial, spectral and temporal resolution, as well as their open access policy. The increasing interest these satellites have gained in research and applicative scenarios motivated their use in the considered framework. The combined use of Sentinel-1 and Sentinel-2 is crucial and very prominent in different contexts and kinds of monitoring where the growth (or change) dynamics are very rapid. Starting from this general framework, two specific research activities were identified and investigated, leading to the results presented in this dissertation. Both studies can be placed in the context of data fusion. The first activity deals with a super-resolution framework to improve the Sentinel-2 bands supplied at 20 m resolution up to 10 m. Increasing the spatial resolution of these bands is of great interest in many remote sensing applications, particularly in monitoring vegetation, rivers, forests, and so on. The second activity applies the deep learning framework to the extraction of the multispectral Normalized Difference Vegetation Index (NDVI) and to the semantic segmentation obtained by fusing Sentinel-1 and Sentinel-2 data. Sentinel-1 SAR data are of great importance for the amount of information they provide when monitoring wetlands, rivers, forests and many other environments. In both cases, the problem was addressed with deep learning techniques, and in both cases very lean architectures were used, demonstrating that even without large computing power it is possible to obtain high-level results. The core of this framework is a Convolutional Neural Network (CNN). CNNs have been successfully applied to many image processing problems, such as super-resolution, pansharpening and classification, because of several advantages: (i) the capability to approximate complex non-linear functions; (ii) the ease of training, which avoids time-consuming handcrafted filter design; and (iii) the parallel computational architecture. Even if a large amount of "labelled" data is required for training, the performance of CNNs motivated this architectural choice. In the Sentinel-1 and Sentinel-2 integration task, the problem of manually labelled data was faced and overcome with an approach based on integrating these two different sensors. Therefore, apart from the investigation of Sentinel-1 and Sentinel-2 integration, the main contribution of both works is the possibility of designing a CNN-based solution distinguished by its computational lightness, with a consequent substantial saving of time compared with more complex state-of-the-art deep learning solutions.
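    To make the kind of architecture described above concrete, the following is a minimal, hypothetical sketch (in PyTorch) of a lean residual CNN that sharpens bilinearly upsampled 20 m Sentinel-2 bands using the 10 m bands as guidance. The class name, layer sizes and band counts are illustrative assumptions, not the dissertation's actual network.

```python
# Minimal sketch (not the dissertation's exact network): a lean residual CNN
# that sharpens bilinearly upsampled 20 m Sentinel-2 bands using the 10 m
# bands as guidance.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LeanS2SuperResolution(nn.Module):
    def __init__(self, n_20m_bands=6, n_10m_bands=4, n_feats=32):
        super().__init__()
        in_ch = n_20m_bands + n_10m_bands          # stacked inputs at the 10 m grid
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, n_feats, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(n_feats, n_feats, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(n_feats, n_20m_bands, 3, padding=1),
        )

    def forward(self, bands_20m, bands_10m):
        # Upsample the 20 m bands to the 10 m grid, then predict a residual
        # correction from the stacked low- and high-resolution bands.
        up = F.interpolate(bands_20m, scale_factor=2, mode="bilinear",
                           align_corners=False)
        residual = self.net(torch.cat([up, bands_10m], dim=1))
        return up + residual

# Toy usage on random tensors shaped like a Sentinel-2 tile crop.
model = LeanS2SuperResolution()
b20 = torch.rand(1, 6, 64, 64)    # six 20 m bands
b10 = torch.rand(1, 4, 128, 128)  # four 10 m bands
print(model(b20, b10).shape)      # -> torch.Size([1, 6, 128, 128])
```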

    Modern GPR Target Recognition Methods

    Full text link
    Traditional GPR target recognition methods include pre-processing the data by removal of noisy signatures, dewowing (high-pass filtering to remove low-frequency noise), filtering, deconvolution and migration (correction of the effect of survey geometry), and can rely on the simulation of GPR responses. These techniques usually suffer from loss of information, inability to adapt from prior results, and inefficient performance in the presence of strong clutter and noise. To address these challenges, several advanced processing methods have been developed over the past decade to enhance GPR target recognition. In this chapter, we provide an overview of these modern GPR processing techniques. In particular, we focus on the following methods: adaptive receive processing of range profiles depending on the target environment; adoption of learning-based methods so that the radar utilizes the results from prior measurements; application of methods that exploit the fact that the target scene is sparse in some domain or dictionary; application of advanced classification techniques; and convolutional coding, which provides succinct and representative features of the targets. We describe each of these techniques, or their combinations, through a representative application of landmine detection. Comment: Book chapter, 56 pages, 17 figures, 12 tables. arXiv admin note: substantial text overlap with arXiv:1806.0459
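    As an illustration of one of the traditional pre-processing steps named above, the following is a minimal sketch (assuming NumPy) of a dewow filter implemented as running-mean subtraction along each A-scan; the function name and window length are illustrative assumptions, not the chapter's reference implementation.

```python
# Minimal sketch of the "dewow" step: remove the low-frequency "wow" from each
# GPR A-scan by subtracting a running mean (a simple high-pass operation).
import numpy as np

def dewow(traces: np.ndarray, window: int = 31) -> np.ndarray:
    """traces: (n_traces, n_samples) array of raw A-scans."""
    kernel = np.ones(window) / window
    out = np.empty_like(traces, dtype=float)
    for i, tr in enumerate(traces):
        trend = np.convolve(tr, kernel, mode="same")  # slow-varying component
        out[i] = tr - trend                           # keep the high-frequency part
    return out

# Toy usage: a synthetic A-scan with a slow drift plus a reflection wavelet.
t = np.linspace(0, 1, 512)
trace = 0.5 * t + np.exp(-((t - 0.4) ** 2) / 1e-4) * np.sin(2 * np.pi * 200 * t)
clean = dewow(trace[np.newaxis, :])
print(clean.shape)  # (1, 512)
```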

    Calibration of DART Radiative Transfer Model with Satellite Images for Simulating Albedo and Thermal Irradiance Images and 3D Radiative Budget of Urban Environment

    Get PDF
    Remote sensing is increasingly used for managing the urban environment. In this context, the H2020 project URBANFLUXES aims to improve our knowledge of urban anthropogenic heat fluxes, with the specific study of three cities: London, Basel and Heraklion. Usually, one expects to derive two major urban parameters directly from remote sensing: albedo and thermal irradiance. However, the determination of these two parameters is seriously hampered by the complexity of urban architecture. For example, urban reflectance and brightness temperature are far from isotropic and are spatially heterogeneous. Hence, radiative transfer models that consider the complexity of urban architecture when simulating remote sensing signals are essential tools. Even for these sophisticated models, there is a major constraint for an operational use of remote sensing: the complex 3D distribution of optical properties and temperatures in urban environments. Here, the work is conducted with the DART (Discrete Anisotropic Radiative Transfer) model. It is a comprehensive, physically based 3D radiative transfer model that simulates optical signals at the entrance of imaging spectro-radiometers and LiDAR scanners on board satellites and airplanes, as well as the 3D radiative budget, of urban and natural landscapes for any experimental (atmosphere, topography, ...) and instrumental (sensor altitude, spatial resolution, UV to thermal infrared, ...) configuration. Paul Sabatier University distributes free licenses for research activities. This paper presents the calibration of the DART model with high spatial resolution satellite images (Landsat 8, Sentinel 2, etc.) acquired in the visible (VIS) / near infrared (NIR) domain and in the thermal infrared (TIR) domain. Here, the work is conducted with an atmospherically corrected Landsat 8 image of the city of Basel and its urban database. The calibration approach in the VIS/NIR domain encompasses five steps for computing the 2D distribution (image) of urban albedo at the satellite spatial resolution. (1) DART simulation of the satellite image at very high spatial resolution (e.g., 50 cm) per satellite spectral band, with atmosphere conditions specific to the satellite image acquisition. (2) Spatial resampling of the DART image to the coarser spatial resolution of the available satellite image, per spectral band. (3) Iterative derivation of the optical properties of urban surfaces (roofs, walls, streets, vegetation, ...) from pixel-wise comparison of the DART and satellite images, independently per spectral band. (4) Computation of the band albedo image of the city, per spectral band. (5) Computation of the image of the city albedo and VIS/NIR exitance, as an integral over all satellite spectral bands. In order to obtain a time series of albedo and VIS/NIR exitance, even in the absence of satellite images, ECMWF information about local irradiance and atmosphere conditions is used. A similar approach is used for calculating the city thermal exitance using satellite images acquired in the thermal infrared domain. Finally, DART simulations conducted with the optical properties derived from the remote sensing images also give the 3D radiative budget of the city at any date, including the date of the satellite image acquisition.
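    As an illustration of step (5), the following is a minimal sketch (assuming NumPy) of how per-band albedo images could be combined into a broadband albedo image by weighting each band with its share of the solar irradiance. The band names and irradiance values are placeholders, not the calibrated quantities used with DART.

```python
# Minimal sketch: combine per-band albedo images into a broadband VIS/NIR
# albedo image, weighting each band by the fraction of solar irradiance it
# carries. Bands and irradiance values below are illustrative placeholders.
import numpy as np

def broadband_albedo(band_albedos: dict, band_irradiance: dict) -> np.ndarray:
    """band_albedos: band name -> 2D albedo image; band_irradiance: band name -> W/m^2."""
    total = sum(band_irradiance[b] for b in band_albedos)
    acc = None
    for b, img in band_albedos.items():
        w = band_irradiance[b] / total            # spectral weight of this band
        acc = w * img if acc is None else acc + w * img
    return acc

# Toy usage with three hypothetical bands on a 4x4 grid.
rng = np.random.default_rng(0)
albedos = {b: rng.uniform(0.05, 0.4, (4, 4)) for b in ("blue", "red", "nir")}
irradiance = {"blue": 190.0, "red": 160.0, "nir": 100.0}  # illustrative W/m^2
print(broadband_albedo(albedos, irradiance).round(3))
```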

    Radar satellite imagery for humanitarian response. Bridging the gap between technology and application

    Get PDF
    This work deals with radar satellite imagery and its potential to assist humanitarian operations. As the number of displaced people increases annually, both hosting countries and relief organizations face new challenges, often related to unclear situations and a lack of information on the number and location of people in need, as well as on their environments. Numerous studies have demonstrated that Earth observation methods can deliver this important information for the management of crises, the organization of refugee camps, and the mapping of environmental resources and natural hazards. However, most of these studies make use of high-resolution optical imagery, while the role of radar satellites is widely neglected. At the same time, radar sensors have characteristics that make them highly suitable for humanitarian response, first and foremost their ability to capture images through cloud cover and at night. Consequently, they potentially allow a quicker response in emergencies than optical imagery. This work demonstrates the currently unused potential of radar imagery for assisting humanitarian operations through case studies that address the information needs of specific emergency situations. They are thematically grouped into topics related to population, natural hazards and the environment. Furthermore, the case studies address different levels of scientific objectives. The main intention is the development of innovative techniques of digital image processing and geospatial analysis in answer to the identified research gaps. For this reason, novel approaches are presented for the mapping of refugee camps and urban areas, the estimation of biomass, and environmental impact assessment. Secondly, existing methods developed for radar imagery are applied, refined or adapted to specifically demonstrate their benefit in a humanitarian context. This is done for the monitoring of camp growth, the assessment of damage in cities affected by civil war, and the derivation of areas vulnerable to flooding or sea-surface changes. Lastly, to foster the integration of radar images into existing operational workflows of humanitarian data analysis, technically simple and easily adaptable approaches are suggested for the mapping of rural areas for vaccination campaigns, the identification of changes within and around refugee camps, and the assessment of suitable locations for groundwater drillings. While the studies differ in their technical complexity and novelty, they all show that radar imagery can contribute substantially to the provision of the information required to make solid decisions and to provide help effectively in humanitarian operations. This work furthermore demonstrates that radar images are more than just an alternative image source for areas heavily affected by cloud cover. In fact, what makes them valuable is their information content regarding the characteristics of surfaces, such as shape, orientation, roughness, size, height, moisture or conductivity. All of these give decisive insights into man-made and natural environments in emergency situations and cannot be provided by optical images. Finally, the findings of the case studies are put into a larger context, discussing the observed potential and limitations of the presented approaches, and the major challenges are summarized that need to be addressed to make radar imagery more useful in humanitarian operations in the context of upcoming technical developments.
New radar satellites and technological progress in the fields of machine learning and cloud computing will bring new opportunities. At the same time, this work demonstrates the large need for further research, as well as for collaboration and the transfer of knowledge and experience between scientists, users and relief workers in the field. It is the first extensive scientific compilation on this topic and a first step towards the sustainable integration of radar imagery into operational frameworks to assist humanitarian work and to contribute to a more efficient provision of help to those in need.
The present work deals with imaging radar satellites and their potential contribution to supporting humanitarian operations. The annually increasing number of displaced and refugee people poses major challenges to host countries as well as humanitarian organizations, which are often confronted with unclear conditions. Effective crisis management, the planning and supply of refugee camps, and the protection of the affected people nevertheless require reliable information on the number and whereabouts of the refugees and on their natural environment. The provision of this information from satellite imagery has already been demonstrated in numerous studies. These generally rely on high-resolution optical imagery, whereas imaging radar satellites have so far found little application. Yet radar satellites in particular have characteristics that can be helpful for humanitarian operations, above all their independence from cloud cover and daylight. In crisis situations they therefore enable a faster response than optical satellites. This work demonstrates the currently unused potential of radar data for supporting humanitarian work through case studies in which concrete information is provided for selected crisis situations. The studies are grouped into the thematic areas of population, natural hazards and resources, but address different scientific objectives: the main focus of the work lies on the development of innovative methods for processing radar imagery and spatial data in response to the research needs identified in this field. This is demonstrated through the mapping of refugee camps to estimate their population, the estimation of biomass, and the assessment of the environmental impact of refugee camps. In addition, existing or proven approaches are adapted or further developed for application in the humanitarian context. This is done in case studies on the dynamics of refugee camps, the assessment of damage to buildings in war zones, and the detection of flood risks. Finally, the integration of radar data into existing workflows and routines of humanitarian aid is presented and encouraged through technically comparatively simple approaches. Examples include the radar-based mapping of remote areas in support of vaccination campaigns, the identification of changes in refugee camps, and the selection of suitable sites for groundwater extraction. Although the case studies differ in their degree of innovation and complexity, they all show the added value of radar data for providing information that supports fast and well-founded planning decisions. Furthermore, this work makes clear that, for humanitarian purposes, radar data are more than just an alternative in heavily clouded regions. Owing to their information content on the characteristics of surfaces, for example their roughness, moisture, shape, size or height, they are superior to optical data and therefore particularly valuable for many fields of application in the context of humanitarian work. The findings from the case studies are finally discussed against the background of the advantages and disadvantages of radar data, as well as with regard to future developments and challenges. New radar satellites and technological progress in data processing promise great potential, while at the same time the work underlines a great need for further research, as well as for exchange and collaboration between scientists, users and relief workers in the field. The present work is the first comprehensive presentation and scientific treatment of this topic. It is intended to serve as a cornerstone for the long-term integration of radar data into operational workflows, in order to support humanitarian work and enable effective help for people in need.

    Deep Learning based data-fusion methods for remote sensing applications

    Get PDF
    In recent years, an increasing number of remote sensing sensors have been launched into orbit around the Earth, with a continuously growing production of massive data that are useful for a large number of Earth monitoring applications. Although modern optical sensors provide rich spectral information about the Earth's surface at very high resolution, they are weather-sensitive. On the other hand, SAR images are almost weather-insensitive and available day and night, even in the presence of clouds, but they do not provide rich spectral information and are severely affected by speckle "noise", which makes information extraction difficult. For these reasons, it is worthwhile, though challenging, to fuse data provided by different sources and/or acquired at different times, in order to leverage their diversity and complementarity to retrieve the target information. Motivated by the success of Deep Learning methods in many image processing tasks, this thesis addresses different typical remote sensing data-fusion problems by means of suitably designed Convolutional Neural Networks.
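    As a concrete illustration of the data-fusion idea, the following is a minimal sketch (in PyTorch) of an early-fusion network that stacks co-registered SAR and optical channels and produces a per-pixel output map. The channel counts, class count and network size are illustrative assumptions, not one of the thesis's architectures.

```python
# Minimal sketch of early fusion: co-registered SAR (e.g., VV/VH) and optical
# bands are stacked along the channel axis and fed to a small fully
# convolutional network that outputs per-pixel class scores.
import torch
import torch.nn as nn

class TinyFusionNet(nn.Module):
    def __init__(self, sar_ch=2, opt_ch=4, n_classes=5):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(sar_ch + opt_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, n_classes, 1),      # 1x1 conv -> per-pixel class scores
        )

    def forward(self, sar, optical):
        return self.body(torch.cat([sar, optical], dim=1))

# Toy usage on random, already co-registered patches.
net = TinyFusionNet()
sar = torch.rand(1, 2, 128, 128)      # e.g., Sentinel-1 VV/VH backscatter
optical = torch.rand(1, 4, 128, 128)  # e.g., four Sentinel-2 10 m bands
print(net(sar, optical).shape)        # -> torch.Size([1, 5, 128, 128])
```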

    Very High Resolution (VHR) Satellite Imagery: Processing and Applications

    Get PDF
    Recently, growing interest has appeared in the use of remote sensing imagery to provide synoptic maps of water quality parameters in coastal and inland water ecosystems; monitoring of complex land ecosystems for biodiversity conservation; precision agriculture for the management of soils, crops, and pests; urban planning; disaster monitoring; etc. However, for these maps to achieve their full potential, it is important to engage in periodic monitoring and analysis of multi-temporal changes. In this context, very high resolution (VHR) satellite-based optical, infrared, and radar imaging instruments provide reliable information to implement spatially based conservation actions. Moreover, they enable observations of parameters of our environment at broader spatial and finer temporal scales than those allowed through field observation alone. In this sense, recent very high resolution satellite technologies and image processing algorithms present the opportunity to develop quantitative techniques that have the potential to improve upon traditional techniques in terms of cost, mapping fidelity, and objectivity. Typical applications include multi-temporal classification, recognition and tracking of specific patterns, multisensor data fusion, analysis of land/marine ecosystem processes, environment monitoring, etc. This book aims to collect new developments, methodologies, and applications of very high resolution satellite data for remote sensing. The selected works provide the research community with the most recent advances on all aspects of VHR satellite remote sensing.

    Sustainable Agriculture and Advances of Remote Sensing (Volume 1)

    Get PDF
    Agriculture, as the main source of food and the most important economic activity globally, is being affected by the impacts of climate change. To maintain and increase the production of our global food system, to reduce biodiversity loss and to preserve our natural ecosystems, new practices and technologies are required. This book focuses on the latest advances in remote sensing technology and agricultural engineering leading to sustainable agriculture practices. Earth observation data and in situ and proxy-remote sensing data are the main sources of information for monitoring and analyzing agricultural activities. Particular attention is given to Earth observation satellites and the Internet of Things for data collection, to multispectral and hyperspectral data analysis using machine learning and deep learning, and to WebGIS and the Internet of Things for sharing and publishing the results, among others.

    Hyperspectral Imaging for Fine to Medium Scale Applications in Environmental Sciences

    Get PDF
    The aim of the Special Issue "Hyperspectral Imaging for Fine to Medium Scale Applications in Environmental Sciences" was to present a selection of innovative studies using hyperspectral imaging (HSI) in different thematic fields. This intention reflects the technical developments of the last three decades, which have given HSI the capacity to provide spectrally, spatially and temporally detailed data, favoured by, e.g., hyperspectral snapshot technologies, miniaturized hyperspectral sensors and hyperspectral microscopy imaging. The present book comprises a suite of papers in various fields of environmental sciences: geology/mineral exploration, digital soil mapping, mapping and characterization of vegetation, and sensing of water bodies (including under-ice and underwater applications). In addition, there are two more methodically/technically oriented contributions dealing with the optimized processing of UAV data and with the design and test of a multi-channel optical receiver for ground-based applications. All in all, this compilation documents that HSI is a multi-faceted research topic and will remain so in the future.

    Advanced machine learning algorithms for Canadian wetland mapping using polarimetric synthetic aperture radar (PolSAR) and optical imagery

    Get PDF
    Wetlands are complex land cover ecosystems that represent a wide range of biophysical conditions. They are among the most productive ecosystems and provide several important environmental functions. As such, wetland mapping and monitoring using cost- and time-efficient approaches are of great interest for sustainable management and resource assessment. In this regard, satellite remote sensing data are greatly beneficial, as they capture a synoptic and multi-temporal view of landscapes. The ability to extract useful information from satellite imagery greatly affects the accuracy and reliability of the final products. This is of particular concern for mapping complex land cover ecosystems, such as wetlands, where the complex, heterogeneous, and fragmented landscape results in similar backscatter/spectral signatures of land cover classes in satellite images. Accordingly, the overarching purpose of this thesis is to contribute to existing methodologies of wetland classification by proposing and developing several new techniques based on advanced remote sensing tools and optical and Synthetic Aperture Radar (SAR) imagery. Specifically, the importance of employing an efficient speckle reduction method for polarimetric SAR (PolSAR) image processing is discussed and a new speckle reduction technique is proposed. Two novel techniques are also introduced for improving the accuracy of wetland classification. In particular, a new hierarchical classification algorithm using multi-frequency SAR data is proposed that discriminates wetland classes in three steps depending on their complexity and similarity. The experimental results reveal that the proposed method is advantageous for mapping complex land cover ecosystems compared to single-stream classification approaches, which have been used extensively in the literature. Furthermore, a new feature weighting approach is proposed based on the statistical and physical characteristics of PolSAR data to improve the discrimination capability of input features prior to incorporating them into the classification scheme. This study also demonstrates the transferability of existing classification algorithms, developed based on RADARSAT-2 imagery, to compact polarimetry SAR data that will be collected by the upcoming RADARSAT Constellation Mission (RCM). The capability of several well-known deep Convolutional Neural Network (CNN) architectures currently employed in computer vision is introduced in this thesis, for the first time, for the classification of wetland complexes using multispectral remote sensing data. Finally, this research produces the first provincial-scale wetland inventory maps of Newfoundland and Labrador, using the Google Earth Engine (GEE) cloud computing resources and open access Earth Observation (EO) data collected by the Copernicus Sentinel missions. Overall, the methodologies proposed in this thesis address fundamental limitations and challenges of wetland mapping using remote sensing data that have been ignored in the literature, including the similar backscattering/spectral signatures of wetland classes, insufficient classification accuracy of wetland classes, and the limitations of wetland mapping on large scales. In addition to the capabilities of the proposed methods for mapping wetland complexes, the use of these techniques for classifying other complex land cover types beyond wetlands, such as sea ice and crop ecosystems, offers a potential avenue for further research.
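    As an illustration of the multi-step classification idea mentioned above, the following is a minimal sketch (assuming scikit-learn) of a two-stage hierarchical classifier in which a first model separates broad groups and a second model refines only the samples assigned to the wetland group. The synthetic features, class hierarchy and choice of random forests are illustrative assumptions, not the thesis's actual scheme.

```python
# Minimal sketch of a hierarchical (multi-step) classification: stage 1
# separates wetland from non-wetland; stage 2 refines only the wetland group.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(600, 8))                   # e.g., PolSAR/optical features
fine_labels = rng.integers(0, 4, size=600)      # 0-1: non-wetland, 2-3: wetland
coarse_labels = (fine_labels >= 2).astype(int)  # stage 1 target: wetland vs not

# Stage 1: coarse separation.
stage1 = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, coarse_labels)

# Stage 2: refine only within the wetland group.
mask = coarse_labels == 1
stage2 = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[mask], fine_labels[mask])

# Inference: route samples predicted as wetland to the second stage.
pred_coarse = stage1.predict(X)         # 0 = non-wetland, 1 = wetland
pred = np.zeros_like(fine_labels)       # non-wetland kept as a single class (0)
wet = pred_coarse == 1
pred[wet] = stage2.predict(X[wet])      # wetland refined into subclasses 2-3
print(np.unique(pred, return_counts=True))
```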

    GEOBIA 2016: Solutions and Synergies, 14-16 September 2016, University of Twente Faculty of Geo-Information and Earth Observation (ITC): open access e-book

    Get PDF