29 research outputs found

    Characterization of riparian buffer strips in agricultural areas using multi-view deep convolutional neural networks (MVDCNN) and satellite imagery

    Because of the ecological functions they fulfil, vegetated riparian buffer strips are a widely used solution to the problem of diffuse pollution in agricultural areas. Yet they are under significant anthropogenic pressure, which justifies monitoring their condition. Riparian buffers are most often characterized through in-situ measurements of quality indices, which mobilizes considerable resources when a large territory must be covered and limits how often monitoring can be repeated. The overall objective of this research was therefore to propose an approach to characterizing riparian buffer strips in agricultural areas that is accessible and easily applicable at the regional scale. It draws on recent advances in image classification brought by deep convolutional neural networks (DCNN) and aims to assess the ability of this technology to determine the riparian quality index (IQBR) directly from satellite imagery. To this end, the proposed method relies on very-high-spatial-resolution multispectral images from the Pléiades constellation. Because of their narrow, irregular shape, riparian buffers cannot be fully covered by a single image without that image also including elements too far from the buffer. Consequently, a modified multi-view DCNN (MVDCNN) architecture accepting several images as network input was trained to establish a correlation between the field-measured IQBR and a set of images representing the riparian buffer. Specifically, different numbers of input images, seven spectral-band combinations of the Pléiades images, and two training modes were tested: training from scratch and fine-tuning a pretrained network.
The results show that, using the RGB spectral bands, the MVDCNN architecture incorporating a pretrained ResNet-18 achieves the best correlation (R² of 0.932) between images of a riparian buffer and its IQBR. Moreover, using several input images during training improved the results compared with using a single image
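The multi-view idea can be sketched in a few lines (a toy NumPy illustration under assumed shapes, not the thesis implementation, which uses a pretrained ResNet-18 backbone): a shared encoder maps each input view to a feature vector, the per-view features are averaged into one representation, and a linear head regresses the scalar quality index.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(view, W):
    """Shared (toy) encoder: flatten the view, one linear layer + ReLU."""
    return np.maximum(view.reshape(-1) @ W, 0.0)

def mvdcnn_predict(views, W_enc, w_head, b_head):
    """Multi-view regression: encode each view with the SAME weights,
    average the feature vectors, then regress a scalar index."""
    feats = np.stack([encode(v, W_enc) for v in views])  # (n_views, d)
    pooled = feats.mean(axis=0)                          # view-order invariant
    return float(pooled @ w_head + b_head)

# Three 8x8 RGB "views" of the same riparian buffer (random stand-ins).
views = [rng.standard_normal((8, 8, 3)) for _ in range(3)]
W_enc = rng.standard_normal((8 * 8 * 3, 16)) * 0.1
w_head = rng.standard_normal(16) * 0.1
score = mvdcnn_predict(views, W_enc, w_head, b_head=5.0)

# Mean pooling makes the prediction independent of the order of the views.
assert abs(score - mvdcnn_predict(views[::-1], W_enc, w_head, 5.0)) < 1e-9
```

Mean pooling is the design choice that lets a variable number of views feed a fixed-size regression head.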

    Unsupervised Single-Scene Semantic Segmentation for Earth Observation

    Earth observation data have huge potential to enrich our knowledge about our planet. An important step in many Earth observation tasks is semantic segmentation. Generally, a large number of pixelwise labeled images is required to train deep models for supervised semantic segmentation. However, strong inter-sensor and geographic variations impede the availability of annotated training data in Earth observation. In practice, most Earth observation tasks use only the target scene, without assuming the availability of any additional scene, labeled or unlabeled. Keeping in mind such constraints, we propose a semantic segmentation method that learns to segment from a single scene, without using any annotation. Earth observation scenes are generally larger than those encountered in typical computer vision datasets. Exploiting this, the proposed method samples smaller unlabeled patches from the scene. For each patch, an alternate view is generated by simple transformations, e.g., addition of noise. Both views are then processed through a two-stream network and the weights are iteratively refined using deep clustering, spatial consistency, and contrastive learning in the pixel space. The proposed model automatically segregates the major classes present in the scene and produces the segmentation map. Extensive experiments on four Earth observation datasets collected by different sensors show the effectiveness of the proposed method. Implementation is available at https://gitlab.lrz.de/ai4eo/cd/-/tree/main/unsupContrastiveSemanticSeg
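The core training signal can be illustrated with a toy NumPy sketch (hypothetical shapes and a 1x1-convolution "network"; the released implementation is linked above): a patch and a noise-augmented view of it pass through the same pixelwise feature extractor, and disagreement between the two per-pixel class distributions is penalized.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def features(patch, W):
    """Toy pixelwise feature extractor: a 1x1 'convolution' to class logits."""
    return patch @ W  # (H, W, n_classes)

def consistency_loss(patch, W, noise_std=0.1):
    """Cross-view pixelwise consistency: the clean view and a
    noise-augmented view should yield the same per-pixel distribution."""
    view_a = patch
    view_b = patch + rng.normal(0.0, noise_std, patch.shape)
    pa = softmax(features(view_a, W))
    pb = softmax(features(view_b, W))
    return float(np.mean((pa - pb) ** 2))

patch = rng.standard_normal((16, 16, 4))   # e.g. a 4-band satellite patch
W = rng.standard_normal((4, 5)) * 0.5      # 5 latent classes
loss = consistency_loss(patch, W)
assert loss >= 0.0
```

In the actual method this consistency term is combined with deep clustering and a pixelwise contrastive objective; the sketch shows only the two-view mechanism.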

    Multi-source Remote Sensing for Forest Characterization and Monitoring

    As a dominant terrestrial ecosystem of the Earth, forest environments play profound roles in ecology, biodiversity, resource utilization, and management, which highlights the significance of forest characterization and monitoring. Some forest parameters can help track climate change and quantify the global carbon cycle and therefore attract growing attention from various research communities. Compared with traditional in-situ methods, which involve expensive and time-consuming fieldwork, airborne and spaceborne remote sensors collect cost-efficient and consistent observations at global or regional scales and have been proven to be an effective way of monitoring forests. With the looming paradigm shift toward data-intensive science and the development of remote sensors, remote sensing data of ever higher resolution and diversity have become the mainstream in data analysis and processing. However, significant heterogeneities in multi-source remote sensing data largely restrain their use in forest applications, urging the research community to come up with effective synergistic strategies. The work presented in this thesis contributes to the field by exploring the potential of Synthetic Aperture Radar (SAR), SAR Polarimetry (PolSAR), SAR Interferometry (InSAR), Polarimetric SAR Interferometry (PolInSAR), Light Detection and Ranging (LiDAR), and multispectral remote sensing in forest characterization and monitoring from three main aspects: forest height estimation, active fire detection, and burned area mapping. First, forest height inversion is demonstrated using airborne L-band dual-baseline repeat-pass PolInSAR data based on modified versions of the Random Motion over Ground (RMoG) model, where the scattering attenuation and wind-derived random motion are described in conditions of a homogeneous and a heterogeneous volume layer, respectively. 
A boreal and a tropical forest test site are involved in the experiment to explore the flexibility of the different models over different forest types and, based on that, a leveraging strategy is proposed to boost the accuracy of forest height estimation. The accuracy of model-based forest height inversion is limited by the discrepancy between the theoretical models and actual scenarios and exhibits a strong dependency on the system and scenario parameters. Hence, high-vertical-accuracy LiDAR samples are employed to assist the PolInSAR-based forest height estimation. This multi-source forest height estimation is reformulated as a pan-sharpening task aiming to generate forest heights with high spatial resolution and vertical accuracy based on the synergy of the sparse LiDAR-derived heights and the information embedded in the PolInSAR data. This process is realized by a specifically designed generative adversarial network (GAN), allowing high-accuracy forest height estimation that is less limited by theoretical models and system parameters. Related experiments are carried out over a boreal and a tropical forest to validate the flexibility of the method. Second, an automated active fire detection framework is proposed for medium-resolution multispectral remote sensing data. The basic part of this framework is a deep-learning-based semantic segmentation model specifically designed for active fire detection. A dataset is constructed with open-access Sentinel-2 imagery for the training and testing of the deep-learning model. The developed framework allows automated Sentinel-2 data download, processing, and generation of active fire detection results from time and location information provided by the user. Related performance is evaluated in terms of detection accuracy and processing efficiency. 
The last part of this thesis explores whether coarse burned area products can be further improved through the synergy of multispectral, SAR, and InSAR features with higher spatial resolutions. A Siamese Self-Attention (SSA) classification is proposed for multi-sensor burned area mapping, and a multi-source dataset is constructed at the object level for training and testing. Results are analyzed by test site, feature source, and classification method to assess the improvements achieved by the proposed approach. All developed methods are validated with extensive processing of multi-source data acquired by the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR), the Land, Vegetation, and Ice Sensor (LVIS), PolSARproSim+, Sentinel-1, and Sentinel-2. I hope these studies constitute a substantial contribution to the forest applications of multi-source remote sensing.
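The LiDAR-assisted height estimation can be illustrated with a deliberately simplified NumPy sketch (the thesis uses a purpose-built GAN; here a linear correction stands in for it, and all data are synthetic): sparse, high-accuracy LiDAR samples calibrate the dense but biased PolInSAR height map.

```python
import numpy as np

rng = np.random.default_rng(2)

def calibrate_heights(polinsar_h, lidar_h, sample_idx):
    """Fit a linear correction a*h + b from sparse, high-accuracy LiDAR
    samples and apply it to the dense PolInSAR height map."""
    x = polinsar_h.ravel()[sample_idx]
    y = lidar_h.ravel()[sample_idx]
    A = np.column_stack([x, np.ones_like(x)])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a * polinsar_h + b

# Dense PolInSAR heights with a systematic bias; sparse LiDAR "truth".
true_h = rng.uniform(5.0, 30.0, (64, 64))
polinsar = 0.8 * true_h - 2.0 + rng.normal(0.0, 0.01, true_h.shape)
samples = rng.choice(true_h.size, 200, replace=False)
fused = calibrate_heights(polinsar, true_h, samples)

rmse_before = float(np.sqrt(np.mean((polinsar - true_h) ** 2)))
rmse_after = float(np.sqrt(np.mean((fused - true_h) ** 2)))
assert rmse_after < rmse_before
```

The GAN in the thesis plays the role of this correction step but learns a spatially varying, nonlinear mapping rather than a single global line.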

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, natural language processing, etc. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL models. Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing

    Rice crop classification and yield estimation using multi-temporal sentinel-2 data: a case study of Terai districts of Nepal

    Dissertation submitted in partial fulfilment of the requirements for the degree of Master of Science in Geospatial Technologies. Crop monitoring, especially in developing countries, can improve food production, address food security issues, and support sustainable development goals. Crop type mapping and yield estimation are the two major aspects of crop monitoring that remain challenging due to the problem of timely and adequate data availability. Existing approaches rely on ground surveys and traditional means, which are time-consuming and costly. In this context, we introduce the use of freely available Sentinel-2 (S2) imagery with high spatial, spectral and temporal resolution to classify crops and estimate their yield through a deep learning approach. In particular, this study uses patch-based 2D and 3D Convolutional Neural Network (CNN) algorithms to map the rice crop and predict its yield in the Terai districts of Nepal. Firstly, the study reviews the existing state-of-the-art technologies in this field and selects suitable CNN architectures. Secondly, the selected architectures are implemented and trained using S2 imagery, ground-truth data and, for yield estimation, additional auxiliary data. We also introduce a variation in the chosen 3D CNN architecture to enhance its performance in estimating rice yield. The performance of the models is validated and then evaluated using performance metrics, namely overall accuracy and F1-score for classification and Root Mean Squared Error (RMSE) for yield estimation. Consistent with existing work, the results demonstrate strong performance of the models, indicating the suitability of S2 data for crop mapping and yield estimation in developing countries. Reproducibility self-assessment (https://osf.io/j97zp/): 2, 2, 2, 1, 2 (input data, preprocessing, methods, computational environment, results)
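The patch-based input preparation can be sketched as follows (a toy NumPy example with made-up dimensions, not the study's pipeline): a multi-temporal image stack is cut into small spatio-temporal cubes, which is the form a 3D CNN consumes; a 2D CNN would first collapse the temporal axis, e.g. by stacking dates as extra bands.

```python
import numpy as np

def extract_patches(stack, size):
    """Extract all spatio-temporal patches of shape (T, size, size, B)
    from a multi-temporal image stack of shape (T, H, W, B)."""
    T, H, W, B = stack.shape
    patches = []
    for i in range(H - size + 1):
        for j in range(W - size + 1):
            patches.append(stack[:, i:i + size, j:j + size, :])
    return np.stack(patches)  # (N, T, size, size, B)

# Toy stand-in for a Sentinel-2 time series: 4 dates, 10x10 pixels, 4 bands.
stack = np.arange(4 * 10 * 10 * 4, dtype=float).reshape(4, 10, 10, 4)
patches = extract_patches(stack, size=5)
assert patches.shape == (36, 4, 5, 5, 4)  # (10-5+1)^2 = 36 patches
```

Each patch is then labeled by its center pixel (for classification) or paired with a district-level yield figure (for regression) before training.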

    Deep Learning Meets Hyperspectral Image Analysis: A Multidisciplinary Review

    Modern hyperspectral imaging systems produce huge datasets potentially conveying a great abundance of information; such a resource, however, poses many challenges in the analysis and interpretation of these data. Deep learning approaches certainly offer a great variety of opportunities for solving classical imaging tasks and also for approaching new, stimulating problems in the spatial–spectral domain. This is fundamental in the driving sector of Remote Sensing, where hyperspectral technology was born and has mostly developed, but it is perhaps even more true in the multitude of current and evolving application sectors that involve these imaging technologies. The present review proceeds on two fronts: on the one hand, it is aimed at domain professionals who want an up-to-date overview of how hyperspectral acquisition techniques can combine with deep learning architectures to solve specific tasks in different application fields. On the other hand, we target machine learning and computer vision experts by giving them a picture of how deep learning technologies are applied to hyperspectral data from a multidisciplinary perspective. The presence of these two viewpoints and the inclusion of application fields other than Remote Sensing are the original contributions of this review, which also highlights some potentialities and critical issues related to the observed development trends.