21 research outputs found

    Rain Rate Estimation with SAR using NEXRAD measurements with Convolutional Neural Networks

    Full text link
    Remote sensing of rainfall events is critical for both operational and scientific needs, including weather forecasting, extreme flood mitigation, and water cycle monitoring. Ground-based weather radars, such as NOAA's Next-Generation Radar (NEXRAD), provide reflectivity and precipitation measurements of rainfall events. However, the observation range of such radars is limited to a few hundred kilometers, prompting the exploration of other remote sensing methods, particularly over the open ocean, which represents large areas not covered by land-based radars. For decades, C-band SAR imagery such as Sentinel-1 imagery has been known to exhibit rainfall signatures over the sea surface. However, the development of SAR-derived rainfall products remains a challenge. Here we propose a deep learning approach to extract rainfall information from SAR imagery. We demonstrate that a convolutional neural network, such as U-Net, trained on a colocated and preprocessed Sentinel-1/NEXRAD dataset clearly outperforms state-of-the-art filtering schemes. Our results indicate high performance in segmenting precipitation regimes delineated by thresholds at 1, 3, and 10 mm/h. Compared to current methods that rely on Koch filters to produce binary rainfall maps, these multi-threshold learning-based models can provide rainfall estimates at higher wind speeds and may thus be of great interest for data assimilation in weather forecasting or for improving the qualification of SAR-derived wind field data. Comment: 25 pages, 10 figures
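    The three thresholds above partition rain rates into four regimes. A minimal sketch (hypothetical helper, not the paper's code) of how a rain-rate field could be turned into such multi-class segmentation targets:

```python
import numpy as np

# Hypothetical sketch: building multi-class segmentation targets from a
# rain-rate field (mm/h) using the 1, 3 and 10 mm/h thresholds mentioned
# in the abstract. The U-Net itself is not reproduced; only the label
# construction is illustrated.
THRESHOLDS_MM_H = (1.0, 3.0, 10.0)

def rain_rate_to_classes(rain_rate, thresholds=THRESHOLDS_MM_H):
    """Map each pixel to a regime index: 0 (<1), 1 ([1,3)), 2 ([3,10)), 3 (>=10)."""
    classes = np.zeros_like(rain_rate, dtype=np.int64)
    for t in thresholds:
        classes += (rain_rate >= t).astype(np.int64)
    return classes

rates = np.array([[0.2, 1.5], [4.0, 12.0]])
print(rain_rate_to_classes(rates).tolist())  # [[0, 1], [2, 3]]
```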

    An Interpretable Deep Semantic Segmentation Method for Earth Observation

    Get PDF
    Earth observation is fundamental for a range of human activities, including flood response, as it offers vital information to decision makers. Semantic segmentation plays a key role in mapping the raw hyper-spectral data coming from satellites into a human-understandable form by assigning a class label to each pixel. Traditionally, water-index-based methods have been used for detecting water pixels. More recently, deep learning techniques such as U-Net have started to gain attention, offering significantly higher accuracy. However, the latter are hard for humans to interpret and use dozens of millions of abstract parameters that are not directly related to the physical nature of the problem being modelled. They are also hungry for labelled data and computational power. At the same time, the data transmission capability of small nanosatellites is limited in terms of power and bandwidth, yet constellations of such small nanosatellites are preferable because they reduce the revisit time in disaster areas from days to hours. Therefore, being able to match the accuracy of deep learning models (e.g. U-Net), or even surpass them, without the need to rely on huge amounts of labelled training data, computational power, and abstract coefficients offers potentially game-changing capabilities for EO (Earth observation) and flood detection in particular. In this paper, we introduce a prototype-based interpretable deep semantic segmentation (IDSS) method, which is highly accurate as well as interpretable. It uses orders of magnitude fewer parameters than deep networks such as U-Net, and these parameters are clearly interpretable by humans. The proposed IDSS offers a transparent structure that allows users to inspect and audit the algorithm's decisions. Results have demonstrated that IDSS can surpass other algorithms, including U-Net, in terms of total water IoU (Intersection over Union) and total water Recall. We used the WorldFloods dataset for our experiments and plan to use the semantic segmentation results, combined with masks for permanent water, to detect flood events.
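    For context, the traditional water-index baseline mentioned in the abstract can be illustrated with the Normalized Difference Water Index (NDWI); this is a generic sketch with conventional band names and a conventional zero threshold, not the IDSS method itself:

```python
import numpy as np

# Generic water-index baseline: NDWI = (G - NIR) / (G + NIR).
# Band choices and the zero threshold are conventional, not from the paper.
def ndwi(green, nir, eps=1e-9):
    green = np.asarray(green, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (green - nir) / (green + nir + eps)

def water_mask(green, nir, threshold=0.0):
    # Pixels with NDWI above the threshold are flagged as water
    return ndwi(green, nir) > threshold

green = np.array([0.30, 0.10])  # toy reflectances: water pixel, land pixel
nir = np.array([0.05, 0.40])
print(water_mask(green, nir).tolist())  # [True, False]
```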

    KappaMask: AI-Based Cloudmask Processor for Sentinel-2

    Get PDF
    The Copernicus Sentinel-2 mission operated by the European Space Agency (ESA) has provided comprehensive and continuous multi-spectral observations of all the Earth's land surface since mid-2015. Clouds and cloud shadows significantly decrease the usability of optical satellite data, especially in agricultural applications; therefore, an accurate and reliable cloud mask is mandatory for effective EO optical data exploitation. During the last few years, image segmentation techniques have developed rapidly with the exploitation of neural network capabilities. With this perspective, the KappaMask processor using the U-Net architecture was developed with the ability to generate a classification mask over northern latitudes into the following classes: clear, cloud shadow, semi-transparent cloud (thin clouds), cloud and invalid. For training, a Sentinel-2 dataset covering the Northern European terrestrial area was labelled. KappaMask provides a 10 m classification mask for Sentinel-2 Level-2A (L2A) and Level-1C (L1C) products. The total Dice coefficient on the test dataset, which was not seen by the model at any stage, was 80% for KappaMask L2A and 76% for KappaMask L1C for the clear, cloud shadow, semi-transparent and cloud classes. A comparison with rule-based cloud mask methods was then performed on the same test dataset, where Sen2Cor reached a 59% Dice coefficient for the clear, cloud shadow, semi-transparent and cloud classes, Fmask reached 61% for the clear, cloud shadow and cloud classes, and Maja reached 51% for the clear and cloud classes. The closest machine learning open-source cloud classification mask, S2cloudless, had a 63% Dice coefficient providing only cloud and clear classes, while KappaMask L2A, with a more complex classification schema, outperformed S2cloudless by 17%
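    The Dice coefficient used throughout the comparison above can be sketched as a generic per-class implementation (not KappaMask's evaluation code):

```python
import numpy as np

# Generic per-class Dice coefficient: 2 * |P ∩ T| / (|P| + |T|),
# where P and T are the predicted and target masks for one class.
def dice_coefficient(pred, target, cls):
    p = (np.asarray(pred) == cls)
    t = (np.asarray(target) == cls)
    denom = p.sum() + t.sum()
    if denom == 0:
        return 1.0  # class absent from both masks: treat as perfect agreement
    return 2.0 * np.logical_and(p, t).sum() / denom

pred = np.array([[0, 1], [1, 1]])    # toy masks, e.g. 0 = clear, 1 = cloud
target = np.array([[0, 1], [0, 1]])
print(dice_coefficient(pred, target, 1))  # 0.8
```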

    The ESA ΦSat-2 Mission: An AI-Enhanced Multispectral CubeSat for Earth Observation

    Get PDF
    As part of an initiative to promote the development and implementation of innovative technologies on-board Earth Observation (EO) missions, the European Space Agency (ESA) kicked off the first Φsat related activities in 2018 with the aim of enhancing the already ongoing FSSCAT project with Artificial Intelligence (AI). The selected Φsat-2 concept will provide a combination of on-board processing capabilities (including AI) and a medium- to high-resolution multispectral instrument from Visible to Near Infra-Red (VIS/NIR) able to acquire 8 bands (7 + Panchromatic), provided by SIMERA SENSE Europe (BE). These resources will be made available to a series of dedicated applications that will run on-board the spacecraft. The mission prime is Open Cosmos (UK), supported by CGI (IT) to coordinate the payload operations for at least 12 months after LEOP and the commissioning phase. During the nominal phase the various AI applications will be fine-tuned after the on-ground training and then routinely run. A series of AI applications that could potentially be embarked are under development. The first one is called SAT2MAP and is expected to autonomously detect streets from acquired images. It is developed by CGI (IT). The second AI application is an enhancement of the Φsat-1 cloud detection experiment, able to prioritize data to be downloaded to ground based on standard cloud coverage and new concentration measurements. It is developed by KP Labs (PL) and is based on a U-Net. This application will mainly act as an on-board service for the other applications, relieving them of the task of assessing the presence of clouds. The Autonomous Vessel Awareness application aims to detect and classify various vessel types in the maritime domain. This would enable a reduced amount of data to be downloaded (only image patches including the vessel), improving the response time for final users (e.g. maritime authorities).
    In this case the AI technique used is a combination of Single-Image Super-Resolution (SRCNN) and a YOLO-based Convolutional Neural Network (CNN). The Deep Compression application generically reduces the amount of data to be downloaded to ground with limited information loss. The image is compressed on-board and then reconstructed on ground by means of a decoder. It can achieve a compression ratio of about 7 per band. It is based on the use of a Convolutional Auto-Encoder (CAE). Two more AI applications will be selected by ESA through a dedicated challenge open to institutions, agencies and industries that will be run in the first half of 2023. The Φsat-2 mission successfully passed the CDR phase at the end of 2022, aiming for a launch in 2024

    Contribution of L- and C-band satellite SAR imagery to the characterization of snow cover

    No full text
    This thesis concerns snow remote sensing using spaceborne SAR imagery at L- and C-bands. An electromagnetic (EM) backscattering model is developed to calculate radar backscatter from snow cover. This model takes into consideration both the vertical snowpack structure and the metamorphosis state of each snow layer. It is validated using in situ snow profiles and SAR data simultaneously acquired by the ASAR/ENVISAT sensor in 2004. The main contribution of this study consists in the combination of dual-polarization SAR data with the meteorological Crocus model developed by Météo-France.
    To characterize the spatial variability of alpine snowpack, Crocus snow profiles are spatially reorganized by minimizing the difference between simulated and measured C-band SAR data. Snow characteristics maps have been created at metric resolution for the French alpine massifs "Grandes Rousses" and "Oisans". The potential of polarimetric L-band SAR data for snow characterization is investigated in rural areas. A classification method based on Support Vector Machine techniques is developed and evaluated with SAR data acquired by the PALSAR/ALOS sensor.

    Sea Surface Roughness and Forward Radar Propagation Modeling.

    No full text
    Abstract submitted. Conference: IEEE APS & USNC/URSI, San Diego, 2008

    Monitoring the cryosphere with the Japanese PALSAR/ALOS radar

    No full text
    Abstract published. http://www.ifr-2008.org International audience

    IMAFD: An Interpretable Multi-stage Approach to Flood Detection from time series Multispectral Data

    No full text
    In this paper, we address two critical challenges in the domain of flood detection: the computational expense of large-scale time-series change detection and the lack of interpretable decision-making processes in explainable AI (XAI). To overcome these challenges, we propose IMAFD, an interpretable multi-stage approach to flood detection. It provides an automatic, efficient and interpretable solution suitable for large-scale remote sensing tasks and offers insight into the decision-making process. The proposed IMAFD approach combines the analysis of dynamic time-series image sequences, to identify images with possible flooding, with static, within-image semantic segmentation; it combines anomaly detection (at both image and pixel level) with semantic segmentation. The flood detection problem is addressed through four stages: (1) at the sequence level, identifying suspected images; (2) at the multi-image level, detecting change within suspected images; (3) at the image level, semantically segmenting images into Land, Water or Cloud classes; (4) decision making. Our contributions are twofold. First, we efficiently reduced the number of frames to be processed for dense change detection by providing a multi-stage holistic approach to flood detection. Second, the proposed semantic change detection method (stage 3) provides human users with an interpretable decision-making process, whereas most explainable AI (XAI) methods provide only post hoc explanations. The evaluation of the proposed IMAFD framework was performed on three datasets: WorldFloods, RavAEn and MediaEval. On all these datasets, the proposed framework demonstrates competitive performance compared to other methods while also offering interpretability and insight
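    The four stages can be sketched as a toy pipeline; the thresholds, function names and the land/water rule below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Toy sketch of the four IMAFD stages on single-band "water fraction" images.
def flag_suspect_images(series, ref, thresh=0.1):
    # Stage 1 (sequence level): flag frames that deviate strongly from a reference
    return [i for i, img in enumerate(series)
            if np.mean(np.abs(img - ref)) > thresh]

def change_mask(img, ref, thresh=0.2):
    # Stage 2 (multi-image level): per-pixel change within a suspected frame
    return np.abs(img - ref) > thresh

def segment(img):
    # Stage 3 (image level): toy segmentation into Land (0) / Water (1)
    return (img > 0.5).astype(int)

def is_flood(img, ref, min_new_water=0.05):
    # Stage 4 (decision): flood if enough pixels switched from Land to Water
    new_water = (segment(img) == 1) & (segment(ref) == 0)
    return bool(new_water.mean() > min_new_water)

ref = np.zeros((4, 4))
flood = np.zeros((4, 4)); flood[2:, :] = 0.9   # lower half turns to water
series = [ref.copy(), flood]
suspects = flag_suspect_images(series, ref)
print(suspects, [is_flood(series[i], ref) for i in suspects])  # [1] [True]
```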

    Co-Cross-Polarization Coherence Over the Sea Surface From Sentinel-1 SAR Data: Perspectives for Mission Calibration and Wind Field Retrieval

    No full text
    Spaceborne synthetic aperture radar (SAR) has been used for years to estimate the high-resolution surface wind field from the ocean surface backscattered signal. Current SAR platforms have one single fixed antenna, and traditional inversion/retrieval schemes rely on one copolarized channel, leading to an unconstrained optimization problem for providing independent estimates of wind speed and direction. For routine application, this is generally solved with a priori information from a numerical weather prediction (NWP) model, inducing severe limitations for rapidly evolving meteorological systems where discrepancies between model and measurements can be significant. In this study, we investigate the benefit of having two simultaneous acquisitions with phase-preserving information in copolarization and cross polarization provided by Sentinel-1 (S-1). A comprehensive analysis of the co-cross-polarization coherence (CCPC) is performed to adequately estimate and calibrate CCPC values from S-1 interferometric wide (IW) mode images acquired over the ocean. A new polarimetric calibration (PolCAL) methodology based on a least-squares (LS) criterion and direct matrix inversion is proposed, yielding crosstalk estimates. We document CCPC odd symmetry with respect to relative wind direction for light to medium wind speeds (up to 14 m/s) and incidence angles from 30° to 45°. The azimuthal modulation is found to increase with both wind speed and incidence angle. An analytical C-band polarimetric geophysical model function (CPGMF) is provided. The synergy of the CCPC with other radar parameters, such as backscattering coefficients or Doppler, to further constrain the inversion scheme is assessed, opening new perspectives for SAR-based wind field retrieval independent of any NWP model information
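    The coherence quantity itself is a standard normalized complex correlation between the two channels; a minimal numpy sketch of the generic formula (not the paper's calibration pipeline):

```python
import numpy as np

# Co-cross-polarization coherence: normalized complex correlation between
# the co-pol (e.g. VV) and cross-pol (e.g. VH) channels, averaged over a window.
def ccpc(s_co, s_cross):
    num = np.mean(s_co * np.conj(s_cross))
    den = np.sqrt(np.mean(np.abs(s_co) ** 2) * np.mean(np.abs(s_cross) ** 2))
    return num / den

rng = np.random.default_rng(1)
z = rng.standard_normal(10_000) + 1j * rng.standard_normal(10_000)
noise = rng.standard_normal(10_000) + 1j * rng.standard_normal(10_000)
print(round(abs(ccpc(z, z)), 3))   # identical channels: full coherence (1.0)
print(abs(ccpc(z, noise)) < 0.1)   # independent channels decorrelate (True)
```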