
    Diffusion Models for Interferometric Satellite Aperture Radar

    Probabilistic Diffusion Models (PDMs) have recently emerged as a very promising class of generative models, achieving high performance in natural image generation. However, their performance on non-natural images, such as radar-based satellite data, remains largely unknown. Generating large amounts of synthetic (and especially labelled) satellite data is crucial to implement deep-learning approaches for the processing and analysis of (interferometric) satellite aperture radar data. Here, we leverage PDMs to generate several radar-based satellite image datasets. We show that PDMs succeed in generating images with complex and realistic structures, but that sampling time remains an issue. Indeed, accelerated sampling strategies, which work well on simple image datasets like MNIST, fail on our radar datasets. We provide a simple and versatile open-source codebase (https://github.com/thomaskerdreux/PDM_SAR_InSAR_generation) to train, sample, and evaluate PDMs on any dataset using a single GPU.
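    To make the sampling-time issue mentioned above concrete, here is a minimal sketch of standard DDPM-style ancestral sampling, which requires one network evaluation per diffusion step. This is illustrative only: the network, noise schedule, step count, and image size are assumptions, not the linked repository's actual implementation.

```python
# Hedged sketch of DDPM ancestral sampling for single-channel radar-like images.
# All hyperparameters below (T, schedule, image size) are assumed for illustration.
import torch

T = 1000                                    # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)       # linear noise schedule (common default)
alphas = 1.0 - betas
alphas_bar = torch.cumprod(alphas, dim=0)

class TinyDenoiser(torch.nn.Module):
    """Placeholder noise-prediction network (a real PDM would use a U-Net)."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)
    def forward(self, x, t):
        return self.net(x)                  # t is ignored in this placeholder

@torch.no_grad()
def sample(model, shape=(1, 1, 64, 64)):
    """Start from pure Gaussian noise and denoise step by step (T network calls)."""
    x = torch.randn(shape)
    for t in reversed(range(T)):
        eps = model(x, t)                                   # predicted noise
        coef = betas[t] / torch.sqrt(1.0 - alphas_bar[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])     # posterior mean
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x

img = sample(TinyDenoiser())
print(img.shape)  # torch.Size([1, 1, 64, 64])
```

    The loop above makes clear why accelerated samplers (which skip most of the T steps) are attractive, and why their failure on radar data keeps generation slow.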

    Autonomous Detection of Methane Emissions in Multispectral Satellite Data Using Deep Learning

    Methane is one of the most potent greenhouse gases, and its short atmospheric half-life makes it a prime target to rapidly curb global warming. However, current methane emission monitoring techniques primarily rely on approximate emission factors or self-reporting, which have been shown to often dramatically underestimate emissions. Although initially designed to monitor surface properties, satellite multispectral data has recently emerged as a powerful means to analyze atmospheric content. However, the spectral resolution of multispectral instruments is poor, and methane measurements are typically very noisy. Methane data products are also sensitive to absorption by the surface and other atmospheric gases (water vapor in particular) and therefore provide noisy maps of potential methane plumes that typically require extensive human analysis. Here, we show that the image recognition capabilities of deep learning methods can be leveraged to automate the detection of methane leaks in Sentinel-2 satellite multispectral data, with dramatically reduced false positive rates compared with state-of-the-art multispectral methane data products, and without the need for a priori knowledge of potential leak sites. Our proposed approach paves the way for the automated, high-definition and high-frequency monitoring of point-source methane emissions across the world.
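    As a rough illustration of the kind of image-recognition component such a pipeline relies on, the sketch below classifies multispectral patches as plume / no-plume. The architecture, patch size, and training setup are assumptions made here for illustration; they are not the paper's actual model.

```python
# Hedged sketch of a patch-level plume classifier for multispectral data.
# Sentinel-2's MSI instrument provides 13 spectral bands; everything else
# (architecture, patch size) is an assumption for this example.
import torch
import torch.nn as nn

class PlumeClassifier(nn.Module):
    """Small CNN mapping a multispectral patch to a plume / no-plume logit."""
    def __init__(self, n_bands=13):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bands, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),        # global average pooling
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = PlumeClassifier()
patches = torch.randn(4, 13, 64, 64)     # synthetic batch of 64x64 patches
probs = torch.sigmoid(model(patches))    # probability each patch contains a plume
print(probs.shape)                       # torch.Size([4, 1])
```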

    Autonomous Extraction of Millimeter-scale Deformation in InSAR Time Series Using Deep Learning

    Systematic characterization of slip behaviours on active faults is key to unraveling the physics of tectonic faulting and the interplay between slow and fast earthquakes. Interferometric Synthetic Aperture Radar (InSAR), by enabling measurement of ground deformation at a global scale every few days, may hold the key to understanding these interactions. However, atmospheric propagation delays often exceed the ground deformation of interest despite state-of-the-art processing, so InSAR analysis requires expert interpretation and a priori knowledge of fault systems, precluding global investigations of deformation dynamics. Here we show that a deep auto-encoder architecture tailored to untangle ground deformation from noise in InSAR time series autonomously extracts deformation signals, without prior knowledge of a fault's location or slip behaviour. Applied to InSAR data over the North Anatolian Fault, our method reaches a 2 mm detection threshold, revealing a slow earthquake twice as extensive as previously recognized. We further explore the generalization of our approach to inflation/deflation-induced deformation, applying the same methodology to the geothermal field of Coso, California.
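    To illustrate the auto-encoder idea at its simplest, the sketch below compresses a noisy displacement time series through a low-dimensional bottleneck and reconstructs it, so that smooth deformation tends to be kept while high-variance atmospheric noise is discarded. The series length, latent size, and dense architecture are assumptions for illustration, not the paper's tailored network.

```python
# Hedged sketch of a bottleneck auto-encoder for an InSAR displacement time series.
# Series length and latent dimension are assumed values for this example.
import torch
import torch.nn as nn

class TimeSeriesAutoencoder(nn.Module):
    """Encode a noisy displacement series into a small code, then decode it back."""
    def __init__(self, n_steps=100, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_steps, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),      # bottleneck
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, n_steps),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TimeSeriesAutoencoder()
series = torch.randn(16, 100)                    # 16 synthetic noisy time series
recon = model(series)
loss = nn.functional.mse_loss(recon, series)     # reconstruction objective
loss.backward()
```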