
    Multi-temporal speckle reduction with self-supervised deep neural networks

    Speckle filtering is generally a prerequisite to the analysis of synthetic aperture radar (SAR) images. Tremendous progress has been achieved in the domain of single-image despeckling, where the latest techniques rely on deep neural networks to restore the various structures and textures peculiar to SAR images. The availability of time series of SAR images offers the possibility of improving speckle filtering by combining different speckle realizations over the same area. Supervised training of deep neural networks requires ground-truth speckle-free images; such images can only be obtained indirectly through some form of spatial or temporal averaging and are therefore imperfect. Given the very high restoration quality reachable by multi-temporal speckle filtering, these limitations of ground-truth images need to be circumvented. We extend MERLIN, a recent self-supervised training strategy for single-look complex SAR images, to the case of multi-temporal filtering. This requires modeling the sources of statistical dependence in the spatial and temporal dimensions as well as between the real and imaginary components of the complex amplitudes. Quantitative analysis on datasets with simulated speckle indicates a clear improvement in speckle reduction when additional SAR images are included. Our method is then applied to stacks of TerraSAR-X images and shown to outperform competing multi-temporal speckle filtering approaches. The code of the trained models is made freely available on the GitLab of the IMAGES team of the LTCI Lab, Télécom Paris, Institut Polytechnique de Paris (https://gitlab.telecom-paris.fr/ring/multi-temporal-merlin/).
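    Below is a minimal, hedged sketch (in PyTorch) of the self-supervised idea behind MERLIN: because the real and imaginary parts of a single-look complex SAR image carry independent speckle, a network fed one component can be trained against the other without any speckle-free reference. The tiny CNN, the mean-squared-error surrogate loss, and the random tensors are placeholders for illustration and do not reproduce the authors' architecture, likelihood-based loss, or released code.

```python
# Minimal sketch of a MERLIN-style self-supervised despeckling step (PyTorch).
# NOT the authors' released code: the tiny CNN, the L2 surrogate loss, and the
# random data below stand in for the real model, the paper's loss, and actual
# multi-temporal single-look complex SAR stacks.
import torch
import torch.nn as nn

class TinyDespeckler(nn.Module):
    """Placeholder CNN mapping one complex component to a clean estimate."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, in_channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# Simulated multi-temporal stack: batch x dates x height x width.
dates = 3
real_part = torch.randn(1, dates, 64, 64)  # component seen by the network
imag_part = torch.randn(1, dates, 64, 64)  # held-out component used as target

model = TinyDespeckler(in_channels=dates)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Self-supervised step: since the two components carry independent speckle,
# predicting one from the other pushes the network toward the underlying
# reflectivity without any speckle-free ground truth.
for _ in range(5):
    optimizer.zero_grad()
    prediction = model(real_part)
    loss = nn.functional.mse_loss(prediction, imag_part)  # surrogate loss
    loss.backward()
    optimizer.step()
```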

    An icon-based synoptic visualization of fully polarimetric radar data

    The visualization of fully polarimetric radar data is hindered by traditional remote sensing display methodologies because of the large number of parameters per pixel and the non-scalar nature of variables such as phase difference. In this paper, a new method is described that uses icons instead of image pixels to represent the data, so that polarimetric properties and geographic context can be visualized together. The icons are parameterized using the alpha-entropy decomposition of the polarimetric data. The resulting image allows five variables to be displayed simultaneously: unpolarized power, alpha angle, polarimetric entropy, anisotropy and orientation angle. Examples are given for both airborne and laboratory-based imaging.
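    The following matplotlib sketch illustrates the general idea of an icon-per-cell display. The specific mapping chosen here (unpolarized power to icon size, orientation angle to rotation, alpha angle to colour, entropy to eccentricity, anisotropy to edge width) is an assumption for illustration only, not the parameterization used in the paper.

```python
# Illustrative icon-per-cell display of polarimetric parameters (matplotlib).
# The variable-to-glyph mapping is a guess at one possible parameterization,
# not the encoding described in the paper; the data are random placeholders.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse

rng = np.random.default_rng(0)
rows, cols = 8, 8
power       = rng.uniform(0.2, 1.0, (rows, cols))    # unpolarized power
alpha_deg   = rng.uniform(0.0, 90.0, (rows, cols))   # alpha angle
entropy     = rng.uniform(0.0, 1.0, (rows, cols))    # polarimetric entropy
anisotropy  = rng.uniform(0.0, 1.0, (rows, cols))    # anisotropy
orientation = rng.uniform(0.0, 180.0, (rows, cols))  # orientation angle

fig, ax = plt.subplots(figsize=(6, 6))
for i in range(rows):
    for j in range(cols):
        width = power[i, j]                            # icon size from power
        height = width * (1.0 - 0.8 * entropy[i, j])   # eccentricity from entropy
        icon = Ellipse((j, rows - 1 - i), width, height,
                       angle=orientation[i, j],
                       facecolor=plt.cm.viridis(alpha_deg[i, j] / 90.0),
                       edgecolor="black",
                       linewidth=0.3 + anisotropy[i, j])
        ax.add_patch(icon)
ax.set_xlim(-1, cols)
ax.set_ylim(-1, rows)
ax.set_aspect("equal")
ax.set_title("Icon-based polarimetric display (illustrative mapping)")
plt.show()
```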

    Application of RADARSAT-2 Polarimetric Data for Land Use and Land Cover Classification and Crop Monitoring in Southwestern Ontario

    Timely and accurate information on land surfaces is desirable for land change detection and crop condition monitoring. Optical data have been widely used in Land Use and Land Cover (LU/LC) mapping and crop condition monitoring, but high-quality optical images are not always available because of unfavorable weather conditions. Synthetic Aperture Radar (SAR) sensors, such as RADARSAT-2, can transmit microwaves through cloud cover and light rain and thus offer an alternative data source. This study investigates the potential of multi-temporal polarimetric RADARSAT-2 data for LU/LC classification and crop monitoring in the urban-rural fringe of London, Ontario. Nine LU/LC classes were identified with a high overall accuracy of 91.0%. High correlations were also found between several polarimetric parameters and the Normalized Difference Vegetation Index (NDVI) within corn and soybean fields. The results demonstrate the capability of RADARSAT-2 for LU/LC classification and crop condition monitoring.
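    A short, illustrative sketch of the kind of per-field correlation analysis mentioned above is given below; the NDVI and polarimetric-entropy arrays are random placeholders standing in for values that would be extracted from RADARSAT-2 decompositions and an optical NDVI time series.

```python
# Illustrative per-field correlation between a multi-temporal polarimetric
# parameter and NDVI. The arrays are synthetic placeholders, not study data.
import numpy as np

rng = np.random.default_rng(42)
n_dates, n_fields = 8, 30

# Simulated per-field, per-date values (rows = acquisition dates).
ndvi = rng.uniform(0.2, 0.9, (n_dates, n_fields))
polarimetric_entropy = 0.6 * ndvi + rng.normal(0.0, 0.05, (n_dates, n_fields))

# Pearson correlation across dates, computed field by field.
per_field_r = np.array([
    np.corrcoef(ndvi[:, f], polarimetric_entropy[:, f])[0, 1]
    for f in range(n_fields)
])
print(f"mean Pearson r across fields: {per_field_r.mean():.2f}")
```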