    The SARptical Dataset for Joint Analysis of SAR and Optical Image in Dense Urban Area

    The joint interpretation of very high resolution SAR and optical images in dense urban areas is not trivial due to the distinct imaging geometries of the two types of images. In particular, the inevitable layover caused by the side-looking SAR imaging geometry renders this task even more challenging. Only recently has the "SARptical" framework [1], [2] proposed a promising solution to tackle this. SARptical can trace individual SAR scatterers to corresponding high-resolution optical images via rigorous 3-D reconstruction and matching. This paper introduces the SARptical dataset, a dataset of over 10,000 pairs of corresponding SAR and optical image patches extracted from TerraSAR-X high-resolution spotlight images and aerial UltraCAM optical images. This dataset opens new opportunities for multisensor data analysis: one can analyze the geometry, material, and other properties of the imaged objects in both the SAR and the optical image domain. More advanced applications, such as SAR and optical image matching via deep learning [3], are now also possible. Comment: This manuscript was submitted to IGARSS 201
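    As a sketch of how such paired patches might be consumed, the snippet below pairs SAR and optical patch files by a shared filename stem. The flat two-folder layout and the `.npy` extension are assumptions for illustration, not the actual SARptical distribution format.

    ```python
    import os

    def pair_patches(sar_dir, opt_dir):
        """Pair SAR and optical patches that share a filename stem.

        Assumes each patch pair is stored as <id>.npy in both folders;
        patches present in only one folder are skipped.
        """
        sar = {os.path.splitext(f)[0]: os.path.join(sar_dir, f)
               for f in os.listdir(sar_dir)}
        opt = {os.path.splitext(f)[0]: os.path.join(opt_dir, f)
               for f in os.listdir(opt_dir)}
        # Keep only stems present in both modalities, in a stable order.
        common = sorted(sar.keys() & opt.keys())
        return [(sar[k], opt[k]) for k in common]
    ```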

    The SEN1-2 Dataset for Deep Learning in SAR-Optical Data Fusion

    While deep learning techniques have an increasing impact on many technical fields, gathering sufficient amounts of training data is a challenging problem in remote sensing. In particular, this holds for applications involving data from multiple sensors with heterogeneous characteristics. One example is the fusion of synthetic aperture radar (SAR) data and optical imagery. With this paper, we publish the SEN1-2 dataset to foster deep learning research in SAR-optical data fusion. SEN1-2 comprises 282,384 pairs of corresponding image patches, collected from across the globe and throughout all meteorological seasons. Besides a detailed description of the dataset, we show exemplary results for several possible applications, such as SAR image colorization, SAR-optical image matching, and the creation of artificial optical images from SAR input data. Since SEN1-2 is the first large open dataset of this kind, we believe it will support further developments in the field of deep learning for remote sensing as well as multi-sensor data fusion. Comment: accepted for publication in the ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences (online from October 2018)
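    Before feeding SAR backscatter into a network for tasks like colorization or matching, intensities are commonly converted to dB and rescaled to a bounded range. The sketch below shows this standard preprocessing step; the clipping range is an assumed convention, not one specified by the SEN1-2 paper.

    ```python
    import numpy as np

    def sar_to_input(backscatter, lo_db=-25.0, hi_db=0.0):
        """Convert linear SAR backscatter to dB and rescale to [0, 1].

        lo_db/hi_db bound the dynamic range kept for the network;
        values outside are clipped.
        """
        eps = 1e-6  # avoid log of zero on shadow/no-data pixels
        db = 10.0 * np.log10(np.maximum(backscatter, eps))
        db = np.clip(db, lo_db, hi_db)
        return (db - lo_db) / (hi_db - lo_db)
    ```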

    Deep learning in remote sensing: a review

    Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, as a major breakthrough in the field, deep learning has proven to be an extremely powerful tool in many fields. Shall we embrace deep learning as the key to all? Or should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we encourage remote sensing scientists to bring their expertise into deep learning, and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges such as climate change and urbanization. Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine

    HR-SAR-Net: A Deep Neural Network for Urban Scene Segmentation from High-Resolution SAR Data

    Synthetic aperture radar (SAR) data is becoming increasingly available to a wide range of users through commercial service providers, with resolutions reaching 0.5 m/px. Segmenting SAR data still requires skilled personnel, limiting the potential for large-scale use. We show that it is possible to automatically and reliably perform urban scene segmentation from next-generation-resolution SAR data (0.15 m/px) using deep neural networks (DNNs), achieving a pixel accuracy of 95.19% and a mean IoU of 74.67% with data collected over a region of merely 2.2 km². The presented DNN is not only effective, but also very small, with only 63k parameters, and computationally simple enough to achieve a throughput of around 500 Mpx/s using a single GPU. We further identify that additional SAR receive antennas and data from multiple flights massively improve the segmentation accuracy. We describe a procedure for generating a high-quality segmentation ground truth from multiple inaccurate building and road annotations, which has been crucial to achieving these segmentation results.
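    The two reported metrics can both be derived from a class confusion matrix. The sketch below shows one standard way to compute them; the class count and any ignore-label handling of the paper's actual evaluation are not specified here, so this is a generic implementation rather than the authors' exact protocol.

    ```python
    import numpy as np

    def pixel_accuracy_and_miou(pred, target, num_classes):
        """Overall pixel accuracy and mean IoU from integer label maps."""
        pred = np.asarray(pred).ravel()
        target = np.asarray(target).ravel()
        # Confusion matrix: rows = ground truth, columns = prediction.
        cm = np.zeros((num_classes, num_classes), dtype=np.int64)
        np.add.at(cm, (target, pred), 1)
        pixel_acc = np.trace(cm) / cm.sum()
        # Per-class IoU = TP / (TP + FP + FN); skip classes absent
        # from both prediction and ground truth.
        tp = np.diag(cm)
        denom = cm.sum(axis=0) + cm.sum(axis=1) - tp
        valid = denom > 0
        miou = (tp[valid] / denom[valid]).mean()
        return pixel_acc, miou
    ```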

    Deep Learning in Remote Sensing: A Comprehensive Review and List of Resources

    Central to the looming paradigm shift toward data-intensive science, machine-learning techniques are becoming increasingly important. In particular, deep learning has proven to be both a major breakthrough and an extremely powerful tool in many fields. Shall we embrace deep learning as the key to everything? Or should we resist a black-box solution? These are controversial issues within the remote-sensing community. In this article, we analyze the challenges of using deep learning for remote-sensing data analysis, review recent advances, and provide resources we hope will make deep learning in remote sensing seem ridiculously simple. More importantly, we encourage remote-sensing scientists to bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges, such as climate change and urbanization

    Deep Vision in Optical Imagery: From Perception to Reasoning

    Deep learning has achieved extraordinary success in a wide range of computer vision tasks over the past years. Remote sensing data have different properties compared to natural images and videos, due to their unique imaging techniques, viewing angles, etc. For instance, hyperspectral images usually have hundreds of spectral bands, offering additional information, and the size of objects (e.g., vehicles) in remote sensing images is quite limited, which brings challenges for detection and segmentation tasks. This thesis focuses on two kinds of remote sensing data, namely hyper/multi-spectral and high-resolution images, and explores several methods to find answers to the following questions:
    - Compared with natural images or videos in computer vision, the unique asset of hyper/multi-spectral data is their rich spectral information. But what does this "additional" information bring for learning a network, and how do we take full advantage of these spectral bands?
    - Remote sensing images at high resolution have quite different characteristics, bringing challenges for several tasks, for example, small object segmentation. Can we devise tailored networks for such tasks?
    - Deep networks have produced stunning results in a variety of perception tasks, e.g., image classification, object detection, and semantic segmentation, while the capacity to reason about relations over space is vital for intelligent species. Can a network or module with reasoning capacity benefit the parsing of remote sensing data?
    To this end, several networks are devised to figure out what a network learns from hyperspectral images and how to use spectral bands efficiently. In addition, a multi-task learning network is investigated for the instance segmentation of vehicles from aerial images and videos. Finally, relational reasoning modules are designed to improve the semantic segmentation of aerial images.
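    A relational reasoning module of the kind mentioned can be sketched as non-local self-attention over a flattened feature map: every position attends to every other, so long-range spatial relations can influence the local prediction. The projections and shapes below are illustrative assumptions, not the thesis's exact design.

    ```python
    import numpy as np

    def spatial_reasoning(features, wq, wk, wv):
        """Non-local (self-attention) relation module.

        features: (H*W, C) flattened feature map;
        wq, wk, wv: (C, D) learned query/key/value projections.
        Returns relation-augmented features of shape (H*W, D).
        """
        q, k, v = features @ wq, features @ wk, features @ wv
        # Scaled dot-product similarity between all position pairs.
        logits = q @ k.T / np.sqrt(k.shape[1])
        # Numerically stable softmax over the key axis.
        attn = np.exp(logits - logits.max(axis=1, keepdims=True))
        attn /= attn.sum(axis=1, keepdims=True)
        return attn @ v
    ```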
