Sentinel 1 and 2: Searching for Physical Image Content
ESA’s upcoming Sentinel-1 and Sentinel-2 missions open new perspectives for the application-oriented use of SAR and/or optical multispectral images. We expect short and regular revisit times as well as easily available and well documented products with attractive features such as cross-polarized SAR images and optical images delivered, for instance, as spectral reflectance data.
Thus, users no longer have to work with plain digital numbers or raw detector counts; instead, the data provided as Sentinel products can be understood as samples of calibrated and validated physical quantities. As a consequence, users can concentrate immediately on the physics and quantitative details of the observable phenomena.
This also affects content-based image retrieval, where a user searches for images containing phenomena similar to given examples. While retrieval systems based on visible image data can only exploit characteristic shapes or patterns, the use of Sentinel data will address the determination of real physical relationships. In particular, this allows a physics-based analysis of image time series, where one analyzes spatio-temporal phenomena.
This physics-based approach will allow us to employ content-based image retrieval as an attractive tool for the analysis of SAR and optical images.
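As a minimal sketch of the retrieval idea above (not an implementation from the paper; the feature choice and distance metric are illustrative assumptions), one can summarize each calibrated multispectral patch by simple per-band reflectance statistics and rank archive patches by their distance to a query:

```python
import numpy as np

def patch_features(patch):
    """Summarize a calibrated multispectral patch (bands x H x W)
    by the mean and standard deviation of reflectance per band."""
    return np.concatenate([patch.mean(axis=(1, 2)), patch.std(axis=(1, 2))])

def retrieve(query, archive, k=3):
    """Return indices of the k archive patches whose spectral
    statistics are closest (Euclidean distance) to the query."""
    q = patch_features(query)
    dists = [np.linalg.norm(patch_features(p) - q) for p in archive]
    return np.argsort(dists)[:k]

# Toy archive of random reflectance patches; the query is a slightly
# perturbed copy of patch 7, so patch 7 should rank first.
rng = np.random.default_rng(0)
archive = [rng.uniform(0.0, 1.0, size=(4, 8, 8)) for _ in range(20)]
query = archive[7] + rng.normal(0.0, 0.01, size=(4, 8, 8))
print(retrieve(query, archive))
```

Because the features are physical reflectance statistics rather than raw detector counts, the same distance remains comparable across acquisitions, which is the point the paragraph above makes.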
Knowledge extraction from Copernicus satellite data
We describe two alternative approaches for extracting knowledge from high- and medium-resolution Synthetic Aperture Radar (SAR) images of the European Sentinel-1 satellites. To this end, we selected two basic types of images, namely images depicting Arctic shipping routes with icebergs and, in contrast, coastal areas with various types of land use and human-made facilities. In both cases, the extracted knowledge is delivered as (semantic) categories (i.e., local content labels) of adjacent image patches taken from big SAR images. Machine learning strategies then helped us design and validate two automated knowledge extraction systems that can be extended to the understanding of multispectral satellite images.
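The patch-labelling workflow described above can be sketched in a few lines (a toy stand-in, not the authors' system; the tiling size, threshold, and category names are invented for illustration): tile a big SAR image into adjacent patches and assign each patch a semantic label with a classifier.

```python
import numpy as np

def tile(image, size):
    """Cut a large SAR amplitude image into adjacent, non-overlapping
    square patches of the given size (ragged edges are dropped)."""
    h, w = image.shape
    return [image[r:r + size, c:c + size]
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]

def label_patch(patch, threshold=0.5):
    """Toy stand-in for a trained classifier: assign a category
    from the mean backscatter of the patch."""
    return "bright-target" if patch.mean() > threshold else "background"

# A 64x64 scene with one bright 16x16 target (e.g., an iceberg or
# a human-made facility) aligned with exactly one patch.
image = np.zeros((64, 64))
image[16:32, 16:32] = 1.0
labels = [label_patch(p) for p in tile(image, 16)]
print(labels.count("bright-target"))  # prints 1
```

In a real system the threshold rule would be replaced by a trained classifier, but the tiling-then-labelling structure is the same.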
Deep Learning Training and Benchmarks for Earth Observation Images: Data Sets, Features, and Procedures
Deep learning methods are often used for image classification or local object segmentation. The corresponding test and validation data sets are an integral part of the learning process and also of the algorithm performance evaluation. High and particularly very high-resolution Earth observation (EO) applications based on satellite images primarily aim at the semantic labeling of land cover structures or objects as well as of temporal evolution classes. However, one of the main EO objectives is physical parameter retrieval, such as temperatures, precipitation, and crop yield predictions. Therefore, we need reliably labeled data sets and tools to train the developed algorithms and to assess the performance of our deep learning paradigms. Generally, imaging sensors generate a visually understandable representation of the observed scene. However, this does not hold for many EO images, where the recorded images only depict a spectral subset of the scattered light field, thus generating an indirect signature of the imaged object. This puts a spotlight on EO image understanding as a new and particular challenge for Machine Learning (ML) and Artificial Intelligence (AI). This chapter reviews and analyses new approaches to EO imaging that leverage recent advances in physical process-based ML and AI methods and signal processing.
Quantum Transfer Learning for Real-World, Small, and High-Dimensional Datasets
Quantum machine learning (QML) networks promise some computational (or quantum) advantage over certain conventional deep learning (DL) techniques for classifying supervised datasets (e.g., satellite images), owing to their expressive power as quantified by their local effective dimension. There are, however, two main challenges regardless of the promised quantum advantage: 1) Currently available quantum bits (qubits) are very few in number, while real-world datasets are characterized by hundreds of high-dimensional elements (i.e., features); additionally, there is no single unified approach for embedding real-world high-dimensional datasets in a limited number of qubits. 2) Some real-world datasets are too small for training intricate QML networks. Hence, to tackle these two challenges for benchmarking and validating QML networks on real-world, small, and high-dimensional datasets in one go, we employ quantum transfer learning composed of a multi-qubit QML network and a very deep convolutional network (with a VGG16 architecture) that extracts informative features from any small, high-dimensional dataset. We use real-amplitude and strongly-entangling N-layer QML networks with and without data re-uploading layers as the multi-qubit QML network, and evaluate their expressive power quantified by their local effective dimension; the lower the local effective dimension of a QML network, the better its performance on unseen data. Our numerical results show that the strongly-entangling N-layer QML network has a lower local effective dimension than the real-amplitude QML network and outperforms it on the hard-to-classify three-class labelling problem. In addition, quantum transfer learning helps tackle the two challenges mentioned above for benchmarking and validating QML networks on real-world, small, and high-dimensional datasets.
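The "real-amplitude" ansatz mentioned above can be illustrated with a tiny classical simulation (a two-qubit numpy sketch, not the paper's code; the layer count and angles are arbitrary): alternating layers of real-valued RY rotations and a CNOT entangler applied to |00>, so all amplitudes stay real.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation; real-valued, hence 'real amplitude'."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT entangling gate on two qubits (control = first qubit).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def real_amplitude_circuit(thetas):
    """Apply alternating RY-rotation and CNOT layers to |00>.
    thetas has shape (layers, 2): one RY angle per qubit per layer."""
    state = np.zeros(4)
    state[0] = 1.0
    for t0, t1 in thetas:
        state = np.kron(ry(t0), ry(t1)) @ state  # rotation layer
        state = CNOT @ state                     # entangling layer
    return state

state = real_amplitude_circuit(np.array([[0.3, 1.1], [0.7, 0.2]]))
probs = state ** 2  # amplitudes are real, so p = a^2
print(np.round(probs.sum(), 6))  # prints 1.0
```

In the transfer-learning setting of the abstract, such a circuit would sit on top of frozen VGG16 features, with the angles trained as the classifier's parameters.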
Dialectical GAN for SAR Image Translation: From Sentinel-1 to TerraSAR-X
Contrary to optical images, Synthetic Aperture Radar (SAR) images lie in a part of the electromagnetic spectrum to which the human visual system is not accustomed. Thus, with more and more SAR applications, the demand for enhanced high-quality SAR images has increased considerably. However, high-quality SAR images entail high costs due to the limitations of current SAR devices and their image processing resources. To improve the quality of SAR images and to reduce the costs of their generation, we propose a Dialectical Generative Adversarial Network (Dialectical GAN) to generate high-quality SAR images. This method is based on the analysis of hierarchical SAR information and the "dialectical" structure of GAN frameworks. As a demonstration, a typical example is shown where a low-resolution SAR image (e.g., a Sentinel-1 image) with large ground coverage is translated into a high-resolution SAR image (e.g., a TerraSAR-X image). Three traditional algorithms are compared, and a new algorithm is proposed based on a network framework combining conditional WGAN-GP (Wasserstein Generative Adversarial Network - Gradient Penalty) loss functions and spatial Gram matrices under the rule of dialectics. Experimental results show that the SAR image translation works very well when we compare the results of our proposed method with the selected traditional methods.
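The spatial Gram matrices mentioned above play the role of a texture (style) term in the translation objective. A minimal numpy sketch of that term, under the usual style-transfer formulation and not taken from the paper (the normalization and feature shapes are assumptions):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a feature map (channels x H x W): channel-wise
    correlations that summarize texture statistics."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(fa, fb):
    """Squared Frobenius distance between the Gram matrices of two
    feature maps -- a texture term comparing generated and target SAR."""
    return np.sum((gram_matrix(fa) - gram_matrix(fb)) ** 2)

rng = np.random.default_rng(1)
f1 = rng.normal(size=(8, 16, 16))
print(style_loss(f1, f1))  # identical textures -> prints 0.0
```

In the paper's setting this term would be combined with the conditional WGAN-GP adversarial loss; here it simply shows why matching Gram matrices matches texture rather than pixel positions.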
Preparation of Scenarios for the Performance Optimization of a Content-Based Remote Sensing Image Mining System
Recent developments in the design of modern satellite ground segments include systems and tools for automated content analysis that allow users to conduct systematic semantic searches within satellite image data archives. The need for such tools becomes ever more pressing as future space-borne imaging sensors will deliver enormous quantities of data that cannot be studied manually. Typical examples from a European perspective are described in [1] and [2]. Within this framework, the European Space Agency (ESA) has started to fund the Earth Observation Librarian (EOLib) project to set up the next generation of image information mining systems [3]. Here we report on the preparation of scenarios that are needed to train such systems and to verify and optimize their performance.