10 research outputs found

    Knowledge extraction from Copernicus satellite data

    We describe two alternative approaches for extracting knowledge from high- and medium-resolution Synthetic Aperture Radar (SAR) images of the European Sentinel-1 satellites. To this end, we selected two basic types of images, namely images depicting Arctic shipping routes with icebergs and, in contrast, coastal areas with various types of land use and human-made facilities. In both cases, the extracted knowledge is delivered as (semantic) categories (i.e., local content labels) of adjacent image patches from big SAR images. Machine learning strategies then helped us design and validate two automated knowledge extraction systems that can be extended to the understanding of multispectral satellite images.
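
    The patch-based labelling described above can be illustrated with a short sketch: a large SAR scene is cut into adjacent patches and each patch receives a content label from a trained classifier. The array layout, patch size, per-patch descriptor, and the `classifier` object are assumptions made for illustration, not the systems described in the paper.

```python
# Minimal sketch of patch-wise labelling of a large SAR scene, assuming the
# scene is already available as a 2-D intensity array and that `classifier`
# is any pre-trained model with a scikit-learn-style predict() method.
import numpy as np

def label_patches(scene: np.ndarray, classifier, patch_size: int = 128):
    """Cut the scene into adjacent, non-overlapping patches and label each."""
    rows = scene.shape[0] // patch_size
    cols = scene.shape[1] // patch_size
    labels = np.empty((rows, cols), dtype=object)
    for i in range(rows):
        for j in range(cols):
            patch = scene[i * patch_size:(i + 1) * patch_size,
                          j * patch_size:(j + 1) * patch_size]
            # Simple per-patch descriptor (mean/std of backscatter) as a stand-in
            # for the feature vectors used in the paper.
            features = np.array([[patch.mean(), patch.std()]])
            labels[i, j] = classifier.predict(features)[0]
    return labels  # grid of semantic content labels, e.g. "iceberg", "open water"
```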

    Earth Observation Data Mining: A Use Case for Forest Monitoring

    The increased number of free and open satellite images has led to new applications of these data. Among them is the systematic classification of land cover/use types based on patterns of settlements or agriculture recorded by satellite imagers, in particular the identification and quantification of temporal changes. In this paper, we present guidelines and practical examples of how to obtain reliable image patch classification results based on data mining techniques for detecting possible changes within a data set. We focus on a single scenario, namely forest monitoring using Earth observation Synthetic Aperture Radar data acquired by Sentinel-1 and multispectral data acquired by Sentinel-2.
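
    As a hedged illustration of the change detection step, the sketch below compares two patch classification maps of the same area from different acquisition dates and counts class transitions; the class names and toy arrays are invented for the example.

```python
# Hypothetical sketch: quantify temporal changes between two patch classification
# maps of the same area (e.g. from two Sentinel-1/Sentinel-2 acquisition dates).
import numpy as np
from collections import Counter

def change_statistics(labels_t0: np.ndarray, labels_t1: np.ndarray):
    """Count patch-level class transitions, e.g. forest -> clear-cut."""
    assert labels_t0.shape == labels_t1.shape
    transitions = Counter(zip(labels_t0.ravel(), labels_t1.ravel()))
    changed = sum(n for (a, b), n in transitions.items() if a != b)
    return transitions, changed / labels_t0.size

# Example with toy data:
t0 = np.array([["forest", "forest"], ["water", "forest"]])
t1 = np.array([["forest", "clear-cut"], ["water", "forest"]])
transitions, change_ratio = change_statistics(t0, t1)
print(transitions, change_ratio)  # one forest -> clear-cut transition, 25% changed
```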

    Deep Learning Training and Benchmarks for Earth Observation Images: Data Sets, Features, and Procedures

    Deep learning methods are often used for image classification or local object segmentation. The corresponding test and validation data sets are an integral part of the learning process and of the algorithm performance evaluation. High- and particularly very high-resolution Earth observation (EO) applications based on satellite images primarily aim at the semantic labeling of land cover structures or objects as well as of temporal evolution classes. However, one of the main EO objectives is the retrieval of physical parameters such as temperature, precipitation, and crop yield. Therefore, we need reliably labeled data sets and tools to train the developed algorithms and to assess the performance of our deep learning paradigms. Generally, imaging sensors generate a visually understandable representation of the observed scene. However, this does not hold for many EO images, where the recorded images only depict a spectral subset of the scattered light field, thus generating an indirect signature of the imaged object. This makes EO image understanding a new and particular challenge for Machine Learning (ML) and Artificial Intelligence (AI). This chapter reviews and analyses new approaches to EO imaging that leverage recent advances in physical process-based ML and AI methods and in signal processing.
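
    A minimal sketch of what such a training procedure looks like in practice is given below, assuming PyTorch; the tiny network, the random tensors, and the five classes are placeholders rather than the data sets and benchmarks discussed in the chapter.

```python
# Minimal PyTorch sketch of training a patch classifier on a labelled EO data set.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

patches = torch.randn(256, 1, 64, 64)          # e.g. single-channel SAR patches
labels = torch.randint(0, 5, (256,))           # 5 semantic classes (illustrative)
loader = DataLoader(TensorDataset(patches, labels), batch_size=32, shuffle=True)

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 5),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)   # compare predictions with reference labels
        loss.backward()
        optimizer.step()
```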

    Artificial Intelligence Data Science Methodology for Earth Observation

    This chapter describes a Copernicus Access Platform Intermediate Layers Small-Scale Demonstrator, which is a general platform for the handling, analysis, and interpretation of Earth observation satellite images, mainly exploiting big data of the European Copernicus Programme by artificial intelligence (AI) methods. From 2020, the platform will be applied at a regional and national level to various use cases such as urban expansion, forest health, and natural disasters. Its workflows allow the selection of satellite images from data archives, the extraction of useful information from the metadata, the generation of descriptors for each individual image, the ingestion of image and descriptor data into a common database, the assignment of semantic content labels to image patches, and the search for and retrieval of image patches with similar content. The two main components, namely data mining and data fusion, are detailed and validated. The most important contributions of this chapter are the integration of these two components with a Copernicus platform on top of the European DIAS system, for the purpose of large-scale Earth observation image annotation, and the measurement of the clustering and classification performance of various Copernicus Sentinel and third-party mission data. The average classification accuracy ranges from 80% to 95%, depending on the type of images.
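
    The retrieval step of this workflow, i.e. finding patches whose content is similar to a labelled example by querying the descriptors stored in the common database, can be sketched as follows; the descriptor dimensionality, the patch IDs, and the scikit-learn nearest-neighbour index are illustrative assumptions.

```python
# Illustrative sketch of content-based retrieval over per-patch descriptors.
import numpy as np
from sklearn.neighbors import NearestNeighbors

descriptors = np.random.rand(1000, 32)   # one 32-dim feature vector per patch
patch_ids = np.arange(1000)              # keys as stored in the common database

index = NearestNeighbors(n_neighbors=5).fit(descriptors)

query = descriptors[42]                  # a patch the user labelled, e.g. "harbour"
_, neighbour_idx = index.kneighbors(query.reshape(1, -1))
similar_patches = patch_ids[neighbour_idx[0]]
print(similar_patches)                   # candidate patches with similar content
```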

    Earth Observation Semantics and Data Analytics for Coastal Environmental Areas

    Current satellite images provide us with detailed information about the state of our planet, as well as about our technical infrastructure and human activities. A range of existing commercial and scientific applications try to analyze the physical content and meaning of satellite images by exploiting the data of individual, multiple, or temporal sequences of images. However, what we still need today are advanced tools to automatically analyze satellite images in order to extract and understand their full content and meaning. To remedy this exploration problem, we outline a highly automated and application-adapted data-mining and content interpretation system consisting of five main components, namely Data Sources (selection and storage of relevant images), Data Model Generation (patch cutting and generation of feature vectors), Database Management System (systematic data storage), Knowledge Discovery in Databases (clustering and content labeling), and Statistical Analytics (generation of classification maps). As test sites, we selected UNESCO-protected areas in Europe that include coastal areas for monitoring and a well-known area in the Mediterranean Sea that contains fish cages. The analyzed areas are: the Curonian Lagoon in Lithuania and Russia, the Danube Delta in Romania, the Hardangervidda in Norway, and the Wadden Sea in the Netherlands. For these areas, we provide the results of our image content classification system, consisting of image classification maps and additional statistical analytics, based on three different use cases. The first use case is the detection of wind turbines vs. boats in the Wadden Sea. The second use case is the identification of fish cages/aquaculture along the Mediterranean coast. Finally, the third use case describes the differences between beaches, dams, dunes, and tidal flats in the Danube Delta, the Wadden Sea, etc. The average classification accuracy that we obtained ranges from 80% to 95%, depending on the type of available images.
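
    The Knowledge Discovery in Databases component (clustering and content labeling) and the subsequent map generation can be sketched in a few lines; the patch grid, the number of clusters, and the semantic label dictionary below are assumptions made for the example, not the configuration used in the paper.

```python
# Hedged sketch: cluster per-patch feature vectors and turn the cluster indices
# into a classification map that an expert then labels semantically.
import numpy as np
from sklearn.cluster import KMeans

grid_rows, grid_cols, n_features = 20, 20, 16
features = np.random.rand(grid_rows * grid_cols, n_features)  # one vector per patch

clusters = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(features)
classification_map = clusters.reshape(grid_rows, grid_cols)

# An expert attaches semantic content labels to the clusters, e.g.:
semantic_labels = {0: "open water", 1: "tidal flat", 2: "dune",
                   3: "beach", 4: "wind turbine", 5: "boat"}
labelled_map = np.vectorize(semantic_labels.get)(classification_map)
```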

    Understanding satellite images: a data mining module for Sentinel images

    The increased number of free and open Sentinel satellite images has led to new applications of these data. Among them is the systematic classification of land cover/use types based on patterns of settlements or agriculture recorded by these images, in particular the identification and quantification of their temporal changes. In this paper, we present guidelines and practical examples of how to obtain and validate rapid and reliable image patch labelling results based on data mining techniques for detecting these temporal changes, and how to present them as classification maps and/or statistical analytics. This represents a new systematic validation approach for semantic image content verification. We focus on a number of different scenarios proposed by the user community using Sentinel data. From a large number of potential use cases, we selected three main ones, namely forest monitoring, flood monitoring, and macro-economics/urban monitoring.
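
    A simple sketch of the statistical analytics that can accompany such classification maps is shown below: per-class coverage percentages for two acquisition dates, computed from toy label maps that stand in for real Sentinel results.

```python
# Sketch of per-class coverage statistics for two dates of a classified area.
import numpy as np
import pandas as pd

def class_statistics(label_map: np.ndarray) -> pd.Series:
    classes, counts = np.unique(label_map, return_counts=True)
    return pd.Series(100.0 * counts / label_map.size, index=classes, name="coverage %")

map_2019 = np.random.choice(["forest", "urban", "water"], size=(50, 50), p=[0.6, 0.2, 0.2])
map_2020 = np.random.choice(["forest", "urban", "water"], size=(50, 50), p=[0.5, 0.3, 0.2])

analytics = pd.concat([class_statistics(map_2019), class_statistics(map_2020)],
                      axis=1, keys=["2019", "2020"])
print(analytics)  # coverage per class and date, ready for a chart or report
```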

    SAR Image Land Cover Datasets for Classification Benchmarking of Temporal Changes

    The increased availability of high-resolution SAR (Synthetic Aperture Radar) satellite images has led to new civil applications of these data. Among them is the systematic classification of land cover types based on patterns of settlements or agriculture recorded by SAR imagers, in particular the identification and quantification of temporal changes. A systematic (re-)classification shall allow the assignment of continuously updated semantic content labels to local image patches. This necessitates a careful selection of well-defined and discernible categories contained in the image data, which have to be trained and validated. These steps are well established for optical images, while the peculiar imaging characteristics of SAR sensors often prevent a comparable approach. In particular, the vast range of SAR imaging parameters and the diversity of local targets impact the image product characteristics and need special care. In the following, we present guidelines and practical examples of how to obtain reliable image patch classification results for time series data with a limited number of given training data. We demonstrate that one can avoid the generation of simulated training data if the classification task is decomposed into physically meaningful subsets of characteristic target properties and important imaging parameters. The results obtained during training can then serve as benchmarking figures for subsequent image classification. This holds for typical remote sensing examples such as coastal monitoring or the characterization of urban areas, where we want to understand the transitions between individual land cover categories. For this purpose, a representative dataset can be obtained from the authors. A final proof of our concept is the comparison of classification results of selected target areas obtained by rather different SAR instruments. Despite the instrumental differences, the final results are surprisingly similar.
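
    Such benchmarking figures are typically reported as a confusion matrix and an overall accuracy against held-out reference labels; the short sketch below illustrates this with invented land cover labels rather than the categories of the published dataset.

```python
# Sketch of the benchmarking step: compare predicted patch labels against
# reference labels and report a confusion matrix plus overall accuracy.
from sklearn.metrics import accuracy_score, confusion_matrix

reference = ["urban", "water", "forest", "urban", "forest", "water", "urban"]
predicted = ["urban", "water", "forest", "forest", "forest", "water", "urban"]

classes = ["forest", "urban", "water"]
print(confusion_matrix(reference, predicted, labels=classes))
print("overall accuracy:", accuracy_score(reference, predicted))
```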

    Semantic Labelling of Globally Distributed Urban and Non-Urban Satellite Images Using High Resolution SAR Data

    While the analysis and understanding of multispectral (i.e., optical) remote sensing images has made considerable progress during the last decades, the automated analysis of SAR (Synthetic Aperture Radar) satellite images still needs innovative techniques to support non-expert users in the handling and interpretation of these big and complex data. In this paper, we present a survey of existing multispectral and SAR land cover image datasets. We then demonstrate how an advanced SAR image analysis system can be designed, implemented, and verified that is capable of generating semantically annotated classification results (e.g., maps) as well as local and regional statistical analytics such as graphical charts. The initial classification is based on Gabor features and is followed by class assignments (labelling). This is followed by the inclusion of expert knowledge via active learning with selected examples, and by the extraction of additional knowledge from public databases to refine the classification results. Then, based on the generated semantics, we can create new topic models, find typical country-specific phenomena and distributions, visualize them interactively, and present significant examples including confusion matrices. This semi-automated and flexible methodology allows several annotation strategies, includes dedicated analytics procedures, and can generate broad as well as detailed semantic (multi-)labels for all continents, as well as statistics or models for selected countries and cities. Here, we employ knowledge graphs and exploit ontologies. These components have already been validated successfully. The proposed methodology can also be adapted to other instruments.
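
    The initial Gabor-feature classification step can be sketched as follows, assuming scikit-image's Gabor filter and a standard SVM; the filter frequencies, toy patches, and two labels are placeholders rather than the actual configuration used in the paper.

```python
# Hedged sketch: Gabor texture features per patch, followed by a standard classifier.
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def gabor_features(patch: np.ndarray, frequencies=(0.1, 0.2, 0.4)) -> np.ndarray:
    feats = []
    for f in frequencies:
        real, _ = gabor(patch, frequency=f)   # response of one Gabor filter
        feats += [real.mean(), real.var()]    # simple texture statistics
    return np.array(feats)

# Toy training set: 40 random "patches" with made-up labels.
patches = [np.random.rand(64, 64) for _ in range(40)]
labels = ["urban"] * 20 + ["non-urban"] * 20
X = np.vstack([gabor_features(p) for p in patches])

classifier = SVC(kernel="rbf").fit(X, labels)
print(classifier.predict(gabor_features(patches[0]).reshape(1, -1)))
```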