    Very-High-Resolution SAR Images and Linked Open Data Analytics Based on Ontologies

    In this paper, we deal with the integration of multiple sources of information, such as Earth observation (EO) synthetic aperture radar (SAR) images and their metadata, semantic descriptors of the image content, and other publicly available geospatial data sources expressed as linked open data, in order to pose complex queries that support geospatial data analytics. Our approach lays the foundations for the development of richer tools and applications that focus on EO image analytics using ontologies and linked open data. We introduce a system architecture in which a common satellite image product is transformed from its initial format into actionable intelligence information, which includes image descriptors, metadata, image tiles, and semantic labels, resulting in an EO-data model. We also create a SAR image ontology based on our EO-data model and a two-level taxonomy classification scheme of the image content. We demonstrate our approach by linking high-resolution TerraSAR-X images with information from CORINE Land Cover (CLC), Urban Atlas (UA), GeoNames, and OpenStreetMap (OSM), all represented in the standard triple model of the Resource Description Framework (RDF).
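
    To make the linked-data representation concrete, here is a minimal Python sketch using rdflib that asserts a few triples about an image patch and answers a SPARQL query over them. The eo: namespace, the property names, and the patch identifier are illustrative assumptions, not the paper's actual EO-data model.

        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import RDF

        EO = Namespace("http://example.org/eo#")  # hypothetical ontology namespace
        g = Graph()
        g.bind("eo", EO)

        # Illustrative triples: one TerraSAR-X patch with a semantic label
        # and a CORINE Land Cover class it was linked to.
        patch = URIRef("http://example.org/eo/patch/TSX_scene01_tile42")
        g.add((patch, RDF.type, EO.ImagePatch))
        g.add((patch, EO.hasSemanticLabel, Literal("harbour")))
        g.add((patch, EO.hasCLCClass, Literal("123 Port areas")))

        # Complex queries are posed in SPARQL; this one retrieves all
        # patches annotated with the label "harbour".
        q = """
            PREFIX eo: <http://example.org/eo#>
            SELECT ?patch WHERE { ?patch eo:hasSemanticLabel "harbour" . }
        """
        for row in g.query(q):
            print(row.patch)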

    The Digital Earth Observation Librarian: A Data Mining Approach for Large Satellite Images Archives

    Over the years, various Earth Observation (EO) satellites have generated huge amounts of data. The extraction of latent information from these data repositories is not a trivial task, and new methodologies and tools capable of handling the size, complexity, and variety of the data are required. Data scientists require support for the data manipulation, labeling, and information extraction processes. This paper presents our Earth Observation Image Librarian (EOLib), a modular software framework which offers innovative image data mining capabilities for TerraSAR-X and, more generally, EO image data. The main goal of EOLib is to reduce the time needed to bring information to end-users from Payload Ground Segments (PGS). EOLib is composed of several modules which offer functionalities such as data ingestion, feature extraction from SAR (Synthetic Aperture Radar) data, metadata extraction, semantic definition of the image content through machine learning and data mining methods, advanced querying of the image archives based on content, metadata, and semantic categories, as well as 3-D visualization of the processed images. EOLib is operated by DLR’s (German Aerospace Center’s) Multi-Mission Payload Ground Segment of its Remote Sensing Data Center at Oberpfaffenhofen, Germany.
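
    The modular design can be pictured as a chain of independent processing steps that each enrich a product record. The following Python sketch shows one way such a chain might be wired together; the Product fields, the tiling scheme, and the toy labelling rule are assumptions for illustration, not EOLib's actual interfaces.

        from dataclasses import dataclass, field
        from typing import Callable, Dict, List, Optional
        import numpy as np

        @dataclass
        class Product:
            """One EO product moving through the pipeline (fields are illustrative)."""
            image: np.ndarray
            metadata: Dict[str, str] = field(default_factory=dict)
            features: Optional[np.ndarray] = None
            labels: List[str] = field(default_factory=list)

        def ingest(path: str) -> Product:
            # Placeholder ingestion: a real module would parse the product file.
            return Product(image=np.random.rand(256, 256), metadata={"source": path})

        def extract_features(p: Product) -> Product:
            # Toy descriptor: mean and standard deviation of 32x32 tiles.
            tiles = p.image.reshape(8, 32, 8, 32).swapaxes(1, 2).reshape(64, -1)
            p.features = np.stack([tiles.mean(axis=1), tiles.std(axis=1)], axis=1)
            return p

        def annotate(p: Product) -> Product:
            # Toy rule standing in for the learned semantic classifier.
            p.labels = ["bright" if m > 0.5 else "dark" for m in p.features[:, 0]]
            return p

        steps: List[Callable[[Product], Product]] = [extract_features, annotate]
        product = ingest("TSX_example_product.xml")
        for step in steps:
            product = step(product)
        print(product.labels[:5])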

    Semantic Indexing of TerraSAR-X and In Situ Data for Urban Analytics

    This paper presents the semantic indexing of TerraSAR-X images and in situ data. Image processing, machine learning methods, relevance feedback techniques, and human expertise are used to annotate the image content into a land use/land cover catalogue. All the generated information is stored in a geo-database that supports linking the different types of information and computing queries and analytics. We used 11 TerraSAR-X scenes over Germany and LUCAS as in situ data. The semantic index is composed of about 73 land use/land cover categories found in the TerraSAR-X test dataset and 84 categories found in the LUCAS dataset.
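
    A minimal way to picture the geo-database is a table of per-patch annotations that supports category queries and simple analytics. The Python sketch below uses an in-memory SQLite table; the schema and the records are illustrative assumptions, not the actual database or real TerraSAR-X/LUCAS annotations.

        import sqlite3

        # In-memory stand-in for the geo-database of semantic annotations.
        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE annotation (
            patch_id TEXT, scene TEXT, lon REAL, lat REAL, category TEXT)""")

        rows = [  # invented example annotations
            ("p001", "TSX_Munich", 11.58, 48.14, "high-density residential"),
            ("p002", "TSX_Munich", 11.55, 48.10, "forest"),
            ("p003", "TSX_Berlin", 13.40, 52.52, "industrial area"),
        ]
        conn.executemany("INSERT INTO annotation VALUES (?, ?, ?, ?, ?)", rows)

        # Analytics-style query: category counts per scene.
        for scene, cat, n in conn.execute(
                """SELECT scene, category, COUNT(*) FROM annotation
                   GROUP BY scene, category ORDER BY scene"""):
            print(scene, cat, n)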

    Artificial Intelligence Data Science Methodology for Earth Observation

    This chapter describes a Copernicus Access Platform Intermediate Layers Small-Scale Demonstrator, which is a general platform for the handling, analysis, and interpretation of Earth observation satellite images, mainly exploiting big data of the European Copernicus Programme by artificial intelligence (AI) methods. From 2020, the platform will be applied at a regional and national level to various use cases such as urban expansion, forest health, and natural disasters. Its workflows allow the selection of satellite images from data archives, the extraction of useful information from the metadata, the generation of descriptors for each individual image, the ingestion of image and descriptor data into a common database, the assignment of semantic content labels to image patches, and the possibility to search for and retrieve image patches with similar content. The two main components, namely data mining and data fusion, are detailed and validated. The most important contributions of this chapter are the integration of these two components with a Copernicus platform on top of the European DIAS system, for the purpose of large-scale Earth observation image annotation, and the measurement of the clustering and classification performances of various Copernicus Sentinel and third-party mission data. The average classification accuracy ranges from 80% to 95%, depending on the type of images.
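
    The retrieval step at the end of this workflow amounts to a nearest-neighbour search over the stored descriptors. Here is a small Python sketch of that idea; the descriptor dimensionality, database size, and distance metric are assumptions chosen for illustration.

        import numpy as np

        # Illustrative descriptor database: one feature vector per image patch.
        rng = np.random.default_rng(0)
        descriptors = rng.normal(size=(1000, 64))   # 1000 patches, 64-dim features
        patch_ids = np.arange(1000)

        def retrieve_similar(query: np.ndarray, k: int = 5) -> np.ndarray:
            """Return the ids of the k patches closest to the query descriptor."""
            d = np.linalg.norm(descriptors - query, axis=1)  # Euclidean distance
            return patch_ids[np.argsort(d)[:k]]

        print(retrieve_similar(descriptors[42]))  # the query patch ranks first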

    Earth Observation Semantics and Data Analytics for Coastal Environmental Areas

    Current satellite images provide us with detailed information about the state of our planet, as well as about our technical infrastructure and human activities. A range of existing commercial and scientific applications try to analyze the physical content and meaning of satellite images by exploiting the data of individual, multiple, or temporal sequences of images. However, what we still need today are advanced tools that automatically analyze satellite images in order to extract and understand their full content and meaning. To remedy this exploration problem, we outline a highly automated and application-adapted data-mining and content interpretation system consisting of five main components, namely Data Sources (selection and storage of relevant images), Data Model Generation (patch cutting and generation of feature vectors), Database Management System (systematic data storage), Knowledge Discovery in Databases (clustering and content labeling), and Statistical Analytics (generation of classification maps). As test sites, we selected UNESCO-protected areas in Europe that include coastal areas for monitoring, and a well-known area of the Mediterranean Sea that contains fish cages. The analyzed areas are: the Curonian Lagoon in Lithuania and Russia, the Danube Delta in Romania, the Hardangervidda in Norway, and the Wadden Sea in the Netherlands. For these areas, we provide the results of our image content classification system, consisting of image classification maps and additional statistical analytics, based on three different use cases. The first use case is the detection of wind turbines vs. boats in the Wadden Sea. The second use case is the identification of fish cages/aquaculture along the Mediterranean coast. Finally, the third use case describes the differences between beaches, dams, dunes, and tidal flats in the Danube Delta, the Wadden Sea, etc. The average classification accuracy that we obtained ranges from 80% to 95%, depending on the type of available images.
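
    The middle three components can be sketched end to end in a few lines: cut a scene into patches, describe each patch with a feature vector, cluster the vectors, and read the cluster grid back as a coarse classification map. In this Python sketch the scene is synthetic, the two-value descriptor is a toy stand-in, and the number of clusters is an arbitrary assumption.

        import numpy as np
        from sklearn.cluster import KMeans

        # Data Model Generation: cut a scene into 16x16 patches and describe
        # each patch by its mean and standard deviation.
        rng = np.random.default_rng(1)
        scene = rng.random((256, 256))
        patches = scene.reshape(16, 16, 16, 16).swapaxes(1, 2).reshape(256, -1)
        features = np.stack([patches.mean(axis=1), patches.std(axis=1)], axis=1)

        # Knowledge Discovery: unsupervised clustering of the patch features.
        labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

        # Statistical Analytics: the label grid is a coarse classification map,
        # and the label histogram is a simple per-class area statistic.
        classification_map = labels.reshape(16, 16)
        print(classification_map)                  # coarse 16 x 16 label grid
        print(np.bincount(labels) / labels.size)   # class shares of the scene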

    Semantic Labelling of Globally Distributed Urban and Non-Urban Satellite Images Using High Resolution SAR Data

    While the analysis and understanding of multispectral (i.e., optical) remote sensing images has made considerable progress during the last decades, the automated analysis of SAR (Synthetic Aperture Radar) satellite images still needs innovative techniques to support non-expert users in the handling and interpretation of these big and complex data. In this paper, we present a survey of existing multispectral and SAR land cover image datasets. To this end, we demonstrate how an advanced SAR image analysis system can be designed, implemented, and verified that is capable of generating semantically annotated classification results (e.g., maps) as well as local and regional statistical analytics such as graphical charts. The initial classification is based on Gabor features and is followed by class assignments (labelling). This is followed by the inclusion of expert knowledge via active learning with selected examples, and by the extraction of additional knowledge from public databases to refine the classification results. Then, based on the generated semantics, we can create new topic models, find typical country-specific phenomena and distributions, visualize them interactively, and present significant examples including confusion matrices. This semi-automated and flexible methodology allows several annotation strategies and the inclusion of dedicated analytics procedures, and can generate broad as well as detailed semantic (multi-)labels for all continents, and statistics or models for selected countries and cities. Here, we employ knowledge graphs and exploit ontologies. These components have already been validated successfully. The proposed methodology can also be adapted to other instruments.
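
    Gabor features of the kind used for the initial classification can be computed with off-the-shelf tools. The Python sketch below builds a small filter bank with scikit-image and summarizes each response by its mean magnitude; the image is synthetic and the bank's frequencies and orientations are assumptions, not the paper's exact configuration.

        import numpy as np
        from skimage.filters import gabor

        # Toy SAR-like intensity image; a real system would use calibrated data.
        rng = np.random.default_rng(2)
        image = rng.random((128, 128))

        # One feature per filter: mean magnitude of the complex Gabor response
        # over 3 frequencies x 4 orientations (an assumed bank size).
        features = []
        for frequency in (0.1, 0.2, 0.4):
            for theta in np.linspace(0, np.pi, 4, endpoint=False):
                real, imag = gabor(image, frequency=frequency, theta=theta)
                features.append(np.hypot(real, imag).mean())

        print(len(features), "Gabor features:", np.round(features, 3))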

    Sextant: Visualizing time-evolving linked geospatial data

    The linked open data cloud is constantly evolving as datasets get continuously updated with newer versions. As a result, representing, querying, and visualizing the temporal dimension of linked data is crucial. This is especially important for geospatial datasets that form the backbone of large-scale open data publication efforts in many sectors of the economy (e.g., the public sector, the Earth Observation sector). Although there has been some work on the representation and querying of linked geospatial data that change over time, to the best of our knowledge, there is currently no tool that offers spatio-temporal visualization of such data. This is in contrast with the existence of many tools for the visualization of the temporal evolution of geospatial data in the GIS area. In this article, we present Sextant, a Web-based system for the visualization and exploration of time-evolving linked geospatial data and the creation, sharing, and collaborative editing of “temporally-enriched” thematic maps, which are produced by combining different sources of such data. We present the architecture of Sextant, give examples of its use, and present applications in which we have deployed it.
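
    A core operation behind any spatio-temporal visualization of this kind is reconstructing the state of a dataset at a chosen time from its versioned features. The Python sketch below shows that snapshot idea on simplified, GeoJSON-like records; the attribute names and values are invented for illustration and are not Sextant's data model.

        from datetime import datetime

        # Invented time-stamped records, standing in for versions of a
        # feature in a time-evolving linked geospatial dataset.
        features = [
            {"id": "lake_1", "valid_from": "2015-01-01", "area_km2": 12.4},
            {"id": "lake_1", "valid_from": "2018-01-01", "area_km2": 11.1},
            {"id": "lake_1", "valid_from": "2021-01-01", "area_km2": 10.3},
        ]

        def snapshot(feats, when):
            """Latest version of each feature that is valid at the given date."""
            t = datetime.fromisoformat(when)
            best = {}
            for f in feats:
                v = datetime.fromisoformat(f["valid_from"])
                if v <= t and (f["id"] not in best or v > best[f["id"]][0]):
                    best[f["id"]] = (v, f)
            return [f for _, f in best.values()]

        print(snapshot(features, "2019-06-01"))  # -> the 2018 version of lake_1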

    From Copernicus Big Data to Extreme Earth Analytics

    Copernicus is the European programme for monitoring the Earth. It consists of a set of systems that collect data from satellites and in-situ sensors, process this data, and provide users with reliable and up-to-date information on a range of environmental and security issues. The data and information processed and disseminated put Copernicus at the forefront of the big data paradigm, giving rise to all the relevant challenges, the so-called 5 Vs: volume, velocity, variety, veracity, and value. In this short paper, we discuss the challenges of extracting information and knowledge from huge archives of Copernicus data. We propose to achieve this with scale-out distributed deep learning techniques that run on very big clusters offering virtual machines and GPUs. We also discuss the challenges of achieving scalability in the management of the extreme volumes of information and knowledge extracted from Copernicus data. The envisioned scientific and technical work will be carried out in the context of the H2020 project ExtremeEarth, which starts in January 2019.
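
    Scale-out distributed deep learning of the kind proposed here is commonly realized with data-parallel training, where each worker holds a model replica and gradients are averaged across workers at every step. The following Python sketch uses PyTorch's DistributedDataParallel with a toy model and random data; the model, data, and hyperparameters are placeholder assumptions, not the project's actual setup.

        import torch
        import torch.distributed as dist
        from torch.nn.parallel import DistributedDataParallel as DDP

        # Launch with one process per GPU, e.g.:
        #   torchrun --nproc_per_node=4 train.py
        def main():
            dist.init_process_group("nccl" if torch.cuda.is_available() else "gloo")
            device = dist.get_rank() % max(torch.cuda.device_count(), 1)

            model = torch.nn.Linear(64, 10)  # toy stand-in for a patch classifier
            if torch.cuda.is_available():
                model = DDP(model.to(device), device_ids=[device])
            else:
                model = DDP(model)

            opt = torch.optim.SGD(model.parameters(), lr=0.01)
            x = torch.randn(32, 64)          # random stand-in for image features
            y = torch.randint(0, 10, (32,))
            if torch.cuda.is_available():
                x, y = x.to(device), y.to(device)

            for _ in range(10):              # gradients sync across workers here
                opt.zero_grad()
                loss = torch.nn.functional.cross_entropy(model(x), y)
                loss.backward()
                opt.step()

            dist.destroy_process_group()

        if __name__ == "__main__":
            main()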