
    Very-High-Resolution SAR Images and Linked Open Data Analytics Based on Ontologies

    In this paper, we deal with the integration of multiple sources of information, such as Earth observation (EO) synthetic aperture radar (SAR) images and their metadata, semantic descriptors of the image content, and other publicly available geospatial data sources expressed as linked open data, for posing complex queries that support geospatial data analytics. Our approach lays the foundations for the development of richer tools and applications that focus on EO image analytics using ontologies and linked open data. We introduce a system architecture in which a common satellite image product is transformed from its initial format into actionable intelligence information, comprising image descriptors, metadata, image tiles, and semantic labels, resulting in an EO-data model. We also create a SAR image ontology based on our EO-data model and a two-level taxonomy classification scheme of the image content. We demonstrate our approach by linking high-resolution TerraSAR-X images with information from CORINE Land Cover (CLC), Urban Atlas (UA), GeoNames, and OpenStreetMap (OSM), all represented in the standard triple model of the Resource Description Framework (RDF).
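    The triple model mentioned above can be sketched in a few lines of plain Python: each fact about an image tile or a geospatial entity is a (subject, predicate, object) triple, and queries are pattern matches over the triple set. All URIs, predicate names, and tile identifiers below are hypothetical illustrations, not the paper's actual vocabulary; a real system would use an RDF store and SPARQL rather than Python tuples.

    ```python
    # Each fact is a (subject, predicate, object) triple, as in RDF.
    # Identifiers are invented for illustration only.
    triples = [
        ("tile:42", "eo:hasSemanticLabel", "label:Harbour"),
        ("tile:42", "eo:withinRegion", "geonames:Bremerhaven"),
        ("tile:43", "eo:hasSemanticLabel", "label:Forest"),
        ("tile:43", "eo:withinRegion", "geonames:Bremerhaven"),
        ("geonames:Bremerhaven", "clc:landCover", "clc:PortAreas"),
    ]

    def query(pattern, store):
        """Return all triples matching a pattern; None acts as a wildcard."""
        s, p, o = pattern
        return [t for t in store
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

    # Which tiles were semantically labelled 'Harbour'?
    harbour_tiles = [s for s, _, _ in
                     query((None, "eo:hasSemanticLabel", "label:Harbour"), triples)]
    ```

    Linking EO-derived labels to external sources such as GeoNames or CLC then amounts to joining triples on shared subjects or objects, which is exactly what a SPARQL engine does at scale.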

    Classification of Large-Scale High-Resolution SAR Images with Deep Transfer Learning

    The classification of large-scale high-resolution SAR land cover images acquired by satellites is a challenging task, facing difficulties such as semantic annotation requiring expertise, changing data characteristics due to varying imaging parameters or regional differences in target areas, and complex scattering mechanisms that differ from optical imaging. Given a large-scale SAR land cover dataset collected from TerraSAR-X images, with a hierarchical three-level annotation of 150 categories and comprising more than 100,000 patches, three main challenges in automatically interpreting SAR images are addressed: highly imbalanced classes, geographic diversity, and label noise. In this letter, a deep transfer learning method is proposed based on a similarly annotated optical land cover dataset (NWPU-RESISC45). In addition, a top-2 smooth loss function with cost-sensitive parameters is introduced to tackle the label noise and class imbalance problems. The proposed method transfers information efficiently from a similarly annotated remote sensing dataset, performs robustly on highly imbalanced classes, and alleviates the over-fitting caused by label noise. Moreover, the learned deep model generalizes well to other SAR-specific tasks, such as MSTAR target recognition, with a state-of-the-art classification accuracy of 99.46%.
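    The letter does not spell out its exact loss formulation here, but the idea of a smooth top-2 loss with a cost-sensitive weight can be sketched as follows: the loss is small when the true class is likely to appear among the two highest-scoring classes, it is smoothed via log-sum-exp over all class pairs, and a per-class weight rescales it for rare classes. The function name, the absence of a margin term, and the default parameters are simplifying assumptions, not the paper's definition.

    ```python
    import itertools
    import math

    def top2_smooth_loss(logits, true_class, class_weight=1.0, tau=1.0):
        """Simplified smooth top-2 loss: the difference between the
        log-partition over all class pairs and the log-partition over
        pairs containing the true class. Always non-negative; small when
        the true class dominates the top-2. `class_weight` rescales the
        loss for rare classes (cost-sensitive learning)."""
        def logsumexp(vals):
            m = max(vals)
            return m + math.log(sum(math.exp(v - m) for v in vals))

        pairs = list(itertools.combinations(range(len(logits)), 2))
        all_pairs = [(logits[i] + logits[j]) / tau for i, j in pairs]
        # Only the pairs that contain the true class.
        hit_pairs = [(logits[i] + logits[j]) / tau
                     for i, j in pairs if true_class in (i, j)]
        return class_weight * tau * (logsumexp(all_pairs) - logsumexp(hit_pairs))
    ```

    With logits `[3.0, 1.0, 0.2, -1.0]`, the loss for the top-scoring class is much smaller than for the weakest class, and doubling `class_weight` doubles the penalty, which is how rare classes can be made to matter more during training.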

    Analysis of Coastal Areas Using SAR Images: A Case Study of the Dutch Wadden Sea Region

    The increased availability of civil synthetic aperture radar (SAR) satellite images with different resolutions allows us to compare the imaging capabilities of these instruments, to assess the quality of the available data, and to investigate different areas (e.g., the Wadden Sea region). In our investigation, we propose to explore the content of TerraSAR-X and Sentinel-1A satellite images via a data mining approach in which the main steps are patch tiling, feature extraction, classification, semantic annotation, and visual-statistical analytics. Once all the extracted categories are mapped and quantified, the next step is to interpret them from an environmental point of view. The objective of our study is the application of semi-automated SAR image interpretation. Its novelty is the automated multiclass categorisation of coastal areas. We found that the north-west of the Netherlands can be interpreted routinely as land surfaces by our satellite image analyses, while for the Wadden Sea, we can discriminate the different water levels and their impact on the visibility of the tidal flats. This necessitates a selection of time series data spanning a full tidal cycle.
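    The first step of the data mining chain above, patch tiling, can be sketched in plain Python: the satellite scene is cut into non-overlapping square patches that are then fed to feature extraction and classification. The toy image dimensions and patch size are arbitrary choices for illustration; real TerraSAR-X or Sentinel-1A scenes and patch sizes are far larger.

    ```python
    def tile_image(image, patch_size):
        """Cut a 2-D image (list of rows) into non-overlapping square
        patches; border pixels that do not fill a complete patch are
        discarded, a common simplification in patch-based pipelines."""
        rows, cols = len(image), len(image[0])
        patches = []
        for r in range(0, rows - patch_size + 1, patch_size):
            for c in range(0, cols - patch_size + 1, patch_size):
                patch = [row[c:c + patch_size] for row in image[r:r + patch_size]]
                patches.append(patch)
        return patches

    # A toy 4x6 'image' split into 2x2 patches -> 2 x 3 = 6 patches.
    image = [[r * 6 + c for c in range(6)] for r in range(4)]
    patches = tile_image(image, 2)
    ```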

    Improving knowledge discovery from synthetic aperture radar images using the linked open data cloud and Sextant

    In the last few years, thanks to projects like TELEIOS, the linked open data cloud has been rapidly populated with geospatial data, some of which describes Earth observation products (e.g., CORINE Land Cover, Urban Atlas). The abundance of this data can prove very useful to new missions (e.g., the Sentinels) as a means of increasing the usability of the millions of images and EO products these missions are expected to produce. In this paper, we explain the relevant opportunities by demonstrating how the process of knowledge discovery from TerraSAR-X images can be improved using linked open data and Sextant, a tool for browsing and exploring linked geospatial data and for creating thematic maps.

    Artificial Intelligence Data Science Methodology for Earth Observation

    This chapter describes a Copernicus Access Platform Intermediate Layers Small-Scale Demonstrator, a general platform for the handling, analysis, and interpretation of Earth observation satellite images, mainly exploiting big data of the European Copernicus Programme by artificial intelligence (AI) methods. From 2020, the platform will be applied at regional and national levels to various use cases such as urban expansion, forest health, and natural disasters. Its workflows allow the selection of satellite images from data archives, the extraction of useful information from the metadata, the generation of descriptors for each individual image, the ingestion of image and descriptor data into a common database, the assignment of semantic content labels to image patches, and the possibility to search for and retrieve similar content-related image patches. The two main components, namely data mining and data fusion, are detailed and validated. The most important contributions of this chapter are the integration of these two components with a Copernicus platform on top of the European DIAS system, for the purpose of large-scale Earth observation image annotation, and the measurement of the clustering and classification performances of various Copernicus Sentinel and third-party mission data. The average classification accuracy ranges from 80% to 95%, depending on the type of images.

    The Digital Earth Observation Librarian: A Data Mining Approach for Large Satellite Images Archives

    Throughout the years, various Earth Observation (EO) satellites have generated huge amounts of data. The extraction of latent information from these data repositories is not a trivial task; new methodologies and tools, capable of handling the size, complexity, and variety of the data, are required. Data scientists require support for the data manipulation, labeling, and information extraction processes. This paper presents our Earth Observation Image Librarian (EOLib), a modular software framework which offers innovative image data mining capabilities for TerraSAR-X and EO image data in general. The main goal of EOLib is to reduce the time needed to bring information to end-users from Payload Ground Segments (PGS). EOLib is composed of several modules which offer functionalities such as data ingestion, feature extraction from SAR (Synthetic Aperture Radar) data, meta-data extraction, semantic definition of the image content through machine learning and data mining methods, advanced querying of the image archives based on content, meta-data, and semantic categories, as well as 3-D visualization of the processed images. EOLib is operated by DLR’s (German Aerospace Center’s) Multi-Mission Payload Ground Segment of its Remote Sensing Data Center at Oberpfaffenhofen, Germany.
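    The content-based archive querying described above can be sketched as nearest-neighbour search over patch feature vectors: each archived patch is represented by a descriptor, and a query returns the patches whose descriptors lie closest to the query's. The patch identifiers, the 3-dimensional feature vectors, and the plain Euclidean metric are all illustrative assumptions; EOLib's actual descriptors and index structures are not specified here.

    ```python
    import math

    def euclidean(a, b):
        """Plain Euclidean distance between two equal-length feature vectors."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def retrieve_similar(query_vec, archive, k=2):
        """Return the k archived patch ids whose feature vectors are
        closest to the query vector -- a minimal stand-in for
        content-based image archive search."""
        ranked = sorted(archive.items(), key=lambda kv: euclidean(query_vec, kv[1]))
        return [patch_id for patch_id, _ in ranked[:k]]

    # Hypothetical 3-D feature vectors for four archived patches.
    archive = {
        "patch_a": [0.9, 0.1, 0.0],
        "patch_b": [0.8, 0.2, 0.1],
        "patch_c": [0.0, 0.9, 0.9],
        "patch_d": [0.1, 0.8, 1.0],
    }
    hits = retrieve_similar([1.0, 0.0, 0.0], archive, k=2)
    ```

    A production system would replace the linear scan with an approximate nearest-neighbour index, but the retrieval contract, query vector in, ranked patch ids out, stays the same.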

    Deep Learning Training and Benchmarks for Earth Observation Images: Data Sets, Features, and Procedures

    Deep learning methods are often used for image classification or local object segmentation. The corresponding test and validation data sets are an integral part of the learning process and also of the algorithm performance evaluation. High and particularly very high-resolution Earth observation (EO) applications based on satellite images primarily aim at the semantic labeling of land cover structures or objects, as well as of temporal evolution classes. However, one of the main EO objectives is physical parameter retrieval, such as temperatures, precipitation, and crop yield predictions. Therefore, we need reliably labeled data sets and tools to train the developed algorithms and to assess the performance of our deep learning paradigms. Generally, imaging sensors generate a visually understandable representation of the observed scene. However, this does not hold for many EO images, where the recorded images only depict a spectral subset of the scattered light field, thus generating an indirect signature of the imaged object. This highlights the burden of EO image understanding as a new and particular challenge for Machine Learning (ML) and Artificial Intelligence (AI). This chapter reviews and analyses the new approaches of EO imaging, leveraging the recent advances in physical process-based ML and AI methods and signal processing.

    DeepAqua: Self-Supervised Semantic Segmentation of Wetland Surface Water Extent with SAR Images using Knowledge Distillation

    Deep learning and remote sensing techniques have significantly advanced water monitoring capabilities; however, the need for annotated data remains a challenge. This is particularly problematic in wetland detection, where water extent varies over time and space, demanding multiple annotations for the same area. In this paper, we present DeepAqua, a self-supervised deep learning model that leverages knowledge distillation (a.k.a. the teacher-student model) to eliminate the need for manual annotations during the training phase. We utilize the Normalized Difference Water Index (NDWI) as a teacher model to train a Convolutional Neural Network (CNN) for segmenting water from Synthetic Aperture Radar (SAR) images, and to train the student model, we exploit cases where optical- and radar-based water masks coincide, enabling the detection of both open and vegetated water surfaces. DeepAqua represents a significant advancement in computer vision techniques by effectively training semantic segmentation models without any manually annotated data. Experimental results show that DeepAqua outperforms other unsupervised methods, improving accuracy by 7%, Intersection over Union by 27%, and F1 score by 14%. This approach offers a practical solution for monitoring wetland water extent changes without needing ground truth data, making it highly adaptable and scalable for wetland conservation efforts.
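    The NDWI teacher described above can be sketched directly: per pixel, NDWI = (green - nir) / (green + nir), and pixels above a threshold are labelled water, yielding a mask the student CNN is trained against. The toy band values, the zero threshold, and the function name are illustrative assumptions; the paper's actual thresholding and mask-agreement procedure are not reproduced here.

    ```python
    def ndwi_water_mask(green, nir, threshold=0.0):
        """Per-pixel NDWI = (green - nir) / (green + nir); pixels above
        the threshold are labelled water (1), others dry (0). Such masks
        serve as 'teacher' labels for training a SAR segmentation CNN."""
        mask = []
        for g_row, n_row in zip(green, nir):
            row = []
            for g, n in zip(g_row, n_row):
                ndwi = (g - n) / (g + n) if (g + n) != 0 else 0.0
                row.append(1 if ndwi > threshold else 0)
            mask.append(row)
        return mask

    # Toy 2x2 optical bands: water reflects more green than near-infrared.
    green = [[0.6, 0.2], [0.5, 0.1]]
    nir   = [[0.1, 0.4], [0.2, 0.3]]
    mask  = ndwi_water_mask(green, nir)
    ```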