32 research outputs found

    Understanding Heterogeneous EO Datasets: A Framework for Semantic Representations

    Earth observation (EO) has become a valuable source of comprehensive, reliable, and persistent information for a wide range of applications. However, dealing with the complexity of land cover is sometimes difficult, as the variety of EO sensors is reflected in the multitude of details recorded in several types of image data. Their properties dictate the category and nature of the perceptible land structures. This data heterogeneity hampers proper understanding and prevents the definition of universal procedures for content exploitation. The main shortcomings are due to the differences between human and sensor perception of objects, as well as to the lack of coincidence between visual elements and similarities obtained by computation. In order to bridge these sensory and semantic gaps, the paper presents a compound framework for EO image information extraction. The proposed approach acts as common ground between the user's understanding, which is restricted to the visible domain, and the machine's numerical interpretation of a much wider range of information. A hierarchical data representation is considered. At first, basic elements are computed automatically. Then, users can enforce their judgement on the data processing results until semantic structures are revealed. This procedure completes a user-machine knowledge transfer. The interaction is formalized as a dialogue, in which communication is determined by a set of parameters guiding the computational process at each level of representation. The purpose is to keep the data-driven observables connected to the level of semantics and to human awareness. The proposed concept offers users flexibility and interoperability, allowing them to generate the results that best fit their application scenario. Experiments performed on different satellite images demonstrate the ability to increase semantic annotation performance by adjusting a set of parameters to the particularities of the analyzed data.
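
    A conceptual sketch of the dialogue idea may help. The code below is an illustration only, not the paper's API: each representation level is modelled as a parameterized processing step, and the user-machine interaction is a loop in which the user reviews a level's output and adjusts its parameters until the semantic structures of interest emerge. All names (dialogue, review, levels) are assumptions.

    def dialogue(image, levels, initial_params, review):
        """levels: list of functions f(data, params) -> data, ordered from basic
        elements to semantic structures; review: user callback returning
        (accepted, new_params) after inspecting one level's result."""
        data, params = image, list(initial_params)
        for i, level in enumerate(levels):
            accepted = False
            while not accepted:
                candidate = level(data, params[i])                     # machine proposes
                accepted, params[i] = review(i, candidate, params[i])  # user judges
            data = candidate                                           # move one level up
        return data  # semantic representation tuned to the user's scenario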

    Haze and Smoke Removal for Visualization of Multispectral Images: A DNN Physics Aware Architecture

    Remote sensing multispectral images are extensively used by applications in various fields. The degradation generated by haze or smoke negatively influences the visual analysis of the represented scene. In this paper, a deep neural network based method is proposed to improve the visualization of hazy and smoky images. The method is able to fully exploit the information contained in all spectral bands, especially the SWIR bands, which are usually not contaminated by haze or smoke. A dimensionality reduction of the spectral or angular signatures is rapidly obtained by using a stacked autoencoder (SAE) trained on contaminated images only. The latent characteristics obtained by the encoder are mapped to the R-G-B channels for visualization. The haze and smoke removal results on several Sentinel-2 scenes show increased contrast and reveal areas hidden by haze in the initial natural color images.
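
    A minimal sketch of the visualization idea is given below, assuming a per-pixel stacked autoencoder with a 3-dimensional latent code that is stretched to [0, 1] and displayed as an R-G-B composite. It is not the authors' implementation; the network size, the use of PyTorch, and the min-max stretch are illustrative assumptions.

    import numpy as np
    import torch
    import torch.nn as nn

    class StackedAutoencoder(nn.Module):
        """Compresses a Sentinel-2 spectral signature (e.g. 13 bands) to 3 values."""
        def __init__(self, n_bands=13, latent_dim=3):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_bands, 8), nn.ReLU(),
                                         nn.Linear(8, latent_dim))
            self.decoder = nn.Sequential(nn.Linear(latent_dim, 8), nn.ReLU(),
                                         nn.Linear(8, n_bands))

        def forward(self, x):
            z = self.encoder(x)
            return self.decoder(z), z

    def hazy_scene_to_rgb(pixels, epochs=50):
        """pixels: (n_pixels, n_bands) float32 array of contaminated-scene spectra."""
        x = torch.from_numpy(pixels)
        model = StackedAutoencoder(n_bands=pixels.shape[1])
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(epochs):                      # trained on contaminated data only
            opt.zero_grad()
            recon, _ = model(x)
            loss = nn.functional.mse_loss(recon, x)
            loss.backward()
            opt.step()
        with torch.no_grad():
            _, z = model(x)
        z = z.numpy()
        # Stretch each latent channel to [0, 1] and read the triplet as R, G, B.
        return (z - z.min(axis=0)) / (z.max(axis=0) - z.min(axis=0) + 1e-8)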

    Multispectral Data Analysis for Semantic Assessment-A SNAP Framework for Sentinel-2 Use Case Scenarios

    Sentinel-2 satellites provide systematic global coverage of land surfaces, measuring physical properties within 13 spectral intervals at a temporal resolution of five days. Computer-based data analysis is highly needed to extract similarity through processing and to assist human understanding and semantic annotation in support of mapping Earth's surface. This article proposes a data mining concept that uses advanced data visualization and explainable features to enhance relevant aspects of the Sentinel-2 data and enable semantic analysis. The process has two stages. First, features related to spectral, texture, and physical parameters are extracted from the data and included in a learning process that models the data content according to statistical similarities. In parallel, the second processing stage maximizes the data's impact on the human visual system to support image understanding and interpretation. Target classes are subject to exploratory visual analysis, such that both visual and latent characteristics are revealed to the user. The concept is implemented as a Sentinel-2 dedicated data analysis (DAS-Tool) plugin for the Sentinel Application Platform and deployed as an open-source tool empowering the Earth observation community with fast and reliable results. By accommodating multiple solutions for each processing phase, the plugin enables flexibility in information extraction and knowledge discovery, aiming at the best accuracy in mapping applications. For demonstration purposes, the authors focus on a detailed benchmark against reference data (ground truth) for the Southern region of Romania, then use the selected algorithms in a forest fire scenario analysis for the Sydney region in Australia. The processing involves full-size Sentinel-2 images.
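
    To make the first processing stage concrete, here is an illustrative sketch only: a single spectral index (NDVI) is combined with raw band values and grouped by statistical similarity with k-means. The actual DAS-Tool plugin offers several feature and clustering choices; the band names and the scikit-learn usage below are assumptions.

    import numpy as np
    from sklearn.cluster import KMeans

    def ndvi(nir, red, eps=1e-8):
        # Sentinel-2: NIR is band B08, red is band B04.
        return (nir - red) / (nir + red + eps)

    def cluster_scene(bands, n_classes=6):
        """bands: dict of 2-D reflectance arrays keyed by band name, e.g. 'B04', 'B08'."""
        features = np.stack([bands["B04"], bands["B08"],
                             ndvi(bands["B08"], bands["B04"])], axis=-1)
        h, w, f = features.shape
        labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(
            features.reshape(-1, f))
        return labels.reshape(h, w)   # unsupervised map, to be annotated semantically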

    A Validation of ICA Decomposition for PolSAR Images by Using Measures of Normalized Compression Distance

    Simple color and intensity representations of polarimetric synthetic aperture radar (PolSAR) images fail to show the physical characteristics of the recorded ground objects, so several coherent and incoherent decomposition theorems have been proposed in the state-of-the-art literature. All these decompositions assume that any scattering mechanism can be represented as the sum of simpler, "canonical" scattering mechanisms. Following the same assumption, in this paper we employ independent component analysis (ICA) for PolSAR image representation. Since ICA is a method for blind source separation, we expect the derived ICA channels to represent as well as possible certain types of scattering mechanisms present in the image. The ICA decomposition is validated against the coherent Pauli and the incoherent H/A/α decompositions. The normalized compression distance (NCD) is used as a measure of decomposition quality. Experiments are made on an SLC L-band F-SAR image over Kaufbeuren airfield, Germany.
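
    The sketch below illustrates the two ingredients under simplifying assumptions that are not taken from the paper: FastICA is applied to real-valued features obtained by stacking the real and imaginary parts of the scattering vectors, and the NCD between two channel images is estimated with a generic compressor (zlib).

    import zlib
    import numpy as np
    from sklearn.decomposition import FastICA

    def ica_channels(scattering, n_components=3):
        """scattering: (n_pixels, k) complex array of Pauli/lexicographic vectors."""
        real_features = np.hstack([scattering.real, scattering.imag])
        return FastICA(n_components=n_components).fit_transform(real_features)

    def ncd(x, y):
        """Normalized compression distance between two byte strings."""
        cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
        cxy = len(zlib.compress(x + y))
        return (cxy - min(cx, cy)) / max(cx, cy)

    def channel_ncd(img_a, img_b):
        # Quantize each decomposition channel to 8 bit before compressing.
        to_bytes = lambda img: np.uint8(255 * (img - img.min()) /
                                        (img.max() - img.min() + 1e-8)).tobytes()
        return ncd(to_bytes(img_a), to_bytes(img_b))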

    Feature Extraction for Patch-Based Classification of Multispectral Earth Observation Images

    Recently, various patch-based approaches have emerged for high and very high resolution multispectral image classification and indexing. This comes as a consequence of the most important particularity of multispectral data: objects are represented using several spectral bands that equally influence the classification process. In this letter, using a patch-based approach, we aim at extracting descriptors that capture both spectral and structural information. Using both the raw texture data and the high spectral resolution provided by the latest sensors, we propose enhanced image descriptors based on Gabor filters, spectral histograms, spectral indices, and the bag-of-words framework. This approach leads to a scene classification that outperforms the results obtained when employing the initial image features. Experimental results on a WorldView-2 scene and on a test collection of tiles created from Sentinel-2 data are presented. A detailed assessment of speed and precision is provided in comparison with state-of-the-art techniques. The broad applicability is guaranteed, as the performances obtained for the two selected data sets are comparable, facilitating the exploration of previous and newly launched satellite missions.
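
    As an illustration of one descriptor variant (mean and standard deviation of Gabor responses per band) and of the bag-of-words encoding, the sketch below uses scikit-image and scikit-learn; the filter bank, window scheme, and codebook size are assumptions, not the letter's exact settings.

    import numpy as np
    from skimage.filters import gabor
    from sklearn.cluster import KMeans

    def gabor_descriptor(window, frequencies=(0.1, 0.3), thetas=(0, np.pi / 2)):
        """window: (h, w, n_bands) local window; returns one local descriptor."""
        feats = []
        for b in range(window.shape[-1]):
            for f in frequencies:
                for t in thetas:
                    real, _ = gabor(window[..., b], frequency=f, theta=t)
                    feats += [real.mean(), real.std()]
        return np.array(feats)

    def bow_histogram(tile_descriptors, codebook):
        """Encode one tile as a normalized histogram of visual words.
        tile_descriptors: (n_windows, d) local descriptors from that tile;
        codebook: a KMeans model fitted on descriptors pooled from all tiles,
        e.g. codebook = KMeans(n_clusters=100, n_init=10).fit(all_descriptors)."""
        words = codebook.predict(tile_descriptors)
        hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
        return hist / max(hist.sum(), 1.0)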

    Multi-Modal Change Detection based on Information Theoretical Similarity Measures

    The discovery of changes in image time series can be based on information-theoretical similarity measures.
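
    Since the full text is not available, the following is only a guess at one such measure: mutual information between two co-registered acquisitions, computed per window, with low values read as change.

    import numpy as np

    def mutual_information(a, b, bins=32):
        """a, b: flattened co-registered windows from two acquisition dates."""
        joint, _, _ = np.histogram2d(a, b, bins=bins)
        pxy = joint / joint.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        nz = pxy > 0
        # Low mutual information between the dates suggests a changed area.
        return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))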

    Urban Mapping using Satellite Time Series

    As described by [1], a “Satellite Image Time Series (SITS) is a set of satellite images taken from the same scene at different times. A SITS makes use of different satellite sources to obtain a larger data series with short time interval between two images… Sensors with high spatial and temporal resolutions make the observation of precise spatio-temporal structures in dynamic scenes more accessible. Temporal components integrated with spectral and spatial dimensions allow the identification of complex patterns concerning applications connected with environmental monitoring and analysis of land-cover dynamics.” When we analyse the development of urban areas, it becomes clear that satellite image time series are highly valuable data sources that can be exploited to describe, besides vegetation cycles and land use changes, the dynamics of urban settlements and their infrastructure. Typical examples are given in [2] and [3].

    Temporal analysis of SAR imagery for permanent and evolving Earth land cover behavior assessment

    In the era of constantly increasing Earth Observation (EO) data collections, information extraction and data analysis should be enhanced with a multi-temporal component, enabled by the temporal resolution of satellite missions, in order to create handy yet powerful tools for applications involving land cover monitoring. Image time series, resulting from the satellite revisit period, provide insights not only into a certain area, but also into its representation at different moments in time. In order to limit the issues that might arise from the irregular time sampling of multispectral data, the authors propose Synthetic Aperture Radar (SAR) image time series for analysis. The main goal is to mine the satellite image time series (SITS) in order to understand the temporal behaviour of an area in terms of evolution and persistence. The paper introduces an analytical approach combining coherent and non-coherent analysis of SAR SITS content. We propose the Latent Dirichlet Allocation model to extract categories of evolution from the SAR SITS, together with techniques that study the statistical and coherent properties of the targets in order to identify structures with stable electromagnetic characteristics over time, named Persistent Scatterers (PS). The obtained results indicate an evolutionary character hidden inside the persistent class. The results obtained on 30 ERS images encourage further analysis on Sentinel-1 data.
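
    The persistent-scatterer part can be illustrated with the classical amplitude dispersion criterion; the threshold and the array layout below are assumptions, and the LDA modelling of evolution categories is not sketched here.

    import numpy as np

    def ps_candidates(amplitude_stack, threshold=0.25):
        """amplitude_stack: (n_dates, h, w) SAR amplitude time series.
        Pixels whose amplitude dispersion (std/mean over time) stays below the
        threshold are kept as persistent scatterer candidates."""
        dispersion = amplitude_stack.std(axis=0) / (amplitude_stack.mean(axis=0) + 1e-8)
        return dispersion < threshold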

    Deep Learning in Very High Resolution Remote Sensing Image Information Mining Communication Concept

    This paper presents image information mining based on a communication channel concept. The feature extraction algorithms encode the image, while topic discovery decodes its content and delivers it to the user in the shape of a semantic map. We consider this approach for a meaning-based semantic annotation of very high resolution remote sensing images. The scene content is described using a multi-level hierarchical information representation. Feature hierarchies are discovered considering that higher levels are formed by combining features from lower levels. Such a level-to-level mapping defines our methodology as a deep learning process. The whole analysis can be divided into two major learning steps. The first one uses Bayesian inference to extract objects and assign basic semantics to the image. The second step models the spatial interactions between the scene objects based on Latent Dirichlet Allocation, performing high-level semantic annotation. We used a WorldView-2 image to exemplify the processing results.
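
    A compressed sketch of the two learning steps follows, with assumed stand-ins: a Gaussian naive Bayes classifier plays the role of the Bayesian inference that assigns basic semantics to patches, and LDA over the counts of object labels inside each tile models their co-occurrence as higher-level scene topics.

    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.decomposition import LatentDirichletAllocation

    def basic_semantics(train_feats, train_labels, patch_feats):
        """Step 1: assign a basic object label to every patch descriptor."""
        return GaussianNB().fit(train_feats, train_labels).predict(patch_feats)

    def scene_topics(patch_labels_per_tile, n_labels, n_topics=8):
        """Step 2: each tile is a 'document' of object labels; LDA finds scene topics."""
        counts = np.array([np.bincount(labels, minlength=n_labels)
                           for labels in patch_labels_per_tile])
        lda = LatentDirichletAllocation(n_components=n_topics)
        return lda.fit_transform(counts).argmax(axis=1)   # one topic label per tile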

    A Scientific Perspective on Big Data in Earth Observation

    The field of Earth Observation (EO) is evolving fast, impacting a broad range of application domains. The new capabilities appearing in the EO domain are greatly augmented by the current technological evolution. This chapter presents how EO data can be transformed into knowledge and amplified through artificial intelligence and cloud computing.