    The Digital Earth Observation Librarian: A Data Mining Approach for Large Satellite Images Archives

    Throughout the years, various Earth Observation (EO) satellites have generated huge amounts of data. Extracting the latent information held in these data repositories is not a trivial task: new methodologies and tools capable of handling the size, complexity, and variety of the data are required, and data scientists need support for the data manipulation, labeling, and information extraction processes. This paper presents our Earth Observation Image Librarian (EOLib), a modular software framework which offers innovative image data mining capabilities for TerraSAR-X and EO image data in general. The main goal of EOLib is to reduce the time needed to bring information from Payload Ground Segments (PGS) to end-users. EOLib is composed of several modules offering functionalities such as data ingestion, feature extraction from SAR (Synthetic Aperture Radar) data, metadata extraction, semantic definition of the image content through machine learning and data mining methods, advanced querying of the image archives based on content, metadata, and semantic categories, and 3-D visualization of the processed images. EOLib is operated by the Multi-Mission Payload Ground Segment of DLR's (the German Aerospace Center's) Remote Sensing Data Center at Oberpfaffenhofen, Germany.
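
    The following is a minimal, self-contained sketch of what one stage of such a system, semantic annotation plus content-based querying of an image archive, could look like. The `Archive` and `Patch` classes, their methods, and the category names are hypothetical illustrations, not EOLib's actual API.

```python
# Hypothetical sketch of semantic annotation plus content-based querying
# over an EO image archive; all names here are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class Patch:
    product_id: str   # identifier of the source product (e.g., a TerraSAR-X scene)
    labels: set       # semantic categories assigned by data mining
    features: list    # extracted SAR feature vector


@dataclass
class Archive:
    patches: list = field(default_factory=list)

    def ingest(self, patch: Patch) -> None:
        # Ingestion step: store the patch's descriptors and labels.
        self.patches.append(patch)

    def query_by_label(self, label: str) -> list:
        # Semantic query step: return all patches carrying a given category.
        return [p for p in self.patches if label in p.labels]


archive = Archive()
archive.ingest(Patch("TSX1_SAR_001", {"harbor", "water"}, [0.12, 0.85]))
archive.ingest(Patch("TSX1_SAR_002", {"forest"}, [0.40, 0.10]))
print([p.product_id for p in archive.query_by_label("harbor")])  # ['TSX1_SAR_001']
```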

    Very-High-Resolution SAR Images and Linked Open Data Analytics Based on Ontologies

    In this paper, we deal with the integration of multiple sources of information, such as Earth observation (EO) synthetic aperture radar (SAR) images and their metadata, semantic descriptors of the image content, and other publicly available geospatial data sources expressed as linked open data, for posing complex queries in order to support geospatial data analytics. Our approach lays the foundations for the development of richer tools and applications that focus on EO image analytics using ontologies and linked open data. We introduce a system architecture where a common satellite image product is transformed from its initial format into actionable intelligence information, which includes image descriptors, metadata, image tiles, and semantic labels, resulting in an EO data model. We also create a SAR image ontology based on our EO data model and a two-level taxonomy classification scheme of the image content. We demonstrate our approach by linking high-resolution TerraSAR-X images with information from CORINE Land Cover (CLC), Urban Atlas (UA), GeoNames, and OpenStreetMap (OSM), all represented in the standard triple model of the Resource Description Framework (RDF).
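
    As an illustration of posing such queries, the toy sketch below links an image patch's semantic label to a CORINE Land Cover class and queries the triples with SPARQL via rdflib. The `http://example.org/eo#` namespace and the predicate names are invented placeholders, not the paper's actual SAR image ontology.

```python
# Toy linked-open-data query with rdflib; the namespace and predicates are
# invented placeholders, not the paper's actual SAR image ontology.
from rdflib import Graph, Literal, Namespace

EO = Namespace("http://example.org/eo#")  # assumed ontology prefix

g = Graph()
patch = EO["patch/TSX_0042"]
g.add((patch, EO.hasSemanticLabel, Literal("industrial")))
g.add((patch, EO.overlapsCLCClass, Literal("121")))  # CLC code 121: industrial units

# Complex query: patches labeled 'industrial' that overlap CLC class 121.
results = g.query("""
    PREFIX eo: <http://example.org/eo#>
    SELECT ?patch WHERE {
        ?patch eo:hasSemanticLabel "industrial" .
        ?patch eo:overlapsCLCClass "121" .
    }
""")
for row in results:
    print(row.patch)
```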

    On Feature-Based SAR Image Registration: Appropriate Feature and Retrieval Algorithm

    An investigation into the appropriate feature and parameter retrieval algorithm is conducted for feature-based registration of synthetic aperture radar (SAR) images. Commonly used features such as tie points, Harris corners, SIFT, and SURF are comprehensively evaluated. SURF is shown to outperform the others on criteria such as the geometrical invariance of the feature and descriptor, extraction and matching speed, localization accuracy, and robustness to decorrelation and speckle. The processing results reveal that SURF adapts well to SAR speckle, owing to the potential relationship between the Fast-Hessian detector and the refined Lee filter. Moreover, applying the Fast-Hessian detector to oversampled images with an unaltered sampling step helps improve the registration accuracy to the subpixel level (i.e., <1 pixel). As for parameter retrieval, the widely used random sample consensus (RANSAC) is inappropriate because it may become trapped by local occlusion and yield uncertain estimates. An extended fast least trimmed squares (EF-LTS) algorithm is proposed, which behaves stably and performs better than RANSAC on average. Fitting SURF features with EF-LTS is hence suggested for SAR image registration. The performance of this scheme is validated on both InSAR and MiniSAR image pairs.
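
    To make the robust-fitting idea concrete, the sketch below implements a plain least-trimmed-squares fit of an affine transform from simulated matched keypoints using concentration steps. It is a simplified stand-in for EF-LTS, not the authors' exact algorithm, and the data are synthetic.

```python
# Simplified least-trimmed-squares (LTS) fit of an affine transform from
# matched keypoints, as a stand-in for EF-LTS; data here are synthetic.
import numpy as np


def fit_affine(src, dst):
    # Solve dst ~ [src | 1] @ M by ordinary least squares; M is 3 x 2.
    X = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return M


def lts_affine(src, dst, trim=0.5, iters=20):
    # Concentration steps: repeatedly refit on the h best-fitting matches,
    # so gross outliers (bad correspondences) are trimmed away.
    h = max(3, int(trim * len(src)))
    idx = np.arange(len(src))
    X = np.hstack([src, np.ones((len(src), 1))])
    for _ in range(iters):
        M = fit_affine(src[idx], dst[idx])
        residuals = np.linalg.norm(X @ M - dst, axis=1)
        idx = np.argsort(residuals)[:h]
    return fit_affine(src[idx], dst[idx])


rng = np.random.default_rng(0)
src = rng.uniform(0, 512, (100, 2))
true_A = np.array([[1.0, 0.02], [-0.02, 1.0]])
dst = src @ true_A + np.array([5.0, -3.0])
dst[:20] += rng.uniform(-50, 50, (20, 2))  # 20 simulated outlier matches
print(lts_affine(src, dst))  # top rows ~ true_A, last row ~ translation
```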

    Automatic vision-based fault detection on electricity transmission components using very high-resolution imagery

    Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies. Electricity is indispensable to modern-day governments and citizens' day-to-day operations. Fault identification is one of the most significant bottlenecks faced by electricity transmission and distribution utilities in developing countries in delivering credible services to customers and ensuring proper asset audit and management for network optimization and load forecasting. This is due to data scarcity, asset inaccessibility and insecurity, the complexity of ground surveys, untimeliness, and overall human cost. In this context, we exploit oblique drone imagery with high spatial resolution to monitor the condition of four major electric power transmission network (EPTN) components through a fine-tuned deep learning approach, i.e., convolutional neural networks (CNNs). This study explored the capability of the Single Shot Multibox Detector (SSD), a one-stage object detection model, on electric transmission power line imagery to localize, classify, and inspect the faults present. The component faults considered include broken insulator plates, missing insulator plates, missing knobs, and rusty clamps. The adopted network used a CNN based on a multiscale feature pyramid network (FPN), using aerial image patches and ground truth to localize and detect faults in a one-phase procedure. The SSD ResNet50 architecture variant performed best, with a mean average precision of 89.61%. All the developed SSD-based models achieve a high precision rate and a low recall rate in detecting the faulty components, thus achieving an acceptable F1-score balance. Finally, in line with other works in this domain, deep learning will boost the timeliness of EPTN inspection and component fault mapping in the long run, provided these deep learning architectures are widely understood, adequate training samples exist to represent multiple fault characteristics, and the effects of augmenting available datasets, balancing intra-class heterogeneity, and small-scale datasets are clearly understood.
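
    For readers wanting a starting point, the sketch below shows one plausible way to instantiate and fine-tune a one-stage SSD detector for a small number of fault classes. It uses torchvision's `ssd300_vgg16` purely as an available stand-in for the SSD ResNet50 variant the dissertation reports, and the training step runs on a dummy image and box.

```python
# One plausible SSD fine-tuning setup; torchvision's ssd300_vgg16 is used
# here as an available stand-in for the dissertation's SSD ResNet50 variant.
import torch
import torchvision

# 4 fault classes (broken plate, missing plate, missing knob, rusty clamp)
# plus the mandatory background class.
NUM_CLASSES = 5
model = torchvision.models.detection.ssd300_vgg16(
    weights=None, weights_backbone=None, num_classes=NUM_CLASSES
)
model.train()

# One dummy training step on a fake 300 x 300 drone-image patch.
images = [torch.rand(3, 300, 300)]
targets = [{
    "boxes": torch.tensor([[30.0, 40.0, 120.0, 160.0]]),  # xyxy in pixels
    "labels": torch.tensor([1]),                          # class index of the box
}]
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
losses = model(images, targets)  # dict of classification + box regression losses
total_loss = sum(losses.values())
total_loss.backward()
optimizer.step()
print({name: float(value) for name, value in losses.items()})
```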

    Semantic Indexing of TerraSAR-X and In Situ Data for Urban Analytics


    Artificial Intelligence Data Science Methodology for Earth Observation

    This chapter describes a Copernicus Access Platform Intermediate Layers Small-Scale Demonstrator, a general platform for the handling, analysis, and interpretation of Earth observation satellite images, mainly exploiting the big data of the European Copernicus Programme by artificial intelligence (AI) methods. From 2020, the platform will be applied at regional and national levels to various use cases such as urban expansion, forest health, and natural disasters. Its workflows allow the selection of satellite images from data archives, the extraction of useful information from the metadata, the generation of descriptors for each individual image, the ingestion of image and descriptor data into a common database, the assignment of semantic content labels to image patches, and the possibility to search for and retrieve similar content-related image patches. The two main components, namely data mining and data fusion, are detailed and validated. The most important contributions of this chapter are the integration of these two components with a Copernicus platform on top of the European DIAS system, for the purpose of large-scale Earth observation image annotation, and the measurement of the clustering and classification performance on various Copernicus Sentinel and third-party mission data. The average classification accuracy ranges from 80% to 95%, depending on the type of images.
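
    The patch-descriptor-and-retrieval step of such a workflow can be sketched as follows: tile an image into patches, describe each patch, and retrieve the patches most similar to a query. The histogram descriptor and the Euclidean similarity are illustrative placeholders for the platform's actual feature extraction.

```python
# Illustrative patch tiling, description, and similarity retrieval; the
# histogram descriptor stands in for the platform's real feature extraction.
import numpy as np


def tile(image, size=64):
    # Cut a 2-D image into non-overlapping size x size patches.
    h, w = image.shape
    return [image[r:r + size, c:c + size]
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]


def describe(patch, bins=16):
    # Toy descriptor: normalized gray-level histogram of the patch.
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)


rng = np.random.default_rng(1)
image = rng.random((512, 512))
descriptors = np.array([describe(p) for p in tile(image)])

query = descriptors[0]
distances = np.linalg.norm(descriptors - query, axis=1)  # Euclidean distance
print("5 most similar patch indices:", np.argsort(distances)[:5])
```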

    Data Mining and Knowledge Discovery tools for exploiting big Earth-Observation data


    OpenSARUrban: A Sentinel-1 SAR Image Dataset for Urban Interpretation

    The Sentinel-1 mission provides a freely accessible opportunity for urban interpretation from synthetic aperture radar (SAR) images at a specific resolution, which is of paramount importance for Earth observation. In parallel, with the rapid development of advanced technologies, especially deep learning, there is an urgent need to construct a large-scale SAR dataset to drive urban interpretation. This paper presents OpenSARUrban, a Sentinel-1 dataset dedicated to urban interpretation from SAR images, including a well-defined hierarchical annotation scheme, the data collection, well-established procedures for dataset construction and organization, and the properties, visualizations, and applications of this dataset. In particular, OpenSARUrban provides 33,358 image patches of SAR urban scenes covering 21 major cities of China, spanning 10 different categories, 4 formats, and 2 polarization modes, and exhibiting 5 essential properties: large scale, diversity, specificity, reliability, and sustainability. These properties make several goals achievable for OpenSARUrban. The first is to support urban target characterization. The second is to help develop applicable and advanced algorithms for Sentinel-1 urban target classification. The dataset visualization is implemented from a manifold perspective to give an intuitive understanding. Besides a detailed description and visualization of the dataset, we present results of some benchmark algorithms, demonstrating that this dataset is practical and challenging. Notably, developing algorithms that enhance classification performance on the whole dataset while accounting for the data imbalance is especially challenging.
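
    Since the authors single out data imbalance as a challenge, the sketch below shows one common way to counter it when training on such a dataset: inverse-frequency weighted sampling. The folder layout (`OpenSARUrban/patches` with one subfolder per category) and the grayscale transform are assumptions about how one might load the patches, not the dataset's published loader.

```python
# One common way to counter class imbalance: inverse-frequency weighted
# sampling. The folder layout and transform are assumptions about how the
# patches might be stored, not the dataset's published loader.
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler
from torchvision import datasets, transforms

tfm = transforms.Compose([transforms.Grayscale(), transforms.ToTensor()])
# Assumed layout: OpenSARUrban/patches/<category>/<patch>.png
ds = datasets.ImageFolder("OpenSARUrban/patches", transform=tfm)

# Weight each sample inversely to its class frequency, so rare urban
# categories are drawn as often as frequent ones.
targets = torch.tensor(ds.targets)
class_counts = torch.bincount(targets)
sample_weights = (1.0 / class_counts.float())[targets]
sampler = WeightedRandomSampler(sample_weights, num_samples=len(ds), replacement=True)
loader = DataLoader(ds, batch_size=64, sampler=sampler)

images, labels = next(iter(loader))
print(images.shape, torch.bincount(labels))  # batches are now roughly balanced
```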

    Binary Patterns Encoded Convolutional Neural Networks for Texture Recognition and Remote Sensing Scene Classification

    Designing discriminative and powerful texture features robust to realistic imaging conditions is a challenging computer vision problem with many applications, including material recognition and the analysis of satellite or aerial imagery. In the past, most texture description approaches were based on dense, orderless statistical distributions of local features. However, most recent approaches to texture recognition and remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The de facto practice when learning these CNN models is to use RGB patches as input, with training performed on large amounts of labeled data (ImageNet). In this paper, we show that Binary Patterns encoded CNN models, codenamed TEX-Nets, trained using mapped coded images with explicit texture information, provide complementary information to the standard RGB deep models. Additionally, two deep architectures, namely early and late fusion, are investigated to combine the texture and color information. To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification. We perform comprehensive experiments on four texture recognition datasets and four remote sensing scene classification benchmarks: UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with 7 categories, and the recently introduced large-scale aerial image dataset (AID) with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary information to a standard RGB deep model of the same network architecture. Our late-fusion TEX-Net architecture consistently improves the overall performance compared to the standard RGB network on both recognition problems, and our final combination outperforms the state-of-the-art without employing fine-tuning or ensembles of RGB network architectures.
    Comment: To appear in the ISPRS Journal of Photogrammetry and Remote Sensing
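
    The two ingredients of the approach, a local-binary-pattern (texture) encoding of an image and a late-fusion head combining the outputs of a texture branch and an RGB branch, can be sketched as below. The linear branch networks are trivial placeholders standing in for the TEX-Net CNNs.

```python
# Sketch of an LBP texture encoding plus a late-fusion head; the linear
# branches are placeholders standing in for the TEX-Net CNNs.
import numpy as np
import torch
import torch.nn as nn
from skimage.feature import local_binary_pattern

# 1) Map a grayscale image to an LBP-coded image (a "mapped coded image").
gray = np.random.rand(224, 224)
lbp = local_binary_pattern(gray, P=8, R=1.0, method="uniform")


# 2) Late fusion: run texture and RGB branches separately, then combine
#    their class scores (here by averaging the logits).
class LateFusion(nn.Module):
    def __init__(self, branch_tex, branch_rgb):
        super().__init__()
        self.branch_tex = branch_tex
        self.branch_rgb = branch_rgb

    def forward(self, x_tex, x_rgb):
        return 0.5 * (self.branch_tex(x_tex) + self.branch_rgb(x_rgb))


num_classes = 21  # e.g., the UC-Merced scene categories
tex_branch = nn.Sequential(nn.Flatten(), nn.Linear(224 * 224, num_classes))
rgb_branch = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, num_classes))
fusion = LateFusion(tex_branch, rgb_branch)

x_tex = torch.tensor(lbp, dtype=torch.float32).unsqueeze(0)  # shape (1, 224, 224)
x_rgb = torch.rand(1, 3, 224, 224)
print(fusion(x_tex, x_rgb).shape)  # torch.Size([1, 21])
```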