
    A cloud-based remote sensing data production system

    The data processing capability of existing remote sensing systems has not kept pace with the volume of data that is received and needs to be processed. Nor are existing product services able to offer users a variety of remote sensing data sources to choose from. In this paper, we therefore present a product generation programme that uses multisource remote sensing data across distributed data centers in a cloud environment, to compensate for the low production efficiency, limited product types and simple services of existing systems. The programme adopts a “master–slave” architecture. Specifically, the master center is mainly responsible for receiving and parsing production orders, scheduling tasks and data, and returning results; the slave centers are distributed remote sensing data centers, each storing one or more types of remote sensing data, and are mainly responsible for executing production tasks. In general, each production task runs on a single data center, and data scheduling among centers follows a “minimum data transferring” strategy. The logical workflow of each production task is organized from a knowledge base and then turned into the actually executed workflow by Kepler. In addition, the scheduling strategy of each production task depends mainly on Ganglia monitoring results, so computing resources can be allocated or expanded adaptively. Finally, we evaluated the proposed programme with test experiments at global, regional and local scales, and the results showed that the proposed cloud-based remote sensing production system can handle massive remote sensing data and the generation of different products, as well as on-demand remote sensing computing and information services.
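
    As a minimal sketch of the “minimum data transferring” idea described above (not the paper's actual scheduler), one can assign each production task to the single data center that already holds the largest volume of the required inputs, so the least data has to move between centers. The names `DataCenter`, `choose_center` and the example datasets are illustrative assumptions.

```python
# Hypothetical sketch of a "minimum data transferring" scheduling heuristic:
# assign the task to the center that needs the least inbound data.
from dataclasses import dataclass, field

@dataclass
class DataCenter:
    name: str
    holdings: dict = field(default_factory=dict)  # dataset id -> size (GB) stored locally

def transfer_cost(center: DataCenter, required: dict) -> float:
    """Volume (GB) that would have to be shipped to this center."""
    return sum(size for ds, size in required.items() if ds not in center.holdings)

def choose_center(centers: list, required: dict) -> DataCenter:
    """Pick the center with the minimum inbound data transfer."""
    return min(centers, key=lambda c: transfer_cost(c, required))

if __name__ == "__main__":
    required = {"scene_042": 8.0, "modis_tile_h27v05": 2.5, "dem_90m": 1.2}
    centers = [
        DataCenter("center-A", {"scene_042": 8.0}),
        DataCenter("center-B", {"modis_tile_h27v05": 2.5, "dem_90m": 1.2}),
    ]
    best = choose_center(centers, required)
    print(best.name, transfer_cost(best, required), "GB to transfer")
```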

    Spatio-temporal research data infrastructure in the context of autonomous driving

    In this paper, we present an implementation of a research data management system that features structured data storage for spatio-temporal experimental data (environmental perception and navigation in the framework of autonomous driving), including metadata management and interfaces for visualization and parallel processing. The demands of the research environment, the design of the system, the organization of the data storage and computational hardware, as well as the structures and processes related to data collection, preparation, annotation and storage are described in detail. We provide examples of dataset handling, explaining the data preparation steps required for storage as well as the benefits of using the data in scientific tasks.
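
    To make the metadata-management idea concrete, here is a small, hypothetical sketch (not the authors' schema) of registering sensor recordings in a lightweight catalogue with a time range and spatial bounding box, so datasets can later be found by space and time. Table and column names are assumptions.

```python
# Illustrative spatio-temporal metadata catalogue using the standard library only.
import sqlite3

conn = sqlite3.connect("catalogue.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS recordings (
        id INTEGER PRIMARY KEY,
        sensor TEXT,            -- e.g. 'lidar', 'camera', 'gnss'
        t_start TEXT,           -- ISO-8601 timestamps
        t_end   TEXT,
        min_lon REAL, min_lat REAL, max_lon REAL, max_lat REAL,
        uri TEXT                -- where the raw data lives
    )""")

conn.execute(
    "INSERT INTO recordings (sensor, t_start, t_end, min_lon, min_lat, max_lon, max_lat, uri) "
    "VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
    ("lidar", "2020-06-01T09:00:00", "2020-06-01T09:20:00",
     9.70, 52.35, 9.78, 52.40, "s3://drive-2020-06-01/lidar/"))
conn.commit()

# Spatio-temporal query: all recordings overlapping a window of interest.
rows = conn.execute(
    "SELECT sensor, uri FROM recordings "
    "WHERE t_end >= ? AND t_start <= ? AND max_lon >= ? AND min_lon <= ?",
    ("2020-06-01T09:10:00", "2020-06-01T09:15:00", 9.72, 9.75)).fetchall()
print(rows)
```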

    Global-Scale Resource Survey and Performance Monitoring of Public OGC Web Map Services

    One of the most widely implemented service standards provided by the Open Geospatial Consortium (OGC) to the user community is the Web Map Service (WMS). WMS is widely employed globally, but there is limited knowledge of the global distribution, adoption status or service quality of these online WMS resources. To fill this void, we investigated global WMS resources and performed distributed performance monitoring of these services. This paper explicates a crawling method used to discover these WMSs and a distributed monitoring framework that monitored 46,296 WMSs continuously for over one year. We analyzed server locations, provider types, themes, the spatiotemporal coverage of map layers and the service versions for 41,703 valid WMSs. Furthermore, we appraised the stability and performance of the basic operations (i.e., GetCapabilities and GetMap) for 1210 selected WMSs. We discuss the major reasons for request errors and performance issues, as well as the relationship between service response times and the spatiotemporal distribution of client monitoring sites. This paper will help service providers, end users and developers of standards to grasp the status of global WMS resources, as well as to understand the adoption status of OGC standards. The conclusions drawn in this paper can benefit geospatial resource discovery and service performance evaluation, and guide service performance improvements.
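
    The sketch below illustrates (under stated assumptions, not the paper's monitoring framework) how one might probe the two basic WMS operations, GetCapabilities and GetMap, and record response time, HTTP status and payload size; the endpoint URL and layer name are placeholders.

```python
# Minimal WMS probe: time one GetCapabilities and one GetMap request.
import time
import requests

def probe(url: str, params: dict, timeout: float = 30.0) -> dict:
    """Issue one WMS request and return status, latency and payload size."""
    start = time.monotonic()
    try:
        r = requests.get(url, params=params, timeout=timeout)
        return {"status": r.status_code,
                "seconds": time.monotonic() - start,
                "bytes": len(r.content)}
    except requests.RequestException as exc:
        return {"status": None, "seconds": time.monotonic() - start, "error": str(exc)}

endpoint = "https://example.org/geoserver/wms"          # placeholder WMS endpoint
caps = probe(endpoint, {"service": "WMS", "request": "GetCapabilities"})
getmap = probe(endpoint, {
    "service": "WMS", "version": "1.3.0", "request": "GetMap",
    "layers": "some_layer", "crs": "EPSG:4326",
    "bbox": "-90,-180,90,180", "width": 256, "height": 256,
    "format": "image/png"})
print(caps, getmap)
```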

    Repairing Landsat Satellite Imagery Using Deep Machine Learning Techniques

    Satellite imagery is one of the most widely used sources for analyzing geographic features and environments around the world. The data gathered from satellites are used to quantify many vital problems facing our society, such as the impact of natural disasters, shore erosion, rising water levels, and urban growth rates. In this paper, we construct machine learning and deep learning algorithms for repairing anomalies in Landsat satellite imagery, which arise for reasons ranging from cloud obstruction to satellite malfunctions. The accuracy of GIS data is crucial to ensuring that the models produced from such data are as close to reality as possible. Reducing the inherent bias caused by the obstruction or obfuscation of reflectance values is a simple but effective way to represent the reality of our environment more closely with satellite data. Using clean pixels from previously acquired satellite imagery, we were able to model the bias present in each scene at different times and apply algorithms to fix the inconsistencies. The machine learning model decreased the mean absolute error by an average of 80.1% compared with traditional repair algorithms such as mosaicking.
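
    A minimal, hypothetical sketch of the general approach described above: use pixels that are clean in both a reference scene and the damaged scene to learn a per-scene mapping of reflectance values, then predict the obscured pixels. The synthetic arrays and the choice of a random-forest regressor are illustrative assumptions, not the authors' exact model.

```python
# Learn a clean-pixel reflectance mapping and fill cloud-obscured pixels.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
reference = rng.random((100, 100))               # earlier, cloud-free scene (one band)
target = 0.9 * reference + 0.05 + rng.normal(0, 0.01, reference.shape)
cloud_mask = rng.random(reference.shape) < 0.2   # True where the new scene is obscured

# Fit on clean pixels only: reference reflectance -> target reflectance.
X_clean = reference[~cloud_mask].reshape(-1, 1)
y_clean = target[~cloud_mask]
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_clean, y_clean)

# Fill the obscured pixels with the model's prediction.
repaired = target.copy()
repaired[cloud_mask] = model.predict(reference[cloud_mask].reshape(-1, 1))

mae = np.abs(repaired[cloud_mask] - (0.9 * reference + 0.05)[cloud_mask]).mean()
print(f"mean absolute error on repaired pixels: {mae:.4f}")
```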

    Multisource Point Clouds, Point Simplification and Surface Reconstruction

    As data acquisition technology continues to advance, the algorithms used for surface reconstruction need to be improved and upgraded. In this paper, we utilized multiple terrestrial Light Detection and Ranging (LiDAR) systems to acquire point clouds with different levels of complexity, namely dynamic and rigid targets, for surface reconstruction. We propose a robust and effective method to obtain simplified and uniformly resampled points for surface reconstruction. The method was evaluated and achieved a point reduction of up to 99.371% with a standard deviation of 0.2 cm. In addition, well-known surface reconstruction methods, i.e., Alpha shapes, Screened Poisson reconstruction (SPR), the Crust, and Algebraic point set surfaces (APSS Marching Cubes), were utilized for object reconstruction. We evaluated the benefits of exploiting simplified and uniform points, as well as points of different densities, for surface reconstruction. These reconstruction methods and their capacity for handling data imperfections were analyzed and discussed. The findings are that (i) the capacity of surface reconstruction to deal with diverse objects needs to be improved; (ii) when the number of points reaches the level of millions (e.g., approximately five million points in our data), point simplification is necessary, as otherwise the reconstruction methods might fail; (iii) for some reconstruction methods the number of output meshes is proportional to the number of input points, while for a few methods the opposite holds; (iv) all reconstruction methods benefit from the reduction in running time; and (v) a balance between geometric detail and the level of smoothing is needed. Some methods produce detailed and accurate geometry but cope poorly with data imperfections, while other methods exhibit the opposite characteristics.
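
    As an illustration of the kind of simplification and uniform-resampling step discussed above (the paper's own method is more involved), the sketch below performs voxel-grid downsampling in NumPy: each occupied voxel is replaced by the centroid of the points that fall inside it. All sizes and the synthetic cloud are assumptions.

```python
# Voxel-grid downsampling: one centroid per occupied voxel.
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """points: (N, 3) array; returns one centroid per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel index and average them.
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inverse)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cloud = rng.random((1_000_000, 3)) * 10.0     # dense synthetic cloud (metres)
    simplified = voxel_downsample(cloud, voxel_size=0.5)
    print(f"{cloud.shape[0]} -> {simplified.shape[0]} points "
          f"({100 * (1 - simplified.shape[0] / cloud.shape[0]):.3f}% reduction)")
```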

    Towards intelligent geo-database support for earth system observation: Improving the preparation and analysis of big spatio-temporal raster data

    The European COPERNICUS program provides an unprecedented breakthrough in the broad use and application of satellite remote sensing data. Maintained on a sustainable basis, the COPERNICUS system is operated under a free-and-open data policy. Its guaranteed long-term availability attracts a broader community to remote sensing applications. In general, the increasing amount of satellite remote sensing data opens the door to diverse and advanced analyses of these data for earth system science. However, preparing the data for dedicated processing is still inefficient, as it requires time-consuming operator interaction based on advanced technical skills. The involved scientists therefore have to spend significant parts of the available project budget on data preparation rather than on science. In addition, analyzing the rich content of remote sensing data requires new concepts for better extraction of promising structures and signals as an effective basis for further analysis. In this paper we propose approaches to improve the preparation of satellite remote sensing data by means of a geo-database, thus minimizing the time needed and the errors possibly introduced by human interaction. In addition, we recommend improving data quality and the analysis of the data by incorporating Artificial Intelligence methods. A use case for data preparation and analysis is presented for earth surface deformation analysis in the Upper Rhine Valley, Germany, based on Persistent Scatterer Interferometric Synthetic Aperture Radar data. Finally, we give an outlook on our future research.
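
    As a hedged sketch of one typical geo-database preparation step in such a workflow (not the system described in the paper), the database itself can cut out just the persistent-scatterer observations that fall inside an area of interest and a time window, instead of handling whole scenes as files. The table name, columns, connection string and bounding box below are assumptions for illustration.

```python
# Spatio-temporal subsetting of PS-InSAR points inside a PostGIS geo-database.
import psycopg2

conn = psycopg2.connect("dbname=eo_archive user=eo")   # placeholder connection
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT ps_id, acq_date, displacement_mm
        FROM ps_insar_points
        WHERE acq_date BETWEEN %s AND %s
          AND ST_Within(geom, ST_MakeEnvelope(%s, %s, %s, %s, 4326))
        ORDER BY ps_id, acq_date
        """,
        ("2018-01-01", "2020-12-31", 8.1, 48.8, 8.5, 49.1))  # placeholder AOI
    for ps_id, acq_date, disp in cur.fetchmany(5):
        print(ps_id, acq_date, disp)
```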

    Living Earth: Implementing national standardised land cover classification systems for Earth Observation in support of sustainable development

    Earth Observation (EO) has been recognised as a key data source for supporting the United Nations Sustainable Development Goals (SDGs). Advances in data availability and analytical capabilities have given a wide range of users access to global-coverage analysis-ready data (ARD). However, ARD does not provide the information required by national agencies tasked with coordinating the implementation of the SDGs. Reliable, standardised, scalable mapping of land cover and its change over time and space facilitates informed decision making, providing cohesive methods for SDG target setting and reporting. The aim of this study was to implement a global framework for classifying land cover. The Food and Agriculture Organisation’s Land Cover Classification System (FAO LCCS) provides a global land cover taxonomy suitable to comprehensively support SDG target setting and reporting. We present a full implementation of the FAO LCCS optimised for EO data: Living Earth, an open-source software package that can be readily applied using existing national EO infrastructure and satellite data. We resolve several semantic challenges of the LCCS for consistent EO implementation, including modifications to environmental descriptors, inter-dependencies within the modular-hierarchical framework, and increased flexibility in cases of limited data availability. To ensure easy adoption of Living Earth for SDG reporting, we identified key environmental descriptors and provide resource allocation recommendations for generating routinely retrieved input parameters. Living Earth provides an optimal platform for global adoption of EO4SDGs, ensuring a transparent methodology that allows monitoring to be standardised for all countries.
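
    The sketch below illustrates the layer-driven, modular flavour of an LCCS-style classification: the first dichotomy (primarily vegetated versus not, terrestrial versus aquatic/regularly flooded) is derived by combining independent environmental-descriptor layers. The input layers, thresholds and numeric codes are illustrative assumptions, not Living Earth's actual implementation.

```python
# Combine environmental-descriptor layers into LCCS-style level-1 classes.
import numpy as np

rng = np.random.default_rng(2)
vegetation_fraction = rng.random((4, 4))   # e.g. from an EO fractional-cover product
water_frequency = rng.random((4, 4))       # e.g. fraction of observations flagged as water

vegetated = vegetation_fraction > 0.15
aquatic = water_frequency > 0.25

# Illustrative codes: 1 vegetated terrestrial, 2 vegetated aquatic,
# 3 non-vegetated terrestrial, 4 non-vegetated aquatic.
level1 = np.where(vegetated & ~aquatic, 1,
         np.where(vegetated & aquatic, 2,
         np.where(~vegetated & ~aquatic, 3, 4)))
print(level1)
```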

    Web technologies for environmental big data

    Recent evolutions in computing science and web technology provide the environmental community with continuously expanding resources for data collection and analysis that pose unprecedented challenges to the design of analysis methods, workflows, and interaction with data sets. In light of the recent UK Research Council-funded Environmental Virtual Observatory pilot project, this paper gives an overview of currently available implementations of web-based technologies for processing large and heterogeneous datasets and discusses their relevance within the context of environmental data processing, simulation and prediction. We found that the processing of the simple datasets used in the pilot was relatively straightforward using a combination of R, RPy2, PyWPS and PostgreSQL. However, the use of NoSQL databases and more versatile frameworks, such as implementations based on OGC standards, may provide a wider and more flexible set of features that particularly facilitate working with larger volumes and more heterogeneous data sources.
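
    As a hedged sketch of the kind of glue such a stack combines (not the pilot's actual code): pull observations out of PostgreSQL with psycopg2, hand them to R through rpy2, and return a summary that a PyWPS process could expose as a web service. The database name, table and column names are placeholders.

```python
# PostgreSQL -> Python -> R pipeline for a simple environmental summary.
import psycopg2
import rpy2.robjects as robjects

conn = psycopg2.connect("dbname=evo user=evo")          # placeholder connection
with conn, conn.cursor() as cur:
    cur.execute("SELECT flow_m3s FROM river_gauge WHERE site_id = %s", ("site_001",))
    flows = [row[0] for row in cur.fetchall()]

# Run an R quantile summary on the retrieved series.
r_values = robjects.FloatVector(flows)
r_quantile = robjects.r["quantile"]
print(r_quantile(r_values, probs=robjects.FloatVector([0.05, 0.5, 0.95])))
```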