
    From foreign journals

    The text provides a list of papers published in foreign journals.

    A geographical information system for wild fire management

    The CROSS-Fire project focuses on developing a grid-based framework for wild fire management, using FireStation (FS) as a standalone application that simulates fire spread over complex topography. The overall software comprises several components: client applications, which request geo-referenced data and fire spread simulations; Spatial Data Infrastructures (SDI), which provide geo-referenced data; and the GRID, which supports the computational and data storage requirements. Herein we present the central WPS (Web Processing Service) layer developed to support the interaction between all components of the architecture. This OGC-WS compatible layer provides the mechanism to access the grid facilities for processing and data management, including any algorithm, calculation, or model that operates on spatially referenced data, and also mediates communication with the FS console. We also describe the work that has been done to provide FS with dynamic fuel maps, using an OGC-WCS suite of services and satellite data. This task complements the previous integration of dynamic data from meteorological stations using OGC-SWE services.
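    As an illustration of the client-to-WPS interaction described above, the following minimal sketch queries a WPS endpoint and launches a process. The endpoint URL, process identifier, and input names are hypothetical, not taken from the CROSS-Fire deployment.

```python
# Minimal sketch of a WPS client call; endpoint, process identifier,
# and input names are illustrative assumptions.
from owslib.wps import WebProcessingService

# Hypothetical WPS endpoint of the middleware layer described above.
wps = WebProcessingService("http://example.org/crossfire/wps")

# List the processes the layer exposes (e.g., a fire-spread simulation).
for process in wps.processes:
    print(process.identifier, "-", process.title)

# Execute a hypothetical fire-spread process with literal inputs.
inputs = [("ignitionPoint", "POINT(-8.2 41.5)"),
          ("windSpeed", "12.5")]
execution = wps.execute("fireSpreadSimulation", inputs)
print(execution.status)
```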

    An OGC/SOS conformant client to manage geospatial data on the GRID

    This paper describes a Sensor Observation Service (SOS) client developed to integrate dynamic geospatial data from meteorological sensors into a grid-based risk management decision support system. The present work is part of the CROSS-Fire project, which uses forest fires as the main case study and the FireStation application to simulate fire spread. The meteorological data is accessed through the SOS standard from the Open Geospatial Consortium (OGC), using the Observations and Measurements (O&M) standard encoding format. Since the SOS standard was not designed to access sensors directly, we developed an interface application to load the SOS database with observations from a Vantis Weather Station (WS). To integrate the SOS meteorological data into FireStation, the developed SOS client was embedded in a Web Processing Service (WPS) algorithm. This algorithm was designed to be functional and fully compliant with the SOS, SensorML, and O&M standards from OGC. With minor modifications to the developed SOS database interface, the SOS client works with any WS. The client supports spatial and temporal filters, including the integration of dynamic data from satellites into FireStation, as described.
    Fundação para a Ciência e a Tecnologia (FCT)
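    A GetObservation request of the kind such a client issues can be sketched as a plain SOS 1.0 KVP call; the endpoint, offering, and observed property below are illustrative assumptions, not the project's actual configuration.

```python
# Minimal sketch of an SOS GetObservation request over HTTP KVP,
# with a temporal filter; endpoint and parameter values are assumptions.
import requests

params = {
    "service": "SOS",
    "version": "1.0.0",
    "request": "GetObservation",
    "offering": "WeatherStationObservations",         # hypothetical offering
    "observedProperty": "urn:ogc:def:phenomenon:OGC:temperature",
    "eventTime": "2012-06-01T00:00:00Z/2012-06-02T00:00:00Z",
    "responseFormat": 'text/xml;subtype="om/1.0.0"',  # O&M encoding
}
response = requests.get("http://example.org/sos", params=params)
response.raise_for_status()
print(response.text[:500])  # O&M XML with the requested observations
```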

    HPC-oriented Canonical Workflows for Machine Learning Applications in Climate and Weather Prediction

    Machine learning (ML) applications in weather and climate are gaining momentum as big data and the immense increase in high-performance computing (HPC) power pave the way. Ensuring FAIR data and reproducible ML practices are significant challenges for Earth system researchers. Even though the FAIR principles are well known to many scientists, research communities are slow to adopt them. The Canonical Workflow Framework for Research (CWFR) provides a platform to ensure the FAIRness and reproducibility of these practices without overwhelming researchers. This conceptual paper envisions a holistic CWFR approach towards ML applications in weather and climate, focusing on HPC and big data. Specifically, we discuss the FAIR Digital Object (FDO) and Research Object (RO) in the DeepRain project to achieve granular reproducibility. DeepRain is a project that aims to improve precipitation forecasts in Germany by using ML. Our concept envisages a raster datacube to provide data harmonization and fast, scalable data access. We suggest the Jupyter notebook as a single reproducible experiment. In addition, we envision JupyterHub as a scalable and distributed central platform that connects all these elements and the HPC resources to the researchers via an easy-to-use graphical interface.
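    One way to picture the granular reproducibility idea is a minimal, FAIR-style metadata record attached to each notebook run; the field names below are illustrative assumptions, not the FDO schema used in DeepRain.

```python
# Minimal sketch of recording an ML experiment as a FAIR-style digital
# object: a JSON record with identifier, code version, data slice, and
# environment. All field names are illustrative assumptions.
import json, uuid, datetime

record = {
    "id": str(uuid.uuid4()),                      # persistent identifier (stub)
    "created": datetime.datetime.utcnow().isoformat() + "Z",
    "notebook": "precipitation_forecast.ipynb",   # the reproducible unit
    "code_version": "git:abc1234",                # pin the exact code state
    "datacube_query": "precip[2018-01:2018-12]",  # data slice used
    "environment": "environment.yml",             # environment spec for re-runs
}
with open("experiment_fdo.json", "w") as f:
    json.dump(record, f, indent=2)
```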

    Towards intelligent geo-database support for earth system observation: Improving the preparation and analysis of big spatio-temporal raster data

    The European COPERNICUS program provides an unprecedented breakthrough in the broad use and application of satellite remote sensing data. Maintained on a sustainable basis, the COPERNICUS system is operated under a free-and-open data policy. Its guaranteed long-term availability attracts a broader community to remote sensing applications. In general, the increasing amount of satellite remote sensing data opens the door to diverse and advanced analyses of this data for earth system science. However, preparing the data for dedicated processing is still inefficient, as it requires time-consuming operator interaction based on advanced technical skills. The scientists involved therefore have to spend significant parts of the available project budget on data preparation rather than on science. In addition, the analysis of the rich content of remote sensing data requires new concepts to better extract promising structures and signals as an effective basis for further analysis. In this paper we propose approaches to improve the preparation of satellite remote sensing data with a geo-database, minimizing the time needed and the errors possibly introduced by human interaction. In addition, we recommend improving data quality and the analysis of the data by incorporating Artificial Intelligence methods. A use case for data preparation and analysis is presented for earth surface deformation analysis in the Upper Rhine Valley, Germany, based on Persistent Scatterer Interferometric Synthetic Aperture Radar data. Finally, we give an outlook on our future research.
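    A minimal sketch of one automatable preparation step, reading a raster scene, masking nodata values, and deriving summary statistics a geo-database could store, is shown below; the file name and band are assumptions.

```python
# Minimal sketch of automated raster preparation: read one band,
# mask nodata, and compute summary statistics without operator
# interaction. File name and band index are illustrative assumptions.
import numpy as np
import rasterio

with rasterio.open("scene.tif") as src:           # hypothetical satellite scene
    band = src.read(1).astype("float64")
    if src.nodata is not None:
        band = np.where(band == src.nodata, np.nan, band)

# Statistics a geo-database could store alongside the raster.
print("min:", np.nanmin(band))
print("max:", np.nanmax(band))
print("mean:", np.nanmean(band))
```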

    BigDataCube: Making Big Data a Commodity

    The BigDataCube project aims at advancing the innovative datacube paradigm, i.e., analysis-ready spatio-temporal raster data, from the level of a scientific prototype to pre-commercial Earth Observation (EO) services. To this end, the European Datacube Engine (in database lingo: "Array Database System"), rasdaman, will be installed on the public German Copernicus hub, CODE-DE, as well as in a commercial cloud environment to exemplarily offer analytics services and to federate both, thereby demonstrating an integrated public/private service. Started in January 2018 with a runtime of 18 months, BigDataCube will complement the batch-oriented Hadoop service already available on CODE-DE with rasdaman, thereby offering important additional functionality, in particular a paradigm of "any query, any time, on any size", strictly based on open geo standards and federated with other data centers. On this platform novel, specialized services can be established by third parties in a fast, flexible, and scalable manner. To this end, several features crucial for operational services need to be tested and/or implemented, such as securing access (in particular in a distributed processing context), tuning to the specific cloud configuration of CODE-DE, and further items to be determined in the initial requirements analysis phase. The result will be the prototype of a federation of rasdaman installations on CODE-DE, cloudeo, as well as further (external) data centers; further, best practices on the use of array databases in operational environments will be established. This will pave the way for individual value-adding services by third parties.
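    The "any query, any time, on any size" paradigm can be sketched as a WCPS query sent to a rasdaman OWS endpoint; the URL and coverage name below are illustrative assumptions, not the CODE-DE setup.

```python
# Minimal sketch of a WCPS query against a rasdaman endpoint: extract
# a time slice server-side and return it as PNG. Endpoint and coverage
# name are illustrative assumptions.
import requests

query = (
    'for c in (S2_FCC) '                  # hypothetical coverage name
    'return encode(c[ansi("2018-06-01")], "image/png")'
)
response = requests.get(
    "http://example.org/rasdaman/ows",    # hypothetical endpoint
    params={"service": "WCS", "version": "2.0.1",
            "request": "ProcessCoverages", "query": query},
)
response.raise_for_status()
with open("subset.png", "wb") as f:
    f.write(response.content)
```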

    Geospatial web services pave new ways for server-based on-demand access and processing of Big Earth Data

    Big Earth Data has experienced a considerable increase in volume in recent years due to improved sensing technologies and improvements in numerical weather prediction models. The traditional geospatial data analysis workflow hinders the use of large volumes of geospatial data due to limited disk space and computing capacity. Geospatial web service technologies bring new opportunities to access large volumes of Big Earth Data via the Internet and to process them server-side. Four practical examples from the marine, climate, planetary and earth observation science communities show how the standard Web Coverage Service interface and its processing extension can be integrated into the traditional geospatial data workflow. Web service technologies offer a time- and cost-effective way to access multi-dimensional data in a user-tailored format and allow for rapid application development or time-series extraction. Data transport is minimised and enhanced processing capabilities are offered. More research is required to investigate web service implementations in an operational mode, and large data centres have to become more progressive in adopting geo-data standard interfaces. At the same time, data users have to become aware of the advantages of web services and be trained in how to benefit most from them.
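    A minimal sketch of server-side subsetting through a Web Coverage Service, so that only the requested slice is transferred, might look as follows; the endpoint and coverage identifier are assumptions.

```python
# Minimal sketch of a WCS 2.0.1 GetCoverage call that downloads only a
# spatial subset in a user-tailored format. Endpoint and coverage
# identifier are illustrative assumptions.
from owslib.wcs import WebCoverageService

wcs = WebCoverageService("http://example.org/wcs", version="2.0.1")
print(list(wcs.contents))                            # coverages the server offers

# Request a small geographic subset instead of the full dataset.
response = wcs.getCoverage(
    identifier=["sea_surface_temperature"],          # hypothetical coverage
    format="application/netcdf",
    subsets=[("Long", 0, 10), ("Lat", 40, 50)],
)
with open("subset.nc", "wb") as f:
    f.write(response.read())
```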

    Big Data Analytics for Earth Sciences: the EarthServer approach

    Big Data Analytics is an emerging field, since massive storage and computing capabilities have been made available by advanced e-infrastructures. Earth and Environmental sciences are likely to benefit from Big Data Analytics techniques supporting the processing of the large number of Earth Observation datasets currently acquired and generated through observations and simulations. However, Earth Science data and applications present specificities in terms of the relevance of geospatial information, the wide heterogeneity of data models and formats, and the complexity of processing. Therefore, Big Earth Data Analytics requires specifically tailored techniques and tools. The EarthServer Big Earth Data Analytics engine offers a solution for coverage-type datasets, built around a high-performance array database technology and the adoption and enhancement of standards for service interaction (OGC WCS and WCPS). The EarthServer solution, guided by requirements collected from scientific communities and international initiatives, provides a holistic approach that ranges from query languages and scalability up to mobile access and visualization. The result is demonstrated and validated through the development of lighthouse applications in the Marine, Geology, Atmospheric, Planetary and Cryospheric science domains.
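    A WCPS query moves the computation to the server, returning only the result; the sketch below requests an average over a time slice, with the endpoint, coverage, and axis names as illustrative assumptions.

```python
# Minimal sketch of server-side analytics with WCPS: compute an average
# over a temporal subset and return a single scalar instead of the data.
# Endpoint, coverage, and axis names are illustrative assumptions.
import requests

query = (
    "for c in (AvgTemperature) "          # hypothetical coverage
    'return avg(c[ansi("2010-01":"2010-12")])'
)
response = requests.get(
    "http://example.org/rasdaman/ows",    # hypothetical endpoint
    params={"service": "WCS", "version": "2.0.1",
            "request": "ProcessCoverages", "query": query},
)
response.raise_for_status()
print(response.text)                      # a single scalar, not a download
```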

    Load testing of HELIDEM geo-portal: an OGC open standards interoperability example integrating WMS, WFS, WCS and WPS

    This paper presents a load test of the HELIDEM geo-portal, an example of interoperability between a number of standard geospatial services as defined by the Open Geospatial Consortium. The portal was developed within the European project HELIDEM (www.helidem.eu) to valorize the main project output, a cross-border digital terrain model (DTM), and to foster its diffusion and usage through easily accessible tools. The DTM covers the alpine area located between Southern Switzerland (Canton Ticino) and Northern Italy (Lombardy and Piedmont Regions). From a technological point of view, the server-side component of the portal is based on a Service Oriented Architecture implemented using the open source software ZOO-Project, GRASS GIS and GeoServer; the client-side component is a Web interface based on CSS3 and HTML5, built with the ExtJS framework and the OpenLayers software. The presented solution is a mix of technologies and software, some of which are considered mature and robust within the open source geospatial community, while others are considered promising but not yet sufficiently tested. For this reason, this research conducted a load test with concurrent users in order to verify the robustness, quality and performance of the system and to identify potential bottlenecks. The test results did not register any runtime exceptions, confirming the good quality of the implemented system and its underlying software. Nevertheless, performance and response time degrade exponentially with an increasing number of concurrent users, area of analysis and process complexity. Finally, the test confirms that the implemented solution is robust: no system failure was recorded during the analysis.
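    A load test of the kind described can be sketched with a thread pool of concurrent clients timing their requests; the target URL below is an illustrative assumption, not the HELIDEM portal.

```python
# Minimal sketch of a concurrent-user load test: N threads issue the
# same OGC request and per-request latency is recorded. The target URL
# is an illustrative assumption.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://example.org/geoserver/wms"  # hypothetical WMS endpoint
PARAMS = {"service": "WMS", "version": "1.3.0", "request": "GetCapabilities"}

def timed_request(_):
    # Return the HTTP status and the elapsed time for one request.
    start = time.perf_counter()
    status = requests.get(URL, params=PARAMS, timeout=60).status_code
    return status, time.perf_counter() - start

CONCURRENT_USERS = 20
with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    results = list(pool.map(timed_request, range(CONCURRENT_USERS)))

latencies = [t for _, t in results]
print("errors:", sum(1 for s, _ in results if s != 200))
print("mean latency: %.2fs, max: %.2fs"
      % (sum(latencies) / len(latencies), max(latencies)))
```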