517 research outputs found

    Parallelization of web processing services on cloud computing: A case study of Geostatistical Methods

    Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies. In the last decade the publication of geographic information on the Internet has increased, especially with the emergence of new technologies for sharing information. This information requires online geoprocessing technologies that use new platforms such as Cloud Computing. This thesis evaluates the parallelization of geoprocesses on the Amazon Web Services (AWS) cloud platform through OGC Web Processing Services (WPS), using the 52North WPS framework. The evaluation is performed with a new implementation of a geostatistical library in Java with parallelization capabilities. The geoprocessing is tested by incrementing the number of micro instances on the cloud through GridGain technology. The geostatistical library obtains interpolated values similar to those of the ArcGIS software: no differences were found for the Inverse Distance Weighting (IDW) and Radial Basis Functions (RBF) methods, and differences of 0.01% in the Root Mean Square (RMS) error were found for the Ordinary and Universal Kriging methods. The parallelization experiments demonstrate that the duration of the interpolation decreases as the number of nodes increases. The duration depends on the size of the input dataset and the number of pixels to be interpolated. The maximum reduction in time was found with the largest configuration used in the research (1,000,000 pixels and a dataset of 10,000 points): the execution time decreased by 83% when working with 10 nodes for the Ordinary Kriging and IDW methods. However, the differences in duration between 5 nodes and 10 nodes were not statistically significant; the reductions with 5 nodes were 72% and 71% for the Ordinary Kriging and IDW methods, respectively. Finally, the experiments show that geoprocessing on Cloud Computing is feasible using the WPS interface, that the performance of geostatistical methods deployed through WPS services can be improved by parallelization, and that parallelization on the cloud is viable using a Grid configuration. The evaluation also showed that parallelizing geoprocesses on the cloud for academic purposes is inexpensive using the Amazon AWS platform.
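
    The IDW method evaluated in this thesis lends itself to straightforward data parallelism, since every output pixel is interpolated independently of the others. The sketch below illustrates that idea using Java parallel streams rather than the GridGain grid used in the thesis; the class and method names are illustrative and are not taken from the 52North WPS library.

        import java.util.List;
        import java.util.stream.IntStream;

        /** Minimal Inverse Distance Weighting sketch; each pixel is independent,
         *  so the outer loop can be split across threads (or, as in the thesis,
         *  across grid nodes). */
        public class IdwSketch {

            /** A sample point with a known value. */
            record Sample(double x, double y, double value) {}

            /** Interpolate one location from all samples using weights 1/d^power. */
            static double idw(double x, double y, List<Sample> samples, double power) {
                double weightedSum = 0.0, weightTotal = 0.0;
                for (Sample s : samples) {
                    double d = Math.hypot(x - s.x(), y - s.y());
                    if (d == 0.0) return s.value();          // exact hit on a sample
                    double w = 1.0 / Math.pow(d, power);
                    weightedSum += w * s.value();
                    weightTotal += w;
                }
                return weightedSum / weightTotal;
            }

            public static void main(String[] args) {
                List<Sample> samples = List.of(
                        new Sample(0, 0, 10.0), new Sample(5, 0, 20.0),
                        new Sample(0, 5, 30.0), new Sample(5, 5, 40.0));
                int width = 6, height = 6;
                double[] grid = new double[width * height];

                // Parallel loop over output pixels; stands in for distributing
                // pixel blocks to cloud nodes as described in the abstract.
                IntStream.range(0, grid.length).parallel().forEach(i ->
                        grid[i] = idw(i % width, i / width, samples, 2.0));

                System.out.printf("value at (2,2) = %.2f%n", grid[2 * width + 2]);
            }
        }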

    Arquitectura GRID Computing como medio para la democratización e integración de datos LiDAR

    The massive processing of information obtained from LiDAR (Light Detection and Ranging) sensors easily exceeds the processing capacity of conventional computers. Public and private organizations currently accumulate large collections of data derived from this type of sensor without users being able to access them in an agile and efficient way. The high cost of licenses and the complexity of the software needed to process a LiDAR-derived dataset significantly reduce the number of users with tools to exploit it, down to a limited number of providers. Against this background, new efforts have emerged that concentrate on making this information accessible to any user. This article discusses some of the solutions that support remote processing and accessibility of LiDAR data through the OpenGIS Web Processing Service standard implemented on a GRID Computing architecture. The results of recent research and the advances achieved within the framework of Spatial Data Infrastructures are identified. These works facilitate the processing, distribution of, and access to the data in question, and are the basis for future studies and for local and regional proposals.

    Procesamiento y accesibilidad de datos LiDAR a través de aplicaciones distribuidas

    The massive processing of information obtained from LiDAR (Light Detection and Ranging) sensors easily exceeds the processing capacity of conventional computers. Public and private organizations currently accumulate large collections of data derived from this type of sensor without users being able to access them in an agile and efficient way. The high cost of licenses and the complexity of the software needed to process a LiDAR-derived dataset significantly reduce the number of users with tools to exploit it, down to a limited number of providers. Against this background, new efforts have emerged that concentrate on making this information accessible to any user. This article discusses some of the solutions that support remote processing and accessibility of LiDAR data through the OpenGIS Web Processing Service standard implemented on a GRID Computing architecture. The results of recent research and the advances achieved within the framework of Spatial Data Infrastructures are identified. These works facilitate the processing, distribution of, and access to the data in question, and are the basis for future studies and for local and regional proposals.
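
    Both LiDAR entries above rely on the OGC Web Processing Service interface to expose remote processing to users. As a rough illustration of what a client-side call looks like, the sketch below builds a WPS 1.0.0 Execute request using the key-value-pair binding; the server URL, process identifier and inputs are hypothetical placeholders, not taken from these articles.

        import java.net.URI;
        import java.net.URLEncoder;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;
        import java.nio.charset.StandardCharsets;

        /** Sketch of a WPS 1.0.0 Execute call via the KVP (GET) binding.
         *  Endpoint, process name and inputs below are placeholders. */
        public class WpsExecuteSketch {
            public static void main(String[] args) throws Exception {
                String endpoint = "https://example.org/wps";                 // hypothetical server
                String process  = "lidar:ComputeDTM";                        // hypothetical process id
                String inputs   = "resolution=1.0;tile=example_tile.laz";    // hypothetical inputs

                String url = endpoint
                        + "?service=WPS&version=1.0.0&request=Execute"
                        + "&identifier=" + URLEncoder.encode(process, StandardCharsets.UTF_8)
                        + "&datainputs=" + URLEncoder.encode(inputs, StandardCharsets.UTF_8);

                HttpResponse<String> response = HttpClient.newHttpClient().send(
                        HttpRequest.newBuilder(URI.create(url)).GET().build(),
                        HttpResponse.BodyHandlers.ofString());

                // The response is an XML ExecuteResponse document describing status and outputs.
                System.out.println(response.body());
            }
        }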

    Development of Distributed Research Center for analysis of regional climatic and environmental changes

    We present an approach and first results of a collaborative project being carried out by a joint team of researchers from the Institute of Monitoring of Climatic and Ecological Systems, Russia, and the Earth Systems Research Center, UNH, USA. Its main objective is the development of a hardware and software platform prototype of a Distributed Research Center (DRC) for monitoring and projecting regional climatic and environmental changes in the Northern extratropical areas. The DRC should provide specialists working in climate-related sciences, as well as decision-makers, with accurate and detailed climatic characteristics for a selected area and with reliable and affordable tools for their in-depth statistical analysis and for studies of the effects of climate change. Within the framework of the project, new approaches to cloud processing and analysis of the large geospatial datasets (big geospatial data) inherent in climate change studies are developed and deployed on the technical platforms of both institutions. We discuss here the state of the art in this domain, describe the web-based information-computational systems developed by the partners, justify the methods chosen to reach the project goal, and briefly list the results obtained so far.

    RiBaSE: a pilot for testing the OGC web services integration of water-related information and models

    The design of an interoperability experiment that demonstrates how current ICT-based tools and water data can work in combination with geospatial web services is presented. The solution is being tested in three transboundary river basins: the Scheldt, the Maritsa and the Severn. The purpose of the experiment is to assess the effectiveness of OGC standards for describing the status and dynamics of surface water in river basins, to demonstrate their applicability, and to increase awareness of emerging hydrological standards such as WaterML 2.0. The pilot, applied here to a flooding scenario, will also help to identify potential gaps in OGC standards for water-domain applications.
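
    WaterML 2.0, mentioned above, encodes observed hydrological time series as XML time-value pairs. As a minimal sketch, assuming a WaterML 2.0 document has already been downloaded to a local file (the file name is a placeholder and this is not code from the RiBaSE pilot), the pairs can be read as follows.

        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;
        import org.w3c.dom.NodeList;
        import java.io.File;

        /** Sketch: read time-value pairs from a WaterML 2.0 document stored locally. */
        public class WaterMl2Sketch {
            public static void main(String[] args) throws Exception {
                DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
                factory.setNamespaceAware(true);
                Document doc = factory.newDocumentBuilder().parse(new File("observation.xml"));

                // WaterML 2.0 time series are lists of wml2:MeasurementTVP elements,
                // each holding a wml2:time and a wml2:value child.
                String wml2 = "http://www.opengis.net/waterml/2.0";
                NodeList pairs = doc.getElementsByTagNameNS(wml2, "MeasurementTVP");
                for (int i = 0; i < pairs.getLength(); i++) {
                    Element tvp = (Element) pairs.item(i);
                    String time  = tvp.getElementsByTagNameNS(wml2, "time").item(0).getTextContent();
                    String value = tvp.getElementsByTagNameNS(wml2, "value").item(0).getTextContent();
                    System.out.println(time + " -> " + value);
                }
            }
        }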

    Geospatial Web Services, Open Standards, and Advances in Interoperability: A Selected, Annotated Bibliography

    This paper is designed to help GIS librarians and information specialists follow developments in the emerging field of geospatial Web services (GWS). When built using open standards, GWS permit users to dynamically access, exchange, deliver, and process geospatial data and products on the World Wide Web, no matter what platform or protocol is used. Standards and specifications pertaining to geospatial ontologies, geospatial Web services, and interoperability are discussed in this bibliography. Finally, a selected, annotated list of bibliographic references by experts in the field is presented.

    Large-Scale Data Management and Analysis (LSDMA) - Big Data in Science

    Get PDF

    User's Guide to the MACC-RAD Services on solar energy radiation resources

    The European Earth observation programme GMES (Global Monitoring for Environment and Security), renamed Copernicus (the European Earth Observation Programme) in December 2012, aims at providing environmental information to support policymakers, public authorities and both public and commercial users. A systematic monitoring and forecasting of the state of the Earth's subsystems is currently under development, covering six thematic areas: marine, land, atmosphere, emergency, security and climate change. The land, marine and atmosphere monitoring services will contribute directly to the monitoring of climate change and to the assessment of mitigation and adaptation policies, while additional GMES services address emergency response and security-related aspects. The pre-operational atmosphere service of GMES is currently provided through the FP7 projects MACC and MACC-II (Monitoring Atmospheric Composition and Climate). MACC combines state-of-the-art atmospheric modelling with Earth observation data to provide information services covering European Air Quality, Global Atmospheric Composition, Climate, and UV and Solar Energy.

    Within the radiation subproject (MACC-RAD), existing historical and daily updated databases for monitoring incoming surface solar irradiance are being further developed. The service will meet the needs of European and national policy development and the requirements of (commercial) downstream services (e.g. planning, monitoring, efficiency improvements, integration into energy supply grids). The SOLEMI and HelioClim 3 databases, operated by DLR and by ARMINES and its subsidiary Transvalor respectively, have been developed in several national, European and ESA projects to fulfil the requirements for long-term databases and near-real-time (NRT) services. To accompany the transition from the precursor services HelioClim and SOLEMI, this User's Guide summarizes existing knowledge that has so far been published only in a scattered manner.

    Part A, 'Users' Expectations', describes the communities of users and their expectations, and gives an overview of the compliance of the MACC-RAD service with them. Part B, 'The legacy HelioClim 3 and SOLEMI databases', presents the current HelioClim 3 and SOLEMI databases and the methods used to convert satellite images into solar surface irradiance, discusses the quality of the retrieved irradiances, and gives an overview of the operations and workflow for creating, updating and monitoring these databases. Part C, 'The new HelioClim 4 database', describes the new Heliosat 4 method and the new HelioClim 4 database and provides an overview of the operations and the workflow. Part D, 'Quality control of estimates of irradiance', discusses the means to control the quality of the elaboration of the products and to assess the uncertainty of the estimates of irradiance. Part E, 'Delivering products', is devoted to the supply of HelioClim 4 products: the products are defined and a prototype of a means to access them is presented. This User's Guide is intended to be updated regularly as the MACC-RAD service line is realised.

    The Analysis of Open Source Software and Data for Establishment of GIS Services Throughout the Network in a Mapping Organization at National or International Level

    Federal agencies and their partners collect and manage large amounts of geospatial data, but the data are often not easy to find when needed, and sometimes data are collected or purchased multiple times. In short, the best government data are not always organized and managed efficiently enough to support decision making in a timely and cost-effective manner. National mapping agencies and the various departments responsible for collecting different types of geospatial data cannot continue for long to operate in isolation, as they did a few years ago. Leaders need to look at what is now possible that was not possible before, considering capabilities such as cloud computing, crowd-sourced data collection, openly available remotely sensed data, multi-source information vital to decision-making, and new Web-accessible services that are sometimes provided at no cost. Many of these services could previously be obtained only from local GIS experts. These authorities need to consider the available solutions, gather information about new capabilities, reconsider agency missions and goals, review and revise policies, make budget and human-resource decisions, and evaluate new products, cloud services, and cloud service providers. To do so, the right tools must be chosen to reach these goals.

    Data collection is the most cost-intensive part of mapping and of establishing a Geographic Information System, not only because of the expense of the data collection task itself but also because of the damage caused by delays in getting the proper decision-making information from the field into the user's hands. The time consumed in collecting, processing and presenting geospatial information strongly affects the cost of larger projects such as disaster management, construction, city planning or environmental monitoring, assuming that all the necessary information from existing sources is delivered directly to the user's computer. A good way to describe GIS project optimization is therefore to find a methodology that reduces time and cost while increasing data and service quality, where quality covers accuracy, currency, completeness, consistency, suitability, information content, integrity, integration capability, and fitness for use, as well as the user's specific needs and conditions, all of which must be addressed with special attention. Each of these issues must be addressed individually, and at the same time the overall solution must be provided in a global manner that considers all the criteria.

    In this thesis, we first discuss the problem we are facing and what is needed to establish a National Spatial Data Infrastructure (NSDI), including its definition and related components. We then look for available Open Source Software solutions to cover the whole process: data collection, database management, data processing, and finally data services and presentation. The first distinction among software packages is whether they are open source and free or commercial and proprietary. To make this distinction it is necessary to define a clear specification for the categorization, since it is difficult from a legal point of view to determine which class a given package belongs to, so the various terms need to be clarified; within each of the two groups we then distinguish a further classification according to the functionalities and the GIScience applications the packages are made for. Based on the outcome of the second chapter, which describes the technical process for selecting suitable and reliable software according to the users' needs and the required components, Chapter 3 elaborates on the details of the GeoNode software as our best candidate tool for the tasks stated above. Chapter 4 discusses the Open Source Data that are globally available, using predefined data quality criteria (such as theme, data content, scale, licensing, and coverage) drawn from the metadata inside the datasets, by means of bibliographic review, technical documentation and web search engines. Chapter 5 discusses further data quality concepts and defines a set of protocols for evaluating all datasets against the tasks for which a mapping organization is, in general, responsible towards prospective users in disciplines such as reconnaissance, city planning, topographic mapping, transportation, environmental control and disaster management. In Chapter 6, the data quality assessment protocols are applied to the pre-filtered, proposed datasets; in the final scores and ranking, each dataset receives a value corresponding to its quality according to the rules defined in the previous chapter. In the last step, a weight vector derived from questions answered by the user about the project at hand is applied to the quality matrix to obtain the final quality scores and ranking, so that the most appropriate Free and Open Source Data can be selected. The chapter ends with a section presenting the use of the datasets in projects such as "Early Impact Analysis" and "Extreme Rainfall Detection System (ERDS) - version 2" carried out by ITHACA. Finally, the conclusion discusses the important criteria as well as future trends in GIS software, and presents recommendations.
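
    The ranking step described above is essentially a weighted sum of per-criterion quality scores. The sketch below illustrates that calculation with invented dataset names, criteria, scores and weights; it is a minimal illustration of the weight-vector-times-quality-matrix idea, not code from the thesis.

        import java.util.LinkedHashMap;
        import java.util.Map;

        /** Sketch of ranking candidate datasets by applying a user-defined weight
         *  vector to a matrix of per-criterion quality scores (all values invented). */
        public class DatasetRankingSketch {
            public static void main(String[] args) {
                String[] criteria = {"accuracy", "currency", "completeness", "licensing"};
                double[] weights  = {0.4, 0.3, 0.2, 0.1};   // from the user's answers; sums to 1

                // Quality matrix: one row of scores (0..1) per candidate dataset.
                Map<String, double[]> qualityMatrix = new LinkedHashMap<>();
                qualityMatrix.put("dataset_A", new double[]{0.9, 0.6, 0.8, 1.0});
                qualityMatrix.put("dataset_B", new double[]{0.7, 0.9, 0.7, 0.5});

                System.out.println("criteria: " + String.join(", ", criteria));

                // Score each dataset and print them from best to worst.
                qualityMatrix.entrySet().stream()
                        .map(e -> Map.entry(e.getKey(), weightedScore(e.getValue(), weights)))
                        .sorted((a, b) -> Double.compare(b.getValue(), a.getValue()))
                        .forEach(e -> System.out.printf("%s -> %.3f%n", e.getKey(), e.getValue()));
            }

            /** Weighted sum of one dataset's scores against the weight vector. */
            static double weightedScore(double[] scores, double[] weights) {
                double total = 0.0;
                for (int i = 0; i < scores.length; i++) total += scores[i] * weights[i];
                return total;
            }
        }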