9 research outputs found

    Building the IDECi-UIB: the scientific spatial data infrastructure node for the Balearic Islands University

    Technical and methodological advances in Information Technologies (IT) and Geographical Information Systems (GIS) have allowed Spatial Data Infrastructures (SDI) to improve in performance, and their uses and applications have grown rapidly. In the scientific and educational fields, different institutions and organisations have committed to their use, fostering the information exchange that allows researchers to improve their studies and disseminate them more widely within the scientific community. The GIS and Remote Sensing Service (SSIGT) at the Balearic Islands University (UIB) has therefore decided to build and launch its own SDI to serve scientific Geo-Information (GI) to Balearic Islands society, focusing on the university community, with the aim of boosting research and education in the field of spatial information. This article explains the background ideas that underpin the concept of a scientific SDI in relation to e-Science and e-Research, and finally shows how these ideas are put into practice in the new University Scientific SDI.

    Agent- and Cloud-Supported Geospatial Service Aggregation for Flood Response

    A Geospatial Cyberinfrastructure for Urban Economic Analysis and Spatial Decision-Making

    Urban economic modeling and effective spatial planning are critical tools for achieving urban sustainability. In practice, however, many technical obstacles, such as information islands, poor documentation of data and a lack of software platforms to facilitate virtual collaboration, challenge the effectiveness of decision-making processes. In this paper, we report on our efforts to design and develop a geospatial cyberinfrastructure (GCI) for urban economic analysis and simulation. This GCI provides an operational graphic user interface, built upon a service-oriented architecture, to allow (1) widespread sharing and seamless integration of distributed geospatial data; (2) an effective way to address the uncertainty and positional errors encountered in fusing data from diverse sources; (3) the decomposition of complex planning questions into atomic spatial analysis tasks and the generation of a web service chain to tackle such complex problems; and (4) capturing and representing the provenance of geospatial data to trace its flow through the modeling task. The Greater Los Angeles Region serves as the test bed. We expect this work to contribute to effective spatial policy analysis and decision-making through the adoption of advanced GCI and to broaden the application coverage of GCI to include urban economic simulations.
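    A hedged sketch of the service-chaining idea described in this abstract: a complex planning question is decomposed into atomic spatial analysis tasks, each delegated to a web service, with the output of one step feeding the next. The endpoint URL, operation names and parameters below are illustrative assumptions, not the GCI's actual API.

    import requests

    BASE = "https://gci.example.org/services"  # hypothetical GCI endpoint

    def run_step(operation, payload):
        """Invoke one atomic spatial analysis task and return its JSON result."""
        response = requests.post(f"{BASE}/{operation}", json=payload, timeout=60)
        response.raise_for_status()
        return response.json()

    # Chain: fetch parcel data -> join employment data -> aggregate to zones
    parcels = run_step("fetch_layer", {"layer": "la_parcels",
                                       "bbox": [-118.7, 33.7, -117.6, 34.3]})
    joined = run_step("spatial_join", {"target": parcels["id"],
                                       "source": "employment_2010"})
    zones = run_step("aggregate", {"input": joined["id"],
                                   "unit": "traffic_analysis_zone"})
    print(zones["url"])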

    A provenance metadata model integrating ISO geospatial lineage and the OGC WPS: conceptual model and implementation

    Nowadays, there are still gaps in the description of provenance metadata that prevent the capture of comprehensive provenance useful for reuse and reproducibility. In addition, the lack of automated tools for capturing provenance hinders the broad generation and compilation of provenance information. This work presents a provenance engine (PE) that captures and represents provenance information using a combination of the Web Processing Service (WPS) standard and the ISO 19115 geospatial lineage model. The PE, developed within the MiraMon GIS & RS software, automatically records detailed information about sources and processes. The PE also includes a metadata editor that shows a graphical representation of the provenance and allows users to complement the provenance information by adding missing processes or deleting redundant process steps or sources, thus building a consistent geospatial workflow. A use case is presented to demonstrate the usefulness and effectiveness of the PE: the generation of a radiometric pseudo-invariant areas benchmark for the Iberian Peninsula. This remote-sensing use case shows how provenance can be captured automatically, even in a complex, non-sequential flow, and the essential role it plays in automating and replicating tasks that involve very large amounts of geospatial data.
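    As a minimal sketch of the lineage model the PE targets, the following uses Python's standard library to build a simplified ISO 19139 (gmd) lineage fragment with one process step and one source. This is not the PE's own code, and a real record would carry many more elements (dates, processors, source citations); the description strings are illustrative.

    import xml.etree.ElementTree as ET

    GMD = "http://www.isotc211.org/2005/gmd"
    GCO = "http://www.isotc211.org/2005/gco"
    ET.register_namespace("gmd", GMD)
    ET.register_namespace("gco", GCO)

    def text_element(parent, tag, value):
        """Append <gmd:tag><gco:CharacterString>value</...></...> to parent."""
        wrapper = ET.SubElement(parent, f"{{{GMD}}}{tag}")
        cs = ET.SubElement(wrapper, f"{{{GCO}}}CharacterString")
        cs.text = value
        return wrapper

    lineage = ET.Element(f"{{{GMD}}}LI_Lineage")
    step = ET.SubElement(ET.SubElement(lineage, f"{{{GMD}}}processStep"),
                         f"{{{GMD}}}LI_ProcessStep")
    text_element(step, "description",
                 "Radiometric normalisation against pseudo-invariant areas")
    source = ET.SubElement(ET.SubElement(lineage, f"{{{GMD}}}source"),
                           f"{{{GMD}}}LI_Source")
    text_element(source, "description", "Landsat scene (illustrative source)")

    print(ET.tostring(lineage, encoding="unicode"))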

    Automatic Generation of Geospatial Metadata for Web Resources

    Web resources that are not part of any Spatial Data Infrastructure can be an important source of information. However, incorporating Web resources within a Spatial Data Infrastructure requires a significant effort to create metadata. This work presents an extensible architecture for the automatic characterisation of Web resources and a strategy for assigning their geographic scope. The implemented prototype automatically generates geospatial metadata for Web pages. The metadata model conforms to the Common Element Set, a set of core properties encouraged by the OGC Catalogue Service Specification to permit a minimal implementation of a catalogue service independent of any application profile. The experiments consisted of creating metadata for the Web pages of providers of geospatial Web resources, gathered by a Web crawler focused on OGC Web Services. A manual revision of the results showed that the coverage estimation method produces acceptable results for more than 80% of the tested Web resources.
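    The coverage-estimation step lends itself to a toy illustration: harvest place names from a page's text and derive a bounding box for the resource's geographic scope. This is an assumption-laden sketch, not the paper's method; the hard-coded gazetteer stands in for a real service such as GeoNames.

    # name -> (min_lon, min_lat, max_lon, max_lat); illustrative entries only
    GAZETTEER = {
        "Zaragoza": (-1.05, 41.60, -0.79, 41.72),
        "Madrid": (-3.84, 40.31, -3.52, 40.56),
    }

    def estimate_coverage(text):
        """Union the bounding boxes of all gazetteer names found in the text."""
        boxes = [bbox for name, bbox in GAZETTEER.items() if name in text]
        if not boxes:
            return None
        return (min(b[0] for b in boxes), min(b[1] for b in boxes),
                max(b[2] for b in boxes), max(b[3] for b in boxes))

    print(estimate_coverage("Geospatial services hosted in Zaragoza and Madrid"))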

    Geospatial Workflows and Trust: a Use Case for Provenance

    At first glance, The Astronomer by Vermeer, Tutankhamun’s burial mask, and a geospatial workflow may appear to have nothing in common. However, a commonality exists: each of these items can have a record of provenance detailing its history. Provenance is a record that shows who did what to an object, where this happened, and how and why these actions took place. In the geospatial domain, provenance can be used to track and analyze the changes data has undergone in a workflow, and can facilitate scientific reproducibility. Collecting provenance from geospatial workflows and finding effective ways to use it is therefore an important application. When using geospatial data in a workflow, it is important to determine whether the data and the workflow are trustworthy. This study examines whether provenance can be collected from a geospatial workflow; each workflow examined is a use case for a specific type of geospatial problem. The collected provenance is then used to determine workflow trust and content trust for each of the workflows examined. The results show that provenance can be collected from a geospatial workflow in a way that is useful to additional applications, such as provenance interchange, and that content trust and workflow trust can be estimated from it. The simple workflow had a content trust value of 0.83 (trustworthy) and a workflow trust value of 0.44 (untrustworthy). Two additional workflows were examined for content trust and workflow trust. The methods used to calculate content trust and workflow trust could also be extended to other types of geospatial data and workflows. Future research could include fully automating the provenance collection and trust calculations, as well as examining additional techniques for deciding trust in relation to workflows.
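    A hedged sketch of how trust scores like those quoted above might be derived from collected provenance: content trust as an average over per-source scores, workflow trust as an average over per-process-step scores. The individual scores and the simple averaging are illustrative assumptions, not the study's actual formulas.

    def mean(scores):
        return sum(scores) / len(scores)

    # Each provenance record is scored in [0, 1] against criteria such as
    # source reputation, data currency, and documented processing (assumed).
    source_scores = [1.0, 0.8, 0.7]    # e.g. inputs to a simple workflow
    process_scores = [0.5, 0.4, 0.42]  # e.g. its process steps

    content_trust = mean(source_scores)    # ~0.83 -> trustworthy
    workflow_trust = mean(process_scores)  # ~0.44 -> untrustworthy
    print(f"content={content_trust:.2f} workflow={workflow_trust:.2f}")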

    Big Data Analytics in Static and Streaming Provenance

    Thesis (Ph.D.), Indiana University, Informatics and Computing, 2016.

    With recent technological and computational advances, scientists increasingly integrate sensors and model simulations to understand spatial, temporal, social, and ecological relationships at unprecedented scale. Data provenance traces the relationships of entities over time, thus providing a unique view of the over-time behavior under study. However, provenance can be overwhelming in both volume and complexity, and the forecasting potential of provenance now creates additional demands. This dissertation focuses on Big Data analytics of static and streaming provenance. It develops filters and a non-preprocessing slicing technique for in-situ querying of static provenance, and it presents a stream processing framework for online processing of provenance data at high arrival rates. While the former is sufficient for answering queries that are given prior to the application start (forward queries), the latter deals with queries whose targets are unknown beforehand (backward queries). Finally, it explores data mining on large collections of provenance and proposes a temporal representation of provenance that reduces the high dimensionality while effectively supporting mining tasks such as clustering, classification and association rule mining; this temporal representation can also be applied to streaming provenance. The proposed techniques are verified through software prototypes applied to Big Data provenance captured from computer network data, weather models, ocean models, remote (satellite) imagery data, and agent-based simulations of agricultural decision making.
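    A minimal sketch of the forward-query idea mentioned above: when the query is known before the run, provenance events can be filtered online in a single pass as they stream in, without storing the full provenance graph. The event fields and values are assumptions for illustration, not the dissertation's data model.

    def provenance_stream():
        """Stand-in for a live feed of provenance events."""
        yield {"entity": "sst_grid_0412", "activity": "regrid", "agent": "ocean_model"}
        yield {"entity": "wind_field_17", "activity": "assimilate", "agent": "weather_model"}
        yield {"entity": "sst_grid_0413", "activity": "regrid", "agent": "ocean_model"}

    def forward_query(stream, predicate):
        """Emit matching events as they arrive (online, single pass)."""
        for event in stream:
            if predicate(event):
                yield event

    for hit in forward_query(provenance_stream(), lambda e: e["agent"] == "ocean_model"):
        print(hit["entity"])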

    Towards Interoperable Research Infrastructures for Environmental and Earth Sciences

    This open access book summarises the latest developments in data management from the EU H2020 ENVRIplus project, which brought together more than 20 environmental and Earth science research infrastructures into a single community. It provides readers with a systematic overview of the common challenges faced by research infrastructures and of how a ‘reference model guided’ engineering approach can be used to achieve greater interoperability among such infrastructures in the environmental and Earth sciences. The 20 contributions in this book are structured in 5 parts covering the design, development, deployment, operation and use of research infrastructures. Part one provides an overview of the state of the art of research infrastructure and relevant e-Infrastructure technologies, part two discusses the reference model guided engineering approach, part three presents the software and tools developed for common data management challenges, part four demonstrates the software via several use cases, and the last part discusses sustainability and future directions.

    Uptake of sensor data in emergency management

    While disasters are becoming larger, more complex and more frequent, traditional emergency management response capacities are not increasing at the same rate. Sensor capabilities could fill this gap by providing improved situational awareness, or intelligence, for emergency managers. Data from sensors is increasing exponentially in quality and quantity while the cost of capturing and processing these data is decreasing, creating immense opportunities to bring sensor data into emergency management practice. Unfortunately, not all sensors are created equal. The accuracy, precision, presentation and timeliness of data vary depending on the source, the way the product is structured and who produces it. It is therefore difficult for emergency managers to incorporate sensor data into decision making, particularly when they have not seen the data type before and do not know where it originated or how to use it. This thesis researches how data product creators can tailor products to increase the likelihood of their product being incorporated in emergency management decision making. It focuses on the issue of data product uptake, that is, the inclusion of data products in decision-making processes, an issue poorly covered in the existing literature. This thesis synthesises literature from a range of disciplines, then designs and conducts three targeted studies to build upon this knowledge. The first study compares four international data systems which use the same data source but make different choices in the design of their products, providing examples of the impacts of these design choices. The second study looks at disaster inquiries in Australia to consider how sensor data has been used in decision making in the past, and what lessons have been learnt from these experiences. The third study surveys Australian emergency managers to collect their views on which products they use and trust, and what factors lead to that trust. The results of these studies combine to create a comprehensive collection of design choices available to data product creators. This collection covers not just technical choices such as accuracy, but also presentational and data policy choices, creating a more holistic picture of how creators can influence their products. The collection is then presented in a framework which, if applied throughout product development, would be expected to increase the uptake of sensor data in emergency management decision making. Design choices and user-oriented design processes are emphasised as a crucially important yet poorly examined aspect of data uptake in emergency management. This thesis finds that trust is key to whether emergency managers use a product or not, and that trust is created through a series of design choices which can be grouped into quality, reputation, maturity and data policy.