25 research outputs found

    A Semantic IoT Early Warning System for Natural Environment Crisis Management

    This work was supported in part by the European FP7-funded project TRIDEC under Grant 258723. The authors thank the other project partners for helping to deliver the complete project system, in particular GFZ, the German Research Centre for Geosciences, Potsdam, Germany. The work of R. Tao was supported by a Ph.D. studentship from Queen Mary University of London.

    A Semantic IoT Early Warning System for Natural Environment Crisis Management

    An early warning system (EWS) is a core type of data-driven Internet of Things (IoT) system used for environmental disaster risk and effect management. The potential benefits of using a semantic-type EWS include easier sensor and data source plug-and-play; simpler, richer, and more dynamic metadata-driven data analysis; and easier service interoperability and orchestration. The challenges faced during practical deployments of semantic EWSs are the need for scalable, time-sensitive data exchange and processing (especially involving heterogeneous data sources) and the need for resilience to changing ICT resource constraints in crisis zones. We present a novel IoT EWS framework that addresses these challenges, based upon a multi-semantic representation model. We use lightweight semantics for metadata to enhance rich sensor data acquisition, and heavyweight semantics for top-level W3C Web Ontology Language (OWL) ontology models describing multi-level knowledge bases and semantically driven decision support and workflow orchestration. This approach is validated through system-related metrics and through a case study involving an advanced prototype of the semantic EWS integrated with a deployed EWS infrastructure.
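
    As a rough illustration of the lightweight-semantics idea described in this abstract, the sketch below annotates a single sensor observation with SOSA/SSN terms using rdflib. The SOSA vocabulary is chosen here only for illustration; the sensor identifier, observed property, and value are invented and do not come from the TRIDEC system.

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

SOSA = Namespace("http://www.w3.org/ns/sosa/")
EX = Namespace("http://example.org/ews/")  # hypothetical namespace for this sketch

g = Graph()
g.bind("sosa", SOSA)

# Describe one observation with lightweight metadata: sensor, property, result, time.
obs = EX["observation/42"]
g.add((obs, RDF.type, SOSA.Observation))
g.add((obs, SOSA.madeBySensor, EX["sensor/buoy-7"]))            # invented sensor id
g.add((obs, SOSA.observedProperty, EX["property/waveHeight"]))  # invented property
g.add((obs, SOSA.hasSimpleResult, Literal("1.8", datatype=XSD.double)))
g.add((obs, SOSA.resultTime, Literal("2014-05-01T12:00:00Z", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))
```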

    A semantic graph database for the interoperability of 3D GIS data

    In recent decades, the use of information management systems for building data processing has led to radical changes in the methods of data production, documentation, and archiving. In particular, the possibility these systems offer to visualize a 3D model and to formulate queries has raised the question of sharing information in digital format. The integration of information systems is an efficient solution for defining smart, sustainable, and resilient projects, such as conservation and restoration processes, because it allows heterogeneous data to be combined. GIS provides a robust data storage system, a definition of topological and semantic relationships, and spatial queries. 3D GIS makes it possible to create three-dimensional models in a geospatial context. To promote the interoperability of GIS data, the present research first analyses methods of conversion into the CityGML and IndoorGML models, defining an ontological domain. This has led to the creation of a new, enriched model based on connections among the different elements of the urban model in a GIS environment, and to the possibility of formulating queries based on these relations. The second step consists of translating all the data into a specific format and loading them into a graph database in a semantic web environment, while maintaining those relationships. Semantic web technology is an efficient interoperability tool that leaves open the possibility of importing BIM data into the same graph database and joining the GIS and BIM models. The outcome will offer substantial benefits during the entire project life cycle. This methodology can also be applied to cultural heritage, where information management plays a key role. Malinverni, E. S.; Naticchia, B.; Lerma Garcia, J. L.; Gorreja, A.; Lopez Uriarte, J.; Di Stefano, F.
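
    Once the urban model has been loaded into a graph database, relational queries of the kind the abstract describes can be expressed in SPARQL. The sketch below runs such a query with rdflib against a local RDF export; the file name, prefix, classes, and property names are hypothetical stand-ins for the enriched model's actual relations.

```python
from rdflib import Graph

# Load a (hypothetical) RDF export of the CityGML-derived urban model.
g = Graph()
g.parse("urban_model.ttl", format="turtle")  # file name assumed for this sketch

# Example relational query: buildings and the rooms they contain.
query = """
PREFIX ex: <http://example.org/citymodel/>
SELECT ?building ?room
WHERE {
    ?building a ex:Building ;
              ex:containsRoom ?room .
}
"""
for building, room in g.query(query):
    print(building, "contains", room)
```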

    Geospatial information infrastructures to address spatial needs in health: Collaboration, challenges and opportunities

    Most health-related issues, such as public health outbreaks and epidemiological threats, are better understood from a spatio-temporal perspective and clearly demand related geospatial datasets and services, so that decision makers can jointly make informed decisions and coordinate response plans. Although current health applications support some geospatial features, these are still disconnected from the wide range of geospatial services and datasets that geospatial information infrastructures could bring into health. In this paper we examine the hypothesis that geospatial information infrastructures, in terms of standards-based geospatial services, technologies, and data models that are already in place as operational assets, can be exploited by health applications for which the geospatial dimension is of great importance. This can certainly be addressed by defining better collaboration strategies to uncover and promote geospatial assets to the health community. We discuss the value of collaboration, as well as the opportunities that geospatial information infrastructures offer to address geospatial challenges in health applications.
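
    One concrete way a health application can consume a standards-based geospatial service of the kind discussed here is through an OGC API - Features endpoint. The sketch below requests features inside a bounding box; the service URL and collection name are placeholders, not infrastructure referenced by the paper.

```python
import requests

# Placeholder endpoint and collection; any OGC API - Features server follows this URL pattern.
BASE_URL = "https://example.org/geoserver/ogc/features/v1"
COLLECTION = "health_facilities"

resp = requests.get(
    f"{BASE_URL}/collections/{COLLECTION}/items",
    params={
        "bbox": "-0.51,51.28,0.33,51.69",  # lon/lat bounding box (example: Greater London)
        "limit": 100,
        "f": "json",
    },
    timeout=30,
)
resp.raise_for_status()
features = resp.json()["features"]
print(f"Retrieved {len(features)} facilities inside the bounding box")
```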

    Scale aware modeling and monitoring of the urban energy chain

    With energy modeling at different complexity levels for smart cities and the concurrent data availability revolution from connected devices, a steady surge in demand for spatial knowledge has been observed in the energy sector. This transformation occurs in population centers focused on efficient energy use and quality of life. Energy-related services play an essential role in this mix, as they facilitate or interact with all other city services. This trend is primarily driven by the current age of the Energiewende (German for "energy transition"), a worldwide push towards renewable energy sources, increased energy use efficiency, and local energy production that requires precise estimates of local energy demand and production. This shift in the energy market occurs as the world becomes aware of human-induced climate change, to which the building stock contributes significantly (40% in the European Union). At the current rate of refurbishment and building replacement, 75% of the buildings existing in 2050 in the European Union would not be classified as energy-efficient. This means that substantial structural change in the built environment and the energy chain is required to achieve EU-wide goals concerning environmental and energy policy. These objectives provide strong motivation for this thesis work and are generally made possible by energy monitoring and modeling activities that estimate urban energy needs and quantify the impact of refurbishment measures. To this end, a modeling library called aEneAs was developed within the scope of this thesis that can perform city-wide building energy modeling. The library performs its tasks at the level of a single building and was a first in its field, using standardized spatial energy data structures that allow for portability from one city to another. For data input, extensive use was made of digital twins derived from CAD, BIM, GIS, architectural models, and a plethora of energy data sources. The library first quantifies primary thermal energy demand and then the impact of refurbishment measures. Lastly, it estimates the potential of renewable energy production from solar radiation. aEneAs also includes network modeling components that consider energy distribution in the given context, showing a path toward the data modeling and simulation required for distributed energy production at the neighborhood and district level. In order to validate modeling activities in solar radiation and green façade and roof installations, six spatial models were coupled with sensor installations. These digital twins are included in three experiments that highlight the monitoring side of the energy chain and portray energy-related use cases that utilize the spatially enabled web services SOS-SES-WNS, SensorThings API, and FIWARE. To this author's knowledge, this is the first work that surveys the capabilities of these three solutions in a unifying context, each having its specific design mindset. The modeling and monitoring activities and their corresponding literature review indicated gaps in scientific knowledge concerning data science in urban energy modeling. First, a lack of standardization regarding the spatial scales at which data is stored and used in urban energy modeling was observed. In order to identify the appropriate spatial levels for modeling and data aggregation, scale is explored in depth in the given context and defined as a byproduct of resolution and extent, with ranges provided for both parameters.
To that end, a survey of the encountered spatial scales and actors in six different geographical and cultural settings was performed. The information from this survey was used to put forward a standardized definition of spatial scales and to create a scale-dependent ontology for use in urban energy modeling. The ontology also provides spatially enabled persistent identifiers that resolve issues encountered with object relationships in modeling for inheritance, dependency, and association. The same survey also reveals two significant issues with data in urban energy modeling: data consistency across spatial scales and urban fabric contiguity. The impact of these issues and different solutions, such as data generalization, are explored in the thesis. Further advancement of scientific knowledge is provided specifically for spatial standards and spatial data infrastructures in urban energy modeling. A review of use cases in the urban energy chain and a taxonomy of the standards were carried out. These provide fundamental input for another piece of this thesis: inclusive software architecture methods that promote data integration and allow for external connectivity to modern and legacy systems. To reduce time-costly extraction, transformation, and load processes, databases and web services were used to ferry data to and from separate data sources. As a result, the spatial models become central linking elements of the different types of energy-related data, in a novel perspective that differs from the traditional one, where spatial data tends to be non-interoperable and not linked with other data types. These distinct data fusion approaches provide flexibility in an energy chain environment with inconsistent data structures and software. Furthermore, the knowledge gathered from the experiments presented in this thesis is provided as a synopsis of good practices.
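
    As a small illustration of the monitoring side described in this abstract, the sketch below queries an OGC SensorThings API service for the latest observations of a datastream. The server URL and datastream id are placeholders; only the query style (OData-like $-parameters over /v1.1 resources) follows the SensorThings standard named above, and this is not code from the aEneAs library.

```python
import requests

# Placeholder SensorThings API endpoint; any SensorThings v1.1 server exposes these paths.
STA_BASE = "https://example.org/frost-server/v1.1"
DATASTREAM_ID = 42  # hypothetical datastream, e.g. a rooftop pyranometer

resp = requests.get(
    f"{STA_BASE}/Datastreams({DATASTREAM_ID})/Observations",
    params={
        "$orderby": "phenomenonTime desc",  # newest first
        "$top": 24,                         # last 24 observations
        "$select": "phenomenonTime,result",
    },
    timeout=30,
)
resp.raise_for_status()
for obs in resp.json()["value"]:
    print(obs["phenomenonTime"], obs["result"])
```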

    The Analysis of Open Source Software and Data for Establishment of GIS Services Throughout the Network in a Mapping Organization at National or International Level

    Federal agencies and their partners collect and manage large amounts of geospatial data, but these data are often not easily found when needed, and sometimes data are collected or purchased multiple times. In short, the best government data are not always organized and managed efficiently enough to support decision making in a timely and cost-effective manner. National mapping agencies and the various departments responsible for collecting different types of geospatial data can no longer continue to operate as they did a few years ago, like people living on an island. Leaders need to look at what is now possible that was not possible before, considering capabilities such as cloud computing, crowd-sourced data collection, available open-source remotely sensed data, multi-source information vital to decision making, and new web-accessible services that are provided, sometimes at no cost. Many of these services could previously be obtained only from local GIS experts. These authorities need to consider the available solutions and gather information about new capabilities, reconsider agency missions and goals, review and revise policies, make budget and human resource decisions, and evaluate new products, cloud services, and cloud service providers. To do so, we need to choose the right tools to reach the above-mentioned goals. Data collection is the most costly part of mapping and of establishing a geographic information system: not only because of the cost of the data collection task itself, but also because of the damage caused by delay and by the time it takes to deliver proper decision-making information from the field to the user's hands. In fact, the time a project spends on collection, processing, and presentation of geospatial information has an even greater effect on the cost of larger projects such as disaster management, construction, city planning, and environmental monitoring, assuming that all the necessary information from existing sources is delivered directly to the user's computer. A good GIS project optimization can best be described as a methodology that reduces time and cost while increasing data and service quality (meaning accuracy, up-to-dateness, completeness, consistency, suitability, information content, integrity, integration capability, and fitness for use, as well as the user's specific needs and conditions, which must be addressed with special attention). Each of these issues must be addressed individually and, at the same time, the whole solution must be provided in a global manner considering all the criteria. In this thesis we first discuss the problem at hand and what is needed to establish a National Spatial Data Infrastructure (NSDI), its definition and related components. We then look for available open-source software solutions to cover the whole process: data collection, database management, data processing, and finally data services and presentation. The first distinction among software is whether it is open source and free, or commercial and proprietary. It is important to note that, in order to distinguish among software packages, a clear specification for this categorization is necessary.
It is very difficult to determine, from a legal point of view, which class a piece of software belongs to, which makes it necessary to clarify what is meant by the various terms. With reference to this concept there are two global distinctions; then, within each group, we draw a further classification according to the functionalities and applications the software is made for in GIScience. Building on the outcome of the second chapter, which describes the technical process for selecting suitable and reliable software according to the characteristics of the users' needs and the required components, we move to the next chapter. In Chapter 3, we elaborate on the details of the GeoNode software as our best candidate tool for the issues stated above. In Chapter 4, we discuss the existing open-source data that are globally available against predefined data quality criteria (such as theme, data content, scale, licensing, and coverage) according to the metadata statements inside the datasets, by means of bibliographic review, technical documentation, and web search engines. In Chapter 5 we discuss further data quality concepts and consequently define sets of protocols for evaluating all datasets according to the tasks for which a mapping organization is, in general, responsible towards prospective users in different disciplines, such as reconnaissance, city planning, topographic mapping, transportation, environmental control, and disaster management. In Chapter 6, all the data quality assessments and protocols are applied to the pre-filtered, proposed datasets. In the final scores and ranking results, each dataset receives a value corresponding to its quality according to the sets of rules defined in the previous chapter. In the last steps, a weight vector, derived from questions the user answers with reference to the project at hand, is used to finalize the most appropriate selection of free and open-source data: the data quality preference is defined by identifying this weight vector, which is then applied to the quality matrix in order to obtain the final quality scores and ranking. At the end of this chapter there is a section presenting the use of the datasets in various projects, such as "Early Impact Analysis" and "Extreme Rainfall Detection System (ERDS) - version 2", performed by ITHACA. Finally, in the conclusion, the important criteria as well as future trends in GIS software are discussed, and recommendations are presented.
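
    The weight-vector ranking described above can be illustrated with a small numeric sketch. The quality criteria, candidate datasets, and scores below are invented for demonstration; only the mechanism (a weight vector applied to a quality matrix, followed by ranking) reflects the abstract.

```python
import numpy as np

# Rows: candidate datasets, columns: quality criteria
# (example values on a 0-10 scale; both axes are invented for this sketch).
criteria = ["accuracy", "up-to-dateness", "completeness", "licensing"]
datasets = ["OpenStreetMap extract", "National open dataset", "Global land-cover product"]
quality_matrix = np.array([
    [7, 9, 6, 10],
    [9, 5, 8, 7],
    [6, 6, 9, 9],
], dtype=float)

# Weight vector derived from the user's answers about the project at hand (sums to 1).
weights = np.array([0.4, 0.3, 0.2, 0.1])

scores = quality_matrix @ weights   # weighted score per dataset
ranking = np.argsort(scores)[::-1]  # best first

for rank, idx in enumerate(ranking, start=1):
    print(f"{rank}. {datasets[idx]}: score {scores[idx]:.2f}")
```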

    Sensor web geoprocessing on the grid

    Recent standardisation initiatives in the fields of grid computing and geospatial sensor middleware provide an exciting opportunity for the composition of large-scale geospatial monitoring and prediction systems from existing components. Sensor middleware standards are paving the way for the emerging sensor web, which is envisioned to make millions of geospatial sensors and their data publicly accessible by providing discovery, tasking, and query functionality over the internet. In a similar fashion, concurrent development is taking place in the field of grid computing, whereby the virtualisation of computational and data storage resources using middleware abstraction provides a framework to share computing resources. Sensor web and grid computing share a common vision of world-wide connectivity, and in their current form they are both realised using web services as the underlying technological framework. The integration of sensor web and grid computing middleware using open standards is expected to facilitate interoperability and scalability in near real-time geoprocessing systems. The aim of this thesis is to develop an appropriate conceptual and practical framework in which open standards in grid computing, sensor web, and geospatial web services can be combined as a technological basis for the monitoring and prediction of geospatial phenomena in the earth systems domain, to facilitate real-time decision support. The primary topic of interest is how real-time sensor data can be processed on a grid computing architecture. This is addressed by creating a simple typology of real-time geoprocessing operations with respect to grid computing architectures. A geoprocessing system exemplar of each geoprocessing operation in the typology is implemented using contemporary tools and techniques, which provides a basis from which to validate the standards frameworks and highlight issues of scalability and interoperability. It was found that it is possible to combine standardised web services from each of these aforementioned domains despite issues of interoperability resulting from differences in web service style and security between specifications. A novel integration method for the continuous processing of a sensor observation stream is suggested, in which a perpetual processing job is submitted as a single continuous compute job. Although this method was found to be successful, two key challenges remain: a mechanism for consistently scheduling real-time jobs within an acceptable time frame must be devised, and the trade-off between efficient grid resource utilisation and processing latency must be balanced. The lack of actual implementations of distributed geoprocessing systems built using sensor web and grid computing has hindered the development of standards, tools, and frameworks in this area. This work provides a contribution to the small number of existing implementations in this field by identifying potential workflow bottlenecks in such systems and gaps in the existing specifications. Furthermore, it sets out a typology of real-time geoprocessing operations that is anticipated to facilitate the development of real-time geoprocessing software. EThOS - Electronic Theses Online Service. Engineering and Physical Sciences Research Council (EPSRC); School of Civil Engineering & Geosciences, Newcastle University, United Kingdom.
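
    The "perpetual processing job" idea can be sketched as a long-running loop that repeatedly pulls new observations from a Sensor Observation Service and hands them to a processing step. The endpoint URL, offering, and processing function below are placeholders, and a real grid deployment would wrap this loop in a submitted compute job rather than a local script.

```python
import time
from datetime import datetime, timezone

import requests

# Placeholder SOS 2.0 endpoint and offering; the KVP GetObservation request is standard,
# but these values are invented for the sketch.
SOS_URL = "https://example.org/sos/service"
OFFERING = "http://example.org/offering/river-gauge-1"

def fetch_observations_since(t_start: datetime) -> str:
    """Fetch raw O&M XML for observations newer than t_start (simplified: no paging or parsing)."""
    params = {
        "service": "SOS",
        "version": "2.0.0",
        "request": "GetObservation",
        "offering": OFFERING,
        "temporalFilter": f"om:phenomenonTime,{t_start.isoformat()}/{datetime.now(timezone.utc).isoformat()}",
    }
    resp = requests.get(SOS_URL, params=params, timeout=60)
    resp.raise_for_status()
    return resp.text

def process(observation_xml: str) -> None:
    """Stand-in for the actual geoprocessing step (e.g. interpolation or model update)."""
    print(f"received {len(observation_xml)} bytes of observation data")

# Perpetual job: poll for new observations and process them as they arrive.
last_poll = datetime.now(timezone.utc)
while True:
    process(fetch_observations_since(last_poll))
    last_poll = datetime.now(timezone.utc)
    time.sleep(60)  # polling interval; a production job would tune or event-drive this
```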

    Earth Observation Open Science and Innovation

    geospatial analytics; social observatory; big earth data; open data; citizen science; open innovation; earth system science; crowdsourced geospatial data; science in society; data science

    Designing a Framework for Exchanging Partial Sets of BIM Information on a Cloud-Based Service

    The rationale behind this research study was the recognised difficulty of exchanging data at element or object level due to the lack of compatible hardware and software. Interoperability describes the need to pass data between applications, allowing multiple types of experts and applications to contribute to the work at hand. The only way that software file exchanges between two applications can produce consistent data and change management results for large projects is through a building model repository. The overall aim of this thesis was to design and develop an integrated process that would advance key decisions at an early design stage through faster information exchanges during collaborative work. In the construction industry, Building Information Modeling is the most integrated shared model across all disciplines. It is based on a manufacturing-like process where standardised deliverables are used throughout the life cycle, with effective collaboration as its main driving force. However, the dilemma is how to share these properties of BIM applications on one single platform asynchronously. Cloud computing is a centralized heterogeneous network that enables different applications to be connected to each other. The methodology used in the research was based on triangulation of data, incorporating many techniques and featuring a mixture of both quantitative and qualitative analysis. The results identified the need to re-engineer Simplified Markup Language in order to exchange partial data sets of intelligent object architecture on an integrated platform. The designed and tested prototype produced findings that enhanced project decisions at a relatively early design stage and improved communication and collaboration techniques and cross-discipline co-ordination.
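
    A partial model exchange of the kind this thesis targets can be approximated with the open-source ifcopenshell library: the sketch below copies only one element type from a full IFC model into a smaller file that could then be pushed to a cloud service. The file names and the chosen element type are examples, not artefacts of the thesis, and this is not the thesis's own exchange mechanism.

```python
import ifcopenshell

# Open the full coordination model (example file name).
source = ifcopenshell.open("full_model.ifc")

# Create an empty model using the same IFC schema version as the source.
partial = ifcopenshell.file(schema=source.schema)

# Copy only the walls; file.add() also brings in the entities they reference
# (placements, geometry, property sets), keeping the subset self-consistent.
for wall in source.by_type("IfcWall"):
    partial.add(wall)

partial.write("partial_walls.ifc")
print(f"Exported {len(partial.by_type('IfcWall'))} walls to partial_walls.ifc")
```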