
    A state-of-the-art review of built environment information modelling (BeIM)

    Elements that constitute the built environment are vast, and so are the independent systems developed to model its various aspects. Many of these systems have been developed under different assumptions and approaches to execute functions that are distinct, complementary or sometimes similar. These systems are also ever increasing in number and often adopt similar nomenclatures and acronyms, exacerbating the challenge of understanding their particular functions, definitions and differences. The current societal demand to improve sustainability performance through collaboration, whole-systems and through-life thinking is driving the need to integrate independent systems associated with different aspects and scales of the built environment, in order to deliver smart solutions and services that improve the wellbeing of citizens. The contemporary object-oriented digitization of real-world elements appears to provide a pathway for amalgamating the modelling systems of the various built environment domains, which we term built environment information modelling (BeIM). These domains include Architecture, Engineering, Construction and Urban Planning and Design. Applications such as Building Information Modelling, Geographic Information Systems and 3D City Modelling systems are now being integrated for city modelling purposes. The various works directed at integrating these systems are examined, revealing that current research efforts on integration fall into three categories: (1) data/file conversion systems, (2) semantic mapping systems and (3) hybrids of both. The review outcome suggests that a good knowledge of these domains, and of how their respective systems operate, is vital to pursuing holistic systems integration in the built environment.
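
    As a toy illustration of category (2), a semantic mapping system ultimately rests on explicit correspondences between the concepts of two schemas. The sketch below assumes IFC-style and CityGML-style class names purely for exposition; the mapping is hypothetical and is not an alignment taken from any of the reviewed systems.

```python
# Minimal sketch of a semantic mapping between two built-environment schemas.
# The class names and the mapping itself are illustrative assumptions, not a
# standardised IFC-to-CityGML alignment.

IFC_TO_CITYGML = {
    "IfcBuilding": "bldg:Building",
    "IfcWall":     "bldg:WallSurface",
    "IfcRoof":     "bldg:RoofSurface",
    "IfcSpace":    "bldg:Room",
}

def map_entity(ifc_class: str) -> str:
    """Return the CityGML counterpart of an IFC class, if a mapping is defined."""
    try:
        return IFC_TO_CITYGML[ifc_class]
    except KeyError:
        raise ValueError(f"No semantic mapping defined for {ifc_class}")

if __name__ == "__main__":
    print(map_entity("IfcWall"))  # -> bldg:WallSurface
```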

    A Semantic IoT Early Warning System for Natural Environment Crisis Management

    This work was supported in part by the European FP7-funded project TRIDEC under Grant 258723. The authors thank the other project partners who helped deliver the complete project system, in particular GFZ, the German Research Centre for Geosciences, Potsdam, Germany. The work of R. Tao was supported by Queen Mary University of London through a Ph.D. studentship.

    An early warning system (EWS) is a core type of data-driven Internet of Things (IoT) system used for managing environmental disaster risks and effects. The potential benefits of using a semantic EWS include easier sensor and data-source plug-and-play; simpler, richer and more dynamic metadata-driven data analysis; and easier service interoperability and orchestration. The challenges faced during practical deployments of semantic EWSs are the need for scalable, time-sensitive data exchange and processing (especially involving heterogeneous data sources) and the need for resilience to changing ICT resource constraints in crisis zones. We present a novel IoT EWS framework that addresses these challenges, based upon a multi-semantic representation model. We use lightweight semantics for metadata to enhance rich sensor data acquisition, and heavyweight semantics for top-level W3C Web Ontology Language (OWL) ontology models describing multi-level knowledge bases and semantically driven decision support and workflow orchestration. This approach is validated both through system-related metrics and through a case study involving an advanced prototype of the semantic EWS, integrated with a deployed EWS infrastructure.
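
    As a minimal sketch of what "lightweight semantics for metadata" can look like in practice, the snippet below annotates a single sensor observation with SOSA-style RDF terms using rdflib. The sensor identifiers, values and choice of ontology terms are illustrative assumptions rather than the paper's actual ontology models.

```python
# Minimal sketch: annotating one sensor observation with lightweight RDF metadata.
# Requires: pip install rdflib. The sensor URI and values are hypothetical.
from rdflib import Graph, Namespace, Literal, RDF
from rdflib.namespace import XSD

SOSA = Namespace("http://www.w3.org/ns/sosa/")
EX = Namespace("http://example.org/ews/")  # placeholder namespace for this sketch

g = Graph()
g.bind("sosa", SOSA)

obs = EX["observation/42"]
g.add((obs, RDF.type, SOSA.Observation))
g.add((obs, SOSA.madeBySensor, EX["sensor/buoy-7"]))
g.add((obs, SOSA.observedProperty, EX["property/seaLevel"]))
g.add((obs, SOSA.hasSimpleResult, Literal(1.37, datatype=XSD.double)))
g.add((obs, SOSA.resultTime, Literal("2016-03-01T12:00:00Z", datatype=XSD.dateTime)))

# Serialise the annotated observation as Turtle for exchange with other services.
print(g.serialize(format="turtle"))
```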

    Geospatial crowdsourced data fitness analysis for spatial data infrastructure based disaster management actions

    The reporting of disasters has changed from official media reports to citizen reporters who are at the disaster scene. This kind of crowd-based reporting, related to disasters or any other events, is often identified as 'Crowdsourced Data' (CSD). CSD are freely and widely available thanks to current technological advancements. The quality of CSD is often problematic, as it is created by citizens of varying skills and backgrounds. CSD is generally unstructured, and its quality remains poorly defined. Moreover, location information in CSD may be missing, and the quality of any available locations may be uncertain. Traditional data quality assessment methods and parameters are also often incompatible with the unstructured nature of CSD because of its undocumented nature and missing metadata. Although other research has identified credibility and relevance as possible CSD quality assessment indicators, the available assessment methods for these indicators are still immature. In the 2011 Australian floods, citizens and disaster management administrators used the Ushahidi Crowdmap platform and the Twitter social media platform to extensively communicate flood-related information including hazards, evacuations, help services, road closures and property damage. This research designed a CSD quality assessment framework and tested the quality of the 2011 Australian floods' Ushahidi Crowdmap and Twitter data. In particular, it explored location availability and location quality assessment, semantic extraction of hidden location toponyms, and analysis of the credibility and relevance of reports. The research was conducted based on a Design Science (DS) research method, which is often utilised in Information Science (IS) research. Location availability and location quality of the Ushahidi Crowdmap and Twitter data were assessed by comparing them against three different datasets: Google Maps, OpenStreetMap (OSM) and the Queensland Department of Natural Resources and Mines' (QDNRM) road data. Missing locations were semantically extracted using Natural Language Processing (NLP) and gazetteer lookup techniques. The credibility of the Ushahidi Crowdmap dataset was assessed using a naive Bayesian Network (BN) model commonly utilised in spam email detection. CSD relevance was assessed by adapting Geographic Information Retrieval (GIR) relevance assessment techniques, which are also utilised in the IT sector. Thematic and geographic relevance were assessed using a Term Frequency–Inverse Document Frequency Vector Space Model (TF-IDF VSM) and NLP based on semantic gazetteers. Results of the CSD location comparison showed that the combined use of non-authoritative and authoritative data improved location determination. The semantic location analysis indicated some improvement in the location availability of the tweets and Crowdmap data; however, the quality of the new locations remained uncertain. The credibility analysis revealed that spam email detection approaches are feasible for CSD credibility detection, although it was critical to train the model in a controlled environment using structured training, including modified training samples. The use of GIR techniques for CSD relevance analysis provided promising results: a separate relevance-ranked list of the same CSD data was prepared through manual analysis, and the two lists generally agreed, indicating the system's potential to analyse relevance in a similar way to humans. This research showed that CSD fitness analysis can potentially improve the accuracy, reliability and currency of CSD and may be utilised to fill information gaps in authoritative sources. The integrated and autonomous CSD qualification framework presented provides a guide for flood disaster first responders and could be adapted to support other forms of emergencies.
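
    To make the relevance and credibility assessments concrete, the sketch below shows a TF-IDF vector space model ranking toy reports against a flood-related query, and a naive Bayes classifier of the kind used in spam email detection trained on invented credibility labels. scikit-learn is assumed here as the implementation and the messages, labels and query are made up; this is not the exact tooling or data used in the research.

```python
# Sketch only: toy data and scikit-learn stand in for the actual assessment pipeline.
# Requires: pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.naive_bayes import MultinomialNB

reports = [
    "Road closed due to flood water near the bridge",
    "Evacuation centre open at the school hall",
    "Great coffee this morning, lovely weather",
]

# Thematic relevance: TF-IDF vector space model, ranked by cosine similarity to a query.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(reports)
query_vector = vectorizer.transform(["flood road closure"])
scores = cosine_similarity(query_vector, doc_vectors).ravel()
for score, text in sorted(zip(scores, reports), reverse=True):
    print(f"{score:.2f}  {text}")

# Credibility: naive Bayes classifier trained on hypothetical labelled reports,
# analogous to spam-vs-ham email filtering.
labels = [1, 1, 0]  # 1 = credible event report, 0 = not (invented labels)
classifier = MultinomialNB()
classifier.fit(doc_vectors, labels)
print(classifier.predict(vectorizer.transform(["bridge flooded, road closed"])))
```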

    Report of the Stanford Linked Data Workshop

    The Stanford University Libraries and Academic Information Resources (SULAIR), with the Council on Library and Information Resources (CLIR), conducted a week-long workshop on the prospects for a large-scale, multi-national, multi-institutional prototype of a Linked Data environment for discovery of, and navigation among, the rapidly and chaotically expanding array of academic information resources. As preparation for the workshop, CLIR sponsored a survey by Jerry Persons, Chief Information Architect emeritus of SULAIR, that was published originally for workshop participants as background and is now publicly available. The original intention of the workshop was to devise a plan for such a prototype. However, such was the diversity of knowledge, experience and views of the potential of Linked Data approaches that the workshop participants turned to two more fundamental goals: building common understanding and enthusiasm on the one hand, and identifying opportunities and challenges to be confronted in the preparation of the intended prototype and its operation on the other. In pursuit of those objectives, the workshop participants produced: 1. a value statement addressing the question of why a Linked Data approach is worth prototyping; 2. a manifesto for Linked Libraries (and Museums and Archives and …); 3. an outline of the phases in a life cycle of Linked Data approaches; 4. a prioritized list of known issues in generating, harvesting & using Linked Data; 5. a workflow with notes for converting library bibliographic records and other academic metadata to URIs; 6. examples of potential “killer apps” using Linked Data; and 7. a list of next steps and potential projects. This report includes a summary of the workshop agenda, a chart showing the use of Linked Data in cultural heritage venues, and short biographies and statements from each of the participants.

    Interoperability and machine-to-machine translation model with mappings to machine learning tasks

    Modern large-scale automation systems integrate thousands to hundreds of thousands of physical sensors and actuators. Demands for more flexible reconfiguration of production systems, and for optimization across different information models, standards and legacy systems, challenge current system interoperability concepts. Automatic semantic translation across information models and standards is an increasingly important problem that needs to be addressed to fulfill these demands in a cost-efficient manner, under constraints of human capacity and resources in relation to timing requirements and system complexity. Here we define a translator-based operational interoperability model for interacting cyber-physical systems in mathematical terms, which includes system identification and ontology-based translation as special cases. We present alternative mathematical definitions of the translator learning task, and mappings to similar machine learning tasks and solutions based on recent developments in machine learning. Possibilities to learn translators between artefacts without a common physical context, for example in simulations of digital twins and across layers of the automation pyramid, are briefly discussed. Comment: 7 pages, 2 figures, 1 table, 1 listing. Submitted to the IEEE International Conference on Industrial Informatics 2019, INDIN'19.
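
    To give a flavour of what a mathematical definition of the translator learning task can look like, the formulation below is an illustrative sketch only; the notation (message spaces, hypothesis class, loss) is assumed here for exposition and is not claimed to be the paper's own definitions.

```latex
% Illustrative sketch, not the paper's notation: a translator T maps messages of
% system A into the message space of system B and is fitted on paired observations.
\[
  T^{\ast} \;=\; \arg\min_{T \in \mathcal{T}}
  \;\frac{1}{N}\sum_{i=1}^{N} \ell\bigl(T(x_i^{A}),\, x_i^{B}\bigr)
\]
% where (x_i^A, x_i^B) are corresponding messages observed in systems A and B,
% \mathcal{T} is a hypothesis class of translators (e.g. a neural sequence model),
% and \ell is a task-specific discrepancy between translated and observed messages.
```

    Framed this way, the task reduces to supervised learning over paired messages, which is one natural bridge to the machine learning mappings the abstract mentions; without paired data it would instead resemble unsupervised or weakly supervised translation.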

    An information assistant system for the prevention of tunnel vision in crisis management

    In the crisis management environment, tunnel vision is a set of biases in decision makers’ cognitive processes which often leads to an incorrect understanding of the real crisis situation, biased perception of information, and improper decisions. The tunnel vision phenomenon is a consequence of both the challenges of the task and the natural limitations of human cognitive processes. An information assistant system is proposed with the purpose of preventing tunnel vision. The system serves as a platform for monitoring the on-going crisis event: all information goes through the system before it arrives at the user. The system enhances data quality, reduces data quantity and presents the crisis information in a manner that prevents or repairs the user’s cognitive overload. While working with such a system, the users (crisis managers) are expected to be more likely to stay aware of the actual situation, remain open-minded to possibilities, and make proper decisions.

    A Geospatial Cyberinfrastructure for Urban Economic Analysis and Spatial Decision-Making

    Urban economic modeling and effective spatial planning are critical tools towards achieving urban sustainability. However, in practice, many technical obstacles, such as information islands, poor documentation of data and lack of software platforms to facilitate virtual collaboration, are challenging the effectiveness of decision-making processes. In this paper, we report on our efforts to design and develop a geospatial cyberinfrastructure (GCI) for urban economic analysis and simulation. This GCI provides an operational graphic user interface, built upon a service-oriented architecture, to allow (1) widespread sharing and seamless integration of distributed geospatial data; (2) an effective way to address the uncertainty and positional errors encountered in fusing data from diverse sources; (3) the decomposition of complex planning questions into atomic spatial analysis tasks and the generation of a web service chain to tackle such complex problems; and (4) capturing and representing provenance of geospatial data to trace its flow in the modeling task. The Greater Los Angeles Region serves as the test bed. We expect this work to contribute to effective spatial policy analysis and decision-making through the adoption of advanced GCI and to broaden the application coverage of GCI to include urban economic simulations.
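
    As a purely illustrative sketch of item (3), decomposing a planning question into atomic spatial-analysis tasks and chaining them can be expressed as ordered function composition. The task names, toy data and in-memory piping below are invented stand-ins; the actual GCI orchestrates distributed web services rather than local functions.

```python
# Generic sketch of a service chain: each step is an atomic spatial-analysis task.
# Step names and data are invented; a real GCI would invoke remote geospatial services.
from functools import reduce
from typing import Callable, Dict, List

Layer = Dict[str, float]  # toy stand-in for a geospatial result layer

def fetch_parcels(_: Layer) -> Layer:
    return {"parcels": 1200.0}

def join_employment_data(layer: Layer) -> Layer:
    return {**layer, "jobs_per_parcel": 3.4}

def aggregate_by_zone(layer: Layer) -> Layer:
    return {**layer, "zone_total_jobs": layer["parcels"] * layer["jobs_per_parcel"]}

def run_chain(steps: List[Callable[[Layer], Layer]]) -> Layer:
    """Execute the atomic tasks in order, piping each result into the next step."""
    return reduce(lambda acc, step: step(acc), steps, {})

if __name__ == "__main__":
    chain = [fetch_parcels, join_employment_data, aggregate_by_zone]
    print(run_chain(chain))
```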

    Survey on Quality of Observation within Sensor Web Systems

    The Sensor Web vision refers to the addition of a middleware layer between sensors and applications. To bridge the gap between these two layers, Sensor Web systems must deal with heterogeneous sources, which produce heterogeneous observations of disparate quality. Managing such diversity at the application level can be complex and requires high levels of expertise from application developers. Moreover, as an information-centric system, any Sensor Web should provide support for Quality of Observation (QoO) requirements. In practice, however, only a few Sensor Webs provide satisfactory QoO support and are able to deliver high-quality observations to end consumers in a specific manner. This survey studies why and how observation quality should be addressed in Sensor Webs, and makes three original contributions. First, it provides important insights into quality dimensions and proposes to use the QoO notion to deal with information quality within Sensor Webs. Second, it offers a QoO-oriented review of 29 Sensor Web solutions developed between 2003 and 2016, as well as a custom taxonomy to characterise some of their features from a QoO perspective. Finally, it identifies four major requirements for building future adaptive and QoO-aware Sensor Web solutions.
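
    As a minimal illustration of what consumer-facing QoO support can mean, the sketch below attaches quality attributes to each observation and filters on them before delivery. The attribute names and thresholds are assumptions made for exposition, not the survey's taxonomy.

```python
# Sketch of an observation carrying explicit Quality of Observation (QoO) metadata.
# Attribute names and the threshold-based filter are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Observation:
    sensor_id: str
    value: float
    accuracy: float      # e.g. sensor-reported error bound
    freshness_s: float   # age of the observation in seconds
    completeness: float  # fraction of expected fields present, 0..1

def meets_qoo(obs: Observation, max_age_s: float = 60.0, min_completeness: float = 0.8) -> bool:
    """Simple consumer-side QoO check applied before an observation is delivered."""
    return obs.freshness_s <= max_age_s and obs.completeness >= min_completeness

if __name__ == "__main__":
    obs = Observation("temp-03", 21.4, accuracy=0.5, freshness_s=12.0, completeness=1.0)
    print(meets_qoo(obs))  # True under the default thresholds
```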