
    A state-of-the-art review of built environment information modelling (BeIM)

    The elements that constitute the built environment are vast, and so are the independent systems developed to model its various aspects. Many of these systems have been developed under different assumptions and approaches to execute functions that are distinct, complementary or sometimes similar. These systems are also ever increasing in number and often adopt similar nomenclatures and acronyms, exacerbating the challenge of understanding their particular functions, definitions and differences. The current societal demand to improve sustainability performance through collaboration, whole-systems and through-life thinking is driving the need to integrate independent systems associated with different aspects and scales of the built environment to deliver smart solutions and services that improve the wellbeing of citizens. The contemporary object-oriented digitization of real-world elements appears to provide an avenue for amalgamating the modelling systems of the various built environment domains, which we term built environment information modelling (BeIM). These domains include Architecture, Engineering, Construction, and Urban Planning and Design. Applications such as Building Information Modelling, Geographic Information Systems and 3D City Modelling systems are now being integrated for city modelling purposes. The various works directed at integrating these systems are examined, revealing that current research efforts fall into three categories: (1) data/file conversion systems, (2) semantic mapping systems, and (3) hybrids of both. The review suggests that a good knowledge of these domains, and of how their respective systems operate, is vital to pursuing holistic systems integration in the built environment.

    A Knowledge-based Approach for Creating Detailed Landscape Representations by Fusing GIS Data Collections with Associated Uncertainty

    Geographic Information Systems (GIS) data for a region come in different types and are collected from different sources, such as aerial digitized color imagery, elevation data consisting of terrain heights at different points in the region, and feature data consisting of geometric information and properties about entities above or below the ground in that region. Merging GIS data and understanding the real-world information present explicitly or implicitly in that data is a challenging task. This is often done manually by domain experts because of their superior capability to efficiently recognize patterns and to combine, reason about, and relate information. When a detailed digital representation of the region is to be created, domain experts are required to make best-guess decisions about each object. For example, a human would create representations of entities by collectively looking at the data layers, noting even elements that are not visible, like a covered overpass or an underwater tunnel of a certain width and length. Such detailed representations are needed by processes like visualization or 3D modeling in applications used by the military, simulation, earth sciences and gaming communities. Many of these applications increasingly use digitally synthesized visuals and require detailed digital 3D representations to be generated quickly after the necessary initial data are acquired. Our main thesis, and a significant research contribution of this work, is that this task of creating detailed representations can be automated to a very large extent using a methodology that first fuses all available GIS data sources into knowledge base (KB) assertions (instances) representing real-world objects, using a subprocess called GIS2KB. Then, using reasoning, implicit information is inferred to define detailed 3D entity representations using a geometry definition engine called KB2Scene. Semantic Web technologies provide the semantic inferencing system and are extended with a data extraction framework. This framework enables the extraction of implicit property information using data and image analysis techniques, and supports the extraction of spatial relationship values and the attribution of uncertainties to inferred details. Uncertainty is recorded per property and used under Zadeh fuzzy semantics to compute a resulting uncertainty for inferred assertional axioms. This is achieved by another major contribution of our research, a unique extension of the KB ABox Realization service using KB explanation services. Previous semantics-based research in this domain has concentrated more on improving represented details through the addition of artifacts like lights, signage, crosswalks, etc. Previous attempts at handling uncertainty in assertions use a modified reasoner expressivity and calculus. Our work differs in that separating formal knowledge from data processing allows the fusion of heterogeneous data sources that share the same context. Imprecision is modeled through uncertainty on assertions without defining a new expressivity, as long as KB explanation services are available for the expressivity used. We also believe that, in our use case, this simplifies uncertainty calculations. The uncertainties are then available for user decision at output. We show that the process of creating 3D visuals from GIS data sources can be made more automated, modular and verifiable, and that the knowledge base instances become available for other applications to use as part of a common knowledge base. We define our method's components, discuss advantages and limitations, and show sample results for the transportation domain.
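    To make the uncertainty handling concrete, the following minimal Python sketch (not the dissertation's exact calculus; names and values are hypothetical) shows one way per-assertion certainties could be combined under Zadeh fuzzy semantics: minimum over the supporting assertions within one explanation, maximum over the alternative explanations returned by a KB explanation service.

```python
# Illustrative sketch only: propagate per-assertion certainty values to an
# inferred ABox axiom using Zadeh fuzzy operators: min over the assertions
# inside one explanation (conjunction), max over alternative explanations
# (disjunction).

def explanation_certainty(support_certainties):
    """Certainty of one explanation = Zadeh conjunction (min) of its supports."""
    return min(support_certainties)

def inferred_axiom_certainty(explanations):
    """Certainty of the inferred axiom = Zadeh disjunction (max) over the
    certainties of all alternative explanations returned by the KB
    explanation service."""
    return max(explanation_certainty(e) for e in explanations)

if __name__ == "__main__":
    # Two hypothetical explanations for an inferred assertion such as
    # "Bridge_17 hasClearance Low"; each lists the certainties attached to
    # the extracted property assertions it depends on.
    explanations = [
        [0.9, 0.7, 0.95],   # explanation 1: imagery-derived width, elevation delta, ...
        [0.6, 0.85],        # explanation 2: feature-layer attributes only
    ]
    print(inferred_axiom_certainty(explanations))  # -> 0.7
```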

    GeoAI-enhanced Techniques to Support Geographical Knowledge Discovery from Big Geospatial Data

    Big data that contain geo-referenced attributes have significantly reshaped the way geospatial data are processed and analyzed. Compared with the benefits expected in a data-rich environment, more data have not always led to more accurate analysis; "big but valueless" has become a critical concern for the GIScience and data-driven geography communities. As a highly utilized GeoAI technique, deep learning models designed for processing geospatial data integrate powerful computing hardware and deep neural networks into various dimensions of geography to effectively discover representations of the data. However, these models also have reported limitations: considerable time may be required to prepare training data before a deep learning model can be applied. The objective of this dissertation research is to advance state-of-the-art deep learning models in discovering the representation, value and hidden knowledge of GIS and remote sensing data through three research approaches. The first methodological framework uses convolutional neural network (CNN)-powered shape classification to unify multifarious shadow shapes into a limited number of representative shadow patterns for efficient shadow-based building height estimation. The second integrates semantic analysis into a framework of state-of-the-art CNNs to support human-level understanding of map content. The final approach normalizes geospatial domain knowledge to improve the transferability of a CNN model to land-use/land-cover classification, reporting a method designed to discover detailed land-use/land-cover types that would be challenging for a state-of-the-art CNN model previously trained only on land-cover classification.
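    As an illustration of the first framework's shape-classification step, the following PyTorch sketch (assumed input size, class count and architecture; not the dissertation's actual network) shows a small CNN that maps binary building-shadow masks to a limited number of representative shadow patterns.

```python
# Minimal sketch: classify 64x64 binary shadow masks into a small number of
# hypothetical representative shadow patterns with a compact CNN.
import torch
import torch.nn as nn

class ShadowPatternCNN(nn.Module):
    def __init__(self, num_patterns: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_patterns)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)          # (N, 64, 8, 8)
        x = x.flatten(start_dim=1)    # (N, 4096)
        return self.classifier(x)     # logits over shadow patterns

if __name__ == "__main__":
    model = ShadowPatternCNN()
    masks = torch.rand(4, 1, 64, 64)        # batch of 4 shadow masks
    pattern_logits = model(masks)           # (4, 8)
    print(pattern_logits.argmax(dim=1))     # predicted pattern index per mask
```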

    Geospatial crowdsourced data fitness analysis for spatial data infrastructure based disaster management actions

    The reporting of disasters has shifted from official media reports to citizen reporters who are at the disaster scene. This kind of crowd-based reporting, related to disasters or any other events, is often identified as 'Crowdsourced Data' (CSD). CSD are freely and widely available thanks to current technological advancements. The quality of CSD is often problematic, however, as it is created by citizens of varying skills and backgrounds. CSD is generally unstructured, and its quality remains poorly defined. Moreover, location information may be missing from CSD, and the quality of any locations that are available is often uncertain. Traditional data quality assessment methods and parameters are also often incompatible with the unstructured nature of CSD because of its undocumented nature and missing metadata. Although other research has identified credibility and relevance as possible CSD quality assessment indicators, the available assessment methods for these indicators are still immature. In the 2011 Australian floods, citizens and disaster management administrators used the Ushahidi crowd-mapping platform and the Twitter social media platform to communicate extensive flood-related information, including hazards, evacuations, help services, road closures and property damage. This research designed a CSD quality assessment framework and tested the quality of the 2011 Australian floods' Ushahidi Crowdmap and Twitter data. In particular, it explored location availability and location quality assessment, semantic extraction of hidden location toponyms, and analysis of the credibility and relevance of reports. The research was conducted using a Design Science (DS) research method, which is often utilised in Information Science (IS) research. The location availability assessment of the Ushahidi Crowdmap and Twitter data compared the available locations against three reference datasets: Google Maps, OpenStreetMap (OSM) and Queensland Department of Natural Resources and Mines (QDNRM) road data. Missing locations were semantically extracted using Natural Language Processing (NLP) and gazetteer lookup techniques. The credibility of the Ushahidi Crowdmap dataset was assessed using a naive Bayesian network (BN) model commonly utilised in spam email detection. CSD relevance was assessed by adapting Geographic Information Retrieval (GIR) relevance assessment techniques, which are also utilised in the IT sector. Thematic and geographic relevance were assessed using a Term Frequency–Inverse Document Frequency Vector Space Model (TF-IDF VSM) and NLP based on semantic gazetteers. Results of the CSD location comparison showed that the combined use of non-authoritative and authoritative data improved location determination. The semantic location analysis indicated some improvement in the location availability of the tweets and Crowdmap data; however, the quality of the new locations was still uncertain. The credibility analysis revealed that spam email detection approaches are feasible for CSD credibility detection, although it was critical to train the model in a controlled environment using structured training, including modified training samples. The use of GIR techniques for CSD relevance analysis provided promising results: a separate relevance-ranked list of the same CSD data was prepared through manual analysis, and the two lists generally agreed, indicating the system's potential to analyse relevance in a way similar to humans. This research showed that CSD fitness analysis can potentially improve the accuracy, reliability and currency of CSD and may be utilised to fill information gaps in authoritative sources. The integrated and autonomous CSD qualification framework presented provides a guide for flood disaster first responders and could be adapted to support other forms of emergencies.
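    The following scikit-learn sketch (hypothetical reports and query text; not the thesis' actual pipeline) illustrates the TF-IDF vector space model used here for thematic relevance: reports are ranked by the cosine similarity between their TF-IDF vectors and a flood-related query vector.

```python
# Minimal sketch: rank hypothetical crowdsourced reports by thematic relevance
# to a flood-related query using TF-IDF vectors and cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reports = [
    "Road closed at the creek crossing due to flood water over the bridge",
    "Evacuation centre open at the showgrounds, help services available",
    "Great coffee this morning at the market",   # likely irrelevant
]
query = ["flood road closure evacuation help"]

vectorizer = TfidfVectorizer(stop_words="english")
report_vectors = vectorizer.fit_transform(reports)   # one TF-IDF vector per report
query_vector = vectorizer.transform(query)           # query in the same vector space

scores = cosine_similarity(query_vector, report_vectors).ravel()
ranked = sorted(zip(scores, reports), reverse=True)  # most relevant first
for score, text in ranked:
    print(f"{score:.2f}  {text}")
```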

    CityGML in the Integration of BIM and the GIS: Challenges and Opportunities

    CityGML (City Geography Markup Language) is the most investigated standard in the integration of building information modeling (BIM) and the geographic information system (GIS), and it is essential for digital twin and smart city applications. The new CityGML 3.0 has been available for some time, but it is still not clear whether its new features bring new challenges or opportunities to this research topic. The aim of this study is therefore to understand the state of the art of CityGML in BIM/GIS integration and to investigate the potential influence of CityGML 3.0 on BIM/GIS integration. To achieve this aim, the study used a systematic literature review approach: in total, 136 papers from Web of Science (WoS) and Scopus were collected, reviewed, and analyzed. The main findings are as follows: (1) There are several challenging problems in the IFC-to-CityGML conversion, including LoD (Level of Detail) mapping, solid-to-surface conversion, and semantic mapping. (2) The ‘space’ concept and the new LoD concept in CityGML 3.0 can bring new opportunities to LoD mapping and solid-to-surface conversion. (3) The Versioning module and the Dynamizer module can add dynamic semantics to CityGML. (4) Graph techniques and scan-to-BIM offer new perspectives for facilitating the use of CityGML.
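    The semantic-mapping part of IFC-to-CityGML conversion can be illustrated with the following Python sketch (assumed file names, simplified CityGML 2.0 namespaces; geometry, solid-to-surface conversion and LoD mapping deliberately omitted), which maps IfcBuilding entities read with ifcopenshell to CityGML bldg:Building features.

```python
# Minimal sketch: map IfcBuilding entities from an IFC file to CityGML
# bldg:Building elements (semantics only, no geometry).
import xml.etree.ElementTree as ET
import ifcopenshell

NS = {
    "core": "http://www.opengis.net/citygml/2.0",
    "bldg": "http://www.opengis.net/citygml/building/2.0",
    "gml": "http://www.opengis.net/gml",
}
for prefix, uri in NS.items():
    ET.register_namespace(prefix, uri)

model = ifcopenshell.open("sample.ifc")                  # assumed input file
city_model = ET.Element(f"{{{NS['core']}}}CityModel")

for ifc_building in model.by_type("IfcBuilding"):
    member = ET.SubElement(city_model, f"{{{NS['core']}}}cityObjectMember")
    building = ET.SubElement(member, f"{{{NS['bldg']}}}Building")
    # Prefix the IFC GlobalId so the gml:id stays a valid XML identifier.
    building.set(f"{{{NS['gml']}}}id", "bldg_" + ifc_building.GlobalId)
    name = ET.SubElement(building, f"{{{NS['gml']}}}name")
    name.text = ifc_building.Name or "unnamed building"

ET.ElementTree(city_model).write("sample_citygml.gml",
                                 xml_declaration=True, encoding="UTF-8")
```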

    Automatic Geospatial Data Conflation Using Semantic Web Technologies

    Duplicated geospatial data collection and maintenance are an extensive problem across Australian government organisations. This research examines how Semantic Web technologies can be used to automate the geospatial data conflation process. It presents a new approach in which OWL ontologies generated from the output data models, together with geospatial data represented as RDF triples, serve as the basis of the solution, while SWRL rules serve as the core that automates the conflation process.
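    A minimal rdflib sketch of the underlying idea follows (hypothetical namespace and feature data; the rule is written as a SPARQL CONSTRUCT query standing in for a SWRL rule, since rdflib ships no SWRL engine and the thesis would use an OWL/SWRL reasoner): road features from two agencies are expressed as RDF triples, and the rule links features sharing the same road name as conflation candidates.

```python
# Minimal sketch: express geospatial features as RDF triples and infer
# conflation candidates with a rule expressed as SPARQL CONSTRUCT.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/roads#")   # assumed ontology namespace
g = Graph()
g.bind("ex", EX)

# Two representations of (presumably) the same road, from different sources.
g.add((EX.roadA1, RDF.type, EX.Road))
g.add((EX.roadA1, EX.source, Literal("AgencyA")))
g.add((EX.roadA1, EX.roadName, Literal("Ipswich Road")))

g.add((EX.roadB7, RDF.type, EX.Road))
g.add((EX.roadB7, EX.source, Literal("AgencyB")))
g.add((EX.roadB7, EX.roadName, Literal("Ipswich Road")))

rule = """
PREFIX ex: <http://example.org/roads#>
CONSTRUCT { ?a ex:conflatesWith ?b }
WHERE {
    ?a a ex:Road ; ex:roadName ?n ; ex:source ?sa .
    ?b a ex:Road ; ex:roadName ?n ; ex:source ?sb .
    FILTER (?sa != ?sb)
}
"""
for s, p, o in g.query(rule):
    g.add((s, p, o))        # materialise the inferred conflation links
    print(s, p, o)
```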

    Proceedings of the 2nd 4TU/14UAS Research Day on Digitalization of the Built Environment


    Report of the Stanford Linked Data Workshop

    The Stanford University Libraries and Academic Information Resources (SULAIR), with the Council on Library and Information Resources (CLIR), conducted a week-long workshop on the prospects for a large-scale, multi-national, multi-institutional prototype of a Linked Data environment for discovery of, and navigation among, the rapidly and chaotically expanding array of academic information resources. As preparation for the workshop, CLIR sponsored a survey by Jerry Persons, Chief Information Architect emeritus of SULAIR, which was originally published for workshop participants as background and is now publicly available. The original intention of the workshop was to devise a plan for such a prototype. However, such was the diversity of knowledge, experience, and views of the potential of Linked Data approaches that the participants turned to two more fundamental goals: building common understanding and enthusiasm on the one hand, and identifying opportunities and challenges to be confronted in preparing and operating the intended prototype on the other. In pursuit of those objectives, the workshop participants produced:
    1. a value statement addressing the question of why a Linked Data approach is worth prototyping;
    2. a manifesto for Linked Libraries (and Museums and Archives and …);
    3. an outline of the phases in a life cycle of Linked Data approaches;
    4. a prioritized list of known issues in generating, harvesting & using Linked Data;
    5. a workflow with notes for converting library bibliographic records and other academic metadata to URIs;
    6. examples of potential “killer apps” using Linked Data; and
    7. a list of next steps and potential projects.
    This report includes a summary of the workshop agenda, a chart showing the use of Linked Data in cultural heritage venues, and short biographies and statements from each of the participants.
