
    A BIM - GIS Integrated Information Model Using Semantic Web and RDF Graph Databases

    In recent years, 3D virtual indoor and outdoor urban modelling has become an essential geospatial information framework for civil and engineering applications such as emergency response, evacuation planning, and facility management. Multi-sourced and multi-scale 3D urban models are in high demand among architects, engineers, and construction professionals to achieve these tasks and provide relevant information to decision support systems. Spatial modelling technologies such as Building Information Modelling (BIM) and Geographical Information Systems (GIS) are frequently used to meet such demands. However, sharing data and information between these two domains is still challenging, and existing semantic or syntactic strategies for inter-communication between BIM and GIS do not fully support rich semantic and geometric information exchange from BIM into GIS or vice versa. This research study proposes a novel approach for integrating BIM and GIS using semantic web technologies and Resource Description Framework (RDF) graph databases. The suggested solution's originality and novelty come from combining the advantages of integrating BIM and GIS models into a semantically unified data model using a semantic framework and ontology engineering approaches. The new model is named the Integrated Geospatial Information Model (IGIM). It is constructed in three stages. The first stage generates BIMRDF and GISRDF graphs from the BIM and GIS datasets. The second integrates the BIM and GIS semantic models into IGIMRDF. Lastly, the information in the unified IGIMRDF graph is filtered using a graph query language and graph data analytics tools. The linkage between BIMRDF and GISRDF is completed through SPARQL endpoints, with queries defined over elements and entity classes carrying similar or complementary information (properties, relationships, and geometries) identified by an ontology-matching process during model construction. The resulting model (or sub-model) can be managed in a graph database system and used in the backend as a data tier serving web services that feed a front-tier, domain-oriented application. A case study was designed, developed, and tested using the semantic integrated information model to validate the newly proposed solution, its architecture, and its performance.
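    As a rough illustration of the kind of graph merging and cross-domain querying the IGIM pipeline describes (this is not the authors' implementation; the file names, namespace, and query are hypothetical), the following Python sketch uses rdflib to load a BIMRDF and a GISRDF graph, merge them into a single graph, and run a SPARQL query across both:

        # A minimal sketch, assuming the BIM and GIS datasets have already been
        # converted to RDF (Turtle files); names are placeholders.
        from rdflib import Graph

        bim = Graph().parse("bimrdf.ttl", format="turtle")   # BIMRDF graph (assumed file)
        gis = Graph().parse("gisrdf.ttl", format="turtle")   # GISRDF graph (assumed file)

        # Merge both graphs into one IGIMRDF graph.
        igim = Graph()
        for triple in bim:
            igim.add(triple)
        for triple in gis:
            igim.add(triple)

        # Example cross-domain SPARQL query: pair BIM elements with the GIS
        # features they were matched to (owl:sameAs links stand in here for the
        # ontology-matching step described in the abstract).
        results = igim.query("""
            PREFIX owl: <http://www.w3.org/2002/07/owl#>
            SELECT ?bimElement ?gisFeature
            WHERE { ?bimElement owl:sameAs ?gisFeature . }
        """)
        for bim_element, gis_feature in results:
            print(bim_element, "<->", gis_feature)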

    Archaeological palaeoenvironmental archives: challenges and potential

    This Arts and Humanities Research Council (AHRC) sponsored collaborative doctoral project represents one of the most significant efforts to collate quantitative and qualitative data that can elucidate practices related to archaeological palaeoenvironmental archiving in England. The research has revealed that archived palaeoenvironmental remains are valuable resources for archaeological research and can clarify subjects that include the adoption and importation of exotic species, plant and insect invasion, human health and diet, and plant and animal husbandry practices. In addition to scientific research, archived palaeoenvironmental remains can provide evidence-based narratives of human resilience and climate change and offer evidence of the scientific process, making them ideal resources for public science engagement. These areas of potential have been recognised at a critical time, given that waterlogged palaeoenvironmental remains at significant sites such as Star Carr, Must Farm, and Flag Fen, as well as archaeological deposits in towns and cities, are at risk of decay due to climate change-related factors and unsustainable agricultural practices. Innovative approaches to collecting and archiving palaeoenvironmental remains and maintaining existing archives will permit the creation of an accessible and thorough national resource that can serve archaeologists and researchers in the related fields of biology and natural history. Furthermore, a concerted effort to recognise absences in archaeological archives, matched by an effort to address these deficiencies, can produce a resource that contributes to an enduring geographical and temporal record of England's biodiversity, which can be used in perpetuity in the face of diminishing archaeological and contemporary natural resources. To realise these opportunities, particular challenges must be overcome. The most prominent of these include inconsistent collection policies resulting from pressures associated with shortages in storage capacity and declining specialist knowledge in museums and repositories, combined with variable curation practices. Many of these challenges could be resolved by developing a dedicated storage facility focused on the ongoing conservation and curation of palaeoenvironmental remains. Combined with an OASIS+ module designed to handle and disseminate data pertaining to palaeoenvironmental archives, remains would become findable, accessible, and interoperable with biological archives and collections worldwide. Providing a national centre for curating palaeoenvironmental remains and a dedicated digital repository will require significant funding, and funding sources could be identified through collaboration with other disciplines. If sufficient funding cannot be secured, options that require less financial investment, such as high-level archive audits and the production of guidance documents, can still assist all stakeholders with the improved curation, management, and promotion of the archived resource.

    ENABLING KNOWLEDGE SHARING BY MANAGING DEPENDENCIES AND INTEROPERABILITY BETWEEN INTERLINKED SPATIAL KNOWLEDGE GRAPHS

    Knowledge sharing is increasingly being recognized as necessary to address societal, economic, environmental, and public health challenges. This often requires collaboration between federal, local, and tribal governments along with the private sector, nonprofit organizations, and institutions of higher education. To achieve this, there needs to be a move away from data-centric architectures towards knowledge sharing architectures, such as a Geospatial Knowledge Infrastructure (GKI). Data from multiple organizations need to be properly contextualized in both space and time to support geographically based planning, decision making, cooperation, and coordination. A spatial knowledge graph (SKG) is a useful paradigm for facilitating knowledge sharing and collaboration. However, interoperability between independently developed SKGs from different organizations that reference the same geographies is often not automated in a machine-readable way due to a lack of standardization. This paper outlines an architecture that automates interoperability and dependency management between SKGs as they are formally published by version and period of validity. We call this approach a spatial knowledge mesh (SKM), as it specializes the data mesh architecture and combines it with the concept of a common geo-registry to facilitate knowledge sharing more easily. The initial implementation, called GeoPrism Registry, is being developed as an open-source spatial knowledge infrastructure platform to help countries meet their NSDI and GKI objectives. It was first funded and deployed to support ministries of health and is more recently being utilized in GeoPlatform.gov.
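    The architecture hinges on SKGs being formally published by version and period of validity, so that dependent graphs can resolve exactly which snapshot of a shared geography they reference. The following Python sketch is a generic illustration of that idea under assumed names (it is not the GeoPrism Registry API): it models a published SKG version with a validity period and resolves the dependency version that was valid on a given date.

        # A minimal sketch, assuming each organization publishes immutable SKG
        # versions with an explicit period of validity (all names are hypothetical).
        from dataclasses import dataclass
        from datetime import date
        from typing import List, Optional

        @dataclass(frozen=True)
        class SkgVersion:
            graph_id: str          # e.g. "nso:admin-boundaries"
            version: int           # formal, immutable version number
            valid_from: date       # start of the period of validity
            valid_to: date         # end of the period of validity (inclusive)

        def resolve(versions: List[SkgVersion], graph_id: str,
                    as_of: date) -> Optional[SkgVersion]:
            """Return the published version of graph_id that was valid on as_of."""
            candidates = [v for v in versions
                          if v.graph_id == graph_id and v.valid_from <= as_of <= v.valid_to]
            return max(candidates, key=lambda v: v.version) if candidates else None

        # Example: a statistics SKG that references administrative boundaries
        # resolves the boundary-graph version valid for its reporting period.
        published = [
            SkgVersion("nso:admin-boundaries", 1, date(2020, 1, 1), date(2021, 12, 31)),
            SkgVersion("nso:admin-boundaries", 2, date(2022, 1, 1), date(2024, 12, 31)),
        ]
        print(resolve(published, "nso:admin-boundaries", date(2023, 6, 1)))  # -> version 2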

    The evolution of ontology in AEC: A two-decade synthesis, application domains, and future directions

    Ontologies play a pivotal role in knowledge representation and are particularly beneficial for the Architecture, Engineering, and Construction (AEC) sector due to its inherent data diversity and intricacy. Despite the growing interest in ontology and data integration research, especially with the advent of knowledge graphs and digital twins, there remains a noticeable lack of consolidated academic synthesis. This review paper aims to bridge that gap, meticulously analysing 142 journal articles from 2000 to 2021 on the application of ontologies in the AEC sector. Through systematic evaluation, the research is segmented into ten application domains within the construction realm: process, cost, operation/maintenance, health/safety, sustainability, monitoring/control, intelligent cities, heritage building information modelling (HBIM), compliance, and miscellaneous. This categorisation aids in pinpointing ontologies suitable for various research objectives. Furthermore, the paper highlights prevalent limitations within current ontology studies in the AEC sector and offers strategic recommendations, presenting a well-defined path for future research to address these gaps.

    Automatic Generation of Personalized Recommendations in eCoaching

    This thesis addresses eCoaching for personalised, real-time lifestyle support using information and communication technology. The challenge is to design, develop, and technically evaluate a prototype of an intelligent eCoach that automatically generates personalised, evidence-based recommendations for a better lifestyle. The developed solution focuses on improving physical activity. The prototype uses wearable medical activity sensors. The collected data are semantically represented, and artificial intelligence algorithms automatically generate meaningful, personalised, and context-based recommendations for reducing sedentary time. The thesis applies the well-established design science research methodology to develop theoretical foundations and practical implementations. Overall, this research focuses on technological verification rather than clinical evaluation.
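    As a purely illustrative sketch of the kind of context-based recommendation described above (this is not the thesis prototype; the threshold and message are assumptions), a rule that flags a prolonged sedentary bout from wearable activity-sensor samples might look like this:

        # A minimal sketch, assuming activity samples arrive as one label per
        # minute, either "sedentary" or "active" (the 60-minute limit is assumed).
        from typing import List, Optional

        SEDENTARY_LIMIT_MIN = 60  # hypothetical limit for a continuous sedentary bout

        def recommend(labels: List[str]) -> Optional[str]:
            """Return a recommendation if the current sedentary bout exceeds the limit."""
            bout = 0
            for label in labels:                 # oldest sample first
                bout = bout + 1 if label == "sedentary" else 0
            if bout >= SEDENTARY_LIMIT_MIN:
                return f"You have been inactive for {bout} minutes - time for a short walk."
            return None

        print(recommend(["sedentary"] * 75))                 # -> recommendation string
        print(recommend(["sedentary"] * 30 + ["active"]))    # -> None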

    Digital agriculture: research, development and innovation in production chains.

    Digital transformation in the field towards sustainable and smart agriculture. Digital agriculture: definitions and technologies. Agroenvironmental modeling and the digital transformation of agriculture. Geotechnologies in digital agriculture. Scientific computing in agriculture. Computer vision applied to agriculture. Technologies developed in precision agriculture. Information engineering: contributions to digital agriculture. DIPN: a dictionary of the internal proteins nanoenvironments and their potential for transformation into agricultural assets. Applications of bioinformatics in agriculture. Genomics applied to climate change: biotechnology for digital agriculture. Innovation ecosystem in agriculture: Embrapa's evolution and contributions. The law related to the digitization of agriculture. Innovating communication in the age of digital agriculture. Driving forces for Brazilian agriculture in the next decade: implications for digital agriculture. Challenges, trends and opportunities in digital agriculture in Brazil.

    Federated Data Modeling for Built Environment Digital Twins

    The digital twin (DT) approach is an enabler for data-driven decision making in architecture, engineering, construction, and operations. Various open data models that can potentially support DT developments, at different scales and application domains, can be found in the literature. However, many implementations are based on organization-specific information management processes and proprietary data models, hindering interoperability. This article presents the process and information management approaches developed to generate a federated open data model supporting DT applications. Business process modelling notation and transaction and interaction modelling techniques are applied to formalize the federated DT data modelling framework, which is organized in three main phases: requirements definition, federation, and validation and improvement. The proposed framework is developed following cross-disciplinary and multiscale principles. It is validated through the development of a federated building-level DT data model for the West Cambridge Campus DT research facility. The federated data model is used to enable DT-based asset management applications at the building and built environment levels.
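    As a loose illustration of what federating domain-specific data models can mean in practice (a sketch under assumed names and fields, not the data model developed in the article), the snippet below joins an asset record from a building-level model with readings from an operations model through a shared asset identifier:

        # A minimal sketch, assuming two domain models expose records keyed by a
        # shared asset identifier (all names and fields are hypothetical).
        from dataclasses import dataclass
        from typing import Dict, List

        @dataclass
        class BuildingAsset:          # from the building-level data model
            asset_id: str
            name: str
            location: str

        @dataclass
        class SensorReading:          # from the operations/monitoring data model
            asset_id: str
            quantity: str
            value: float

        def federate(assets: List[BuildingAsset],
                     readings: List[SensorReading]) -> Dict[str, dict]:
            """Join the two domain models on asset_id into one federated view."""
            view = {a.asset_id: {"asset": a, "readings": []} for a in assets}
            for r in readings:
                if r.asset_id in view:
                    view[r.asset_id]["readings"].append(r)
            return view

        assets = [BuildingAsset("AHU-01", "Air handling unit", "Plant room 2")]
        readings = [SensorReading("AHU-01", "supply air temperature", 17.5)]
        print(federate(assets, readings)["AHU-01"])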

    3D Visualisation - An Application and Assessment for Computer Network Traffic Analysis

    The intent of this research is to develop and assess the application of 3D data visualisation to the field of computer security. The growth of available data relating to computer networks necessitates a more efficient and effective way of presenting information to analysts in support of decision making and situational awareness. Advances in computer hardware and display software have made more complex and interactive presentation of data in 3D possible. While many attempts at creating data-rich 3D displays have been made in the field of computer security, they have not become the tool of choice in the industry. There is also a limited amount of published research assessing these tools in comparison to 2D graphical and tabular approaches to displaying the same data. This research was conducted through the creation of a novel abstraction framework for visualisation of computer network data, the Visual Interactive Network Analysis Framework (VINAF). This framework was implemented in software, and the software prototype was assessed using both a procedural approach applied to a published forensics challenge and an experiment with human participants. The key contributions to the fields of computer security and data visualisation made by this research include the creation of a novel abstraction framework for computer network traffic, which features several new visualisation approaches. An implementation of this framework was developed for the specific cybersecurity-related task of computer network traffic analysis and published under an open source license to the cybersecurity community. The research contributes a novel approach to human-based experimentation developed during the COVID-19 pandemic and also implemented a novel procedure-based testing approach for assessing the prototype data visualisation tool. Results of the research showed, through procedural experimentation, that the abstraction framework is effective for network forensics tasks and exhibited several advantages when compared to alternative approaches. The user participation experiment indicated that most of the participants deemed the abstraction framework to be effective in several tasks related to computer network traffic analysis. There was not a strong indication that it would be preferred over the existing approaches utilised by the participants; however, it would likely be used to augment existing methods.
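    The VINAF prototype itself is not reproduced here, but as a generic sketch of the underlying idea of mapping network traffic records into a 3D view (the field choices and synthetic data are assumptions for illustration only), flow records could be plotted on three axes such as time, destination port, and bytes transferred:

        # A minimal sketch, assuming flow records of (timestamp, dst_port, bytes).
        # This generic 3D scatter is for illustration and is not VINAF.
        import random
        import matplotlib.pyplot as plt
        from mpl_toolkits.mplot3d import Axes3D  # noqa: registers the 3D projection on older matplotlib

        flows = [(t, random.choice([22, 53, 80, 443, 3389]), random.randint(100, 50000))
                 for t in range(200)]            # synthetic flow records

        times = [f[0] for f in flows]
        ports = [f[1] for f in flows]
        sizes = [f[2] for f in flows]

        fig = plt.figure()
        ax = fig.add_subplot(projection="3d")    # 3D axes
        ax.scatter(times, ports, sizes, s=8)
        ax.set_xlabel("time")
        ax.set_ylabel("destination port")
        ax.set_zlabel("bytes transferred")
        plt.show()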

    Towards a human-centric data economy

    Spurred by widespread adoption of artificial intelligence and machine learning, “data” is becoming a key production factor, comparable in importance to capital, land, or labour in an increasingly digital economy. In spite of an ever-growing demand for third-party data in the B2B market, firms are generally reluctant to share their information. This is due to the unique characteristics of “data” as an economic good (a freely replicable, non-depletable asset holding a highly combinatorial and context-specific value), which moves digital companies to hoard and protect their “valuable” data assets, and to integrate across the whole value chain seeking to monopolise the provision of innovative services built upon them. As a result, most of those valuable assets still remain unexploited in corporate silos nowadays. This situation is shaping the so-called data economy around a number of champions, and it is hampering the benefits of a global data exchange on a large scale. Some analysts have estimated the potential value of the data economy at US$2.5 trillion globally by 2025. Not surprisingly, unlocking the value of data has become a central policy of the European Union, which has also estimated the size of the data economy at €827 billion for the EU27 in the same period. Within the scope of the European Data Strategy, the European Commission is also steering initiatives aimed at identifying relevant cross-industry use cases involving different verticals, and at enabling sovereign data exchanges to realise them. Among individuals, the massive collection and exploitation of personal data by digital firms in exchange for services, often with little or no consent, has raised a general concern about privacy and data protection. Apart from spurring recent legislative developments in this direction, this concern has raised voices warning against the unsustainability of the existing digital economics (few digital champions, potential negative impact on employment, growing inequality), some of which propose that people be paid for their data in a sort of worldwide data labour market as a potential solution to this dilemma [114, 115, 155]. From a technical perspective, we are far from having the required technology and algorithms that would enable such a human-centric data economy. Even its scope is still blurry, and the question about the value of data is, at the least, controversial. Research works from different disciplines have studied the data value chain, different approaches to the value of data, how to price data assets, and novel data marketplace designs. At the same time, complex legal and ethical issues with respect to the data economy have arisen around privacy, data protection, and ethical AI practices. In this dissertation, we start by exploring the data value chain and how entities trade data assets over the Internet. We carry out what is, to the best of our understanding, the most thorough survey of commercial data marketplaces. In this work, we have catalogued and characterised ten different business models, including those of personal information management systems, companies born in the wake of recent data protection regulations and aiming at empowering end users to take control of their data. We have also identified the challenges faced by different types of entities, and what kind of solutions and technology they are using to provide their services. Then we present a first-of-its-kind measurement study that sheds light on the prices of data in the market using a novel methodology.
We study how ten commercial data marketplaces categorise and classify data assets, and which categories of data command higher prices. We also develop classifiers for comparing data products across different marketplaces, and we study the characteristics of the most valuable data assets and the features that specific vendors use to set the price of their data products. Based on this information, and adding data products offered by 33 other data providers, we develop a regression analysis to reveal the features that correlate with the prices of data products. As a result, we also implement the basic building blocks of a novel data pricing tool capable of providing a hint of the market price of a new data product using just its metadata as input. This tool would provide more transparency on the prices of data products in the market, which will help in pricing data assets and in avoiding the inherent price fluctuation of nascent markets. Next we turn to topics related to data marketplace design. In particular, we study how buyers can select and purchase suitable data for their tasks without requiring a priori access to such data in order to make a purchase decision, and how marketplaces can distribute the payoff of a data transaction that combines data from different sources among the corresponding providers, be they individuals or firms. The difficulty of both problems is further exacerbated in a human-centric data economy, where buyers have to choose among data of thousands of individuals, and where marketplaces have to distribute payoffs to thousands of people contributing personal data to a specific transaction. Regarding the selection process, we compare different purchase strategies depending on the level of information available to data buyers at the time of making decisions. A first methodological contribution of our work is proposing a data evaluation stage prior to datasets being selected and purchased by buyers in a marketplace. We show that buyers can significantly improve the performance of the purchasing process just by being provided with a measurement of the performance of their models when trained by the marketplace with individual eligible datasets. We design purchase strategies that exploit this functionality; we call the resulting algorithm Try Before You Buy, and our work demonstrates over synthetic and real datasets that it can lead to near-optimal data purchasing with only O(N) execution time instead of the exponential O(2^N) time needed to calculate the optimal purchase. With regard to the payoff distribution problem, we focus on computing the relative value of spatio-temporal datasets combined in marketplaces for predicting transportation demand and travel time in metropolitan areas. Using large datasets of taxi rides from Chicago, Porto, and New York, we show that the value of data is different for each individual and cannot be approximated by its volume. Our results reveal that even more complex approaches based on the “leave-one-out” value are inaccurate. Instead, more complex and acknowledged notions of value from economics and game theory, such as the Shapley value, need to be employed if one wishes to capture the complex effects of mixing different datasets on the accuracy of forecasting algorithms. However, the Shapley value entails serious computational challenges: its exact calculation requires repetitively training and evaluating every combination of data sources, and hence O(N!) or O(2^N) computational time, which is infeasible for complex models or thousands of individuals. Moreover, our work paves the way to new methods of measuring the value of spatio-temporal data. We identify heuristics, such as entropy or similarity to the average, that show a significant correlation with the Shapley value and therefore can be used to overcome the significant computational challenges posed by Shapley approximation algorithms in this specific context. We conclude with a number of open issues and propose further research directions that leverage the contributions and findings of this dissertation. These include monitoring data transactions to better measure data markets, and complementing market data with actual transaction prices to build a more accurate data pricing tool. A human-centric data economy would also require that the contributions of thousands of individuals to machine learning tasks be calculated daily. For that to be feasible, we need to further optimise the efficiency of the data purchasing and payoff calculation processes in data marketplaces. In that direction, we also point to some alternatives to repetitively training and evaluating a model in order to select data based on Try Before You Buy and to approximate the Shapley value. Finally, we discuss the challenges and potential technologies that could help with building a federation of standardised data marketplaces. The data economy will develop fast in the upcoming years, and researchers from different disciplines will work together to unlock the value of data and make the most out of it. Maybe the proposal of getting paid for our data and our contribution to the data economy finally flies, or maybe other proposals, such as the robot tax, are finally used to balance the power between individuals and tech firms in the digital economy. Still, we hope our work sheds light on the value of data, and contributes to making the price of data more transparent and, eventually, to moving towards a human-centric data economy. This work has been supported by IMDEA Networks Institute.
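    As a toy illustration of why exact Shapley-value computation scales exponentially while heuristics such as entropy stay cheap, the sketch below computes the exact Shapley value of each data source for a small number of sources, where the value of a coalition stands for the accuracy of a model trained on the union of its data (the coalition values here are hypothetical, not the forecasting models used in the dissertation):

        # A minimal sketch: exact Shapley values for N data sources, assuming
        # value(coalition) returns the accuracy of a model trained on those
        # sources. Enumerating all subsets costs O(2^N), which is why the
        # dissertation turns to approximations and cheap heuristics.
        from itertools import combinations
        from math import factorial
        from typing import Callable, Dict, FrozenSet, List

        def shapley(sources: List[str],
                    value: Callable[[FrozenSet[str]], float]) -> Dict[str, float]:
            n = len(sources)
            phi = {s: 0.0 for s in sources}
            for s in sources:
                others = [x for x in sources if x != s]
                for k in range(n):
                    for coalition in combinations(others, k):
                        c = frozenset(coalition)
                        weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                        phi[s] += weight * (value(c | {s}) - value(c))
            return phi

        # Hypothetical coalition values standing in for trained-model accuracy.
        accuracy = {
            frozenset(): 0.50, frozenset({"A"}): 0.70, frozenset({"B"}): 0.65,
            frozenset({"C"}): 0.55, frozenset({"A", "B"}): 0.80,
            frozenset({"A", "C"}): 0.72, frozenset({"B", "C"}): 0.66,
            frozenset({"A", "B", "C"}): 0.82,
        }
        print(shapley(["A", "B", "C"], lambda c: accuracy[c]))

    Running the sketch on these three hypothetical sources shows how their marginal contributions differ from a simple leave-one-out comparison, which only looks at the grand coalition minus one source.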