
    A Survey on Forensics and Compliance Auditing for Critical Infrastructure Protection

    The growing dependence of modern societies on essential services provided by Critical Infrastructures increases the importance of their trustworthiness. However, Critical Infrastructures are attractive targets for cyberattacks, due to the potential for considerable impact, not just at the economic level but also in terms of physical damage and even loss of human life. Complementing traditional security mechanisms, forensics and compliance auditing processes play an important role in ensuring Critical Infrastructure trustworthiness. Compliance auditing contributes to checking whether security measures are in place and compliant with standards and internal policies, while forensics assists in the investigation of past security incidents. Since these two areas significantly overlap in terms of data sources, tools and techniques, they can be merged into unified Forensics and Compliance Auditing (FCA) frameworks. In this paper, we survey the latest developments, methodologies, challenges, and solutions addressing forensics and compliance auditing in the scope of Critical Infrastructure Protection. The survey focuses on relevant contributions capable of tackling the requirements imposed by massively distributed and complex Industrial Automation and Control Systems, in terms of handling large volumes of heterogeneous data (which can be noisy, ambiguous, and redundant) for analytic purposes, with adequate performance and reliability. The results produced a taxonomy for the field of FCA whose key categories denote the relevant topics in the literature. The collected knowledge also led to a reference FCA architecture, proposed as a generic template for a converged platform. These results are intended to guide future research on forensics and compliance auditing for Critical Infrastructure Protection.

    Documenting Knowledge Graph Embedding and Link Prediction using Knowledge Graphs

    In recent years, sub-symbolic learning, i.e., Knowledge Graph Embedding (KGE) over Knowledge Graphs (KGs), has gained significant attention in various downstream tasks such as Link Prediction (LP). These techniques learn a latent vector representation of a KG's semantic structure to infer missing links. Nonetheless, KGE models remain black boxes, and the decision-making process behind them is not clear. Thus, the trustworthiness and reliability of the models' outcomes have been challenged. While many state-of-the-art approaches provide data-driven frameworks to address these issues, they do not always provide a complete understanding, and the interpretations are not machine-readable. In this work, we therefore extend a hybrid interpretable framework, InterpretME, to KGE models, especially translation distance models, including TransE, TransH, TransR, and TransD. The experimental evaluation on various benchmark KGs supports the validity of this approach, which we term Trace KGE. In particular, Trace KGE contributes to increased interpretability and understanding of the otherwise opaque behavior of KGE models.
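    The abstract contains no code; as background, the following is a minimal sketch of the TransE scoring idea underlying the translation distance models it names. The embeddings are random and purely illustrative, not trained, and the snippet is not the authors' implementation.
```python
import numpy as np

def transe_score(h, r, t, norm=1):
    """TransE plausibility: a triple (head, relation, tail) is scored by
    the distance ||h + r - t||; smaller values mean more plausible links."""
    return np.linalg.norm(h + r - t, ord=norm)

# Illustrative 4-dimensional embeddings (random, not trained).
rng = np.random.default_rng(0)
h, r, t = rng.normal(size=(3, 4))
print(transe_score(h, r, t))

# Link prediction ranks candidate tail entities by ascending distance.
candidates = rng.normal(size=(10, 4))
order = np.argsort([transe_score(h, r, c) for c in candidates])
print(order[:3])  # indices of the three most plausible tails
```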

    Digital approaches to construction compliance checking: Validating the suitability of an ecosystem approach to compliance checking

    The lifecycle of the built environment is governed by complex regulations, requirements and standards. Ensuring compliance with these requirements is a complicated process, affecting the entire supply chain and often incurring significant costs, delay and uncertainty. Many of the processes, and elements within these processes, are formalised and supported by varying levels of digitisation and automation, ranging from energy simulation and geometric checking to building information modelling (BIM) based checking. However, there are currently no unifying standards or integrating technology to tie regulatory efforts together and enable the widespread adoption of automated compliance processes. This has left many current technical approaches, while advanced and robust, isolated. The increasing maturity of asset datasets and information models, however, means that integration of data and tools is now feasible. This paper proposes and validates a new approach to automated compliance checking based on an ecosystem of compliance checking services. The work identifies a clear research gap: how automated compliance checking in the construction sector can move beyond sole reliance on BIM data and tightly coupled integration with software tools, towards a system extensible enough to integrate the isolated software elements currently used within compliance checking processes. To test this approach, an architecture for an ecosystem of compliance services is specified. To validate this architecture, a prototype is developed and assessed against requirements derived from the weaknesses of current approaches. This validation found that a distributed ecosystem can perform accurately and successfully, while providing advantages in terms of scalability and extensibility. The approach provides a route to the increased adoption of automated compliance checking, overcoming the issues of relying on a single computer system or application to perform all aspects of the process.
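    The abstract leaves the service interface unspecified; the sketch below is a hypothetical Python illustration of how independent checking services could plug into such an ecosystem behind a common contract. All names, rules and thresholds are invented for illustration and do not reflect the paper's prototype.
```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class CheckResult:
    rule_id: str
    passed: bool
    details: str = ""

class ComplianceService(Protocol):
    """Hypothetical common contract that every checking service implements."""
    name: str
    def check(self, model: dict) -> list[CheckResult]: ...

class FireEscapeWidthCheck:
    """Invented example rule: escape doors must be at least 1.2 m wide."""
    name = "fire-escape-width"

    def check(self, model: dict) -> list[CheckResult]:
        return [
            CheckResult("FE-01", door["width_m"] >= 1.2, f"door {door['id']}")
            for door in model.get("escape_doors", [])
        ]

def run_ecosystem(services, model):
    """Aggregate results from loosely coupled, independently deployable services."""
    return {service.name: service.check(model) for service in services}

if __name__ == "__main__":
    building = {"escape_doors": [{"id": "D1", "width_m": 1.0},
                                 {"id": "D2", "width_m": 1.5}]}
    print(run_ecosystem([FireEscapeWidthCheck()], building))
```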

    The DO-KB Knowledgebase: a 20-year journey developing the disease open science ecosystem.

    In 2003, the Human Disease Ontology (DO, https://disease-ontology.org/) was established at Northwestern University. In the intervening 20 years, the DO has expanded to become a highly utilized disease knowledge resource. Serving as the nomenclature and classification standard for human diseases, the DO provides a stable, etiology-based structure integrating mechanistic drivers of human disease. Over the past two decades, the DO has grown from a collection of clinical vocabularies into an expertly curated semantic resource of over 11,300 common and rare diseases, linking disease concepts through more than 37,000 vocabulary cross-mappings (v2023-08-08). Here, we introduce the recently launched DO Knowledgebase (DO-KB), which expands the DO's representation of the diseaseome and enhances the findability, accessibility, interoperability and reusability (FAIR) of disease data through a new SPARQL service and a new Faceted Search Interface. The DO-KB is an integrated data system, built upon the DO's semantic disease knowledge backbone, with resources that expose and connect the DO's semantic knowledge with disease-related data across Open Linked Data resources. This update includes descriptions of efforts to assess the DO's global impact and improvements to data quality and content, with emphasis on changes in the last two years.
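    As an illustration of the kind of access a SPARQL service enables, here is a hedged Python sketch using SPARQLWrapper. The endpoint URL is an assumption (the abstract does not give it), and the predicates follow common OBO/oboInOwl conventions rather than a documented DO-KB schema.
```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Assumed endpoint URL for illustration only; substitute the real DO-KB SPARQL service.
ENDPOINT = "https://disease-ontology.org/sparql"

QUERY = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX oboInOwl: <http://www.geneontology.org/formats/oboInOwl#>

SELECT ?disease ?label ?xref WHERE {
  ?disease rdfs:label ?label ;
           oboInOwl:hasDbXref ?xref .
  FILTER(CONTAINS(LCASE(STR(?label)), "melanoma"))
}
LIMIT 10
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["label"]["value"], "->", row["xref"]["value"])
```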

    GPT models in construction industry: Opportunities, limitations, and a use case validation

    Large Language Models (LLMs) trained on large data sets came into prominence in 2018 after Google introduced BERT. Subsequently, different LLMs, such as the GPT models from OpenAI, have been released. These models perform well on diverse tasks and have been gaining widespread application in fields such as business and education. However, little is known about the opportunities and challenges of using LLMs in the construction industry. This study therefore aims to assess GPT models in the construction industry. A critical review, expert discussion and case study validation are employed to achieve the study's objectives. The findings reveal opportunities for GPT models throughout the project lifecycle. The challenges of leveraging GPT models are highlighted, and a use case prototype is developed for materials selection and optimization. The findings of the study should benefit researchers, practitioners and stakeholders, as they present research vistas for LLMs in the construction industry.
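    The abstract does not describe the prototype's implementation; the snippet below is only a generic sketch of how a GPT model could be prompted for a materials-selection task with the OpenAI Python client. The model name and prompt are placeholders, not the authors' setup.
```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Recommend three low-carbon structural materials for a two-storey office "
    "building in a humid climate, and briefly justify each choice."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name, not the one used in the paper
    messages=[
        {"role": "system", "content": "You are a construction materials advisor."},
        {"role": "user", "content": prompt},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```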

    EksPy: a new Python framework for developing graphical user interface based PyQt5

    This study introduces EksPy, a novel Python framework for developing graphical user interface (GUI) applications. EksPy is built on PyQt5, a collection of Python bindings for the Qt libraries, and provides a user-friendly and intuitive interface. A comparative analysis of EksPy with existing frameworks such as Tkinter and PyQt highlights its notable features, including ease of use, rapid development, enhanced performance, effective database management, and the model-view-controller (MVC) concept. The experimental results show that EksPy requires less code and improves code readability, facilitating better understanding and more efficient development. Additionally, EksPy offers a modern and customizable appearance, surpassing Tkinter's capabilities. Furthermore, it incorporates a built-in object-relational mapping (ORM) feature to simplify database interactions and adheres to the MVC architectural pattern. In conclusion, EksPy emerges as a powerful, user-friendly, and efficient framework for GUI application development in Python.
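    EksPy's own API is not shown in the abstract, so none of its calls are reproduced here; for context, this is a minimal plain-PyQt5 window of the kind such frameworks aim to reduce to a few lines.
```python
import sys
from PyQt5.QtWidgets import QApplication, QLabel, QMainWindow

class MainWindow(QMainWindow):
    """Smallest useful window: a title and a single central label."""
    def __init__(self):
        super().__init__()
        self.setWindowTitle("Plain PyQt5 baseline")
        self.setCentralWidget(QLabel("Hello from PyQt5"))

if __name__ == "__main__":
    app = QApplication(sys.argv)
    window = MainWindow()
    window.show()
    sys.exit(app.exec_())
```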

    Computational reproducibility of Jupyter notebooks from biomedical publications

    Jupyter notebooks facilitate the bundling of executable code with its documentation and output in one interactive environment, and they represent a popular mechanism to document and share computational workflows. The reproducibility of computational aspects of research is a key component of scientific reproducibility but has not yet been assessed at scale for Jupyter notebooks associated with biomedical publications. We address computational reproducibility at two levels. First, using fully automated workflows, we analyzed the computational reproducibility of Jupyter notebooks related to publications indexed in PubMed Central. We identified such notebooks by mining the articles' full text, locating them on GitHub and re-running them in an environment as close to the original as possible. We documented reproduction success and exceptions and explored relationships between notebook reproducibility and variables related to the notebooks or publications. Second, this study represents a reproducibility attempt in and of itself, using essentially the same methodology twice on PubMed Central over two years. Out of 27,271 notebooks from 2,660 GitHub repositories associated with 3,467 articles, 22,578 notebooks were written in Python, including 15,817 that had their dependencies declared in standard requirements files and that we attempted to re-run automatically. For 10,388 of these, all declared dependencies could be installed successfully, and we re-ran them to assess reproducibility. Of these, 1,203 notebooks ran through without any errors, including 879 that produced results identical to those reported in the original notebook and 324 for which our results differed from the originally reported ones. Running the other notebooks resulted in exceptions. We zoom in on common problems, highlight trends and discuss potential improvements to Jupyter-related workflows associated with biomedical publications.
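    A minimal sketch of the automated re-execution step described above, using nbformat and nbclient; the study's full pipeline also recreates each repository's declared dependency environment, which is omitted here, and the notebook path is illustrative.
```python
import nbformat
from nbclient import NotebookClient
from nbclient.exceptions import CellExecutionError

def rerun(path, kernel="python3", timeout=600):
    """Execute every cell of a notebook and report success or the first failure."""
    nb = nbformat.read(path, as_version=4)
    client = NotebookClient(nb, kernel_name=kernel, timeout=timeout)
    try:
        client.execute()
        return "success"
    except CellExecutionError as exc:
        return f"exception: {exc}"

if __name__ == "__main__":
    print(rerun("analysis.ipynb"))  # notebook path is illustrative
```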

    Development of an Event Management Web Application For Students: A Focus on Back-end

    Managing schedules can be challenging for students, with different calendars on various platforms leading to confusion and missed events. To address this problem, this thesis presents the development of an event management website designed to help students stay organized and motivated. With a focus on the application's back-end, the thesis explores the technology stack used to build the website and the implementation details of each chosen technology. By providing a detailed case study of the website development process, this thesis serves as a helpful resource for future developers looking to build their own web applications.

    A BIM - GIS Integrated Information Model Using Semantic Web and RDF Graph Databases

    In recent years, 3D virtual indoor and outdoor urban modelling has become an essential geospatial information framework for civil and engineering applications such as emergency response, evacuation planning, and facility management. Multi-sourced and multi-scale 3D urban models are in high demand among architects, engineers, and construction professionals to achieve these tasks and provide relevant information to decision support systems. Spatial modelling technologies such as Building Information Modelling (BIM) and Geographical Information Systems (GIS) are frequently used to meet such demands. However, sharing data and information between these two domains is still challenging, and current semantic or syntactic strategies for inter-communication between BIM and GIS do not fully support rich semantic and geometric information exchange from BIM into GIS or vice versa. This research study proposes a novel approach for integrating BIM and GIS using Semantic Web technologies and Resource Description Framework (RDF) graph databases. The originality and novelty of the suggested solution come from combining the advantages of BIM and GIS models in a semantically unified data model, built using a semantic framework and ontology engineering approaches. The new model, named the Integrated Geospatial Information Model (IGIM), is constructed in three stages. The first stage generates BIMRDF and GISRDF graphs from the BIM and GIS datasets. The second integrates the BIM and GIS semantic models into a unified IGIMRDF graph. Lastly, the information in the IGIMRDF graph is filtered using a graph query language and graph data analytics tools. The linkage between BIMRDF and GISRDF is established through SPARQL endpoints, defined by queries over elements and entity classes with similar or complementary information from properties, relationships, and geometries, identified through an ontology-matching process during model construction. The resulting model (or sub-model) can be managed in a graph database system and used in the back end as a data tier serving web services that feed a front-tier, domain-oriented application. A case study was designed, developed, and tested using the semantically integrated information model to validate the newly proposed solution, its architecture, and its performance.
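    A minimal rdflib sketch of the generate-merge-query workflow described above; the file names, namespace, classes and query are illustrative assumptions, not the IGIM ontology itself.
```python
from rdflib import Graph

# Load the two source graphs (Turtle file names are placeholders).
bim = Graph().parse("bim_model.ttl", format="turtle")
gis = Graph().parse("gis_model.ttl", format="turtle")

# Graph union: the integrated graph holds the triples of both source models.
igim = bim + gis

# Example SPARQL query over the merged graph; the namespace and terms are invented.
QUERY = """
PREFIX ex: <http://example.org/igim#>
SELECT ?building ?label WHERE {
  ?building a ex:Building ;
            ex:hasLabel ?label .
}
LIMIT 10
"""
for building, label in igim.query(QUERY):
    print(building, label)
```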