1,113 research outputs found

    Proceedings of the 2nd 4TU/14UAS Research Day on Digitalization of the Built Environment


    Measuring the impact of COVID-19 on hospital care pathways

    Care pathways in hospitals around the world reported significant disruption during the recent COVID-19 pandemic, but measuring the actual impact is more problematic. Process mining can be useful for hospital management to measure the conformance of real-life care to what might be considered normal operations. In this study, we aim to demonstrate that process mining can be used to investigate process changes associated with complex disruptive events. We studied perturbations to accident and emergency (A&E) and maternity pathways in a UK public hospital during the COVID-19 pandemic. Coincidentally, the hospital had implemented a Command Centre approach for patient-flow management, affording an opportunity to study both the planned improvement and the disruption due to the pandemic. Our study proposes and demonstrates a method for measuring and investigating the impact of such planned and unplanned disruptions affecting hospital care pathways. We found that during the pandemic, both A&E and maternity pathways had measurable reductions in the mean length of stay and a measurable drop in the percentage of pathways conforming to normative models. There were no distinctive patterns in the monthly mean values of length of stay or conformance throughout the phases of the installation of the hospital's new Command Centre approach. Due to a deficit in the available A&E data, the findings for A&E pathways could not be interpreted.
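    The two measurements described above (mean length of stay and the share of pathways matching a normative model) can be illustrated with a toy sketch. This is not the authors' pipeline: the event log, activity names, and normative sequence below are invented, and a production analysis would typically rely on a process-mining library such as pm4py for conformance checking.

```python
from datetime import datetime
from statistics import mean

# Hypothetical event log: one trace (list of (activity, timestamp)) per patient pathway.
event_log = {
    "patient_001": [("arrival", "2020-03-01 10:00"), ("triage", "2020-03-01 10:20"),
                    ("treatment", "2020-03-01 11:05"), ("discharge", "2020-03-01 13:40")],
    "patient_002": [("arrival", "2020-03-01 11:00"), ("treatment", "2020-03-01 11:30"),
                    ("discharge", "2020-03-01 12:10")],
}

# Normative model, simplified here to a single expected activity sequence.
NORMATIVE = ["arrival", "triage", "treatment", "discharge"]

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%d %H:%M")

def length_of_stay_hours(trace):
    """Time between the first and last event of a pathway, in hours."""
    times = [parse(ts) for _, ts in trace]
    return (max(times) - min(times)).total_seconds() / 3600

def conforms(trace):
    """A trace conforms if its activities follow the normative sequence exactly."""
    return [activity for activity, _ in trace] == NORMATIVE

los = [length_of_stay_hours(t) for t in event_log.values()]
conforming = sum(conforms(t) for t in event_log.values())

print(f"mean length of stay: {mean(los):.1f} h")
print(f"conformance rate: {conforming / len(event_log):.0%}")
```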

    Metal Cations in Protein Force Fields: From Data Set Creation and Benchmarks to Polarizable Force Field Implementation and Adjustment

    Metal cations are essential to life. About one-third of all proteins require metal cofactors to accurately fold or to function. Computer simulations using empirical parameters and classical molecular mechanics models (force fields) are the standard tool to investigate proteins’ structural dynamics and functions in silico. Despite many successes, the accuracy of force fields is limited when cations are involved. The focus of this thesis is the development of tools and strategies to create system-specific force field parameters to accurately describe cation-protein interactions. The accuracy of a force field mainly relies on (i) the parameters derived from increasingly large quantum chemistry or experimental data and (ii) the physics behind the energy formula. The first part of this thesis presents a large and comprehensive quantum chemistry data set on a consistent computational footing that can be used for force field parameterization and benchmarking. The data set covers dipeptides of the 20 proteinogenic amino acids with different possible side chain protonation states, three divalent cations (Ca2+, Mg2+, and Ba2+), and a wide relative energy range. Crucial properties related to force field development, such as partial charges and interaction energies, are also provided. To make the data available, the data set was uploaded to the NOMAD repository and its data structure was formalized in an ontology. Besides a proper data basis for parameterization, the physics covered by the terms of the additive force field formulation impacts its applicability. The second part of this thesis benchmarks three popular non-polarizable force fields and the polarizable Drude model against a quantum chemistry data set. After some adjustments, the Drude model was found to reproduce the reference interaction energies substantially better than the non-polarizable force fields, which showed the importance of explicitly addressing polarization effects. Adjusting the Drude model involved Boltzmann-weighted fitting to optimize Thole factors and Lennard-Jones parameters. The obtained parameters were validated by (i) their ability to reproduce reference interaction energies and (ii) molecular dynamics simulations of the N-lobe of calmodulin. This work facilitates the improvement of polarizable force fields for cation-protein interactions by quantum chemistry-driven parameterization combined with molecular dynamics simulations in the condensed phase. While the Drude model shows its potential for simulating cation-protein interactions, it lacks a description of charge transfer effects, which are significant between cations and proteins. The CTPOL model extends the classical force field formulation with charge transfer (CT) and polarization (POL) terms. Since the CTPOL model is not readily available in any of the popular molecular-dynamics packages, it was implemented in OpenMM. Furthermore, an open-source parameterization tool, called FFAFFURR, was implemented that enables the (system-specific) parameterization of OPLS-AA and CTPOL models. Following the method established in the previous part, the performance of FFAFFURR was evaluated by its ability to reproduce quantum chemistry energies and molecular dynamics simulations of a zinc finger protein.
In conclusion, this thesis steps towards the development of next-generation force fields to accurately describe cation-protein interactions by providing (i) reference data, (ii) a force field model that includes charge transfer and polarization, and (iii) a freely available parameterization tool.
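    The Boltzmann-weighted fitting mentioned for the Drude adjustment can be pictured as a weighted least-squares problem. The sketch below is illustrative only: the "reference" energies are synthetic and a toy Lennard-Jones energy function with two parameters stands in for the actual Thole-factor and Lennard-Jones refit.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic reference data: interaction energies (kcal/mol) for a set of
# cation-site distances, generated from a known Lennard-Jones potential.
rng = np.random.default_rng(0)
r = rng.uniform(2.0, 5.0, 50)                         # distances in Angstrom (made up)
e_qm = 4 * 0.2 * ((3.0 / r) ** 12 - (3.0 / r) ** 6)   # stand-in "QM" energies

def e_ff(params, r):
    """Toy force-field energy: a Lennard-Jones term with adjustable epsilon, sigma."""
    eps, sig = params
    return 4 * eps * ((sig / r) ** 12 - (sig / r) ** 6)

kT = 0.593  # kcal/mol at ~298 K

def boltzmann_weighted_loss(params):
    # Low-energy (most physically relevant) configurations get the largest weights.
    w = np.exp(-(e_qm - e_qm.min()) / kT)
    return np.sum(w * (e_ff(params, r) - e_qm) ** 2) / np.sum(w)

fit = minimize(boltzmann_weighted_loss, x0=[0.1, 2.5], method="Nelder-Mead")
print("fitted epsilon, sigma:", fit.x)   # should recover roughly 0.2 and 3.0
```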

    OpenCitations Meta

    OpenCitations Meta is a new database that contains bibliographic metadata of scholarly publications involved in citations indexed by the OpenCitations infrastructure. It adheres to Open Science principles and provides data under a CC0 license for maximum reuse. The data can be accessed through a SPARQL endpoint, REST APIs, and dumps. OpenCitations Meta serves three important purposes. Firstly, it enables disambiguation of citations between publications described using different identifiers from various sources. For example, it can link publications identified by DOIs in Crossref and PMIDs in PubMed. Secondly, it assigns new globally persistent identifiers (PIDs), known as OpenCitations Meta Identifiers (OMIDs), to bibliographic resources without existing external persistent identifiers like DOIs. Lastly, by hosting the bibliographic metadata internally, OpenCitations Meta improves the speed of metadata retrieval for citing and cited documents. The database is populated through automated data curation, including deduplication, error correction, and metadata enrichment. The data is stored in RDF format following the OpenCitations Data Model, and changes and provenance information are tracked. This paper describes OpenCitations Meta and its production. OpenCitations Meta currently incorporates data from Crossref, DataCite, and the NIH Open Citation Collection. Among semantic publishing datasets, it is currently the largest in data volume.
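    As a rough illustration of programmatic access to such a service, the sketch below requests metadata for a DOI-identified work over HTTP. The base URL, response shape, and field names are assumptions and should be checked against the current OpenCitations API documentation.

```python
import requests

# Hypothetical lookup against an OpenCitations Meta metadata endpoint; the URL
# below is an assumption, not a confirmed API route.
BASE = "https://api.opencitations.net/meta/v1/metadata"   # assumed endpoint
identifier = "doi:10.1007/s11192-009-0146-3"              # any DOI-identified work

resp = requests.get(f"{BASE}/{identifier}", timeout=30)
resp.raise_for_status()

for record in resp.json():
    # Records are expected to carry the OMID alongside any external identifiers;
    # the field names "id" and "title" are assumptions.
    print(record.get("id"), "-", record.get("title"))
```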

    CIDOC CRM as the basis of the Electronic State Register of Immovable Cultural Heritage of Ukraine

    Abstract. This article is the final one in a series on the Conceptual Provisions for the Creation of a New Electronic State Register of Immovable Cultural Heritage (CH) of Ukraine. These provisions correspond to the components of the Solutions Framework (SoFr) of special Spatial Information Systems (SpIS) defined in the monograph [15]. The special SpIS of the new registry of the CH of Ukraine should belong to the class of Atlas Geoinformation Systems (AGIS) described in [14]. The first queue of AGIS, AGIS-CH1, is proposed as the first queue of the new electronic State Register of Immovable CH of Ukraine. This first queue should include at least three components that are simultaneously SoFr packages: Products-Processes-Basics. The conceptual provisions of AGIS-CH1 describe these three most important components of the AGIS-CH1 architectural pattern: AGIS-CH1.Products, Part 1 [17]; AGIS-CH1.Processes, Part 2 [18]; and AGIS-CH1.Basics, Part 3, which consists of two subparts, 3.1 and 3.2. Subpart 3.1 is described in the article [19]. This article describes subpart 3.2, called "Basics. CIDOC CRM". It consists of two main sections. The first examines the prerequisites that lead to the use of CIDOC CRM. These prerequisites are two evolutions: a system evolution and a subject evolution. The system evolution holds that the time has come to treat the registry of the CH of Ukraine as a SpIS of the Web 3.0 formation, also known as the Semantic Web, especially when a new registry is being created. The subject evolution refers to the evolving understanding of the cultural heritage domain. From the review of this issue in the monograph [4], it is evident that modern CH registers should be "process"-oriented rather than "product"-oriented. To approach CIDOC CRM with a better understanding of the essence of the issue, the CHARM model (Cultural Heritage Abstract Reference Model) was considered first. CIDOC CRM could serve the same purpose, but CHARM is described in a single, practically applicable monograph [4], unlike the scattered articles on CIDOC CRM. The second main section of the article deals with CIDOC CRM and its use. We do not describe CIDOC CRM completely; attention is paid only to its "spatial" and "process" parts, and to how CIDOC CRM can be applied. For the latter, the information from the website (https://www.cidoc-crm.org/how-i-can-use-cidoc-crm, 2023-jun-26) is used first, followed by some initial information about the Arches software platform. We propose the Arches platform for the implementation of AGIS-CH1. Keywords: Solutions Framework (SoFr), Atlas geoinformation system (AGIS), Basics of AGIS SoFr, register of CH as the first queue of AGIS
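    To make the "spatial" and "process" parts of CIDOC CRM more concrete, the sketch below builds a minimal RDF description of a heritage object, its location (E53 Place), and a documented activity (E7 Activity). The class and property identifiers follow the published CIDOC CRM RDFS; the register namespace and URIs are invented for illustration only.

```python
from rdflib import Graph, Namespace, Literal, RDF, RDFS

CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
REG = Namespace("https://example.org/ch-register/")   # hypothetical register namespace

g = Graph()
g.bind("crm", CRM)

monument = REG["monument/001"]
site = REG["place/001"]
restoration = REG["activity/001"]

# "Spatial" part: a physical heritage object and its location.
g.add((monument, RDF.type, CRM.E18_Physical_Thing))
g.add((monument, RDFS.label, Literal("Example immovable heritage object")))
g.add((site, RDF.type, CRM.E53_Place))
g.add((monument, CRM.P53_has_former_or_current_location, site))

# "Process" part: an activity documented against the same place.
g.add((restoration, RDF.type, CRM.E7_Activity))
g.add((restoration, CRM.P7_took_place_at, site))

print(g.serialize(format="turtle"))
```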

    Digital Twins of production systems - Automated validation and update of material flow simulation models with real data

    To achieve good economic efficiency and sustainability, production systems must be operated at high productivity over long periods of time. This poses major challenges for manufacturing companies, especially in times of increased volatility triggered, for example, by technological upheavals in mobility as well as by political and societal change, because the requirements placed on the production system are constantly changing. The frequency of necessary adaptation decisions and subsequent optimization measures rises, so the need for ways to evaluate scenarios and possible system configurations increases. Material flow simulation is a powerful tool for this purpose, but its use is currently limited by the effort of manual model creation and by its temporally limited, project-based usage. Longer-term use across the system's life cycle is currently hindered by the labour-intensive maintenance of the simulation model, i.e. manually adapting the model whenever the real system changes. The goal of this work is to develop and implement a concept, including the required methods, for automating the maintenance of the simulation model and its adaptation to reality. For this purpose, the real-world data that is increasingly available due to trends such as Industrie 4.0 and digitalization in general is used. The vision pursued in this work is a digital twin of the production system that, fed by this data input, represents a realistic image of the system at any point in time and can be used for the realistic evaluation of scenarios. To this end, the required overall concept was designed and the mechanisms for automatic validation and updating of the model were developed. The focus was, among other things, on the development of algorithms for detecting changes in the structure and processes of the production system, as well as on investigating the influence of the available data. The developed components were successfully applied to a real use case at Robert Bosch GmbH and increased the realism of the digital twin, which was successfully used for production planning and optimization. The potential of localization data for creating digital twins of production systems was demonstrated in the test environment of the learning factory of the wbk Institut für Produktionstechnik.
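    One way to picture the automatic validation step is as a statistical comparison of real and simulated behaviour. The sketch below is not the method of the thesis: it compares synthetic cycle-time samples with a two-sample Kolmogorov-Smirnov test and flags the simulation model for an update when the distributions diverge.

```python
import numpy as np
from scipy.stats import ks_2samp

# Synthetic data standing in for logged cycle times of one station:
# one sample from the real system, one produced by the simulation model.
rng = np.random.default_rng(42)
real_cycle_times = rng.normal(loc=62.0, scale=4.0, size=500)   # seconds, "real" logs
sim_cycle_times = rng.normal(loc=60.0, scale=4.0, size=500)    # seconds, model output

stat, p_value = ks_2samp(real_cycle_times, sim_cycle_times)

ALPHA = 0.01  # significance level for declaring a mismatch
if p_value < ALPHA:
    print(f"model invalid for this station (KS={stat:.3f}, p={p_value:.4f}) -> trigger update")
else:
    print(f"model consistent with real data (KS={stat:.3f}, p={p_value:.4f})")
```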

    Computational and human-based methods for knowledge discovery over knowledge graphs

    The modern world has evolved alongside an enormous exploitation of data and information. Every day, increasing volumes of data from various sources and in various formats are stored, making it challenging to manage and integrate them to discover new knowledge. The appropriate use of data in various sectors of society, such as education, healthcare, e-commerce, and industry, provides advantages for decision support in these areas. However, knowledge discovery becomes challenging since data may come from heterogeneous sources with important information hidden. Thus, new approaches that adapt to the new challenges of knowledge discovery in such heterogeneous data environments are required. The semantic web and knowledge graphs (KGs) are becoming increasingly relevant on the road to knowledge discovery. This thesis tackles the problem of knowledge discovery over KGs built from heterogeneous data sources. We provide a neuro-symbolic artificial intelligence system that integrates symbolic and sub-symbolic frameworks to exploit the semantics encoded in a KG and its structure. The symbolic system relies on existing approaches of deductive databases to make explicit the implicit knowledge encoded in a KG. The proposed deductive database DSDS can derive new statements for ego networks given an abstract target prediction. Thus, DSDS minimizes data sparsity in KGs. In addition, a sub-symbolic system relies on knowledge graph embedding (KGE) models. KGE models are commonly applied in the KG completion task to represent entities in a KG in a low-dimensional vector space. However, KGE models are known to suffer from data sparsity, and the symbolic system helps to overcome this limitation. The proposed approach discovers knowledge given a target prediction in a KG and extracts unknown implicit information related to the target prediction. As a proof of concept, we have implemented the neuro-symbolic system on top of a KG for lung cancer to predict polypharmacy treatment effectiveness. The symbolic system implements a deductive system to deduce pharmacokinetic drug-drug interactions encoded as a set of rules in a Datalog program. Additionally, the sub-symbolic system predicts treatment effectiveness using a KGE model, which preserves the KG structure. An ablation study on the components of our approach is conducted, considering state-of-the-art KGE methods. The observed results provide evidence for the benefits of the neuro-symbolic integration of our approach, where the neuro-symbolic system for an abstract target prediction exhibits improved results. The enhancement of the results occurs because the symbolic system increases the prediction capacity of the sub-symbolic system. Moreover, the proposed neuro-symbolic artificial intelligence system is evaluated in the Industry 4.0 (I4.0) domain, demonstrating its effectiveness in determining relatedness among standards and analyzing their properties to detect unknown relations in the I4.0KG. The results achieved allow us to conclude that the proposed neuro-symbolic approach for an abstract target prediction improves the prediction capability of KGE models by minimizing data sparsity in KGs.
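    As an illustration of the sub-symbolic side, the sketch below scores candidate triples with a TransE-style distance. TransE is used here only as a common example of a KGE model, not necessarily the one used in the thesis, and the entities, relations, and embeddings are synthetic.

```python
import numpy as np

# Toy entity and relation embeddings; in practice these would be learned from a KG.
rng = np.random.default_rng(7)
dim = 8
entities = {e: rng.normal(size=dim)
            for e in ["drug_A", "drug_B", "effective_treatment"]}
relations = {r: rng.normal(size=dim)
             for r in ["interacts_with", "has_outcome"]}

def transe_score(head, relation, tail):
    """TransE plausibility: smaller ||h + r - t|| means a more plausible triple."""
    return -np.linalg.norm(entities[head] + relations[relation] - entities[tail])

# Rank hypothetical candidate triples for a target prediction.
candidates = [("drug_A", "has_outcome", "effective_treatment"),
              ("drug_B", "has_outcome", "effective_treatment")]
for h, r, t in sorted(candidates, key=lambda x: transe_score(*x), reverse=True):
    print(f"{h} --{r}--> {t}: score {transe_score(h, r, t):.3f}")
```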

    Machine Learning Algorithm for the Scansion of Old Saxon Poetry

    Several scholars have designed tools to perform the automatic scansion of poetry in many languages, but none of these tools deal with Old Saxon or Old English. This project is a first attempt to create a tool for these languages. We implemented a Bidirectional Long Short-Term Memory (BiLSTM) model to perform the automatic scansion of Old Saxon and Old English poems. Since this model uses supervised learning, we manually annotated the Heliand manuscript and used the resulting corpus as a labeled dataset to train the model. The evaluation of the algorithm's performance yielded an accuracy of 97% and a weighted average of 99% for precision, recall, and F1 score. In addition, we tested the model with some verses from the Old Saxon Genesis and some from The Battle of Brunanburh, and observed that the model predicted almost all Old Saxon metrical patterns correctly but misclassified the majority of the Old English input verses.
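    A minimal sketch of a BiLSTM sequence tagger of the kind described above, assuming PyTorch: each encoded symbol of a verse receives a metrical label. The architecture, vocabulary size, and label set are placeholders rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class ScansionBiLSTM(nn.Module):
    def __init__(self, vocab_size=64, num_labels=4, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # 2 * hidden_dim: forward and backward hidden states are concatenated.
        self.classifier = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, token_ids):                 # (batch, seq_len)
        states, _ = self.bilstm(self.embed(token_ids))
        return self.classifier(states)            # (batch, seq_len, num_labels)

model = ScansionBiLSTM()
dummy_verse = torch.randint(0, 64, (1, 12))       # one verse of 12 encoded symbols
print(model(dummy_verse).argmax(dim=-1))          # predicted metrical label per symbol
```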