    A knowledge graph-supported information fusion approach for multi-faceted conceptual modelling

    It has become progressively more evident that a single data source cannot comprehensively capture the variability of a multi-faceted concept, such as product design, driving behaviour or human trust, which has diverse semantic orientations. Multi-faceted conceptual modelling is therefore often conducted on multi-sourced data covering the indispensable aspects, and information fusion is frequently applied to cope with the high dimensionality and data heterogeneity. The consideration of intra-facet relationships is equally indispensable. In this context, a knowledge graph (KG), which can aggregate the relationships of multiple aspects through semantic associations, is exploited to facilitate multi-faceted conceptual modelling based on heterogeneous and semantically rich data. First, rules of fault mechanism are extracted from the existing domain knowledge repository, and node attributes are extracted from multi-sourced data. Through abstraction and tokenisation of the existing knowledge repository and the concept-centric data, the fault-mechanism rules are symbolised and integrated with the node attributes, which serve as the entities of the concept-centric knowledge graph (CKG). Subsequently, the process data are transformed into a stack of temporal graphs under the CKG backbone. Lastly, a graph convolutional network (GCN) model is applied to extract temporal and attribute-correlation features from the graphs, and a temporal convolutional network (TCN) is built for conceptual modelling using these features. The effectiveness of the proposed approach and the close synergy between the KG-supported approach and multi-faceted conceptual modelling are demonstrated and substantiated in a case study using real-world data.
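
    To make the GCN-then-TCN pipeline concrete, the following is a minimal sketch in plain PyTorch, assuming a normalized CKG adjacency matrix and a stack of temporal node-attribute snapshots; all layer sizes, the kernel width, and the mean-pooling step are illustrative assumptions rather than the paper's configuration.

```python
# A minimal sketch of the GCN -> TCN pipeline described above, in plain
# PyTorch. Graph sizes, feature dimensions, and layer widths are
# illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: H' = relu(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):
        # a_hat: (nodes, nodes) normalized adjacency from the CKG backbone
        # h:     (nodes, in_dim) node-attribute features
        return torch.relu(self.linear(a_hat @ h))

class TemporalBlock(nn.Module):
    """Dilated causal 1D convolution, the basic TCN building block."""
    def __init__(self, channels, dilation):
        super().__init__()
        self.pad = (3 - 1) * dilation  # kernel size 3, left-pad only
        self.conv = nn.Conv1d(channels, channels, kernel_size=3, dilation=dilation)

    def forward(self, x):
        # x: (batch, channels, time); pad on the left to stay causal
        x = nn.functional.pad(x, (self.pad, 0))
        return torch.relu(self.conv(x))

class KGConceptModel(nn.Module):
    """GCN per temporal graph snapshot, then a TCN over the sequence."""
    def __init__(self, attr_dim, hidden=64, levels=3):
        super().__init__()
        self.gcn = GCNLayer(attr_dim, hidden)
        self.tcn = nn.Sequential(*[TemporalBlock(hidden, 2 ** i) for i in range(levels)])
        self.head = nn.Linear(hidden, 1)  # concept score per time step

    def forward(self, a_hat, snapshots):
        # snapshots: (time, nodes, attr_dim) stack of temporal graphs
        pooled = torch.stack([self.gcn(a_hat, h).mean(dim=0) for h in snapshots])
        seq = pooled.T.unsqueeze(0)            # (1, hidden, time)
        return self.head(self.tcn(seq)[0].T)   # (time, 1)

# Toy run: 10 CKG nodes with 8 attributes over 20 time steps.
a = torch.eye(10)  # stand-in for the normalized CKG adjacency
x = torch.randn(20, 10, 8)
print(KGConceptModel(attr_dim=8)(a, x).shape)  # torch.Size([20, 1])
```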

    The DO-KB Knowledgebase: a 20-year journey developing the disease open science ecosystem.

    In 2003, the Human Disease Ontology (DO, https://disease-ontology.org/) was established at Northwestern University. In the intervening 20 years, the DO has expanded to become a highly utilized disease knowledge resource. Serving as the nomenclature and classification standard for human diseases, the DO provides a stable, etiology-based structure integrating mechanistic drivers of human disease. Over the past two decades the DO has grown from a collection of clinical vocabularies into an expertly curated semantic resource of over 11,300 common and rare diseases, linking disease concepts through more than 37,000 vocabulary cross mappings (v2023-08-08). Here, we introduce the recently launched DO Knowledgebase (DO-KB), which expands the DO's representation of the diseaseome and enhances the findability, accessibility, interoperability and reusability (FAIR) of disease data through a new SPARQL service and a new faceted search interface. The DO-KB is an integrated data system, built upon the DO's semantic disease knowledge backbone, with resources that expose and connect the DO's semantic knowledge with disease-related data across Open Linked Data resources. This update includes descriptions of efforts to assess the DO's global impact and improvements to data quality and content, with emphasis on changes in the last two years.
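
    As an illustration of how the new SPARQL service might be used, here is a minimal sketch with the SPARQLWrapper Python library; the endpoint URL is a placeholder assumption, and the query simply fetches the label and direct parents of the root DOID term.

```python
# A minimal sketch of querying a DO-KB-style SPARQL service with the
# SPARQLWrapper library. The endpoint URL below is a placeholder
# assumption; substitute the service address published by the DO-KB.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://example.org/do-kb/sparql"  # hypothetical endpoint

query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX obo:  <http://purl.obolibrary.org/obo/>

# Find the label and direct parents of a DOID term (DOID:4, 'disease').
SELECT ?label ?parent WHERE {
  obo:DOID_4 rdfs:label ?label .
  OPTIONAL { obo:DOID_4 rdfs:subClassOf ?parent . }
}
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(query)
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["label"]["value"], row.get("parent", {}).get("value", ""))
```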

    Semantic rules for capability matchmaking in the context of manufacturing system design and reconfiguration

    To survive in dynamic markets and meet changing requirements, manufacturing companies must rapidly design new production systems and reconfigure existing ones. The current designer-centric search for feasible resources from various catalogues is a time-consuming and laborious process, which limits the consideration of many alternative resource solutions. This article presents the implementation of an automatic capability matchmaking approach and software, which searches through resource catalogues to find feasible resources and resource combinations for the processing requirements of the product. The approach is based on formal ontology-based descriptions of both products and resources and on the semantic rules used to find the matches. The article focuses on these rules, implemented with the SPIN rule language, which relate to 1) inferring and asserting the parameters of the combined capabilities of combined resources and 2) comparing the product characteristics against the capability parameters of a resource (combination). The presented case study demonstrates that the matchmaking system can find feasible matches; however, a human designer must validate the result when making the final resource selection. The approach should speed up system design and reconfiguration planning and allow more alternative solutions to be considered than traditional manual design approaches.
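
    The following is a minimal sketch of the two rule types in Python, assuming dictionary-based stand-ins for the ontology descriptions; the capability names, parameters, and the max-merge combination rule are illustrative assumptions, not the article's SPIN rules.

```python
# A minimal sketch of capability matchmaking with simplified, dictionary-
# based stand-ins for the paper's ontology descriptions and SPIN rules.
# All names and parameters are illustrative, not from the paper.
from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class Resource:
    name: str
    capabilities: dict  # capability name -> {parameter: value}

def combine(resources):
    """Rule 1 analogue: merge capabilities of combined resources,
    keeping the best (max) value per shared parameter."""
    merged = {}
    for r in resources:
        for cap, params in r.capabilities.items():
            slot = merged.setdefault(cap, {})
            for key, value in params.items():
                slot[key] = max(slot.get(key, value), value)
    return merged

def matches(requirement, capabilities):
    """Rule 2 analogue: every required parameter must be met."""
    for cap, params in requirement.items():
        offered = capabilities.get(cap)
        if offered is None or any(offered.get(k, 0) < v for k, v in params.items()):
            return False
    return True

catalogue = [
    Resource("gripper-A", {"grasping": {"payload_kg": 2.0}}),
    Resource("robot-B", {"moving": {"reach_mm": 900.0}}),
]
requirement = {"grasping": {"payload_kg": 1.5}, "moving": {"reach_mm": 800.0}}

# Search single resources and pairs, as the matchmaking software would.
for size in (1, 2):
    for combo in combinations(catalogue, size):
        if matches(requirement, combine(combo)):
            print("feasible:", [r.name for r in combo])
```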

    Computer Vision and Architectural History at Eye Level: Mixed Methods for Linking Research in the Humanities and in Information Technology

    Information on the history of architecture is embedded in our daily surroundings, in vernacular and heritage buildings and in physical objects, photographs and plans. Historians study these tangible and intangible artefacts and the communities that built and used them. Valuable insights are thus gained into the past and the present, and they also provide a foundation for designing the future. Given that our understanding of the past is limited by the inadequate availability of data, the article demonstrates that advanced computer tools can help gain more, and better-linked, data from the past. Computer vision can make a decisive contribution to the identification of image content in historical photographs. This application is particularly interesting for architectural history, where visual sources play an essential role in understanding the built environment of the past, yet a lack of reliable metadata often hinders the use of these materials. Automated recognition contributes to making a variety of image sources usable for research.
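
    As a hedged illustration of what such automated recognition can look like, the following sketch tags a scanned photograph with a generic pretrained classifier from torchvision; the ImageNet labels and the file name are stand-in assumptions, since the article's own pipeline would be trained for architectural content.

```python
# A minimal sketch of tagging a historical photograph with a pretrained
# image classifier, assuming torchvision and a local scan. The model and
# its ImageNet labels are generic stand-ins, not the article's method.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()

image = Image.open("historical_photo.jpg").convert("RGB")  # hypothetical scan
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

# Emit the top-5 labels as candidate metadata tags for the photo archive.
for prob, idx in zip(*probs.topk(5)):
    print(f"{weights.meta['categories'][idx]}: {prob:.2f}")
```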

    Machine Learning Algorithm for the Scansion of Old Saxon Poetry

    Several scholars have designed tools to perform the automatic scansion of poetry in many languages, but none of these tools deals with Old Saxon or Old English. This project aims to be a first attempt at creating a tool for these languages. We implemented a Bidirectional Long Short-Term Memory (BiLSTM) model to perform the automatic scansion of Old Saxon and Old English poems. Since this model uses supervised learning, we manually annotated the Heliand manuscript and used the resulting corpus as the labeled dataset to train the model. The evaluation of the algorithm's performance reached an accuracy of 97% and a weighted average of 99% for precision, recall and F1 score. In addition, we tested the model on some verses from the Old Saxon Genesis and some from The Battle of Brunanburh, and we observed that the model predicted almost all Old Saxon metrical patterns correctly but misclassified the majority of the Old English input verses.
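
    A minimal sketch of such a BiLSTM tagger in PyTorch follows; the syllable vocabulary, the three-label metrical scheme, and all layer sizes are illustrative assumptions rather than the project's actual configuration.

```python
# A minimal sketch of a BiLSTM tagger for metrical scansion, in PyTorch.
# Vocabulary, label set (e.g. lift/dip per syllable), and sizes are
# illustrative assumptions; the project's actual features may differ.
import torch
import torch.nn as nn

class BiLSTMScanner(nn.Module):
    def __init__(self, vocab_size, n_labels, embed_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_labels)  # 2x for both directions

    def forward(self, syllable_ids):
        # syllable_ids: (batch, seq_len) integer-encoded syllables
        h, _ = self.lstm(self.embed(syllable_ids))
        return self.out(h)  # (batch, seq_len, n_labels) label scores

# Toy run: one 8-syllable half-line, 3 metrical labels (lift/dip/pad).
model = BiLSTMScanner(vocab_size=1000, n_labels=3)
verse = torch.randint(1, 1000, (1, 8))
print(model(verse).argmax(dim=-1))  # predicted label per syllable
```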

    Undergraduate and Graduate Course Descriptions, 2023 Spring

    Wright State University undergraduate and graduate course descriptions from Spring 2023.

    EVKG: An Interlinked and Interoperable Electric Vehicle Knowledge Graph for Smart Transportation System

    Over the past decade, the electric vehicle industry has experienced unprecedented growth and diversification, resulting in a complex ecosystem. To effectively manage this multifaceted field, we present an EV-centric knowledge graph (EVKG) as a comprehensive, cross-domain, extensible, and open geospatial knowledge management system. The EVKG encapsulates essential EV-related knowledge, including EV adoption, electric vehicle supply equipment, and the electricity transmission network, to support decision-making related to EV technology development, infrastructure planning, and policy-making by providing timely and accurate information and analysis. To enrich and contextualize the EVKG, we integrate EV-relevant ontology modules developed from existing well-known knowledge graphs and ontologies. This integration enables interoperability with other knowledge graphs in the Linked Open Data Cloud, enhancing the EVKG's value as a knowledge hub for EV decision-making. Using six competency questions, we demonstrate how the EVKG can be used to answer various types of EV-related questions, providing critical insights into the EV ecosystem. Our EVKG provides an efficient and effective approach for managing the complex and diverse EV industry. By consolidating critical EV-related knowledge into a single, easily accessible resource, it supports decision-makers in making informed choices about EV technology development, infrastructure planning, and policy-making. As a flexible and extensible platform, the EVKG can accommodate a wide range of data sources, enabling it to evolve alongside the rapidly changing EV landscape.
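
    To illustrate how a competency question might be answered over such a graph, here is a minimal sketch using rdflib; the namespace, classes, and triples are illustrative assumptions, not the EVKG's published ontology modules.

```python
# A minimal sketch of answering a competency question over a tiny,
# hand-built EV graph with rdflib. The namespace and triples below are
# illustrative assumptions, not the EVKG's actual ontology modules.
from rdflib import Graph, Literal, Namespace, RDF

EV = Namespace("http://example.org/evkg/")  # hypothetical namespace
g = Graph()

# Toy facts: two charging stations and the connector types they offer.
g.add((EV.station1, RDF.type, EV.ChargingStation))
g.add((EV.station1, EV.hasConnector, EV.CCS))
g.add((EV.station1, EV.locatedInZip, Literal("48104")))
g.add((EV.station2, RDF.type, EV.ChargingStation))
g.add((EV.station2, EV.hasConnector, EV.CHAdeMO))
g.add((EV.station2, EV.locatedInZip, Literal("48105")))

# Competency-question analogue: which stations in ZIP 48104 offer CCS?
results = g.query("""
    PREFIX ev: <http://example.org/evkg/>
    SELECT ?station WHERE {
      ?station a ev:ChargingStation ;
               ev:hasConnector ev:CCS ;
               ev:locatedInZip "48104" .
    }
""")
for row in results:
    print(row.station)
```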

    Large Data-to-Text Generation

    This thesis presents a domain-driven approach to sports game summarization, a specific instance of large data-to-text generation (DTG). We first address the data fidelity issue in the Rotowire dataset by supplementing existing input records, demonstrating larger relative improvements compared to previously proposed purification schemes. As this method further increases the total number of input records, we alternatively formulate the problem as a multimodal one (i.e. visual data-to-text), discussing potential advantages over purely textual approaches and studying its effectiveness for future expansion. We work exclusively with pre-trained end-to-end transformers throughout, allowing us to evaluate the efficacy of sparse attention and multimodal encoder-decoders in DTG and providing appropriate benchmarks for future work. To automatically evaluate the statistical correctness of generated summaries, we also extend prior work on automatic relation extraction and build an updated pipeline that incorporates small amounts of human-annotated data, which are quickly inflated via data augmentation. By formulating this in a "text-to-text" fashion, we are able to take advantage of LLMs and achieve significantly higher precision and recall than previous methods while tracking three times the number of unique relations. Our updated models are more consistent and reliable, incorporating human-verified data partitions into the training and evaluation process.
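
    The following is a minimal sketch of the text-to-text formulation using Hugging Face transformers; the base checkpoint, prompt format, and output schema are illustrative assumptions, since the thesis relies on a fine-tuned model.

```python
# A minimal sketch of "text-to-text" relation extraction for checking a
# generated summary's statistics, assuming Hugging Face transformers and
# a T5-style model. The prompt format and model choice are illustrative
# assumptions; the thesis's fine-tuned model and schema may differ.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "t5-small"  # stand-in; a checkpoint fine-tuned for RE would be used
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

sentence = "LeBron James scored 28 points and grabbed 11 rebounds."
# Cast extraction as text generation: ask for (entity, relation, value) triples.
prompt = f"extract relations: {sentence}"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
triples = tokenizer.decode(outputs[0], skip_special_tokens=True)

# With a fine-tuned checkpoint this would read something like
# "LeBron James | PTS | 28 ; LeBron James | REB | 11".
print(triples)
```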