
    Using ontology in query answering systems: Scenarios, requirements and challenges

    Equipped with the ultimate query answering system, computers would finally be in a position to address all our information needs in a natural way. In this paper, we describe how Language and Computing nv (L&C), a developer of ontology-based natural language understanding systems for the healthcare domain, is working towards the ultimate Question Answering (QA) system for healthcare workers. L&C’s company strategy in this area is to design, in a step-by-step fashion, the essential components of such a system, each component solving one part of the total problem while reflecting well-defined needs on the part of our customers. We compare our strategy with the research roadmap proposed by the Question Answering Committee of the National Institute of Standards and Technology (NIST), paying special attention to the role of ontology.

    Extending FAIR to FAIREr: Cognitive Interoperability and the Human Explorability of Data and Metadata

    Making data and metadata FAIR (Findable, Accessible, Interoperable, Reusable) has become an important objective in research and industry, and knowledge graphs and ontologies have been cornerstones in many going-FAIR strategies. In this process, however, the human-actionability of data and metadata has been lost sight of. Here, in the first part, I discuss two issues exemplifying the lack of human-actionability in knowledge graphs, and I suggest adding the Principle of human Explorability to extend FAIR to the FAIREr Guiding Principles. Moreover, in its interoperability framework and as part of its GoingFAIR strategy, the European Open Science Cloud initiative distinguishes between technical, semantic, organizational, and legal interoperability, and I argue for adding cognitive interoperability. In the second part, I provide a short introduction to semantic units and discuss how they increase the human explorability and cognitive interoperability of knowledge graphs. Semantic units structure a knowledge graph into identifiable and semantically meaningful subgraphs, each represented with its own resource that instantiates a corresponding semantic unit class. Three categories of semantic units can be distinguished: statement units model individual propositions, compound units are semantically meaningful collections of semantic units, and question units model questions that translate into queries. I conclude by discussing how semantic units provide a framework for the development of innovative user interfaces that support exploring and accessing information in the graph by reducing its complexity to what currently interests the user, thereby significantly increasing the cognitive interoperability and thus human-actionability of knowledge graphs.
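    The idea of semantic units as identifiable subgraphs can be illustrated with a minimal sketch. All class names, identifiers, and triples below are illustrative assumptions, not taken from the paper; the point is only that each proposition lives in its own addressable unit, and compound units group such units so an interface can show just the subgraph a user currently cares about.

    ```python
    # Illustrative sketch of semantic units (all names are assumptions):
    # a statement unit holds one proposition as its own identifiable
    # subgraph; a compound unit is a meaningful collection of units.

    class StatementUnit:
        """A single proposition, stored as one triple under its own identifier."""
        def __init__(self, unit_id, subject, predicate, obj):
            self.unit_id = unit_id
            self.triples = [(subject, predicate, obj)]

    class CompoundUnit:
        """A semantically meaningful collection of semantic units."""
        def __init__(self, unit_id, members):
            self.unit_id = unit_id
            self.members = members

        def triples(self):
            # The compound unit's subgraph is the union of its members' subgraphs.
            return [t for m in self.members for t in m.triples]

    s1 = StatementUnit("unit:1", "ex:Alice", "ex:authorOf", "ex:Paper42")
    s2 = StatementUnit("unit:2", "ex:Paper42", "ex:publishedIn", "ex:JournalX")
    paper_info = CompoundUnit("unit:3", [s1, s2])

    # A user interface can display only the subgraph behind one unit,
    # reducing the graph's complexity to what currently interests the user.
    print(paper_info.triples())
    ```

    A query-driven interface would resolve a unit identifier such as `unit:3` to exactly this two-triple subgraph instead of confronting the user with the whole graph.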

    Collaborative recommendations with content-based filters for cultural activities via a scalable event distribution platform

    Nowadays, most people have limited leisure time and the offer of (cultural) activities to spend this time is enormous. Consequently, picking the most appropriate events becomes increasingly difficult for end-users. This complexity of choice reinforces the necessity of filtering systems that assist users in finding and selecting relevant events. Whereas traditional filtering tools enable, for example, keyword-based or filtered searches, innovative recommender systems draw on user ratings, preferences, and metadata describing the events. Existing collaborative recommendation techniques, developed for suggesting web-shop products or audio-visual content, have difficulties with sparse rating data and cannot cope at all with event-specific restrictions like availability, time, and location. Moreover, aggregating, enriching, and distributing these events are additional requisites for an optimal communication channel. In this paper, we propose a highly scalable event recommendation platform which considers event-specific characteristics. Personal suggestions are generated by an advanced collaborative filtering algorithm, which is more robust on sparse data by extending user profiles with presumed future consumptions. The events, which are described using an RDF/OWL representation of the EventsML-G2 standard, are categorized and enriched via smart indexing and linked open data sets. This metadata model enables additional content-based filters, which consider event-specific characteristics, on the recommendation list. The integration of these different functionalities is realized by a scalable and extendable bus architecture. Finally, focus group conversations were organized with external experts, cultural mediators, and potential end-users to evaluate the event distribution platform and investigate the possible added value of recommendations for cultural participation.
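    The combination of collaborative scoring with a content-based post-filter can be sketched as follows. This is not the paper's algorithm: the rating data, event metadata schema, and the choice of cosine similarity are all illustrative assumptions. It only demonstrates why a content filter on event metadata (location, date) is needed on top of collaborative scores, which by themselves ignore event-specific restrictions.

    ```python
    # Illustrative sketch (not the paper's algorithm): user-based
    # collaborative filtering, followed by a content-based filter on
    # event metadata that collaborative scores alone cannot express.
    from datetime import date
    from math import sqrt

    ratings = {  # user -> {event_id: rating}; hypothetical data
        "ann": {"e1": 5, "e2": 3},
        "bob": {"e1": 4, "e2": 3, "e3": 5},
    }
    events = {  # event metadata; hypothetical schema
        "e3": {"city": "Ghent", "date": date(2030, 6, 1)},
    }

    def cosine(u, v):
        """Cosine similarity between two sparse rating vectors."""
        shared = set(u) & set(v)
        if not shared:
            return 0.0
        num = sum(u[i] * v[i] for i in shared)
        den = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
        return num / den

    def recommend(user, city, today):
        # Collaborative step: score unseen events by similarity-weighted ratings.
        scores = {}
        for other, their in ratings.items():
            if other == user:
                continue
            sim = cosine(ratings[user], their)
            for event, r in their.items():
                if event not in ratings[user]:
                    scores[event] = scores.get(event, 0.0) + sim * r
        # Content-based post-filter: keep only events in the requested
        # city that have not yet taken place.
        return [e for e, _ in sorted(scores.items(), key=lambda x: -x[1])
                if events[e]["city"] == city and events[e]["date"] >= today]

    print(recommend("ann", "Ghent", date(2030, 1, 1)))  # ['e3']
    ```

    In the platform described above, the filter would instead draw on the RDF/OWL event metadata, but the two-stage shape (score collaboratively, then filter on event-specific characteristics) is the same.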

    Decentralized provenance-aware publishing with nanopublications

    Publication and archival of scientific results is still commonly considered the responsibility of classical publishing companies. Classical forms of publishing, however, which center around printed narrative articles, no longer seem well-suited in the digital age. In particular, there exist currently no efficient, reliable, and agreed-upon methods for publishing scientific datasets, which have become increasingly important for science. In this article, we propose to design scientific data publishing as a web-based bottom-up process, without top-down control of central authorities such as publishing companies. Based on a novel combination of existing concepts and technologies, we present a server network to decentrally store and archive data in the form of nanopublications, an RDF-based format to represent scientific data. We show how this approach allows researchers to publish, retrieve, verify, and recombine datasets of nanopublications in a reliable and trustworthy manner, and we argue that this architecture could be used as a low-level data publication layer to serve the Semantic Web in general. Our evaluation of the current network shows that this system is efficient and reliable.
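    The structure of a nanopublication can be sketched without any RDF tooling: a nanopublication bundles an assertion with its provenance and publication information as separate named graphs, linked together by a head graph. The URIs and triples below are illustrative placeholders, and the dict-of-graphs representation is a simplification of the actual RDF serialization.

    ```python
    # Sketch of the nanopublication structure (illustrative URIs and
    # triples): an assertion plus its provenance and publication info,
    # kept as separate named graphs linked from a head graph.

    def make_nanopub(np_uri, assertion, provenance, pubinfo):
        """Bundle the three content graphs with a head graph that
        links them together, mirroring the nanopublication model."""
        return {
            np_uri + "#head": [
                (np_uri, "np:hasAssertion", np_uri + "#assertion"),
                (np_uri, "np:hasProvenance", np_uri + "#provenance"),
                (np_uri, "np:hasPublicationInfo", np_uri + "#pubinfo"),
            ],
            np_uri + "#assertion": assertion,
            np_uri + "#provenance": provenance,
            np_uri + "#pubinfo": pubinfo,
        }

    np1 = make_nanopub(
        "http://example.org/np1",
        # The scientific claim itself:
        assertion=[("ex:GeneX", "ex:isAssociatedWith", "ex:DiseaseY")],
        # Where the claim came from:
        provenance=[("http://example.org/np1#assertion",
                     "prov:wasDerivedFrom", "ex:Study7")],
        # Who published it, and when:
        pubinfo=[("http://example.org/np1", "dct:creator", "ex:SomeAuthor")],
    )

    print(len(np1))  # 4 named graphs: head, assertion, provenance, pubinfo
    ```

    Because each nanopublication is a small self-contained bundle with explicit provenance, a decentralized server network can store, replicate, and verify them independently of any central publisher.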

    Combining Geospatial and Temporal Ontologies

    Publicly available ontologies are growing in number at present. These ontologies describe entities in a domain and the relations among these entities. This thesis describes a method to automatically combine a pair of orthogonal ontologies using cross products. A geospatial ontology and a temporal ontology are combined in this work. Computing the cross product of the geospatial and the temporal ontologies gives a complete set of pairwise combinations of terms from the two ontologies. This method offers researchers the benefit of using ontologies that already exist and are available, rather than building new ontologies for areas outside their scope of expertise. The resulting framework describes a geospatial domain over all possible temporal granularities or levels, allowing one domain to be understood from the perspective of another domain. Further queries on the framework help a user to make higher-order inferences about a domain. In this work, Protege, an open-source ontology editor and knowledge base tool, is used to model ontologies. Protege supports the creation, visualization, and manipulation of ontologies in various formats including XML (Extensible Markup Language). Use of standard and extensible languages like XML allows sharing of data across different information systems, and thus supports reuse of these ontologies. Both the geospatial ontology and the temporal ontology are represented in Protege. This thesis demonstrates the usefulness of this integrated spatio-temporal framework for reasoning about geospatial domains. SQL queries can be applied to the cross product to return to the user different kinds of information about their domain. For example, a geospatial term Library can be combined with all terms from the temporal ontology to consider Library over all possible kinds of times, including those that might have been overlooked during previous analyses. Visualizations of cross product spaces using Graphviz provide a means for displaying the geospatial-temporal terms as well as the different relations that link these terms. This visualization step also highlights the structure of the cross product for users. In order to generate a more tractable cross product for analysis purposes, methods for filtering terms from the cross product are also introduced. Filtering results in a more focused understanding of the spatio-temporal framework.
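    The cross-product operation itself is straightforward to sketch. The term lists below are illustrative (the thesis uses full ontologies, not flat lists), but the example shows the two operations the abstract describes: pairing every geospatial term with every temporal term, and then filtering the product down to a tractable subset.

    ```python
    # Cross product of two orthogonal ontologies, sketched with
    # illustrative term lists standing in for the real ontologies.
    from itertools import product

    geospatial = ["Library", "Park", "Campus"]
    temporal = ["Hour", "Day", "Month", "Year"]

    # Complete set of pairwise combinations of terms from both ontologies.
    cross = list(product(geospatial, temporal))

    # "Library" considered over all possible temporal granularities,
    # as in the abstract's example.
    library_over_time = [pair for pair in cross if pair[0] == "Library"]

    # Filtering the cross product yields a more tractable, focused view.
    coarse_only = [pair for pair in cross if pair[1] in ("Month", "Year")]

    print(len(cross), library_over_time)
    ```

    With 3 geospatial and 4 temporal terms the product has 12 pairs; real ontologies would also carry over the relations between terms, which the thesis visualizes with Graphviz.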

    RoboPlanner: Towards an Autonomous Robotic Action Planning Framework for Industry 4.0

    Autonomous robots are being increasingly integrated into manufacturing, supply chain, and retail industries due to the twin advantages of improved throughput and adaptivity. In order to handle complex Industry 4.0 tasks, autonomous robots require robust action plans that can self-adapt to runtime changes. A further requirement is the efficient implementation of knowledge bases that may be queried during planning and execution. In this paper, we propose RoboPlanner, a framework to generate action plans in autonomous robots. In RoboPlanner, we model the knowledge of world models, robotic capabilities, and task templates using knowledge property graphs and graph databases. Design-time queries and robotic perception are used to enable intelligent action planning. At runtime, integrity constraints on world model observations are used to update knowledge bases. We demonstrate these solutions on autonomous picker robots deployed in Industry 4.0 warehouses.
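    A design-time query over a knowledge property graph might look like the following sketch. The node labels, properties, and the matching rule are all hypothetical (the paper uses graph databases, not Python dicts); the sketch only illustrates the idea of matching a task template's requirements against stored robot capabilities before planning.

    ```python
    # Hypothetical property-graph sketch (names and schema are
    # assumptions, not from the paper): a design-time query matches
    # task templates to robot capabilities in the knowledge base.

    nodes = {
        "pick_item": {"label": "TaskTemplate", "requires": "grasping"},
        "arm_v2":    {"label": "Capability", "provides": "grasping"},
        "lidar":     {"label": "Capability", "provides": "mapping"},
    }

    def capabilities_for(task):
        """Design-time query: which capabilities provide what the task requires?"""
        need = nodes[task]["requires"]
        return [name for name, props in nodes.items()
                if props["label"] == "Capability"
                and props.get("provides") == need]

    print(capabilities_for("pick_item"))  # ['arm_v2']
    ```

    At runtime, the same store would additionally be updated from perception, with integrity constraints rejecting observations that contradict the world model.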

    A Web GIS-based Integration of 3D Digital Models with Linked Open Data for Cultural Heritage Exploration

    This PhD project explores how geospatial semantic web concepts, 3D web-based visualisation, digital interactive maps, and cloud computing concepts could be integrated to enhance digital cultural heritage exploration; to offer long-term archiving and dissemination of 3D digital cultural heritage models; and to better interlink heterogeneous and sparse cultural heritage data. The research findings were disseminated via four peer-reviewed journal articles and a conference article presented at the GISTAM 2020 conference (which received the ‘Best Student Paper Award’).

    Knowledge Graph Building Blocks: An easy-to-use Framework for developing FAIREr Knowledge Graphs

    Knowledge graphs and ontologies provide promising technical solutions for implementing the FAIR Principles for Findable, Accessible, Interoperable, and Reusable data and metadata. However, they also come with their own challenges. Nine such challenges are discussed and associated with the criterion of cognitive interoperability and the specific FAIREr principles (FAIR + Explorability raised) that they fail to meet. We introduce an easy-to-use, open-source knowledge graph framework that is based on knowledge graph building blocks (KGBBs). KGBBs are small information modules for knowledge processing, each based on a specific type of semantic unit. By interrelating several KGBBs, one can specify a KGBB-driven FAIREr knowledge graph. Besides implementing semantic units, the KGBB Framework clearly distinguishes and decouples an internal in-memory data model from data storage, data display, and data access/export models. We argue that this decoupling is essential for solving many problems of knowledge management systems. We discuss the architecture of the KGBB Framework as we envision it, comprising (i) an openly accessible KGBB-Repository for different types of KGBBs; (ii) a KGBB-Engine for managing and operating FAIREr knowledge graphs (including automatic provenance tracking, editing changelog, and versioning of semantic units); (iii) a repository for KGBB-Functions; and (iv) a low-code KGBB-Editor with which domain experts can create new KGBBs and specify their own FAIREr knowledge graph without having to think about semantic modelling. We conclude with a discussion of the nine challenges and how the KGBB Framework provides solutions for the issues they raise. While most of what we discuss here is entirely conceptual, we can point to two prototypes that demonstrate the in-principle feasibility of using semantic units and KGBBs to manage and structure knowledge graphs.