10 research outputs found

    Supporting requirement elicitation and ontology testing in knowledge graph engineering

    Knowledge graphs and ontologies are closely related concepts in the field of knowledge representation. In recent years, knowledge graphs have gained increasing popularity and serve as essential components in many knowledge engineering projects that view them as crucial to their success. The conceptual foundation of a knowledge graph is provided by ontologies. Ontology modeling is an iterative engineering process consisting of steps such as the elicitation and formalization of requirements and the development, testing, refactoring, and release of the ontology. Ontology testing is a crucial and occasionally overlooked step of this process due to the lack of integrated tools to support it. As a result of this gap in the state of the art, ontology testing is carried out manually, which requires considerable time and effort from ontology engineers. A similar lack of tool support affects the requirement elicitation process. In this respect, the growing adoption and accessibility of knowledge graphs allow automated tools to be developed that assist with the elicitation of requirements from such a complementary source of data. This doctoral research therefore focuses on developing methods and tools that support the requirement elicitation and testing steps of an ontology engineering process. To support ontology testing, we have developed XDTesting, a web application integrated with the GitHub platform that serves as an ontology testing manager. Concurrently, to support the elicitation and documentation of competency questions, we have defined and implemented RevOnt, a method to extract competency questions from knowledge graphs. Both methods are evaluated through their implementation, and the results are promising.
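    As an illustration of the kind of testing such a tool manages, a competency question can be formalised as a SPARQL query and run as an automated test against the ontology and sample data. The sketch below assumes rdflib and uses a hypothetical music-domain namespace and file name; it is not taken from XDTesting or RevOnt.

        # Hedged sketch: a competency question ("Which compositions were
        # performed at a given event?") expressed as a SPARQL-based test.
        # The namespace, property names, and file are illustrative assumptions.
        from rdflib import Graph

        g = Graph()
        g.parse("ontology-and-sample-data.ttl", format="turtle")  # hypothetical file

        CQ1 = """
        PREFIX ex: <https://example.org/music#>
        SELECT ?composition WHERE {
          ?performance a ex:Performance ;
                       ex:atEvent ex:SomeEvent ;
                       ex:performs ?composition .
        }
        """

        def test_cq1_has_answers():
            # The test passes when the sample data yields at least one answer,
            # i.e. the ontology can express what the competency question asks.
            assert len(list(g.query(CQ1))) > 0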

    Semantic Integration of MIR Datasets with the Polifonia Ontology Network

    Integration between different data formats, and between data belonging to different collections, is an ongoing challenge in the MIR field. Semantic Web tools have proved to be promising resources for making different types of music information interoperable. However, the use of these technologies has so far been limited and scattered in the field. To address this, the Polifonia project is developing an ontological ecosystem that can cover a wide variety of musical aspects (musical features, instruments, emotions, performances). In this paper, we present the Polifonia Ontology Network, an ecosystem that enables and fosters the transition towards semantic MIR.

    A Decentralized Approach to Validating Personal Data Using a Combination of Blockchains and Linked Data

    The objective of this study is to define a model of personal data validation in the context of decentralized systems. The distributed nature of Linked Data, through DBpedia, is integrated with Blockchain data storage in a conceptual model. This model is illustrated through multiple use cases that serve as proofs of concept. We have constructed a set of rules for validating Linked Data and propose to encode them in smart contracts to implement a decentralised data validator. A part of the conceptual workflow is implemented through a web interface using Open BlockChain and DBpedia Spotlight.
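    As a minimal, hedged sketch of one off-chain step in such a workflow, the snippet below annotates a statement with the public DBpedia Spotlight REST API and applies a simple rule (the statement must link to at least one DBpedia resource). The rule and confidence threshold are illustrative assumptions, not the paper's actual rule set or its smart-contract encoding.

        # Sketch of a single Linked Data validation rule using DBpedia Spotlight.
        # Requiring >= 1 resolvable DBpedia entity is an assumption for illustration.
        import requests

        SPOTLIGHT = "https://api.dbpedia-spotlight.org/en/annotate"

        def annotate(text, confidence=0.5):
            resp = requests.get(
                SPOTLIGHT,
                params={"text": text, "confidence": confidence},
                headers={"Accept": "application/json"},
                timeout=10,
            )
            resp.raise_for_status()
            return resp.json().get("Resources", [])

        def claim_links_to_dbpedia(text):
            # The statement passes this rule only if Spotlight links at least
            # one surface form in it to a DBpedia resource.
            return len(annotate(text)) > 0

        print(claim_links_to_dbpedia("Barack Obama was born in Honolulu."))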

    Automated multimodal sensemaking: Ontology-based integration of linguistic frames and visual data

    Frame evocation from visual data is an essential process for multimodal sensemaking, due to the multimodal abstraction provided by frame semantics. However, there is a scarcity of data-driven approaches and tools to automate it. We propose a novel approach for explainable automated multimodal sensemaking that links linguistic frames to their physical visual occurrences, using ontology-based knowledge engineering techniques. We pair linguistic frames evoked from text with their visual occurrences as “framal visual manifestations”. We present a deep ontological analysis of the implicit data model of the Visual Genome image dataset, and its formalization in the novel Visual Sense Ontology (VSO). To enhance the multimodal data from this dataset, we introduce a framal knowledge expansion pipeline that extracts and connects linguistic frames – including values and emotions – to images, using multiple linguistic resources for disambiguation. We then introduce the Visual Sense Knowledge Graph (VSKG), a novel resource: a queryable knowledge graph that enhances the accessibility and comprehensibility of Visual Genome's multimodal data via SPARQL queries. VSKG includes frame visual evocation data, enabling more advanced forms of explicit reasoning, analysis, and sensemaking. Our work represents a significant advancement in the automation of frame evocation and multimodal sensemaking, performed in a fully interpretable and transparent way, with potential applications in fields including knowledge representation, computer vision, and natural language processing.
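    To make the "queryable" claim concrete, the sketch below shows the kind of SPARQL query such a knowledge graph is meant to answer: retrieving the images that visually evoke a given linguistic frame. The file name, namespace, and property names are placeholders, not the actual VSO/VSKG vocabulary.

        # Hedged sketch: querying a local VSKG excerpt with rdflib.
        # vso:, vso:evokesFrame, and vso:frameName are assumed names.
        from rdflib import Graph

        vskg = Graph()
        vskg.parse("vskg-sample.ttl", format="turtle")  # hypothetical local excerpt

        QUERY = """
        PREFIX vso: <https://example.org/vso#>
        SELECT ?image WHERE {
          ?image a vso:Image ;
                 vso:evokesFrame ?frame .
          ?frame vso:frameName "Motion" .
        }
        """

        for row in vskg.query(QUERY):
            print(row.image)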

    D1.3: Pilots development – collaborative methodology and tools (V1.0)

    The deliverable reports on the collaborative methodology and tools for the technical development of the Pilots of the Polifonia Project. Technical development is coordinated by a Technical Board, which designed a methodology inspired by agile software development methodologies and adapted to the needs of a research project consortium. Developers and domain experts are engaged in collaborative workshops in a co-creation process that leads to the identification of task-oriented working groups. These operate autonomously and are associated with Work Package activities. Technical outputs of the activities are collected and harmonised into the Polifonia Ecosystem, a collection of resources for musical cultural heritage preservation and reuse (software, end-user tools, data, requirement specifications, documentation, etc.). The collaborative tools for development are centred on a shared space on GitHub, a Discord server for instant messaging, and a mailing list. The deliverable reports on the initial work conducted on the pilots, focusing in particular on collaboration among consortium partners and the sharing of expertise between domain experts (musicologists, music historians) and technology experts. Finally, the deliverable illustrates preliminary plans for a Polifonia Web Portal, an aggregator of Musical Heritage Knowledge.

    D1.1 Roadmap and pilot requirements 1st version

    Polifonia is driven by ten pilot use cases, which provide both a validation context and the input requirements for the other research and development work packages. The pilots are heterogeneous in terms of knowledge domains, e.g. bells heritage, popular Irish music, the history of music in Bologna, and the influence of music on children. They involve interdisciplinary teams that bring different experience and methodological practices. This report describes the effort made so far, in collaboration and accordance with the Technical Board (see Deliverable 1.3), towards identifying and building a common methodological framework, called the socio-technical roadmap, for developing the ten pilots. Three main tools have been designed and implemented to this end, following a bottom-up approach: a) a story-based methodology that leads musicologists, linguists, music heritage actors, and IT specialists to describe their scientific skills, requests, and goals in a narrative plot; b) interdisciplinary workshops, called Maninpasta, that support the creation and coordination of working groups focusing on specific development tasks; and c) a survey to systematically collect information about the pilots and to facilitate the identification of interconnections between them. After situating this deliverable in Polifonia’s overall architecture, the tools and methods are presented individually and then compared. The objectives achieved and the prospects are finally outlined. Six short appendices make this report self-contained, including a glossary of terms, a description of the ten pilots, and a description of the organisation of Polifonia’s work packages.

    Linked Open Data Validity -- A Technical Report from ISWS 2018

    Linked Open Data (LOD) is the publicly available RDF data on the Web. Each LOD entity is identified by a URI and accessible via HTTP. LOD encodes global-scale knowledge potentially available to any human, as well as to artificial intelligence that may want to benefit from it as background knowledge for supporting its tasks. LOD has emerged as the backbone of applications in diverse fields such as Natural Language Processing, Information Retrieval, Computer Vision, Speech Recognition, and many more. Nevertheless, regardless of the specific tasks that LOD-based tools aim to address, the reuse of such knowledge may be challenging for diverse reasons, e.g. semantic heterogeneity, provenance, and data quality. As aptly stated by Heath et al., "Linked Data might be outdated, imprecise, or simply wrong"; this gives rise to the need to investigate the problem of Linked Data validity. This work reports on a collaborative effort by nine teams of students, guided by an equal number of senior researchers, attending the International Semantic Web Research School (ISWS 2018), to address this investigation from different perspectives, coupled with different approaches to tackle the issue.
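    The report's starting point, that every LOD entity is a dereferenceable HTTP URI, can be sketched in a few lines: fetch the entity with content negotiation and parse the returned RDF, which is where any validity check (freshness, consistency, provenance) would begin. The snippet below is a hedged illustration using DBpedia's Berlin resource, not a method from the report itself.

        # Hedged sketch of the basic LOD access pattern: dereference an entity
        # URI over HTTP with content negotiation and parse the RDF response.
        import requests
        from rdflib import Graph

        uri = "http://dbpedia.org/resource/Berlin"
        resp = requests.get(uri, headers={"Accept": "text/turtle"}, timeout=10)
        resp.raise_for_status()

        g = Graph()
        g.parse(data=resp.text, format="turtle")
        print(f"{len(g)} triples retrieved for {uri}")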
