RDF graph summarization: principles, techniques and applications (tutorial)
The explosion in the amount of RDF data on the Web has led to the need to explore, query and understand such data sources. The task is challenging due to the complex and heterogeneous structure of RDF graphs which, unlike relational databases, do not come with a structure-dictating schema. Summarization has been applied to RDF data to facilitate these tasks. Its purpose is to extract concise and meaningful information from RDF knowledge bases, representing their content as faithfully as possible. There is no single concept of an RDF summary, and not a single but many approaches to building such summaries; the summarization goal and the main computational tools employed for summarizing graphs are the main factors behind this diversity. This tutorial presents a structured analysis and comparison of existing works in the area of RDF summarization; it is based upon a recent survey which we co-authored with colleagues [3]. We present the concepts at the core of each approach and outline their main technical aspects and implementation. We conclude by identifying the most pertinent summarization method for different usage scenarios, and by discussing areas where future effort is needed.
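One widely used family of techniques the tutorial covers is the quotient summary, which collapses nodes that are structurally equivalent. A minimal sketch, grouping subjects by their "characteristic set" (the set of properties they use); the triples and property names below are invented for illustration, not taken from the tutorial:

```python
from collections import defaultdict

def summarize(triples):
    """Collapse subjects with identical property sets into one summary node."""
    # Compute each subject's characteristic set of properties.
    props = defaultdict(set)
    for s, p, o in triples:
        props[s].add(p)
    # Each subject maps to a block identified by its (frozen) property set.
    block_of = {s: frozenset(ps) for s, ps in props.items()}
    # Summary edges connect blocks; non-subject objects go to a leaf block.
    edges = set()
    for s, p, o in triples:
        target = block_of.get(o, frozenset({"<leaf>"}))
        edges.add((block_of[s], p, target))
    return edges

triples = [
    ("alice", "name", "Alice"),
    ("alice", "knows", "bob"),
    ("bob", "name", "Bob"),
    ("bob", "knows", "alice"),
]

summary = summarize(triples)
# alice and bob share the property set {name, knows}, so the summary has a
# single subject node with a "knows" self-loop and a "name" edge to a leaf.
print(len(summary))  # 2 summary edges
```

Real summarizers refine this with type information, inverse edges, and bisimulation, but the core idea is the same: many data nodes, few summary nodes.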
OWL Reasoners still useable in 2023
In a systematic literature and software review, over 100 OWL reasoners/systems were analyzed to see if they would still be usable in 2023. This has never been done in this capacity. OWL reasoners still play an important role in knowledge organisation and management, but the last comprehensive surveys/studies are more than 8 years old. The result of this work is a comprehensive list of 95 standalone OWL reasoners and systems using an OWL reasoner. For each item, information on project pages, source code repositories and related documentation was gathered. The raw research data is provided in a GitHub repository for anyone to use.
Reasoning with Data Flows and Policy Propagation Rules
Data-oriented systems and applications are at the centre of current developments of the World Wide Web. In these scenarios, assessing which policies propagate from the licenses of data sources to the output of a given data-intensive system is an important problem. Both policies and data flows can be described with Semantic Web languages. Although it is possible to define Policy Propagation Rules (PPR) by associating policies with data flow steps, this activity results in a huge number of rules to be stored and managed. In a recent paper, we introduced strategies for reducing the size of a PPR knowledge base by using an ontology of the possible relations between data objects, the Datanode ontology, and applying the (A)AAAA methodology, a knowledge engineering approach that exploits Formal Concept Analysis (FCA). In this article, we investigate whether this reasoning is feasible and how it can be performed. For this purpose, we study the impact of compressing a rule base associated with an inference mechanism on the performance of the reasoning process. Moreover, we report on an extension of the (A)AAAA methodology that includes a coherency check algorithm that makes this reasoning possible. We show how this compression, in addition to being beneficial to the management of the knowledge base, also has a positive impact on the performance and resource requirements of the reasoning process for policy propagation.
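The compression idea can be illustrated in miniature: rather than storing one rule per (policy, relation) pair, relations that propagate exactly the same set of policies are grouped together, mirroring FCA's extent/intent pairing. A toy sketch with invented policy and relation names (the real Datanode ontology has far more relations, and the (A)AAAA methodology is considerably richer):

```python
from collections import defaultdict

# Toy rule base: (policy, relation) means "policy propagates across relation".
rules = {
    ("attribution", "hasCopy"),
    ("attribution", "hasSection"),
    ("share-alike", "hasCopy"),
    ("share-alike", "hasSection"),
    ("no-derivatives", "hasCopy"),
}

def compress(rules):
    """Group relations by the exact set of policies they propagate.

    Each group pairs an 'extent' (relations) with an 'intent' (policies),
    so one compressed rule stands in for many individual ones."""
    policies_of = defaultdict(set)
    for policy, relation in rules:
        policies_of[relation].add(policy)
    groups = defaultdict(set)
    for relation, policies in policies_of.items():
        groups[frozenset(policies)].add(relation)
    return dict(groups)

compressed = compress(rules)
# 5 individual rules become 2 grouped rules.
for policies, relations in sorted(compressed.items(), key=lambda kv: sorted(kv[0])):
    print(sorted(policies), "->", sorted(relations))
```

On a realistic rule base with thousands of (policy, relation) pairs and many shared propagation patterns, this grouping is where the size reduction comes from.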
Decentralized provenance-aware publishing with nanopublications
Publication and archival of scientific results is still commonly considered the responsibility of classical publishing companies. Classical forms of publishing, however, which center around printed narrative articles, no longer seem well-suited in the digital age. In particular, there exist currently no efficient, reliable, and agreed-upon methods for publishing scientific datasets, which have become increasingly important for science. In this article, we propose to design scientific data publishing as a web-based bottom-up process, without top-down control of central authorities such as publishing companies. Based on a novel combination of existing concepts and technologies, we present a server network to decentrally store and archive data in the form of nanopublications, an RDF-based format to represent scientific data. We show how this approach allows researchers to publish, retrieve, verify, and recombine datasets of nanopublications in a reliable and trustworthy manner, and we argue that this architecture could be used as a low-level data publication layer to serve the Semantic Web in general. Our evaluation of the current network shows that this system is efficient and reliable.
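A nanopublication bundles four named RDF graphs: a head that links the other three, the assertion (the claim itself), its provenance, and publication info. A minimal structural sketch using plain Python dicts in place of real RDF; the URIs and triples are placeholders, and real nanopublications are serialized in an RDF syntax such as TriG with trusty-URI identifiers:

```python
NP = "http://example.org/np1"  # placeholder nanopublication URI

nanopub = {
    f"{NP}#head": [
        (NP, "np:hasAssertion", f"{NP}#assertion"),
        (NP, "np:hasProvenance", f"{NP}#provenance"),
        (NP, "np:hasPublicationInfo", f"{NP}#pubinfo"),
    ],
    f"{NP}#assertion": [
        # the scientific claim itself
        ("ex:malaria", "ex:transmittedBy", "ex:anopheles"),
    ],
    f"{NP}#provenance": [
        # where the assertion came from
        (f"{NP}#assertion", "prov:wasDerivedFrom", "ex:study42"),
    ],
    f"{NP}#pubinfo": [
        # metadata about the nanopublication itself
        (NP, "dct:creator", "ex:alice"),
    ],
}

def is_well_formed(np_graphs, np_uri):
    """Check that the head links all three required graphs and each exists."""
    head = np_graphs.get(f"{np_uri}#head", [])
    linked = {o for (s, p, o) in head if s == np_uri}
    required = {f"{np_uri}#{g}" for g in ("assertion", "provenance", "pubinfo")}
    return required <= linked and required <= set(np_graphs)

print(is_well_formed(nanopub, NP))  # True
```

Because provenance and publication info are first-class graphs rather than free text, a server network can verify and recombine nanopublications mechanically, which is what the decentralized architecture above relies on.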
OptiqueVQS: A visual query system over ontologies for industry
An important application of semantic technologies in industry has been the formalisation of information models using OWL 2 ontologies and the use of RDF for storing and exchanging application data. Moreover, legacy data can be virtualised as RDF using ontologies following the ontology-based data access (OBDA) approach. In all these applications, it is important to provide domain experts with query formulation tools for expressing their information needs in terms of queries over ontologies. In this work, we present such a tool, OptiqueVQS, which is designed based on our experience with OBDA applications in Statoil and Siemens and on best HCI practices for interdisciplinary engineering environments. OptiqueVQS implements a number of unique techniques distinguishing it from analogous query formulation systems. First, it exploits ontology projection techniques to enable graph-based navigation over an ontology during query construction. Second, while OptiqueVQS is primarily ontology-driven, it exploits sampled data to enhance the selection of data values for some data attributes. Finally, OptiqueVQS is built on well-grounded requirements, design rationale, and quality attributes. We evaluated OptiqueVQS with both domain experts and casual users and qualitatively compared our system against prominent visual systems for ontology-driven query formulation and exploration of semantic data. OptiqueVQS is available online and can be downloaded together with an example OBDA scenario.
Virtual Knowledge Graphs: An Overview of Systems and Use Cases
In this paper, we present the virtual knowledge graph (VKG) paradigm for data integration and access, also known in the literature as ontology-based data access. Instead of structuring the integration layer as a collection of relational tables, the VKG paradigm replaces the rigid structure of tables with the flexibility of graphs that are kept virtual and embed domain knowledge. We explain the main notions of this paradigm, its tooling ecosystem, and significant use cases in a wide range of applications. Finally, we discuss future research directions.
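The "kept virtual" part is the key point: a mapping produces graph edges from relational rows on demand, without ever materializing the graph. A toy sketch of that idea, with an invented table and placeholder IRIs (real VKG systems such as those surveyed here compile graph queries into SQL instead of iterating in application code):

```python
employees = [  # a stand-in for a relational source table
    {"id": 1, "name": "Ada", "dept": "R&D"},
    {"id": 2, "name": "Grace", "dept": "HR"},
]

def virtual_triples(rows):
    """Lazily generate RDF-style triples from rows via a fixed mapping."""
    for row in rows:
        subj = f"ex:emp/{row['id']}"
        yield (subj, "rdf:type", "ex:Employee")
        yield (subj, "ex:name", row["name"])
        yield (subj, "ex:inDept", f"ex:dept/{row['dept']}")

# A simple "query" over the virtual graph: all employee names.
names = [o for (s, p, o) in virtual_triples(employees) if p == "ex:name"]
print(names)  # ['Ada', 'Grace']
```

Because `virtual_triples` is a generator, no triple exists until a query asks for it, which is exactly the trade-off the VKG paradigm makes against materialized integration tables.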
Deliverable D7.5 LinkedTV Dissemination and Standardisation Report v2
This deliverable presents the LinkedTV dissemination and standardisation report for the project period of months 19 to 30 (April 2013 to March 2014).
Awareness support for learning designers in collaborative authoring for adaptive learning
Adaptive learning systems offer students a range of appropriate learning options based on the learners' characteristics. It is, therefore, necessary for such systems to maintain a hyperspace and knowledge space that consist of a large volume of domain and pedagogical knowledge, learner information, and adaptation rules. As a consequence, for a solitary teacher, developing learning resources would be time-consuming and would require the teacher to be an expert in many topics. In this research, the problems of authoring adaptive learning resources are classified into issues concerning interoperability, efficiency, and collaboration. This research particularly addresses the question of how teachers can collaborate in authoring adaptive learning resources and be aware of what has happened in the authoring process. In order to experiment with collaboration, it was necessary to design a collaborative authoring environment for adaptive learning. This was achieved by extending an open-source authoring tool for IMS Learning Design (IMS LD), ReCourse, into a prototype of Collaborative ReCourse that includes workspace awareness information features: Notes and History. It is designed as a tool for asynchronous collaboration among small groups of learning designers. IMS LD supports interoperability and adaptation. Two experiments were conducted. The first experiment was a workspace awareness study in which participants took part in an artificial collaborative scenario. They were divided into two groups; one group worked with ReCourse, the other with Collaborative ReCourse. The results provide evidence regarding the advantages of Notes and History for enhancing workspace awareness in collaborative authoring of learning designs. The second study tested the system more thoroughly, as the participants had to work toward real goals over a much longer time frame. They were divided into four groups; two groups worked with ReCourse, while the others worked with Collaborative ReCourse.
The experimental results showed that authoring of learning designs can be approached with a Process Structure method, with implicit coordination and without role assignment. They also provide evidence that collaboration is possible when authoring IMS LD Level A for non-adaptive and Level B for adaptive materials, and that Notes and History assist in producing good-quality output. This research makes several contributions. From the literature study, it presents a comparative analysis of existing authoring tools as well as learning standards. Furthermore, it presents a collaborative authoring approach for creating learning designs and describes the granularity level at which collaborative authoring of learning designs can be carried out. Finally, experiments using this approach show the advantages of having Notes and History for enhancing workspace awareness and how they benefit the quality of learning designs.