Two Approaches to Ontology Aggregation Based on Axiom Weakening
Axiom weakening is a novel technique that allows for fine-grained repair of inconsistent ontologies. In a multi-agent setting, integrating the ontologies of multiple agents may lead to inconsistencies. Such inconsistencies can be resolved after the integrated ontology has been built, or their generation can be prevented while the ontology is being constructed. We implement and compare these two approaches. First, we study how to repair an inconsistent ontology resulting from a voting-based aggregation of the views of heterogeneous agents. Second, we prevent the generation of inconsistencies by letting the agents engage in a turn-based rational protocol about the axioms to be added to the integrated ontology. We instantiate the two approaches using real-world ontologies and compare them by measuring the agents' levels of satisfaction with the ontology obtained by each procedure.
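As a toy illustration of the post-hoc approach, the sketch below aggregates axioms by majority vote and then repairs the result. Everything here is a stand-in: the axiom strings, the `CONFLICTS` table (playing the role of a description-logic reasoner's consistency check), and deletion-as-repair (the paper weakens axioms rather than deleting them).

```python
from collections import Counter

# Hypothetical conflict table standing in for a DL reasoner.
CONFLICTS = {frozenset({"Penguin SubClassOf Flies", "Penguin SubClassOf not Flies"})}

def is_consistent(ontology):
    """Toy consistency check: no known conflicting pair is present."""
    return not any(pair <= set(ontology) for pair in CONFLICTS)

def aggregate_by_vote(agent_views, threshold=0.5):
    """Accept every axiom supported by a strict majority of agents."""
    votes = Counter(ax for view in agent_views for ax in view)
    n = len(agent_views)
    return [ax for ax, count in votes.items() if count / n > threshold]

def repair(ontology):
    """Greedy post-hoc repair: drop axioms until consistent.
    (Placeholder for the paper's axiom weakening step.)"""
    result = list(ontology)
    while result and not is_consistent(result):
        result.pop()  # a real repair would weaken, not delete
    return result

agents = [
    ["Penguin SubClassOf Flies", "Penguin SubClassOf Bird"],
    ["Penguin SubClassOf not Flies", "Penguin SubClassOf Bird"],
    ["Penguin SubClassOf Bird", "Penguin SubClassOf Flies"],
]
merged = aggregate_by_vote(agents)
repaired = repair(merged)
```

Note that majority voting alone can still produce an inconsistent ontology (e.g. when two mutually conflicting axioms each gain majority support from different coalitions), which is exactly why the repair step is needed.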
Towards Even More Irresistible Axiom Weakening
Axiom weakening is a technique that allows for a fine-grained repair of inconsistent ontologies. Its main advantage is that it repairs ontologies by making axioms less restrictive rather than by deleting them, employing refinement operators. In this paper, we build on previously introduced axiom weakening for ALC and make it much more irresistible by extending its definitions to deal with SROIQ, the expressive and decidable description logic underlying OWL 2 DL. We extend the definitions of the refinement operators to deal with SROIQ constructs, in particular role hierarchies, cardinality constraints, and nominals, and illustrate their application. Finally, we discuss the problem of termination of an iterated weakening procedure.
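The core idea of weakening a subsumption axiom can be sketched by generalising its right-hand side along a concept hierarchy. The sketch below uses a toy parent map in place of the paper's reasoner-backed generalisation refinement operator; the concept names are invented.

```python
# Toy hierarchy: child -> parent ("Thing" is the top concept).
PARENT = {"Eagle": "Bird", "Bird": "Animal", "Animal": "Thing"}

def generalise(concept):
    """One upward refinement step; "Thing" cannot be generalised further."""
    return PARENT.get(concept)

def weaken(axiom):
    """Yield successively weaker versions of a SubClassOf axiom (lhs, rhs)
    by replacing the right-hand side with ever more general concepts."""
    lhs, rhs = axiom
    while (rhs := generalise(rhs)) is not None:
        yield (lhs, rhs)

steps = list(weaken(("Penguin", "Eagle")))
# steps == [("Penguin", "Bird"), ("Penguin", "Animal"), ("Penguin", "Thing")]
```

In this simplified setting the iterated weakening trivially terminates because the hierarchy is finite and acyclic; the termination question the paper discusses arises for the richer SROIQ refinement operators, where a weakening step need not move monotonically up a finite tree.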
Towards a Logic of Epistemic Theory of Measurement
We propose a logic to reason about data collected by a number of measurement systems. The semantics of this logic is grounded in the epistemic theory of measurement, which gives a central role to measurement devices and calibration. In this perspective, the lack of evidence (in the available data) for the truth or falsehood of a proposition requires the introduction of a third truth-value (the undetermined). Moreover, the data collected by a given source are represented here by means of a possible world, which provides a contextual view of the objects in the domain. We approach (possibly) conflicting data coming from different sources in a social-choice-theoretic fashion: we investigate viable operators to aggregate data and represent them in our logic by means of suitable (minimal) modal operators.
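A minimal sketch of the three-valued setting, assuming a strong-Kleene reading of the connectives (the paper's actual connectives and aggregation operators may differ): truth values are encoded as -1 (false), 0 (undetermined), and 1 (true), and two simple modal-style aggregators combine the verdicts of several sources.

```python
# Strong-Kleene three-valued connectives over {-1: false, 0: undetermined, 1: true}.
# The encoding and the aggregator names are illustrative, not the paper's.

def conj(a, b): return min(a, b)   # conjunction
def disj(a, b): return max(a, b)   # disjunction
def neg(a): return -a              # negation (undetermined stays undetermined)

def box(values):
    """Proposition holds in every source (a minimal 'necessity' aggregator)."""
    return min(values)

def diamond(values):
    """Proposition holds in at least one source (a 'possibility' aggregator)."""
    return max(values)

# Three measurement sources report on the same proposition:
# two support it, one lacks evidence either way.
readings = [1, 0, 1]
```

Under this encoding, unanimity-style aggregation (`box`) returns undetermined as soon as one source lacks evidence, while existential aggregation (`diamond`) returns true; choosing between such operators is precisely the social-choice question the abstract raises.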
Using ontologies to enhance human understandability of global post-hoc explanations of black-box models
The interest in explainable artificial intelligence has grown strongly in recent years because of the need to convey safety and trust in the ‘how’ and ‘why’ of automated decision-making to users. While a plethora of approaches has been developed, only a few focus on how to use domain knowledge and how this influences the understanding of explanations by users. In this paper, we show that by using ontologies we can improve the human understandability of global post-hoc explanations, presented in the form of decision trees. In particular, we introduce Trepan Reloaded, which builds on Trepan, an algorithm that extracts surrogate decision trees from black-box models. Trepan Reloaded includes ontologies, which model domain knowledge, in the process of extracting explanations to improve their understandability. We tested the understandability of the extracted explanations in a user study with four different tasks, evaluating the results in terms of response times and correctness, subjective ease of understanding and confidence, and similarity of free-text responses. The results show that decision trees generated with Trepan Reloaded, taking domain knowledge into account, are significantly more understandable throughout than those generated by standard Trepan. This enhanced understandability of post-hoc explanations is achieved with little compromise on the accuracy with which the surrogate decision trees replicate the behaviour of the original neural network models.
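Stripped to its core, Trepan-style surrogate extraction queries the black box for labels and fits an interpretable model that mimics them. The sketch below fits only a one-level decision stump rather than a full tree and omits the ontology-guided vocabulary of Trepan Reloaded; the black-box rule and the samples are invented for illustration.

```python
def black_box(x):
    """Stand-in for an opaque model: some decision rule unknown to the extractor."""
    return 1 if 2 * x[0] + x[1] > 3 else 0

def fit_stump(samples, labels):
    """Pick the (feature, threshold) split that best reproduces the labels.
    Returns (fidelity, feature index, threshold)."""
    best = None
    for f in range(len(samples[0])):
        for t in sorted({x[f] for x in samples}):
            hits = sum((x[f] > t) == bool(y) for x, y in zip(samples, labels))
            # Either side of the split may predict the positive class.
            fidelity = max(hits, len(samples) - hits) / len(samples)
            if best is None or fidelity > best[0]:
                best = (fidelity, f, t)
    return best

# Query the black box on a small sample set, then distil a stump from its answers.
samples = [(0, 0), (1, 0), (1, 2), (2, 0), (2, 2), (0, 4)]
labels = [black_box(x) for x in samples]
fidelity, feature, threshold = fit_stump(samples, labels)
```

The fidelity measure here corresponds to the abstract's point about accuracy: the surrogate is judged by how faithfully it replicates the black box's behaviour, not by its accuracy on ground-truth data.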
Building high-quality merged ontologies from multiple sources with requirements customization
Ontologies are the prime way of organizing data in the Semantic Web. Often, it is necessary to combine several independently developed ontologies to obtain a knowledge graph that fully represents a domain of interest. Existing approaches scale rather poorly to the merging of multiple ontologies because they use a binary merge strategy. Thus, we aim to investigate the extent to which the n-ary strategy can solve the scalability problem. This thesis contributes to the following important aspects: 1. Our n-ary merge strategy takes as input a set of source ontologies and their mappings and generates a merged ontology. For efficient processing, rather than successively merging complete ontologies pairwise, we group related concepts across ontologies into partitions and merge first within and then across those partitions. 2. We take a step towards parameterizable merge methods. We have identified a set of Generic Merge Requirements (GMRs) that merged ontologies might be expected to meet, and have investigated the compatibilities of the GMRs using a graph-based method. 3. When multiple ontologies are merged, inconsistencies can occur due to the different world views encoded in the source ontologies. To this end, we propose a novel Subjective Logic-based method for handling the inconsistencies that occur while merging ontologies. We apply this logic to rank and estimate the trustworthiness of the conflicting axioms that cause inconsistencies within a merged ontology. 4. To assess the quality of the merged ontologies systematically, we provide a comprehensive set of criteria in an evaluation framework. The proposed criteria cover a variety of characteristics of each individual aspect of the merged ontology in the structural, functional, and usability dimensions. 5. The final contribution of this research is the development of the CoMerger tool, which implements all of the aforementioned aspects behind a unified interface.
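The Subjective Logic-based ranking of contribution 3 can be sketched as follows: each conflicting axiom carries a trust opinion (belief b, disbelief d, uncertainty u, with b + d + u = 1), and axioms are ranked by the opinion's projected probability b + a·u for a base rate a. The axioms and opinion values below are made up for illustration, and real conflict resolution would iterate this against a reasoner.

```python
def expected_prob(b, d, u, a=0.5):
    """Projected probability of a subjective-logic opinion (b, d, u)
    with base rate a; the components must sum to 1."""
    assert abs(b + d + u - 1.0) < 1e-9
    return b + a * u

# Two axioms that conflict in the merged ontology, with trust opinions
# (belief, disbelief, uncertainty) aggregated from the source ontologies.
opinions = {
    "Whale SubClassOf Fish":   (0.1, 0.7, 0.2),
    "Whale SubClassOf Mammal": (0.8, 0.1, 0.1),
}

# Rank the conflicting axioms by trustworthiness; keep the most trusted,
# discard (or weaken) the least trusted to restore consistency.
ranked = sorted(opinions, key=lambda ax: expected_prob(*opinions[ax]), reverse=True)
keep, drop = ranked[0], ranked[-1]
```

The uncertainty component is what distinguishes this from a plain probability ranking: an axiom asserted confidently by few sources and one asserted tentatively by many can receive different opinions even when their raw support counts agree.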