Syntactic vs. Semantic Locality: How Good Is a Cheap Approximation?
Extracting a subset of a given OWL ontology that captures all the ontology's
knowledge about a specified set of terms is a well-understood task. This task
can be based, for instance, on locality-based modules (LBMs). These come in two
flavours, syntactic and semantic, and a syntactic LBM is known to contain the
corresponding semantic LBM. For syntactic LBMs, polynomial extraction
algorithms are known, implemented in the OWL API, and in active use. In
contrast, extracting semantic LBMs involves reasoning, which is intractable
for OWL 2 DL, and such algorithms had not previously been implemented for
expressive ontology languages. We present the first implementation of semantic LBMs and report on
experiments that compare them with syntactic LBMs extracted from real-life
ontologies. Our study reveals whether semantic LBMs are worth the additional
extraction effort compared with syntactic LBMs.
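The contrast between the two flavours can be illustrated with a toy sketch. This is not the OWL API implementation: concepts are nested tuples such as ("and", ("name", "A"), ("name", "B")), and an axiom C ⊑ D is semantically ⊥-local w.r.t. a signature S if replacing every name outside S by ⊥ makes it a tautology. Here a simple propositional simplification stands in for real reasoning, which is enough to show the replacement-and-check idea.

```python
def replace_outside(c, sig):
    """Replace concept names not in the signature by bottom."""
    op = c[0]
    if op == "name":
        return c if c[1] in sig else ("bot",)
    if op in ("and", "or"):
        return (op,) + tuple(replace_outside(x, sig) for x in c[1:])
    return c  # top / bot are unchanged

def simplify(c):
    """Push bottom/top through conjunctions and disjunctions."""
    op = c[0]
    if op in ("and", "or"):
        args = [simplify(x) for x in c[1:]]
        absorb, unit = (("bot",), ("top",)) if op == "and" else (("top",), ("bot",))
        if absorb in args:          # bot absorbs an and, top absorbs an or
            return absorb
        args = [a for a in args if a != unit]
        if not args:
            return unit
        if len(args) == 1:
            return args[0]
        return (op,) + tuple(args)
    return c

def bot_local(sub, sup, sig):
    """C ⊑ D is ⊥-local if the LHS collapses to ⊥ (or the RHS to ⊤)."""
    return (simplify(replace_outside(sub, sig)) == ("bot",)
            or simplify(replace_outside(sup, sig)) == ("top",))

# A ⊓ B ⊑ C is ⊥-local w.r.t. {B, C}: A becomes ⊥, so the LHS is ⊥.
print(bot_local(("and", ("name", "A"), ("name", "B")),
                ("name", "C"), {"B", "C"}))  # True
```

Syntactic locality replaces the tautology test with a structural check on the simplified axiom, which is why it can only over-approximate the semantic module.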
*-CFQ: Analyzing the Scalability of Machine Learning on a Compositional Task
We present *-CFQ ("star-CFQ"): a suite of large-scale datasets of varying
scope based on the CFQ semantic parsing benchmark, designed for principled
investigation of the scalability of machine learning systems in a realistic
compositional task setting. Using this suite, we conduct a series of
experiments investigating the ability of Transformers to benefit from increased
training size under conditions of fixed computational cost. We show that
compositional generalization remains a challenge at all training sizes, and we
show that increasing the scope of natural language leads to consistently higher
error rates, which are only partially offset by increased training data. We
further show that while additional training data from a related domain improves
the accuracy in data-starved situations, this improvement is limited and
diminishes as the distance from the related domain to the target domain
increases. Comment: Accepted, AAAI-2
Improved Algorithms for Module Extraction and Atomic Decomposition
In recent years, modules have frequently been used for ontology development and understanding. This is because a module captures all the knowledge an ontology contains in a given area, and is often much smaller than the whole ontology. One useful modularisation technique for expressive ontology languages is locality-based modularisation, which allows for fast (polynomial) extraction of modules. In order to better understand the modular structure of an ontology, a technique called Atomic Decomposition can be used. It efficiently builds a structure representing all possible modules for an ontology, regardless of the modularisation algorithm adopted and without the need to compute an exponential number of modules, as in a naive approach. This structure can be used, e.g., for quick extraction of modules or to investigate dependencies between modules. However, existing algorithms for both locality-based module extraction and atomic decomposition do not scale well. This is mainly because of their global nature: each iteration always explores the whole ontology, even when this is not necessary. We propose algorithms for locality-based module extraction and atomic decomposition that work only on the relevant part of the ontology. This improves the performance of both algorithms by avoiding unnecessary checks. Empirical evaluation confirms a significant speed-up on real-life ontologies.
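The structure that atomic decomposition builds can be sketched on a toy ontology. The encoding below is illustrative, not the paper's algorithm: axioms have the form A ⊑ B1 ⊓ … ⊓ Bn, encoded as (A, frozenset of B's), and the module of a signature is the ⊥-locality fixpoint, where an axiom A ⊑ … is non-local exactly when A is in the signature. Two axioms fall into the same atom when their modules coincide, and one atom depends on another when the second's module is contained in the first's.

```python
def module(axioms, sig):
    """⊥-locality fixpoint: pull in axioms whose LHS is in the signature."""
    sig, mod = set(sig), set()
    changed = True
    while changed:
        changed = False
        for ax in axioms:
            lhs, rhs = ax
            if ax not in mod and lhs in sig:
                mod.add(ax)
                sig |= rhs      # the axiom's RHS names join the signature
                changed = True
    return frozenset(mod)

def atomic_decomposition(axioms):
    """Group axioms whose modules coincide; record dependencies between atoms."""
    mods = {ax: module(axioms, {ax[0]} | ax[1]) for ax in axioms}
    atoms = {}
    for ax, m in mods.items():
        atoms.setdefault(m, set()).add(ax)
    # The atom with module m1 depends on the atom with module m2
    # when m2's module is strictly contained in m1's.
    deps = {m1: {m2 for m2 in atoms if m2 < m1} for m1 in atoms}
    return atoms, deps

# A ⊑ B and B ⊑ C form two atoms; the first depends on the second.
axA = ("A", frozenset({"B"}))
axB = ("B", frozenset({"C"}))
atoms, deps = atomic_decomposition([axA, axB])
print(len(atoms))  # 2
```

The naive approach in the abstract would recompute such modules from the whole axiom set on every step; the paper's contribution is restricting each step to the relevant part.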
Efficient Reasoning with Range and Domain Constraints
We show how a tableaux algorithm can be extended to support role boxes that include range and domain axioms, prove that the extended algorithm is still a decision procedure for the satisfiability and subsumption of concepts w.r.t. such a role box, and show how support for range and domain axioms can be exploited to add a new form of absorption optimisation called role absorption. We illustrate the effectiveness of the optimised algorithm by analysing the performance of our FaCT++ implementation when classifying terminologies derived from realistic ontologies.
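A minimal sketch of the idea behind absorbing range and domain axioms: rather than treating domain(R) = C as the GCI ∃R.⊤ ⊑ C, which adds a disjunction to every node, the constraint fires lazily when an R-edge is created. Node labels here are just sets of concept names, and all names are invented for illustration; the real tableaux algorithm and the FaCT++ data structures are far richer.

```python
class ToyGraph:
    """A toy completion graph applying absorbed domain/range constraints."""

    def __init__(self, domains, ranges):
        self.domains = domains      # role -> set of domain concepts
        self.ranges = ranges        # role -> set of range concepts
        self.labels = {}            # node -> set of concept names

    def node(self, n):
        self.labels.setdefault(n, set())
        return n

    def add_edge(self, src, role, dst):
        # Absorption: the constraints apply only where an R-edge actually
        # appears, instead of as global disjunctions on every node.
        self.labels[self.node(src)] |= self.domains.get(role, set())
        self.labels[self.node(dst)] |= self.ranges.get(role, set())

g = ToyGraph(domains={"hasChild": {"Parent"}},
             ranges={"hasChild": {"Person"}})
g.add_edge("mary", "hasChild", "tom")
print(g.labels["mary"], g.labels["tom"])  # {'Parent'} {'Person'}
```

The benefit the abstract points to is exactly this localisation: no non-deterministic choice is introduced on nodes that have no R-edges at all.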
DL Reasoner vs. First-Order Prover
We compare the performance of a DL reasoner with that of a first-order prover on reasoning problems encountered during the classification of realistic knowledge bases.
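Such a comparison rests on the standard translation of description logic into first-order logic: a concept name becomes a unary predicate, a role a binary one, and C ⊑ D becomes ∀x.(C(x) → D(x)). A toy translator for two axiom shapes, with an illustrative encoding rather than the paper's tooling:

```python
def to_fo(axiom):
    """Translate a toy DL axiom tuple into a first-order formula string."""
    kind = axiom[0]
    if kind == "sub":                       # ("sub", "A", "B"): A ⊑ B
        a, b = axiom[1], axiom[2]
        return f"forall x. ({a}(x) -> {b}(x))"
    if kind == "sub_exists":                # ("sub_exists", "A", "R", "B"): A ⊑ ∃R.B
        a, r, b = axiom[1], axiom[2], axiom[3]
        return f"forall x. ({a}(x) -> exists y. ({r}(x,y) & {b}(y)))"
    raise ValueError(f"unknown axiom kind: {kind}")

print(to_fo(("sub", "Dog", "Animal")))
# forall x. (Dog(x) -> Animal(x))
```

A subsumption test A ⊑? B then becomes asking the prover whether the translated axioms entail ∀x.(A(x) → B(x)), whereas the DL reasoner answers the same question with a dedicated decision procedure.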
Optimised classification for taxonomic knowledge bases
Many legacy ontologies are now being translated into Description Logic (DL) based ontology languages in order to take advantage of DL based tools and reasoning services. The resulting DL Knowledge Bases (KBs) are typically of large size, but have a very simple structure, i.e., they consist mainly of shallow taxonomies. The classification algorithms used in state-of-the-art DL reasoners may not deal well with such KBs. In this paper, we propose an optimisation which dramatically speeds up classification for such KBs.
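The simple structure of such KBs can be exploited directly. For a KB that is just a taxonomy of axioms A ⊑ B over concept names, every subsumption is "told": the hierarchy is the reflexive-transitive closure of the told-subsumer graph, so no expensive subsumption tests are needed. A minimal sketch of that observation (the names are illustrative, and this is not the optimisation proposed in the paper):

```python
def classify(told):
    """told: dict mapping a concept name to its direct told subsumers.
    Returns every (possibly indirect) subsumer of each name."""
    def ancestors(c, seen):
        for p in told.get(c, set()):
            if p not in seen:
                seen.add(p)
                ancestors(p, seen)   # follow told subsumers transitively
        return seen
    return {c: ancestors(c, set()) for c in told}

taxonomy = {"Poodle": {"Dog"}, "Dog": {"Animal"}, "Animal": set()}
print(classify(taxonomy)["Poodle"])  # {'Dog', 'Animal'} (in some order)
```

A general-purpose classifier would instead run pairwise subsumption tests via the tableaux procedure, which is where the dramatic slowdown on large, shallow taxonomies comes from.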