
    A Machine Learning Approach for Optimizing Heuristic Decision-making in OWL Reasoners

    Description Logics (DLs) are formalisms for representing knowledge bases of application domains. The Web Ontology Language (OWL) is a syntactic variant of a very expressive description logic. OWL reasoners can infer implied information from OWL ontologies. The performance of OWL reasoners can be severely affected by situations that require decision-making over many alternatives. Such non-deterministic behavior is often controlled by heuristics that are based on insufficient information. This thesis proposes a novel OWL reasoning approach that applies machine learning (ML) to implement pragmatic and optimal decision-making strategies in such situations. Disjunctions occurring in ontologies are one source of non-deterministic actions in reasoners. We propose two ML-based approaches to reduce the non-determinism caused by dealing with disjunctions. The first approach is restricted to propositional description logic, while the second one can deal with standard description logic. The first approach builds a logistic regression classifier that chooses a proper branching heuristic for an input ontology. Branching heuristics were originally developed to help propositional satisfiability (SAT) solvers decide which branch to pick at each branching level. The second approach extends the first: an SVM (Support Vector Machine) classifier is designed to select an appropriate expansion-ordering heuristic for an input ontology. The built-in heuristics are designed for expansion ordering of satisfiability testing in OWL reasoners; they determine the order in which branches are explored in search trees. Both of the above approaches speed up our ML-based reasoner by up to two orders of magnitude in comparison to the non-ML reasoner. Another source of non-deterministic actions is the order in which tableau rules are applied. On average, our ML-based approach, an SVM classifier, achieves a speedup of two orders of magnitude when compared to the most expensive rule ordering of the non-ML reasoner.
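
    A minimal sketch of the heuristic-selection idea described in this abstract: a classifier maps ontology-level features to the expansion-ordering heuristic expected to perform best. The feature choices, labels, and pipeline below are illustrative assumptions, not the thesis' actual code.

```python
# Hypothetical sketch: an SVM classifier picks a built-in heuristic for an
# input ontology based on simple ontology-level features.
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Each row: features of one training ontology, e.g. axiom count,
# fraction of disjunctive axioms, average concept size (hypothetical choices).
X_train = [
    [1200, 0.35, 4.2],
    [87000, 0.02, 2.1],
    [5400, 0.18, 3.3],
]
# Label: index of the built-in expansion-ordering heuristic that was fastest
# for that ontology in offline benchmarking.
y_train = [0, 2, 1]

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X_train, y_train)

def pick_heuristic(ontology_features):
    """Return the heuristic index the reasoner should use for a new ontology."""
    return int(model.predict([ontology_features])[0])

print(pick_heuristic([3000, 0.25, 3.8]))
```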

    Truth maintenance in knowledge-based systems

    Truth Maintenance Systems (TMS) have been applied in a wide range of domains, from diagnosing electric circuits to belief revision in agent systems. There has also been work on using the TMS in modern knowledge-based systems such as intelligent agents and ontologies. This thesis investigates the applications of TMSs in such systems. For intelligent agents, we use a “light-weight” TMS to support query caching in agent programs. The TMS keeps track of the dependencies between a query and the facts used to derive it, so that when the agent updates its database, only affected queries are invalidated and removed from the cache. The TMS employed here is “light-weight” as it does not maintain all intermediate reasoning results; it is therefore able to reduce memory consumption and to improve performance in dynamic settings such as multi-agent systems. For ontologies, this work extends the Assumption-based Truth Maintenance System (ATMS) to tackle the problem of axiom pinpointing and debugging in ontology-based systems with different levels of expressivity. Starting with finding all errors in auto-generated ontology mappings using a “classic” ATMS [23], we extend the ATMS to solve the axiom pinpointing problem in description logic-based ontologies. We also apply this approach to the axiom pinpointing problem in a more expressive upper ontology, SUMO, whose underlying logic is undecidable.
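
    A minimal sketch of the “light-weight” TMS-backed query cache described above: each cached query records the facts it depends on, and retracting a fact invalidates only the queries that used it. All class and method names below are illustrative assumptions.

```python
# Hypothetical dependency-tracking query cache in the spirit of a
# light-weight TMS: no intermediate reasoning results are stored.
from collections import defaultdict

class QueryCache:
    def __init__(self):
        self.answers = {}                        # query -> cached answer
        self.fact_to_queries = defaultdict(set)  # fact -> queries depending on it

    def store(self, query, answer, supporting_facts):
        """Cache an answer together with the facts used to derive it."""
        self.answers[query] = answer
        for fact in supporting_facts:
            self.fact_to_queries[fact].add(query)

    def lookup(self, query):
        return self.answers.get(query)

    def retract(self, fact):
        """A database update removes a fact: drop only the affected queries."""
        for query in self.fact_to_queries.pop(fact, set()):
            self.answers.pop(query, None)

cache = QueryCache()
cache.store("reachable(a, c)", True, ["edge(a, b)", "edge(b, c)"])
cache.retract("edge(b, c)")              # invalidates only queries using that fact
print(cache.lookup("reachable(a, c)"))   # None: the cached answer was removed
```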

    Proof-theoretic Semantics for Intuitionistic Multiplicative Linear Logic

    This work is the first exploration of proof-theoretic semantics for a substructural logic. It focuses on the base-extension semantics (B-eS) for intuitionistic multiplicative linear logic (IMLL). The starting point is a review of Sandqvist’s B-eS for intuitionistic propositional logic (IPL), for which we propose an alternative treatment of conjunction that takes the form of the generalized elimination rule for the connective. The resulting semantics is shown to be sound and complete. This motivates our main contribution, a B-eS for IMLL, in which the definitions of the logical constants all take the form of their elimination rule and for which soundness and completeness are established.
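
    As a schematic illustration of what an elimination-rule-style clause looks like (our reading of the abstract, assuming a Sandqvist-style support relation relative to a base, not the paper's exact notation):

```latex
% \Vdash_B denotes support relative to a base B; p ranges over atoms.
% Sandqvist's original clause for conjunction:
%   \Vdash_B \phi \wedge \psi \iff \Vdash_B \phi \text{ and } \Vdash_B \psi
% A generalized-elimination-style clause, by analogy with his treatment of
% disjunction (illustrative reconstruction):
\[
  \Vdash_B \phi \wedge \psi
  \quad\text{iff}\quad
  \forall C \supseteq B,\ \forall p:\
  \bigl(\phi, \psi \Vdash_C p \ \Rightarrow\ \Vdash_C p\bigr)
\]
```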

    Hypertableau Reasoning for Description Logics

    We present a novel reasoning calculus for the description logic SHOIQ+, a knowledge representation formalism with applications in areas such as the Semantic Web. Unnecessary nondeterminism and the construction of large models are two primary sources of inefficiency in the tableau-based reasoning calculi used in state-of-the-art reasoners. In order to reduce nondeterminism, we base our calculus on hypertableau and hyperresolution calculi, which we extend with a blocking condition to ensure termination. In order to reduce the size of the constructed models, we introduce anywhere pairwise blocking. We also present an improved nominal introduction rule that ensures termination in the presence of nominals, inverse roles, and number restrictions, a combination of DL constructs that has proven notoriously difficult to handle. Our implementation shows significant performance improvements over state-of-the-art reasoners on several well-known ontologies.
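
    A rough sketch of the anywhere pairwise blocking test, under its usual simplified reading: a node x with a predecessor is blocked by any earlier, unblocked node y whose label, predecessor label, and incoming edge label all coincide with x's. The data structures below are illustrative, not the calculus' formal definition.

```python
# Hypothetical model-construction nodes and a pairwise blocking check where
# the blocker may occur anywhere earlier in the construction order.
from dataclasses import dataclass

@dataclass
class Node:
    order: int                           # position in the construction order
    label: frozenset                     # concepts asserted for this node
    parent: "Node | None" = None
    edge_label: frozenset = frozenset()  # roles on the edge from the parent
    blocked: bool = False

def is_directly_blocked(x: Node, candidates: list[Node]) -> bool:
    """Anywhere pairwise blocking: any earlier unblocked node may block x."""
    if x.parent is None:
        return False
    for y in candidates:
        if (not y.blocked and y.parent is not None and y.order < x.order
                and y.label == x.label
                and y.parent.label == x.parent.label
                and y.edge_label == x.edge_label):
            return True
    return False
```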

    Reasoning in description logics using resolution and deductive databases

    Scaling Up Description Logic Reasoning by Distributed Resolution

    Benefits from structured knowledge representation have motivated the creation of large description logic ontologies. For accessing implicit information and avoiding errors in ontologies, reasoning services are necessary. However, the available reasoning methods suffer from scalability problems as the size of ontologies keeps growing. This thesis investigates a distributed reasoning method that improves scalability by splitting a reasoning process into a set of largely independent subprocesses. In contrast to most description logic reasoners, the proposed approach is based on resolution calculi. We prove that the method is sound and complete for first-order logic and different description logic subsets. Evaluation of the implementation shows a substantial decrease in runtime compared to reasoning on a single machine; the increased computation power thus offsets the overhead caused by distribution. Dependencies between subprocesses can be kept low enough to allow efficient distribution. Furthermore, we investigate and compare different algorithms for computing the distribution of axioms and provide an optimization of the distributed reasoning method that improves workload balance in a dynamic setting.
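
    An illustrative sketch of one way such a distribution of axioms could work: clauses are assigned to workers by hashing a designated predicate symbol, so resolution steps involving that symbol land on the same worker and cross-worker dependencies stay low. The allocation function and clause encoding are assumptions for illustration, not the thesis' actual algorithm.

```python
# Hypothetical clause allocation: partition clauses across workers by the
# hash of the first literal's predicate symbol.
def allocate(clauses, num_workers):
    """Partition clauses across workers by predicate symbol."""
    partitions = [[] for _ in range(num_workers)]
    for clause in clauses:
        # A clause is a list of literals such as ("SubClassOf", "A", "B");
        # the first literal's predicate decides the worker.
        predicate = clause[0][0]
        worker = hash(predicate) % num_workers
        partitions[worker].append(clause)
    return partitions

clauses = [
    [("SubClassOf", "Person", "Agent")],
    [("SubClassOf", "Student", "Person")],
    [("Domain", "enrolledIn", "Student")],
]
for i, part in enumerate(allocate(clauses, 2)):
    print(f"worker {i}: {part}")
```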