
    Description Logics with Concrete Domains and Functional Dependencies

    Description Logics (DLs) with concrete domains are a useful tool in many applications. To further enhance the expressive power of such DLs, it has been proposed to add database-style key constraints. Up to now, however, only uniqueness constraints have been considered in this context, thus neglecting the second fundamental family of key constraints: functional dependencies. In this paper, we consider the basic DL with concrete domains ALC(D), extend it with functional dependencies, and analyze the impact of this extension on the decidability and complexity of reasoning. Though intuitively the expressivity of functional dependencies seems weaker than that of uniqueness constraints, we are able to show that the former have a similarly severe impact on the computational properties: reasoning is undecidable in the general case, and NExpTime-complete in some slightly restricted variants of our logic.
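
    To make the distinction concrete: a uniqueness constraint (key) forbids two distinct objects from agreeing on the key, whereas a functional dependency X -> Y only requires that objects agreeing on X also agree on Y. Below is a minimal database-style check of an FD, purely for intuition; the attribute names are hypothetical and this is not the ALC(D) constructor syntax studied in the paper.

        # Illustrative only: a functional dependency lhs -> rhs over a list of rows.
        from itertools import combinations

        def satisfies_fd(rows, lhs, rhs):
            """Any two rows agreeing on all lhs attributes must agree on all rhs attributes."""
            for r1, r2 in combinations(rows, 2):
                if all(r1[a] == r2[a] for a in lhs) and any(r1[a] != r2[a] for a in rhs):
                    return False
            return True

        # Hypothetical data: emp -> dept holds even though emp values repeat (no uniqueness).
        rows = [
            {"emp": 1, "dept": "R&D",   "project": "P1"},
            {"emp": 1, "dept": "R&D",   "project": "P2"},
            {"emp": 2, "dept": "Sales", "project": "P1"},
        ]
        print(satisfies_fd(rows, ["emp"], ["dept"]))      # True
        print(satisfies_fd(rows, ["project"], ["dept"]))  # False: P1 occurs with two departments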

    Functional Dependencies in OWL ABox

    Functional Dependency (FD) has been extensively studied in database theory. Most recently there have been some works investigating the implications of extending Description Logics with functional dependencies. In particular, the OWL ontology language offers the functional property characteristic, allowing simple functional dependencies to be specified. As it turns out, more complex FDs specified as concept constructors have been proved to lead to undecidability in the general case, which restricts their usage as part of the TBox. This paper departs from previous ones by restricting the applicability of FDs to instances in the ABox. We specify FD as a new constructor, an OWL concept. FD instances are mapped to Horn clauses and evaluated against the ABox according to the user's desired behavior. The latter allows users to determine whether FDs should be interpreted as constraints, assertions or views. Our approach gives ontology users data guarantees usually found in databases, integrated with the ontology's conceptual model.
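
    The mapping of FD instances to Horn clauses can be pictured roughly as follows; this is an illustrative sketch with hypothetical property names and modes, not the actual OWL constructor defined in the paper.

        # Sketch: an FD "hasSSN -> hasName" read as the Horn-style rule
        #   hasSSN(x, s) & hasSSN(y, s) & hasName(x, n1) & hasName(y, n2) -> n1 = n2
        # evaluated against ABox-style property assertions.
        abox = {
            "hasSSN":  [("alice", "123"), ("a_smith", "123"), ("bob", "456")],
            "hasName": [("alice", "Alice"), ("a_smith", "Alice Smith"), ("bob", "Bob")],
        }

        def check_fd(abox, lhs_prop, rhs_prop, mode="constraint"):
            rhs = dict(abox[rhs_prop])
            clashes = [(x, y) for x, v1 in abox[lhs_prop] for y, v2 in abox[lhs_prop]
                       if x < y and v1 == v2 and rhs.get(x) != rhs.get(y)]
            if mode == "constraint":   # report individuals violating the FD
                return clashes
            if mode == "assertion":    # infer that the right-hand-side values must coincide
                return [(rhs.get(x), rhs.get(y)) for x, y in clashes]

        print(check_fd(abox, "hasSSN", "hasName"))
        # [('a_smith', 'alice')] -- same SSN but different names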

    The VEX-93 environment as a hybrid tool for developing knowledge systems with different problem solving techniques

    The paper describes VEX-93, a hybrid environment for developing knowledge-based and problem-solver systems. It integrates methods and techniques from artificial intelligence, image and signal processing, and data analysis, which can be mixed. Two hierarchical levels of reasoning contain an intelligent toolbox with one upper strategic inference engine and four lower ones implementing specific reasoning models: truth-functional (rule-based), probabilistic (causal networks), fuzzy (rule-based) and case-based (frames). Image/signal processing and analysis capabilities are provided in the form of programming languages with more than one hundred primitive functions. User-made programs are embeddable within knowledge bases, allowing the combination of perception and reasoning. The data analyzer toolbox contains a collection of numerical classification, pattern recognition and ordination methods, with neural network tools and a database query language at the inference engines' disposal. VEX-93 is an open system able to communicate with external computer programs relevant to a particular application. Metaknowledge can be used to elaborate conclusions, and man-machine interaction includes, besides windows and graphical interfaces, acceptance of voice commands and production of speech output. The system was conceived for real-world applications in general domains; a concrete medical diagnostic support system, currently being completed as a Cuban-Spanish project, is mentioned as an example. The present version of VEX-93 comprises about one and a half million lines of C code and runs on microcomputers under Windows 3.1.
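
    The two-level reasoning architecture sketched above (one strategic engine over four specific reasoning models) could be pictured along the following lines; class and method names here are hypothetical and purely illustrative.

        # Illustrative two-level dispatch: a strategic engine selects a lower-level model.
        class RuleEngine:                    # truth-functional (rule-based)
            def solve(self, task): return f"rule-based answer for {task}"

        class CausalNetEngine:               # probabilistic (causal networks)
            def solve(self, task): return f"probabilistic answer for {task}"

        class FuzzyEngine:                   # fuzzy (rule-based)
            def solve(self, task): return f"fuzzy answer for {task}"

        class CaseBasedEngine:               # case-based (frames)
            def solve(self, task): return f"case-based answer for {task}"

        class StrategicEngine:
            """Upper level: chooses which specific reasoning model handles a task."""
            def __init__(self):
                self.models = {"rules": RuleEngine(), "causal": CausalNetEngine(),
                               "fuzzy": FuzzyEngine(), "cases": CaseBasedEngine()}
            def solve(self, task, strategy):
                return self.models[strategy].solve(task)

        print(StrategicEngine().solve("diagnose patient X", strategy="causal"))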

    Towards MKM in the Large: Modular Representation and Scalable Software Architecture

    MKM has been defined as the quest for technologies to manage mathematical knowledge. MKM "in the small" is well-studied, so the real problem is to scale up to large, highly interconnected corpora: "MKM in the large". We contend that advances in two areas are needed to reach this goal. We need representation languages that support incremental processing of all primitive MKM operations, and we need software architectures and implementations that implement these operations scalably on large knowledge bases. We present instances of both in this paper: the MMT framework for modular theory-graphs that integrates meta-logical foundations, which forms the base of the next OMDoc version; and TNTBase, a versioned storage system for XML-based document formats. TNTBase becomes an MMT database by instantiating it with special MKM operations for MMT.
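
    A modular theory graph of the kind MMT is built around can be illustrated with a toy structure: theories are nodes, imports are edges, and operations such as flattening traverse the graph. The theories and the flattening function below are assumptions for illustration, not MMT's actual API.

        # Toy theory graph: each theory lists the theories it includes and its own declarations.
        theories = {
            "Magma":  {"includes": [],         "decls": ["op : U -> U -> U"]},
            "Monoid": {"includes": ["Magma"],  "decls": ["e : U", "assoc", "neutral"]},
            "Group":  {"includes": ["Monoid"], "decls": ["inv : U -> U", "inverse"]},
        }

        def flatten(name, seen=None):
            """Collect all declarations visible in a theory by following include edges."""
            seen = set() if seen is None else seen
            if name in seen:
                return []
            seen.add(name)
            decls = []
            for parent in theories[name]["includes"]:
                decls += flatten(parent, seen)
            return decls + theories[name]["decls"]

        print(flatten("Group"))
        # ['op : U -> U -> U', 'e : U', 'assoc', 'neutral', 'inv : U -> U', 'inverse']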

    Designing Normative Theories for Ethical and Legal Reasoning: LogiKEy Framework, Methodology, and Tool Support

    A framework and methodology, termed LogiKEy, for the design and engineering of ethical reasoners, normative theories and deontic logics is presented. The overall motivation is the development of suitable means for the control and governance of intelligent autonomous systems. LogiKEy's unifying formal framework is based on semantical embeddings of deontic logics, logic combinations and ethico-legal domain theories in expressive classical higher-order logic (HOL). This meta-logical approach enables the provision of powerful tool support in LogiKEy: off-the-shelf theorem provers and model finders for HOL assist the LogiKEy designer of ethical intelligent agents in flexibly experimenting with underlying logics and their combinations, with ethico-legal domain theories, and with concrete examples, all at the same time. Continuous improvements of these off-the-shelf provers, without further ado, leverage the reasoning performance in LogiKEy. Case studies in which the LogiKEy framework and methodology have been applied and tested give evidence that HOL's undecidability often does not hinder efficient experimentation.
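
    To give a flavor of the embedding technique (a sketch only, using a standard possible-worlds encoding; not LogiKEy's exact formalization): deontic formulas are lifted to predicates over worlds in HOL, and the obligation operator is defined via an accessibility relation.

        % Sketch of a shallow semantical embedding of a deontic logic in HOL.
        % Formulas have type i -> o (sets of worlds of type i); r : i -> i -> o is the
        % deontic accessibility relation (assumed serial).
        \neg\varphi         \;\equiv\; \lambda w.\ \neg(\varphi\, w)
        \qquad
        \varphi \vee \psi   \;\equiv\; \lambda w.\ \varphi\, w \vee \psi\, w
        \qquad
        \mathbf{O}\,\varphi \;\equiv\; \lambda w.\ \forall v.\ (r\, w\, v \rightarrow \varphi\, v)
        \qquad
        \mathrm{valid}\;\varphi \;\equiv\; \forall w.\ \varphi\, w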

    Inconsistency-tolerant Query Answering in Ontology-based Data Access

    Ontology-based data access (OBDA) is receiving great attention as a new paradigm for managing information systems through semantic technologies. According to this paradigm, a Description Logic ontology provides an abstract and formal representation of the domain of interest to the information system, and is used as a sophisticated schema for accessing the data and formulating queries over them. In this paper, we address the problem of dealing with inconsistencies in OBDA. Our general goal is both to study DL semantical frameworks that are inconsistency-tolerant, and to devise techniques for answering unions of conjunctive queries under such inconsistency-tolerant semantics. Our work is inspired by the approaches to consistent query answering in databases, which are based on the idea of living with inconsistencies in the database, but trying to obtain only consistent information during query answering, by relying on the notion of database repair. We first adapt the notion of database repair to our context, and show that, according to such a notion, inconsistency-tolerant query answering is intractable, even for very simple DLs. Therefore, we propose a different repair-based semantics, with the goal of reaching a good compromise between the expressive power of the semantics and the computational complexity of inconsistency-tolerant query answering. Indeed, we show that query answering under the new semantics is first-order rewritable in OBDA, even if the ontology is expressed in one of the most expressive members of the DL-Lite family.
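
    The classical repair notion the paper starts from can be illustrated with a toy, database-style example: repairs are the maximal consistent subsets of the data, and an answer is certain if it holds in every repair. The facts, the constraint and the brute-force enumeration below are illustrative assumptions, not the first-order-rewritable technique developed in the paper.

        from itertools import combinations

        # Toy fact set violating the constraint "each person has at most one birthplace".
        facts = {("bornIn", "alice", "Rome"), ("bornIn", "alice", "Paris"),
                 ("bornIn", "bob", "Berlin")}

        def consistent(s):
            places = {}
            for _, person, place in s:
                places.setdefault(person, set()).add(place)
            return all(len(p) == 1 for p in places.values())

        def repairs(facts):
            """Maximal consistent subsets of the facts (the classical database repairs)."""
            found = []
            for k in range(len(facts), -1, -1):
                for subset in combinations(facts, k):
                    s = set(subset)
                    if consistent(s) and not any(s < r for r in found):
                        found.append(s)
            return found

        # Facts holding in every repair are the inconsistency-tolerant ("certain") answers.
        print(set.intersection(*repairs(facts)))
        # {('bornIn', 'bob', 'Berlin')} -- alice's two conflicting facts are not certain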

    Description Logics as Ontology Languages for the Semantic Web

    The vision of a Semantic Web has recently drawn considerable attention, both from academia and industry. Description logics are often named as one of the tools that can support the Semantic Web and thus help to make this vision reality. In this paper, we describe what description logics are and what they can do for the Semantic Web. Description logics are very useful for defining, integrating, and maintaining ontologies, which provide the Semantic Web with a common understanding of the basic semantic concepts used to annotate Web pages. We also argue that, without the last decade of basic research in this area, description logics could not play such an important role in this domain.

    CONFIGEN: A tool for managing configuration options

    This paper introduces CONFIGEN, a tool that helps modularize software. CONFIGEN allows the developer to select a set of elementary components for their software through an interactive interface. Configuration files for use by C/assembly code and Makefiles are then automatically generated, and we successfully used it as a helper tool for complex system software refactoring. CONFIGEN is based on propositional logic, and its implementation faces hard theoretical problems.
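
    The propositional-logic core of such a tool can be sketched as follows; the component names, constraints and generated header are hypothetical illustrations, not CONFIGEN's actual encoding or interface.

        from itertools import product

        # Hypothetical components and propositional constraints between them.
        components = ["scheduler", "mmu", "net", "tcp"]
        constraints = [
            lambda s: (not s["tcp"]) or s["net"],        # tcp requires net
            lambda s: (not s["net"]) or s["scheduler"],  # net requires scheduler
        ]

        def valid_selections():
            """Enumerate component selections satisfying every constraint (brute force)."""
            for bits in product([False, True], repeat=len(components)):
                sel = dict(zip(components, bits))
                if all(c(sel) for c in constraints):
                    yield sel

        def emit_config_header(sel):
            """Render a C-style configuration header for the chosen components."""
            return "\n".join(f"#define CONFIG_{name.upper()} {int(on)}" for name, on in sel.items())

        choice = next(s for s in valid_selections() if s["tcp"])
        print(emit_config_header(choice))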