
    Personalizable Knowledge Integration

    Large repositories of data are used daily as knowledge bases (KBs) feeding computer systems that support decision making processes, such as in medical or financial applications. Unfortunately, the larger a KB is, the harder it is to ensure its consistency and completeness. The problem of handling KBs of this kind has been studied in the AI and databases communities, but most approaches focus on computing answers locally to the KB, assuming there is some single, epistemically correct solution. It is important to recognize that for some applications, as part of the decision making process, users consider far more knowledge than that which is contained in the knowledge base, and that sometimes inconsistent data may help in directing reasoning; for instance, inconsistency in taxpayer records can serve as evidence of possible fraud. Thus, the handling of this type of data needs to be context-sensitive, creating a synergy with the user in order to build useful, flexible data management systems. Inconsistent and incomplete information is ubiquitous and presents a substantial problem when trying to reason about the data: how can we derive an adequate model of the world, from the point of view of a given user, from a KB that may be inconsistent or incomplete? In this thesis we argue that in many cases users need to bring their application-specific knowledge to bear in order to inform the data management process. Therefore, we provide different approaches to handle, in a personalized fashion, some of the most common issues that arise in knowledge management. Specifically, we focus on (1) inconsistency management in relational databases, general knowledge bases, and a special kind of knowledge base designed for news reports; (2) management of incomplete information in the form of different types of null values; and (3) answering queries in the presence of uncertain schema matchings. We allow users to define policies to manage both inconsistent and incomplete information in their application in a way that takes into account both the user's knowledge of the problem and their attitude to error and risk. Using the frameworks and tools proposed here, users can specify when and how they want to manage or solve the issues that arise due to inconsistency and incompleteness in their data, in the way that best suits their needs.
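
    As a purely illustrative sketch of the kind of personalized policy the thesis argues for (Python; all names and signatures are ours, not the thesis's), a user-defined policy might map a detected conflict and the user's attitude to error/risk to an action:

        # Hypothetical sketch of a user-defined inconsistency-management policy;
        # names and signatures are invented for illustration only.
        from dataclasses import dataclass

        @dataclass
        class Conflict:
            attribute: str   # attribute on which the tuples disagree
            values: list     # the conflicting values observed

        def resolve(conflict: Conflict, risk_attitude: str = "averse"):
            """Map a conflict and the user's risk attitude to a repair action."""
            if risk_attitude == "averse":
                return ("reject", None)               # refuse to answer from dirty data
            if risk_attitude == "neutral":
                return ("keep-all", conflict.values)  # expose every candidate value
            return ("pick", conflict.values[0])       # tolerant: pick one heuristically

        action = resolve(Conflict("salary", [50000, 62000]), "neutral")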

    A Rational and Efficient Algorithm for View Revision in Databases

    The dynamics of belief and knowledge is one of the major components of any autonomous system that should be able to incorporate new pieces of information. In this paper, we argue that to apply the rationality results of belief dynamics theory to various practical problems, the theory should be generalized in two respects: first, it should allow a certain part of the belief state to be declared immutable; and second, the belief state need not be deductively closed. Such a generalization of belief dynamics, referred to as base dynamics, is presented, along with the concept of a generalized revision algorithm for Horn knowledge bases. We show that Horn knowledge base dynamics has interesting connections with kernel change and abduction. Finally, we also show that both variants are rational in the sense that they satisfy certain rationality postulates stemming from philosophical works on belief dynamics.
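
    The connection to kernel change can be made concrete with a brute-force sketch (Python). This is the textbook kernel-contraction recipe, not the paper's algorithm: collect the minimal subsets of the base that entail a formula, and let an incision function choose what to remove from each. The immutable part of the base could be modeled by an incision that never selects protected rules.

        from itertools import combinations

        def closure(base, facts=frozenset()):
            """Forward chaining over Horn rules (body is a frozenset of atoms)."""
            derived, changed = set(facts), True
            while changed:
                changed = False
                for body, head in base:
                    if body <= derived and head not in derived:
                        derived.add(head)
                        changed = True
            return derived

        def kernels(base, alpha):
            """All minimal subsets of `base` that entail the atom `alpha`."""
            ks = []
            for r in range(1, len(base) + 1):
                for sub in combinations(base, r):
                    if alpha in closure(sub) and not any(set(k) <= set(sub) for k in ks):
                        ks.append(sub)
            return ks

        def contract(base, alpha, incision=lambda kernel: {kernel[0]}):
            """Remove from every alpha-kernel the rules chosen by the incision."""
            cut = set()
            for k in kernels(base, alpha):
                cut |= incision(k)
            return [rule for rule in base if rule not in cut]

        # Facts are rules with empty bodies; contracting "q" cuts the fact "p".
        B = [(frozenset(), "p"), (frozenset({"p"}), "q")]
        print(contract(B, "q"))   # [(frozenset({'p'}), 'q')]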

    Grundlagen der Anfrageverarbeitung beim relationalen Datenaustausch (Foundations of Query Processing in Relational Data Exchange)

    Relational data exchange deals with translating relational data according to a given specification. This problem is one of the many tasks that arise in data integration, for example, in data restructuring, in ETL (Extract-Transform-Load) processes used for updating data warehouses, or in data exchange between different, possibly independently created, applications. Systems for relational data exchange have existed for several decades. Motivated by their experiences with one of those systems, Fagin, Kolaitis, Miller, and Popa (2003) studied fundamental and algorithmic issues arising in relational data exchange. One of these issues is how to answer queries that are posed against the target schema (i.e., against the result of the data exchange) so that the answers are consistent with the source data. For monotonic queries, the certain answers semantics proposed by Fagin, Kolaitis, Miller, and Popa (2003) is appropriate. For many non-monotonic queries, however, the certain answers semantics was shown to yield counter-intuitive results. This thesis deals, on the one hand, with computing the certain answers for monotonic queries and, on the other hand, with the question of which semantics are appropriate for answering non-monotonic queries and how hard it is to evaluate non-monotonic queries under these semantics. As shown by Fagin, Kolaitis, Miller, and Popa (2003), computing the certain answers for unions of conjunctive queries - a subclass of the monotonic queries - basically reduces to computing universal solutions, provided the data transformation is specified by a set of tgds (tuple-generating dependencies) and egds (equality-generating dependencies). If M is such a specification and S is a source database, then T is called a solution for S under M if T is a possible result of translating S according to M. Intuitively, universal solutions are most general solutions. Since the above-mentioned work by Fagin, Kolaitis, Miller, and Popa, it had been open whether it is decidable if a source database has a universal solution under a given data exchange specification. In this thesis, we show that this problem is undecidable. More precisely, we construct a specification M consisting of tgds only such that it is undecidable whether a given source database has a universal solution under M. From the proof it also follows that it is undecidable whether the chase procedure - by which universal solutions can be obtained - terminates on a given source database and the set of tgds in M. These results in particular strengthen results of Deutsch, Nash, and Remmel (2008). Concerning the issue of which semantics are appropriate for answering non-monotonic queries, we study several semantics for such queries. All of these semantics are based on the closed world assumption (CWA). First, the CWA-semantics of Libkin (2006) are extended so that they can be applied to specifications consisting of tgds and egds. The key is to extend the concept of CWA-solution, on which the CWA-semantics are based. CWA-solutions are characterized as universal solutions that are derivable from the source database using a suitably controlled version of the chase procedure. In particular, if CWA-solutions exist, then there is a minimal CWA-solution that is unique up to isomorphism: the core of the universal solutions, introduced by Fagin, Kolaitis, and Popa (2003). We show that evaluating a query under some of the CWA-semantics reduces to computing the certain answers to the query on the minimal CWA-solution.
    The CWA-semantics resolve some of the known problems with answering non-monotonic queries. There are, however, two natural properties that the CWA-semantics do not possess. On the one hand, queries may be answered differently with respect to data exchange specifications that are logically equivalent. On the other hand, there are queries whose answer under the CWA-semantics intuitively contradicts the information derivable from the source database and the data exchange specification. To find an alternative semantics, we first test several CWA-based semantics from the area of deductive databases for their suitability for non-monotonic query answering in relational data exchange. More precisely, we focus on the CWA-semantics of Reiter (1978), the GCWA-semantics (Minker 1982), the EGCWA-semantics (Yahya, Henschen 1985), and the PWS-semantics (Chan 1993). It turns out that these semantics are either too weak or too strong, or do not possess the desired properties. Finally, based on the GCWA-semantics, we develop the GCWA*-semantics, which intuitively possesses the desired properties. For monotonic queries, some of the CWA-semantics as well as the GCWA*-semantics coincide with the certain answers semantics, that is, results obtained for the certain answers semantics carry over to those semantics. When studying the complexity of evaluating non-monotonic queries under the above-mentioned semantics, we focus on the data complexity, that is, the complexity when the data exchange specification and the query are fixed. We show that in many cases, evaluating non-monotonic queries is hard: co-NP- or NP-complete, or even undecidable. For example, evaluating conjunctive queries with at least one negative literal under simple specifications may be co-NP-hard. Notice, however, that this result only says that there is such a query and such a specification for which the problem is hard, not that the problem is hard for all such queries and specifications. On the other hand, we identify a broad class of queries - the class of universal queries - that can be evaluated in polynomial time under the GCWA*-semantics, provided the data exchange specification is suitably restricted. More precisely, we show that universal queries can be evaluated in polynomial time on the core of the universal solutions, independent of the source database and the specification.
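
    To make the central objects concrete, here is a toy standard chase for tgds (Python; the representation and all names are ours). Atoms are (relation, argument-tuple) pairs whose arguments are all treated as variables; existential variables receive fresh labeled nulls. As the undecidability result above implies, such a loop need not terminate in general, hence the step bound.

        def matches(atoms, instance, subst=None):
            """Enumerate substitutions mapping the atoms into the instance."""
            subst = subst or {}
            if not atoms:
                yield dict(subst)
                return
            rel, args = atoms[0]
            for frel, fargs in instance:
                if frel != rel or len(fargs) != len(args):
                    continue
                s, ok = dict(subst), True
                for a, v in zip(args, fargs):
                    if a in s and s[a] != v:
                        ok = False
                        break
                    s[a] = v
                if ok:
                    yield from matches(atoms[1:], instance, s)

        def chase(instance, tgds, max_steps=100):
            """Fire unsatisfied tgd triggers until none is left (or give up)."""
            instance, fresh = set(instance), 0
            for _ in range(max_steps):
                fired = False
                for body, head in tgds:
                    for s in list(matches(body, instance)):
                        if any(True for _ in matches(head, instance, dict(s))):
                            continue                  # trigger already satisfied
                        for rel, args in head:        # fire: invent labeled nulls
                            row = []
                            for a in args:
                                if a not in s:
                                    fresh += 1
                                    s[a] = f"_N{fresh}"
                                row.append(s[a])
                            instance.add((rel, tuple(row)))
                        fired = True
                if not fired:
                    return instance                   # a universal solution
            raise RuntimeError("chase did not terminate within max_steps")

        # Emp(x) -> exists d: Dept(x, d)
        tgd = ([("Emp", ("x",))], [("Dept", ("x", "d"))])
        print(chase({("Emp", ("alice",))}, [tgd]))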

    Storing and Querying Probabilistic XML Using a Probabilistic Relational DBMS

    This work explores the feasibility of storing and querying probabilistic XML in a probabilistic relational database. Our approach is to adapt known techniques for mapping XML to relational data such that the possible worlds are preserved. We show that this approach can work for any XML-to-relational technique by adapting a representative schema-based technique (inlining) as well as a representative schemaless technique (XPath Accelerator). We investigate the maturity of probabilistic relational databases for this task through experiments with one of the state-of-the-art systems, Trio.
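
    A minimal sketch of the schemaless direction (Python; simplified to independently existing nodes, and entirely our own encoding rather than the paper's or Trio's): each node becomes one row in an XPath-Accelerator-like pre/size/level table, and keeping the node's conditional probability in an extra column preserves the set of possible worlds.

        def encode(node, pre=0, level=0, rows=None):
            """node = (tag, prob, children); emits rows (pre, size, level, tag, prob)."""
            rows = [] if rows is None else rows
            tag, prob, children = node
            slot = len(rows)
            rows.append(None)             # reserve; the size is known after recursion
            nxt = pre + 1
            for child in children:
                nxt = encode(child, nxt, level + 1, rows)
            rows[slot] = (pre, nxt - pre - 1, level, tag, prob)
            return nxt

        doc = ("person", 1.0, [("name", 1.0, []), ("phone", 0.3, [])])
        rows = []
        encode(doc, rows=rows)
        # row (2, 0, 1, 'phone', 0.3): the phone node exists in 30% of the worlds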

    Datalog± Ontology Consolidation

    Knowledge bases in the form of ontologies are receiving increasing attention as they allow one to clearly represent the available knowledge, which includes both the knowledge itself and the constraints imposed on it by the domain or the users. In particular, Datalog± ontologies are attractive because of their decidability properties and their ability to deal with the massive amounts of data found in real-world environments; however, as is the case with many other ontological languages, their application in collaborative environments often leads to inconsistency-related issues. In this paper we introduce the notion of incoherence for Datalog± ontologies, in terms of the satisfiability of sets of constraints, and show how under specific conditions incoherence leads to inconsistent Datalog± ontologies. The main contribution of this work is a novel approach to restoring both consistency and coherence in Datalog± ontologies. The proposed approach is based on kernel contraction, and restoration is performed by the application of incision functions that select formulas to delete. Nevertheless, instead of working over the minimal incoherent/inconsistent sets found in the ontologies, our operators produce incisions over non-minimal structures called clusters. We present a construction for consolidation operators, along with the properties they are expected to satisfy. Finally, we establish the relation between the construction and the properties by means of a representation theorem. Although this proposal is presented for the consolidation of Datalog± ontologies, the operators can be applied to other ontological languages, such as Description Logics, making them apt for use in collaborative environments like the Semantic Web.
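
    A toy rendering of the cluster idea (Python; the names are ours and the paper's operators are more refined than this): overlapping minimal incoherent/inconsistent sets are merged into clusters, and the incision then cuts inside each cluster rather than inside each minimal set.

        def clusters(kernels):
            """Merge overlapping kernels (connected components of the overlap graph)."""
            groups = []
            for k in map(set, kernels):
                merged, rest = k, []
                for g in groups:
                    if g & merged:
                        merged |= g       # overlapping group: absorb it
                    else:
                        rest.append(g)
                groups = rest + [merged]
            return groups

        def consolidate(ontology, kernels, incision):
            """Delete from each cluster the formulas chosen by the incision."""
            cut = set()
            for c in clusters(kernels):
                cut |= incision(c)
            return [f for f in ontology if f not in cut]

        # The overlapping conflicts {a,b} and {b,c} form the single cluster {a,b,c}.
        print(consolidate(["a", "b", "c", "d"], [{"a", "b"}, {"b", "c"}],
                          lambda c: {"b"}))   # ['a', 'c', 'd']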

    Answering Non-Monotonic Queries in Relational Data Exchange

    Relational data exchange is the problem of translating relational data from a source schema into a target schema, according to a specification of the relationship between the source data and the target data. One of the basic issues is how to answer queries that are posed against target data. While consensus has been reached on the definitive semantics for monotonic queries, this issue turned out to be considerably more difficult for non-monotonic queries. Several semantics for non-monotonic queries have been proposed in the past few years. This article proposes a new semantics for non-monotonic queries, called the GCWA*-semantics. It is inspired by semantics from the area of deductive databases. We show that the GCWA*-semantics coincides with the standard open world semantics on monotonic queries, and we further explore the (data) complexity of evaluating non-monotonic queries under the GCWA*-semantics. In particular, we introduce a class of schema mappings for which universal queries can be evaluated under the GCWA*-semantics in polynomial time (data complexity) on the core of the universal solutions.
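
    For the monotonic case, where the GCWA*-semantics agrees with the open world semantics, a universal solution makes the certain answers easy to compute: evaluate the query naively on the solution and discard every answer tuple containing a labeled null. A minimal sketch (Python; `naive_answers` stands for any CQ evaluator over the solution, e.g. the matcher in the chase sketch above, and the null-naming convention is ours):

        def certain(naive_answers,
                    is_null=lambda v: isinstance(v, str) and v.startswith("_N")):
            """Keep only null-free answer tuples: the certain answers for unions of CQs."""
            return {t for t in naive_answers if not any(is_null(v) for v in t)}

        print(certain({("alice", "_N1"), ("alice", "bob")}))   # {('alice', 'bob')}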

    Cleaning data with Llunatic

    Data cleaning (or data repairing) is considered a crucial problem in many database-related tasks. It consists in making a database consistent with respect to a given set of constraints. In recent years, repairing methods have been proposed for several classes of constraints. These methods, however, tend to hard-code the strategy used to repair conflicting values and are specialized toward specific classes of constraints. In this paper, we develop a general chase-based repairing framework, referred to as Llunatic, in which repairs can be obtained for a large class of constraints and by using different strategies to select preferred values. The framework is based on an elegant formalization in terms of labeled instances and partially ordered preference labels. In this context, we revisit concepts such as upgrades, repairs, and the chase. In Llunatic, various repairing strategies can be slotted in without changing the underlying implementation. Furthermore, Llunatic is the first data repairing system that is DBMS-based. We report experimental results that confirm its good scalability and show that various instantiations of the framework produce repairs of good quality.
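
    A toy version of the preference-label idea (Python; the labels, the order, and the fallback are ours, not Llunatic's actual API): candidate values for a cell carry labels, a user-supplied partial order says which label may overwrite which, and when no candidate dominates the others the cell is upgraded to a labeled null.

        # (low, high) pairs: a value labeled `low` may be overwritten by `high`.
        ORDER = {("dirty", "curated"), ("dirty", "master"), ("curated", "master")}

        def preferred(candidates):
            """Return the unique undominated (value, label) pair, else a labeled null."""
            tops = [c for c in candidates
                    if not any((c[1], o[1]) in ORDER for o in candidates)]
            if len(tops) == 1:
                return tops[0]            # a single maximally preferred value wins
            return (None, "null")         # no winner: upgrade to a labeled null

        print(preferred([("32k", "dirty"), ("36k", "master")]))   # ('36k', 'master')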