Computing Possible and Certain Answers over Order-Incomplete Data
This paper studies the complexity of query evaluation for databases whose
relations are partially ordered; the problem commonly arises when combining or
transforming ordered data from multiple sources. We focus on queries in a
useful fragment of SQL, namely positive relational algebra with aggregates,
whose bag semantics we extend to the partially ordered setting. Our semantics
leads to the study of two main computational problems: the possibility and
certainty of query answers. We show that these problems are respectively
NP-complete and coNP-complete, but identify tractable cases depending on the
query operators or input partial orders. We further introduce a duplicate
elimination operator and study its effect on the complexity results.
Comment: 55 pages, 56 references. Extended journal version of arXiv:1707.07222. Up to the stylesheet, page/environment numbering, and possible minor publisher-induced changes, this is the exact content of the journal paper that will appear in Theoretical Computer Science.
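To make the two decision problems concrete, here is a brute-force illustration (not the paper's algorithms, and exponential in the input size): a tuple is a possible first answer if some linear extension of the partial order places it first, and a certain first answer if every linear extension does.

```python
# Brute-force possibility/certainty of "x comes first" over all linear
# extensions of a small partial order (illustration only, not the paper's
# polynomial-time cases).
from itertools import permutations

def linear_extensions(items, less_than):
    """Yield every total order of `items` consistent with the partial
    order, given as a set of (a, b) pairs meaning a must precede b."""
    for perm in permutations(items):
        pos = {x: i for i, x in enumerate(perm)}
        if all(pos[a] < pos[b] for a, b in less_than):
            yield perm

def possible_first(x, items, less_than):
    # possible: SOME linear extension starts with x
    return any(ext[0] == x for ext in linear_extensions(items, less_than))

def certain_first(x, items, less_than):
    # certain: ALL linear extensions start with x
    return all(ext[0] == x for ext in linear_extensions(items, less_than))

items = {"a", "b", "c"}
order = {("a", "c")}                      # a before c; b unordered
print(possible_first("b", items, order))  # True: "bac" is a valid extension
print(certain_first("a", items, order))   # False: "bac" does not start with a
```

The NP-/coNP-hardness results say that, in general, no algorithm is expected to do fundamentally better than such exhaustive search over linear extensions.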
Coherent Integration of Databases by Abductive Logic Programming
We introduce an abductive method for the coherent integration of independent
data sources. The idea is to compute a list of data facts that should be
inserted into the amalgamated database, or retracted from it, in order to
restore its consistency. This method is implemented by an abductive solver,
called Asystem, that applies SLDNFA-resolution on a meta-theory that relates
the different, possibly contradicting, input databases. We also give a purely
model-theoretic analysis of the possible ways to `recover' consistent data from
an inconsistent database, in terms of those models of the database that exhibit
as little inconsistent information as reasonably possible. This allows us to
characterize the `recovered databases' in terms of the `preferred' (i.e., most
consistent) models of the theory. The outcome is an abduction-based application
that is sound and complete with respect to a corresponding model-based,
preferential semantics, and -- to the best of our knowledge -- is more
expressive (thus more general) than any other implementation of coherent
database integration.
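The core idea, computing a minimal set of retractions that restores consistency, can be illustrated with a toy brute-force search (this is not the Asystem solver, which uses SLDNFA-resolution rather than enumeration):

```python
# Toy repair search: find all smallest sets of facts whose retraction
# satisfies every integrity constraint. Constraints are modelled as
# predicates db -> bool (True = satisfied); all names are illustrative.
from itertools import combinations

def minimal_repairs(facts, constraints):
    """Return all minimum-size subsets of `facts` whose removal makes
    every constraint hold on the remaining database."""
    facts = list(facts)
    for k in range(len(facts) + 1):
        repairs = [set(c) for c in combinations(facts, k)
                   if all(ic(set(facts) - set(c)) for ic in constraints)]
        if repairs:
            return repairs  # all repairs of the minimum size k
    return []

# Two amalgamated sources disagree: p and not_p cannot hold together.
db = {"p", "not_p", "q"}
ic = [lambda d: not ({"p", "not_p"} <= d)]
print(minimal_repairs(db, ic))  # two singleton repairs: drop 'p' or 'not_p'
```

Each returned repair corresponds to one `preferred' (most consistent) model in the abstract's terminology: a way of keeping as much of the original data as possible while eliminating the contradiction.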
Incremental Interpretation: Applications, Theory, and Relationship to Dynamic Semantics
Why should computers interpret language incrementally? In recent years
psycholinguistic evidence for incremental interpretation has become more and
more compelling, suggesting that humans perform semantic interpretation before
constituent boundaries, possibly word by word. However, possible computational
applications have received less attention. In this paper we consider various
potential applications, in particular graphical interaction and dialogue. We
then review the theoretical and computational tools available for mapping from
fragments of sentences to fully scoped semantic representations. Finally, we
tease apart the relationship between dynamic semantics and incremental
interpretation.
Comment: Procs. of COLING 94, LaTeX (2.09 preferred), 8 pages.
StemNet: An Evolving Service for Knowledge Networking in the Life Sciences
Up until now, crucial life science information resources, whether bibliographic or factual databases, have been isolated from each other. Moreover, the semantic metadata intended to structure their contents is supplied manually only. In the StemNet project we aim at developing a framework for semantic interoperability between these resources. This will facilitate the extraction of relevant information from textual sources and the generation of semantic metadata in a fully automatic manner. In this way, (from a computational perspective) unstructured life science documents are linked to structured biological fact databases, in particular to the identifiers of genes, proteins, etc. Thus, life scientists will be able to seamlessly access information from a homogeneous platform, despite the fact that the original information was unlinked and scattered across a wide variety of heterogeneous life science information resources, and therefore almost inaccessible for integrated, systematic search by academic, clinical, or industrial users.
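The linking step described above, mapping gene/protein mentions in free text to database identifiers, can be sketched as a normalised dictionary lookup (a deliberately minimal illustration, not StemNet's actual pipeline; the lexicon entries below are illustrative):

```python
# Minimal entity-linking sketch: match tokens against a lexicon of known
# surface forms and return the associated database identifiers.
import re

# Illustrative lexicon: surface forms -> Entrez-style gene identifiers.
LEXICON = {
    "tp53": "GENE:7157",
    "p53": "GENE:7157",
    "brca1": "GENE:672",
}

def link_entities(text):
    """Return (mention, identifier) pairs for every lexicon hit in `text`."""
    hits = []
    for token in re.findall(r"[A-Za-z0-9]+", text):
        ident = LEXICON.get(token.lower())
        if ident is not None:
            hits.append((token, ident))
    return hits

print(link_entities("Mutations in TP53 and BRCA1 are well studied."))
# [('TP53', 'GENE:7157'), ('BRCA1', 'GENE:672')]
```

A production system would of course need disambiguation (the same symbol can name different genes in different species) and fuzzy matching, which is where the automatic metadata generation the abstract describes becomes hard.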
The DLV System for Knowledge Representation and Reasoning
This paper presents the DLV system, which is widely considered the
state-of-the-art implementation of disjunctive logic programming, and addresses
several aspects. As for problem solving, we provide a formal definition of its
kernel language, function-free disjunctive logic programs (also known as
disjunctive datalog), extended by weak constraints, which are a powerful tool
to express optimization problems. We then illustrate the usage of DLV as a tool
for knowledge representation and reasoning, describing a new declarative
programming methodology which allows one to encode complex problems (up to
Δ^P_3-complete problems) in a declarative fashion. On the foundational
side, we provide a detailed analysis of the computational complexity of the
language of DLV, and by deriving new complexity results we chart a complete
picture of the complexity of this language and important fragments thereof.
Furthermore, we illustrate the general architecture of the DLV system which
has been influenced by these results. As for applications, we overview
application front-ends which have been developed on top of DLV to solve
specific knowledge representation tasks, and we briefly describe the main
international projects investigating the potential of the system for industrial
exploitation. Finally, we report about thorough experimentation and
benchmarking, which has been carried out to assess the efficiency of the
system. The experimental results confirm the solidity of DLV and highlight its
potential for emerging application areas like knowledge management and
information integration.
Comment: 56 pages, 9 figures, 6 tables.
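The declarative "guess and check" methodology the abstract refers to is easiest to see on the classic 3-colourability example. DLV would express it as a disjunctive rule (roughly `col(X,r) v col(X,g) v col(X,b) :- node(X).`) plus a constraint forbidding equal colours on an edge; the Python below only mimics that pattern by brute force and is not how DLV evaluates programs:

```python
# Guess-and-check, mimicked naively: guess one colour per node (the
# disjunctive rule), keep assignments passing the edge check (the constraint).
from itertools import product

def three_colourings(nodes, edges):
    """Yield every node -> colour assignment in which no edge connects
    two nodes of the same colour."""
    for colours in product("rgb", repeat=len(nodes)):
        col = dict(zip(nodes, colours))
        if all(col[a] != col[b] for a, b in edges):  # the 'check' part
            yield col

nodes = ["n1", "n2", "n3"]
edges = [("n1", "n2"), ("n2", "n3")]
print(next(three_colourings(nodes, edges)))
# {'n1': 'r', 'n2': 'g', 'n3': 'r'}
```

In DLV the same problem is a three-line program, and the solver's grounding and model-search machinery replaces the exhaustive enumeration shown here.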