521 research outputs found
An algebraic formulation of the aggregative closure query
The aggregative closure problem, a transitive closure problem with aggregations over transitive paths, is formally defined in database terms. Our definition requires only a subset of the conditions of path algebra, and is therefore more general than the definitions in previous works. To complete the definition, we suggest conditions for the existence of the fixpoint and classify them as properties of the aggregate operators and of the problem domain, so that the existence of the fixpoint can be verified from these conditions. The naive algorithm is proposed as a computational semantics for the aggregative closure problem. This study also proves that, for an aggregative closure problem, the semi-naive algorithm is computationally equivalent to the naive algorithm when the aggregate product operator is distributive over the aggregate sum operator.
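A minimal sketch of the naive algorithm the abstract mentions, instantiated for the classic shortest-path case of aggregative closure: aggregate sum = min, aggregate product = +, over which + distributes (min(a, b) + c = min(a + c, b + c)). The function name and representation are ours, not the paper's.

```python
def naive_closure(edges):
    """edges: dict mapping (u, v) -> weight. Iterates to the fixpoint of
    the shortest-path aggregative closure (naive evaluation)."""
    dist = dict(edges)
    while True:
        changed = False
        for (u, v), w1 in list(dist.items()):
            for (x, y), w2 in edges.items():
                if v == x:                        # extend path u->v by edge v->y
                    cand = w1 + w2                # aggregate product: +
                    if cand < dist.get((u, y), float("inf")):
                        dist[(u, y)] = cand       # aggregate sum: min
                        changed = True
        if not changed:                           # fixpoint reached
            return dist

edges = {("a", "b"): 1, ("b", "c"): 2, ("a", "c"): 10}
print(naive_closure(edges))  # the ("a", "c") entry improves from 10 to 3
```

The semi-naive variant would re-derive only from paths newly improved in the previous round; by the distributivity result in the abstract, it reaches the same fixpoint.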
Knowledge Rich Natural Language Queries over Structured Biological Databases
Increasingly, keyword, natural language and NoSQL queries are being used for
information retrieval from traditional as well as non-traditional databases
such as web, document, image, GIS, legal, and health databases. While their
popularity is undeniable for obvious reasons, their engineering is far from
simple. For the most part, a semantics- and intent-preserving mapping of a
well-understood natural language query expressed over a structured database
schema to a structured query language is still a difficult task, and research
to tame the complexity is intense. In this paper, we propose a multi-level
knowledge-based middleware to facilitate such mappings that separates the
conceptual level from the physical level. We augment these multi-level
abstractions with a concept reasoner and a query strategy engine to dynamically
link arbitrary natural language querying to well-defined structured queries. We
demonstrate the feasibility of our approach by presenting a Datalog-based
prototype system, called BioSmart, that can compute responses to arbitrary
natural language queries over arbitrary databases once a syntactic
classification of the natural language query is made.
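A toy sketch of the pipeline idea the abstract describes: once a natural-language query is syntactically classified, a structured-query template for that class is instantiated, with the conceptual level (templates over concepts) kept separate from the physical level (actual relation names). All names and templates here are ours, not BioSmart's.

```python
# Conceptual level: one Datalog-style template per syntactic query class.
TEMPLATES = {
    "entity-by-property": "answer(X) :- {concept}(X, P), P = '{value}'.",
}

# Physical level: binding of conceptual names to physical relations.
PHYSICAL_MAP = {"gene": "gene_table"}

def to_structured(query_class, concept, value):
    """Map a classified natural-language query to a structured query."""
    template = TEMPLATES[query_class]                  # conceptual mapping
    relation = PHYSICAL_MAP.get(concept, concept)      # physical binding
    return template.format(concept=relation, value=value)

print(to_structured("entity-by-property", "gene", "BRCA1"))
# answer(X) :- gene_table(X, P), P = 'BRCA1'.
```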
Computable queries for relational data bases
The concept of “reasonable” queries on relational data bases is investigated. We provide an abstract characterization of the class of queries which are computable, and define the completeness of a query language as the property of being precisely powerful enough to express the queries in this class. This definition is then compared with other proposals for measuring the power of query languages. Our main result is the completeness of a simple programming language which can be thought of as consisting of the relational algebra augmented with the power of iteration.
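The phrase "relational algebra augmented with the power of iteration" can be illustrated with the textbook example of transitive closure: a query that plain relational algebra cannot express, but that a while-loop over algebra operations (composition and union) computes. A minimal sketch with relations as sets of tuples:

```python
def join_compose(r, s):
    """Relational composition: {(a, c) | (a, b) in r and (b, c) in s}."""
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

def transitive_closure(edge):
    """Iterate union-with-composition until a fixpoint is reached."""
    closure = set(edge)
    while True:
        extended = closure | join_compose(closure, edge)  # algebra ops only
        if extended == closure:                           # iteration stops
            return closure
        closure = extended

edge = {(1, 2), (2, 3)}
print(transitive_closure(edge))  # gains (1, 3) via composition
```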
Querying the Unary Negation Fragment with Regular Path Expressions
The unary negation fragment of first-order logic (UNFO) has recently been proposed as a generalization of modal logic that shares many of its good computational and model-theoretic properties. It is attractive from the perspective of database theory because it can express conjunctive queries (CQs) and ontologies formulated in many description logics (DLs). Both are relevant for ontology-mediated querying and, in fact, CQ evaluation under UNFO ontologies (and thus also under DL ontologies) can be 'expressed' in UNFO as a satisfiability problem. In this paper, we consider the natural extension of UNFO with regular expressions on binary relations. The resulting logic UNFOreg can express (unions of) conjunctive two-way regular path queries (C2RPQs) and ontologies formulated in DLs that include transitive roles and regular expressions on roles. Our main results are that evaluating C2RPQs under UNFOreg ontologies is decidable, 2ExpTime-complete in combined complexity, and coNP-complete in data complexity, and that satisfiability in UNFOreg is 2ExpTime-complete, thus not harder than in UNFO.
Logic and the Challenge of Computer Science
https://deepblue.lib.umich.edu/bitstream/2027.42/154161/1/39015099114889.pd
Node Query Preservation for Deterministic Linear Top-Down Tree Transducers
This paper discusses the decidability of node query preservation problems for
XML document transformations. We assume a transformation given by a
deterministic linear top-down data tree transducer (abbreviated as DLT^V) and
an n-ary query based on runs of a tree automaton. We say that a DLT^V Tr
strongly preserves a query Q if there is a query Q' such that for every
document t, the answer set of Q' for Tr(t) is equal to the answer set of Q for
t. We also say that Tr weakly preserves Q if there is a query Q' such that for
every t_d in the range of Tr, the answer set of Q' for t_d is equal to the
union of the answer sets of Q for every t such that t_d = Tr(t). We show that
the weak preservation problem is coNP-complete and the strong preservation
problem is in 2-EXPTIME.
Comment: In Proceedings TTATT 2013, arXiv:1311.505
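A toy illustration (not the paper's formalism) of strong query preservation: a deterministic linear top-down transformation that relabels nodes, together with a query Q over source trees and a translated query Q' over output trees whose answer sets coincide. Trees are (label, children) pairs and answers are root-to-node index paths; all names are ours.

```python
def rename(tree):
    """Relabel 'a' -> 'b' top-down; each subtree is used once (linear)."""
    label, children = tree
    new_label = "b" if label == "a" else label
    return (new_label, [rename(c) for c in children])

def query_a(tree, path=()):
    """Q: paths of nodes labeled 'a' in the source tree."""
    label, children = tree
    hits = [path] if label == "a" else []
    for i, c in enumerate(children):
        hits += query_a(c, path + (i,))
    return hits

def query_b(tree, path=()):
    """Q': paths of nodes labeled 'b' -- the translated query."""
    label, children = tree
    hits = [path] if label == "b" else []
    for i, c in enumerate(children):
        hits += query_b(c, path + (i,))
    return hits

# For a source tree without 'b' labels, Q' on the output equals Q on the input.
t = ("a", [("c", []), ("a", [])])
print(query_a(t), query_b(rename(t)))  # identical answer sets
```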
The posterity of Zadeh's 50-year-old paper: A retrospective in 101 Easy Pieces – and a Few More
This article was commissioned by the 22nd IEEE International Conference on Fuzzy Systems (FUZZ-IEEE) to celebrate the 50th Anniversary of Lotfi Zadeh's seminal 1965 paper on fuzzy sets. In addition to Lotfi's original paper, this note itemizes 100 citations of books and papers deemed “important (significant, seminal, etc.)” by 20 of the 21 living IEEE CIS Fuzzy Systems pioneers. Each of the 20 contributors supplied 5 citations, and Lotfi's paper makes the overall list a tidy 101, as in “Fuzzy Sets 101”. This note is not a survey in any real sense of the word, but the contributors did offer short remarks to indicate the reason for inclusion (e.g., historical, topical, seminal, etc.) of each citation. Citation statistics are easy to find and notoriously erroneous, so we refrain from reporting them - almost. The exception is that according to Google scholar on April 9, 2015, Lotfi's 1965 paper has been cited 55,479 times.
Remove-Win: a Design Framework for Conflict-free Replicated Data Collections
Internet-scale distributed systems often replicate data within and across
data centers to provide low latency and high availability despite node and
network failures. Replicas are required to accept updates without coordination
with each other, and the updates are then propagated asynchronously. This
brings the issue of conflict resolution among concurrent updates, which is
often challenging and error-prone. The Conflict-free Replicated Data Type
(CRDT) framework provides a principled approach to address this challenge.
This work focuses on a special type of CRDT, namely the Conflict-free
Replicated Data Collection (CRDC), e.g. list and queue. The CRDC can have
complex and compound data items, which are organized in structures of rich
semantics. Complex CRDCs can greatly ease the development of upper-layer
applications, but they also make conflict resolution notoriously difficult.
This explains why existing CRDC designs are tricky and hard to generalize
to other data types. A design framework is greatly needed to guide the
systematic design of new CRDCs.
To address the challenges above, we propose the Remove-Win Design Framework.
The remove-win strategy for conflict resolution is simple but powerful. The
remove operation just wipes out the data item, no matter how complex the value
is. The user of the CRDC only needs to specify conflict resolution for
non-remove operations. This resolution is deconstructed into three basic cases,
which are left as open terms in the CRDC design skeleton. Stubs containing
user-specified conflict resolution logics are plugged into the skeleton to
obtain concrete CRDC designs. We demonstrate the effectiveness of our design
framework via a case study of designing a conflict-free replicated priority
queue. Performance measurements also show the efficiency of the design derived
from our design framework.
Comment: revised after submission
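A minimal, state-based sketch of the remove-win idea (our simplification, not the paper's framework): each replica records, per element, the latest add and remove timestamps; on merge, a remove that is concurrent with or newer than an add wins, wiping the element regardless of how complex its value is. Simple integer timestamps stand in for the paper's causality tracking.

```python
class RemoveWinSet:
    """State-based set CRDT where remove wins over a concurrent add."""

    def __init__(self):
        self.adds = {}     # element -> latest logical add timestamp
        self.removes = {}  # element -> latest logical remove timestamp

    def add(self, elem, ts):
        self.adds[elem] = max(ts, self.adds.get(elem, 0))

    def remove(self, elem, ts):
        # remove just wipes the element; no value inspection needed
        self.removes[elem] = max(ts, self.removes.get(elem, 0))

    def merge(self, other):
        # commutative, associative, idempotent: pointwise max of timestamps
        for e, ts in other.adds.items():
            self.adds[e] = max(ts, self.adds.get(e, 0))
        for e, ts in other.removes.items():
            self.removes[e] = max(ts, self.removes.get(e, 0))

    def value(self):
        # present only if the add strictly dominates every observed remove,
        # so remove wins on ties (i.e. on concurrent add/remove)
        return {e for e, ts in self.adds.items()
                if ts > self.removes.get(e, 0)}

r1, r2 = RemoveWinSet(), RemoveWinSet()
r1.add("x", 1)
r2.add("x", 1); r2.remove("x", 1)   # concurrent with r1's add
r1.merge(r2)
print(r1.value())  # remove wins -> set()
```

Conflict resolution for non-remove operations (here, only `add`) is the single "open term" a user of this skeleton would refine; this is the plug-in structure the abstract describes.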