Space-filling Curves for High-performance Data Mining
Space-filling curves such as the Hilbert curve, the Peano curve, and the Z-order
curve map natural or real numbers from a two- or higher-dimensional space to a
one-dimensional space while preserving locality. They have numerous applications,
including search structures, computer graphics, numerical simulation, and
cryptography, and they can be used to make various algorithms cache-oblivious.
In this paper, we describe some details of the Hilbert curve. We define the
Hilbert curve in terms of a Mealy-type finite automaton that determines the
Hilbert order value from a two-dimensional coordinate, and vice versa, in a
logarithmic number of steps. We further define a context-free grammar that
generates the whole curve in time linear in the number of generated
coordinate/order-value pairs, i.e. in constant time per coordinate pair or order
value. We also review two different strategies that enable the generation of
curves without the usual restriction to square-like grids whose side length is a
power of two. Finally, we elaborate on a few applications, namely matrix
multiplication, Cholesky decomposition, the Floyd-Warshall algorithm, k-Means
clustering, and the similarity join.
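The abstract does not reproduce the automaton itself, but the mapping it computes
can be illustrated with the standard bit-manipulation formulation of the Hilbert
mapping, sketched below in Python (the name xy_to_hilbert and its interface are
ours, not the paper's). Like the automaton formulation, it takes one step per bit
level, i.e. a logarithmic number of steps in the grid side length n:

    def xy_to_hilbert(n, x, y):
        # Map a point (x, y) on an n x n grid (n a power of two) to its
        # Hilbert order value, processing one bit level per iteration.
        d = 0
        s = n // 2
        while s > 0:
            rx = 1 if x & s else 0
            ry = 1 if y & s else 0
            d += s * s * ((3 * rx) ^ ry)
            if ry == 0:              # rotate/reflect the quadrant so the
                if rx == 1:          # recursion stays self-similar
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            s //= 2
        return d

    # For n = 2 this yields the base pattern of the curve:
    # (0,0) -> 0, (0,1) -> 1, (1,1) -> 2, (1,0) -> 3.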
Approximation Fixpoint Theory and the Well-Founded Semantics of Higher-Order Logic Programs
We define a novel, extensional, three-valued semantics for higher-order logic
programs with negation. The new semantics is based on interpreting the types of
the source language as three-valued Fitting-monotonic functions at all levels
of the type hierarchy. We prove that there exists a bijection between such
Fitting-monotonic functions and pairs of two-valued-result functions where the
first member of the pair is monotone-antimonotone and the second member is
antimonotone-monotone. By deriving an extension of consistent approximation
fixpoint theory (Denecker et al. 2004) and utilizing the above bijection, we
define an iterative procedure that produces for any given higher-order logic
program a distinguished extensional model. We demonstrate that this model is
actually a minimal one. Moreover, we prove that our construction generalizes
the familiar well-founded semantics for classical logic programs, which makes
our proposal an appealing formulation for capturing the well-founded semantics
of higher-order logic programs. This paper is under consideration for
acceptance in TPLP.
Comment: Paper presented at the 34th International Conference on Logic
Programming (ICLP 2018), Oxford, UK, July 14 to July 17, 2018. 31 pages, LaTeX.
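As background for the construction being generalized, here is a minimal Python
sketch (ours, not the paper's) of the classical alternating-fixpoint
characterization of the well-founded semantics for propositional normal
programs: the antimonotone operator gamma maps a set of assumed-true atoms to
the least model of the corresponding Gelfond-Lifschitz reduct, and iterating
gamma twice yields the well-founded partition into true, false, and undefined
atoms:

    def gamma(rules, assumed):
        # Least model of the Gelfond-Lifschitz reduct w.r.t. `assumed`:
        # drop every rule with a negated atom in `assumed`, then close
        # the remaining positive rules under their bodies.
        model, changed = set(), True
        while changed:
            changed = False
            for head, pos, neg in rules:
                if head not in model and pos <= model and not (neg & assumed):
                    model.add(head)
                    changed = True
        return model

    def well_founded(rules, atoms):
        # Alternating fixpoint: true atoms = least fixpoint of gamma o gamma.
        true = set()
        while True:
            new_true = gamma(rules, gamma(rules, true))
            if new_true == true:
                break
            true = new_true
        not_false = gamma(rules, true)
        return true, atoms - not_false, not_false - true  # true/false/undef

    # p :- not q.   q :- not p.   Both atoms come out undefined:
    rules = [("p", frozenset(), frozenset({"q"})),
             ("q", frozenset(), frozenset({"p"}))]
    print(well_founded(rules, {"p", "q"}))  # (set(), set(), {'p', 'q'})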
Enablers and Inhibitors in Causal Justifications of Logic Programs
To appear in Theory and Practice of Logic Programming (TPLP). In this paper
we propose an extension of logic programming (LP) where each default literal
derived under the well-founded model is associated with a justification
represented as an algebraic expression. This expression contains both causal
explanations (in the form of proof graphs built with rule labels) and terms
under the scope of negation that stand for conditions that enable or disable
the application of causal rules. Using some examples, we discuss how these new
conditions, which we respectively call "enablers" and "inhibitors", are
intimately related to default negation and are essentially different in nature from
regular cause-effect relations. The most important result is a formal
comparison with the recent algebraic approaches to justifications in LP:
"Why-not Provenance" (WnP) and "Causal Graphs" (CG). We show that the current
approach extends both WnP and CG justifications under the well-founded
semantics and, as a byproduct, we also establish a formal relation between
these two approaches.
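To convey the flavor of labelled justifications, here is a toy Python sketch
(ours; it implements neither the paper's algebra nor WnP/CG) for stratified
labelled programs: each derived atom is paired with the rule labels of its
proof and with the default literals it relied on, the latter playing the role
of enablers; a fact falsifying such a literal would act as an inhibitor and
block the derivation:

    from typing import NamedTuple

    class Rule(NamedTuple):
        label: str
        head: str
        pos: tuple   # positive body atoms
        neg: tuple   # default-negated body atoms (must be underivable)

    def justify(strata):
        # Evaluate a stratified labelled program stratum by stratum;
        # negated atoms are assumed to belong to strictly lower strata,
        # so "a not in just" decides them correctly.
        just = {}    # atom -> (rule labels used, enablers relied on)
        for stratum in strata:
            changed = True
            while changed:
                changed = False
                for r in stratum:
                    if r.head in just:
                        continue
                    if all(a in just for a in r.pos) and \
                       all(a not in just for a in r.neg):
                        labels = {r.label}
                        enablers = {"not " + a for a in r.neg}
                        for a in r.pos:      # inherit sub-justifications
                            labels |= just[a][0]
                            enablers |= just[a][1]
                        just[r.head] = (labels, enablers)
                        changed = True
        return just

    # fire is caused by rules m and f and enabled by the absence of wet;
    # a fact wet would act as an inhibitor and block rule f.
    strata = [[Rule("m", "match", (), ())],
              [Rule("f", "fire", ("match",), ("wet",))]]
    print(justify(strata))
    # {'match': ({'m'}, set()), 'fire': ({'m', 'f'}, {'not wet'})}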
A Logic Framework for P2P Deductive Databases
This paper presents a logic framework for modeling the interaction among
deductive databases in a P2P (Peer to Peer) environment. Each peer joining a
P2P system provides or imports data from its neighbors by using a set of
mapping rules, i.e. a set of semantic correspondences to a set of peers
belonging to the same environment. Two different types of mapping rules are
defined: mapping rules that allow importing a maximal set of atoms without
leading to inconsistency (called maximal mapping rules) and mapping rules that
allow importing a minimal set of atoms needed to restore consistency (called
minimal mapping rules). Implicitly, the use of maximal mapping rules states
that it is preferable to import as long as no inconsistency arises, whereas the
use of minimal mapping rules states that it is preferable not to import unless
an inconsistency exists. The paper presents three different declarative semantics
of a P2P system: (i) the Max Weak Model Semantics, in which mapping rules are
used to import as much knowledge as possible from a peer's neighborhood
without violating local integrity constraints; (ii) the Min Weak Model
Semantics, in which the P2P system can be locally inconsistent and the
information provided by the neighbors is used to restore consistency, that is,
to integrate only the missing portion of a correct but incomplete database;
(iii) the Max-Min Weak Model Semantics, which unifies the two perspectives
captured by the Max Weak Model Semantics and the Min Weak Model Semantics.
This last semantics characterizes each peer in the neighborhood as a resource
used either to enrich (integrate) or to fix (repair) the knowledge, thus
defining a kind of integrate-repair strategy for each peer.
Under consideration in Theory and Practice of Logic Programming (TPLP).
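The maximal-versus-minimal import intuition can be made concrete with a
brute-force toy in Python (ours; it does not implement the weak model semantics
themselves). Constraints are encoded as implications (premises, conclusions),
with denials as the special case of empty conclusions; mode 'max' returns the
inclusion-maximal consistent import sets and mode 'min' the inclusion-minimal
sets that restore consistency:

    from itertools import combinations

    def consistent(db, constraints):
        # A constraint (premises, conclusions) requires that whenever all
        # premises hold, at least one conclusion holds too; a denial is
        # the special case with no conclusions.
        return all(not set(p) <= db or bool(set(c) & db)
                   for p, c in constraints)

    def weak_import_sets(local, importable, constraints, mode):
        # Enumerate every subset of importable atoms that keeps the
        # database consistent, then filter by inclusion.
        ok = [set(s) for k in range(len(importable) + 1)
              for s in combinations(sorted(importable), k)
              if consistent(local | set(s), constraints)]
        if mode == 'max':
            return [s for s in ok if not any(s < t for t in ok)]
        return [s for s in ok if not any(t < s for t in ok)]

    # Max: importing q or r alone is fine; both together violate the denial.
    print(weak_import_sets({'p'}, {'q', 'r'},
                           [(('q', 'r'), ())], 'max'))    # [{'q'}, {'r'}]

    # Min: locally inconsistent (p requires q or r); the minimal repairs
    # import exactly one of them.
    print(weak_import_sets({'p'}, {'q', 'r'},
                           [(('p',), ('q', 'r'))], 'min'))  # [{'q'}, {'r'}]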