On finitely recursive programs
Disjunctive finitary programs are a class of logic programs admitting
function symbols and hence infinite domains. They have very good computational
properties, for example ground queries are decidable while in the general case
the stable model semantics is highly undecidable. In this paper we prove that a
larger class of programs, called finitely recursive programs, preserves most of
the good properties of finitary programs under the stable model semantics,
namely: (i) finitely recursive programs enjoy a compactness property; (ii)
inconsistency checking and skeptical reasoning are semidecidable; (iii)
skeptical resolution is complete for normal finitely recursive programs.
Moreover, we show how to check inconsistency and answer skeptical queries using
finite subsets of the ground program instantiation. We achieve this by
extending the splitting sequence theorem by Lifschitz and Turner: We prove that
if the input program P is finitely recursive, then the partial stable models
determined by any smooth splitting omega-sequence converge to a stable model of
P.
Comment: 26 pages. Preliminary version in Proc. of ICLP 2007; best paper award.
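As a concrete reminder of the semantics the abstract refers to, the following is a minimal, self-contained sketch (ours, not the paper's algorithm) of stable-model computation for a tiny finite ground normal program via the Gelfond-Lifschitz reduct; the example rules and all names are illustrative only.

```python
from itertools import chain, combinations

# A ground normal rule "head :- body_pos, not body_neg" is represented
# as (head, frozenset(positive body), frozenset(negative body)).
rules = [
    ("p", frozenset(), frozenset({"q"})),   # p :- not q.
    ("q", frozenset(), frozenset({"p"})),   # q :- not p.
    ("r", frozenset({"p"}), frozenset()),   # r :- p.
]

atoms = {a for h, pos, neg in rules for a in {h} | pos | neg}

def minimal_model(positive_rules):
    """Least model of a negation-free program via fixpoint iteration."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, pos, _ in positive_rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(candidate):
    """Candidate is stable iff it is the least model of its GL reduct."""
    reduct = [(h, pos, frozenset()) for h, pos, neg in rules
              if not (neg & candidate)]
    return minimal_model(reduct) == candidate

stable = [set(s) for s in chain.from_iterable(
              combinations(sorted(atoms), k) for k in range(len(atoms) + 1))
          if is_stable(set(s))]
print(stable)  # two stable models: {'q'} and {'p', 'r'}
```

Brute-force enumeration is of course only feasible for finite (and tiny) ground programs; the point of finitely recursive programs is precisely that queries can be answered from finite subsets of an otherwise infinite ground instantiation.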
Propagators and Solvers for the Algebra of Modular Systems
To appear in the proceedings of LPAR 21.
Solving complex problems can involve non-trivial combinations of distinct
knowledge bases and problem solvers. The Algebra of Modular Systems is a
knowledge representation framework that provides a method for formally
specifying such systems in purely semantic terms. Formally, an expression of
the algebra defines a class of structures. Many expressive formalisms used in
practice solve the model expansion task, where a structure is given as input
and an expansion of this structure within the defined class of structures is
sought (this practice circumvents the undecidability problems common to
expressive logics). In this paper, we construct a solver for the model
expansion task for a complex modular system from an expression in the algebra
and black-box propagators or solvers for the primitive modules. To this end, we
define a general notion of propagators equipped with an explanation mechanism,
an extension of the algebra to propagators, and a lazy conflict-driven
learning algorithm. The result is a framework for seamlessly combining solving
technology from different domains to produce a solver for a combined system.
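The notion of a propagator equipped with an explanation mechanism can be pictured with a toy sketch; the interface and class names below are our own hypothetical simplification, not the paper's formal definitions.

```python
from dataclasses import dataclass

# Hypothetical sketch: a propagator maps a partial assignment (here,
# upper bounds on variables) to new inferences, each carrying an
# explanation -- the subset of the assignment that justifies it, which
# a conflict-driven learning loop can later use to build clauses.

@dataclass
class Inference:
    literal: tuple       # e.g. ("x", "<=", 3)
    reason: frozenset    # literals that entail this inference

class LeqPropagator:
    """Black-box propagator for the constraint x <= y over upper bounds."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def propagate(self, upper):
        # If y's upper bound is tighter than x's, x inherits it; the
        # explanation is exactly the bound on y that forced this.
        out = []
        if self.y in upper and upper.get(self.x, float("inf")) > upper[self.y]:
            out.append(Inference((self.x, "<=", upper[self.y]),
                                 frozenset({(self.y, "<=", upper[self.y])})))
        return out

p = LeqPropagator("x", "y")
infs = p.propagate({"y": 3})
print(infs[0].literal)  # ('x', '<=', 3), explained by the bound on y
```

The explanation payload is what makes black-box modules composable with conflict-driven learning: when a conflict arises, the reasons can be resolved backwards without inspecting the module's internals.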
Pseudo-contractions as Gentle Repairs
Updating a knowledge base to remove an unwanted consequence is a challenging task. Some of the original sentences must be either deleted or weakened in such a way that the sentence to be removed is no longer entailed by the resulting set. On the other hand, it is desirable that the existing knowledge be preserved as much as possible, minimising the loss of information. Several approaches to this problem can be found in the literature. In particular, when the knowledge is represented by an ontology, two different families of frameworks have been developed over the past decades with numerous ideas in common but with little interaction between the communities: applications of AGM-like Belief Change and justification-based Ontology Repair. In this paper, we investigate the relationship between pseudo-contraction operations and gentle repairs. Both aim to avoid the complete deletion of sentences when replacing them with weaker versions is enough to prevent the entailment of the unwanted formula. We show the correspondence between concepts on both sides and investigate under which conditions they are equivalent. Furthermore, we propose a unified notation for the two approaches, which might contribute to the integration of the two areas.
Modern Coding Theory: The Statistical Mechanics and Computer Science Point of View
These are the notes for a set of lectures delivered by the two authors at the
Les Houches Summer School on `Complex Systems' in July 2006. They provide an
introduction to the basic concepts in modern (probabilistic) coding theory,
highlighting connections with statistical mechanics. We also stress common
concepts with other disciplines dealing with similar problems that can be
generically referred to as `large graphical models'.
While most of the lectures are devoted to the classical channel coding
problem over simple memoryless channels, we present a discussion of more
complex channel models. We conclude with an overview of the main open
challenges in the field.
Comment: Lectures at Les Houches Summer School on `Complex Systems', July 2006. 44 pages, 25 ps figures.
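For the "simple memoryless channels" mentioned above, the basic quantity of classical channel coding is the Shannon capacity; a minimal worked example (standard textbook material, not specific to these notes) is the binary symmetric channel with flip probability p, whose capacity is C = 1 - h2(p):

```python
import math

def h2(p):
    """Binary entropy h2(p) = -p log2 p - (1-p) log2 (1-p), in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Shannon capacity of the binary symmetric channel: C = 1 - h2(p)."""
    return 1.0 - h2(p)

print(bsc_capacity(0.0))   # 1.0: a noiseless channel carries one bit per use
print(bsc_capacity(0.5))   # 0.0: pure noise carries nothing
print(bsc_capacity(0.11))  # about 0.5 bits per channel use
```

Modern (probabilistic) coding theory is largely about code ensembles, such as LDPC codes, whose iterative decoding approaches this capacity limit.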
Coherent Integration of Databases by Abductive Logic Programming
We introduce an abductive method for a coherent integration of independent
data-sources. The idea is to compute a list of data-facts that should be
inserted to the amalgamated database or retracted from it in order to restore
its consistency. This method is implemented by an abductive solver, called
Asystem, that applies SLDNFA-resolution on a meta-theory that relates
different, possibly contradicting, input databases. We also give a pure
model-theoretic analysis of the possible ways to `recover' consistent data from
an inconsistent database in terms of those models of the database that exhibit
as little inconsistent information as reasonably possible. This allows us to
characterize the `recovered databases' in terms of the `preferred' (i.e., most
consistent) models of the theory. The outcome is an abductive-based application
that is sound and complete with respect to a corresponding model-based,
preferential semantics, and -- to the best of our knowledge -- is more
expressive (thus more general) than any other implementation of coherent
integration of databases.
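The core idea of computing facts to insert or retract in order to restore consistency can be illustrated with a toy brute-force search; this is our own hypothetical simplification, not the SLDNFA-based Asystem, and the example facts and constraint are invented for illustration.

```python
from itertools import combinations

# Facts amalgamated from two (contradicting) sources.
facts = {("employee", "ann"), ("retired", "ann"), ("employee", "bob")}

def consistent(db):
    """Toy integrity constraint: nobody is both employed and retired."""
    employed = {n for rel, n in db if rel == "employee"}
    retired = {n for rel, n in db if rel == "retired"}
    return not (employed & retired)

def minimal_repairs(db):
    """Smallest sets of retractions that restore consistency."""
    for k in range(len(db) + 1):
        repairs = [set(s) for s in combinations(sorted(db), k)
                   if consistent(db - set(s))]
        if repairs:
            return repairs  # all repairs of minimal size k
    return []

print(minimal_repairs(facts))
# two minimal repairs: retract ('employee', 'ann') or ('retired', 'ann')
```

Each minimal repair corresponds to one "preferred" (most consistent) model in the model-theoretic reading; an abductive solver derives such repairs deductively rather than by enumeration.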
Static and dynamic semantics of NoSQL languages
We present a calculus for processing semistructured data that spans
differences of application area among several novel query languages, broadly
categorized as "NoSQL". This calculus lets users define their own operators,
capturing a wider range of data processing capabilities, whilst providing a
typing precision so far typical only of primitive hard-coded operators. The
type inference algorithm is based on semantic type checking, resulting in type
information that is both precise, and flexible enough to handle structured and
semistructured data. We illustrate the use of this calculus by encoding a large
fragment of Jaql, including operations and iterators over JSON, embedded SQL
expressions, and co-grouping, and show how the encoding directly yields a
typing discipline for Jaql as it is, namely without the addition of any type
definition or type annotation in the code.
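Of the Jaql operations mentioned, co-grouping is the least familiar; the following is our own illustrative rendering of the operation's behaviour (not the paper's calculus or its typing rules), pairing the matching records from every input collection per key:

```python
from collections import defaultdict

def cogroup(key, *collections):
    """For each key, collect the matching records from each input at once."""
    groups = defaultdict(lambda: tuple([] for _ in collections))
    for i, coll in enumerate(collections):
        for rec in coll:
            groups[key(rec)][i].append(rec)
    return dict(groups)

# Toy JSON-like data (invented for illustration).
users = [{"id": 1, "name": "ann"}, {"id": 2, "name": "bob"}]
orders = [{"user": 1, "item": "pen"}, {"user": 1, "item": "ink"}]

result = cogroup(lambda r: r.get("id", r.get("user")), users, orders)
print(result[1])  # pair: (users with key 1, orders with key 1)
print(result[2])  # bob has no orders: his order list is empty
```

Unlike a join, co-grouping keeps keys with empty matches on one side, which is exactly the kind of structural distinction a precise typing discipline has to track.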
Reductive Explanation and the Construction of Quantum Theories
I argue that philosophical issues concerning reductive explanations help constrain the construction of quantum theories with appropriate state spaces. I illustrate this general proposal with two examples of restricting attention to physical states in quantum theories: regular states and symmetry-invariant states.