Distributed First Order Logic
Distributed First Order Logic (DFOL) has been introduced more than ten years
ago with the purpose of formalising distributed knowledge-based systems, where
knowledge about heterogeneous domains is scattered across a set of interconnected
modules. DFOL formalises the knowledge contained in each module by means of
first-order theories, and the interconnections between modules by means of
special inference rules called bridge rules. Despite their restricted form in
the original DFOL formulation, bridge rules have influenced several works in
the areas of heterogeneous knowledge integration, modular knowledge
representation, and schema/ontology matching. This, in turn, has fostered
extensions and modifications of the original DFOL that have never been
systematically described and published. This paper tackles the lack of a
comprehensive description of DFOL by providing a systematic account of a
completely revised and extended version of the logic, together with a sound and
complete axiomatisation of a general form of bridge rules based on Natural
Deduction. The resulting DFOL framework is then proposed as a clear formal tool
for representing and reasoning about distributed knowledge and bridge
rules.
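The bridge-rule mechanism described above can be sketched in a propositional toy form. Everything here is an assumption for illustration: the module names, the facts, and the restriction to atomic formulas are not part of the actual DFOL formulation, which works with full first-order theories.

```python
# Toy modules: each holds its own local facts (all names are hypothetical).
modules = {
    "weather": {"rain"},
    "traffic": {"rush_hour"},
}

# A bridge rule (i, phi, j, psi): if module i derives phi,
# module j may conclude psi.
bridge_rules = [
    ("weather", "rain", "traffic", "slow_roads"),
    ("traffic", "slow_roads", "weather", "storm_warning_relevant"),
]

def propagate(modules, bridge_rules):
    """Apply bridge rules to a fixpoint, propagating conclusions
    between modules while keeping each module's knowledge local."""
    changed = True
    while changed:
        changed = False
        for src, phi, dst, psi in bridge_rules:
            if phi in modules[src] and psi not in modules[dst]:
                modules[dst].add(psi)
                changed = True
    return modules

propagate(modules, bridge_rules)
print(modules["traffic"])  # "slow_roads" is now derived via the first bridge rule
```

Note how the second rule only fires after the first has added `slow_roads`, which is why propagation must iterate to a fixpoint rather than make a single pass.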
A Robust Logical and Computational Characterisation of Peer-to-Peer Database Systems
In this paper we give a robust logical and computational characterisation of peer-to-peer (p2p) database systems. We first define a precise model-theoretic semantics of a p2p system, which allows for local inconsistency handling. We then characterise the general computational properties for the problem of answering queries to such a p2p system. Finally, we devise tight complexity bounds and distributed procedures for the problem of answering queries in a few relevant special cases.
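One way to picture local inconsistency handling is that an inconsistent peer is isolated rather than allowed to trivialise query answering across the whole system. The sketch below is a minimal assumed model, not the paper's actual semantics: peers hold flat fact sets, the only constraint is a hypothetical key, and query answers are drawn only from locally consistent peers.

```python
# Hypothetical peers: facts are (person, city) pairs; a key constraint
# says each person has at most one city.
peers = {
    "p1": {("alice", "rome"), ("alice", "paris")},  # violates the key
    "p2": {("bob", "oslo")},
}

def consistent(facts):
    """Check the key constraint: no person appears with two cities."""
    people = [person for person, _ in facts]
    return len(people) == len(set(people))

def answer(query_person):
    """Answers drawn only from locally consistent peers, so the
    inconsistent peer contributes nothing instead of everything."""
    return {
        city
        for facts in peers.values()
        if consistent(facts)
        for person, city in facts
        if person == query_person
    }

print(answer("bob"))    # {'oslo'}
print(answer("alice"))  # set(): p1 is inconsistent and is isolated
```

Under classical semantics an inconsistent peer would entail every fact; the point of a semantics with local inconsistency handling is precisely to avoid that explosion.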
On Projectivity in Markov Logic Networks
Markov Logic Networks (MLNs) define a probability distribution on relational
structures over varying domain sizes. Many works have noticed that MLNs, like
many other relational models, do not admit consistent marginal inference over
varying domain sizes. Furthermore, MLNs learnt on a certain domain do not
generalize to new domains of varied sizes. In recent works, connections have
emerged between domain size dependence, lifted inference and learning from
sub-sampled domains. Central to these works is the notion of
projectivity. The probability distributions ascribed by projective models
render the marginal probabilities of sub-structures independent of the domain
cardinality. Hence, projective models admit efficient marginal inference,
removing any dependence on the domain size. Furthermore, projective models
potentially allow efficient and consistent parameter learning from sub-sampled
domains. In this paper, we characterize the necessary and sufficient conditions
for a two-variable MLN to be projective. We then isolate a special model in
this class of MLNs, namely the Relational Block Model (RBM). We show that, in terms
of data likelihood maximization, RBM is the best possible projective MLN in the
two-variable fragment. Finally, we show that RBMs also admit consistent
parameter learning over sub-sampled domains.
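The domain-size dependence that motivates projectivity can be checked by exact inference on a tiny MLN. The model below is an assumed example, not one from the paper: a single two-variable formula S(x) ∧ S(y) with weight w, so a world with k true S-atoms over an n-element domain has k² true groundings and probability proportional to exp(w·k²).

```python
from math import comb, exp

def smoker_marginal(n, w):
    """Exact marginal P(S(a)) for the one-formula MLN S(x) & S(y)
    with weight w over a domain of size n.  Worlds are grouped by
    k, the number of true S-atoms; each such world has k*k true
    groundings, hence unnormalised weight exp(w * k**2)."""
    # Partition function: sum over all 2**n worlds, grouped by k.
    Z = sum(comb(n, k) * exp(w * k * k) for k in range(n + 1))
    # Worlds where a fixed constant a satisfies S: choose the
    # remaining k - 1 true atoms among the other n - 1 constants.
    num = sum(comb(n - 1, k - 1) * exp(w * k * k) for k in range(1, n + 1))
    return num / Z

# The marginal of the single atom drifts as n grows, so this MLN
# is not projective; with w = 0 the distribution is uniform and
# the marginal stays at 0.5 for every domain size.
for n in (2, 5, 10):
    print(n, round(smoker_marginal(n, 0.1), 4))
```

A projective MLN would make `smoker_marginal(n, w)` constant in `n`, which is exactly what renders marginal inference and learning from sub-sampled domains consistent.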