Connected Components and Disjunctive Existential Rules
In this paper, we explore conjunctive query rewriting, focusing on queries
containing universally quantified negation within the framework of disjunctive
existential rules. We address the undecidability of the existence of a finite
and complete UCQ-rewriting and the identification of finite unification sets
(fus) of rules. We introduce new rule classes, connected linear rules and
connected domain restricted rules, that exhibit the fus property for
existential rules. Additionally, we propose disconnected disjunction for
disjunctive existential rules to achieve the fus property when we extend the
introduced rule fragments to disjunctive existential rules. We present
ECOMPLETO, a system for efficient query rewriting with disjunctive existential
rules, capable of handling UCQs with universally quantified negation. Our
experiments demonstrate ECOMPLETO's consistent ability to produce finite
UCQ-rewritings and describe the performance on different ontologies and
queries.
Comment: 23 pages, 4 figures
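As a rough illustration of the kind of rewriting step the abstract describes, a query atom that unifies with the head of a linear existential rule can be replaced by the rule's body. The following Python sketch is purely illustrative: the rule, the predicates, and the simplified unification are hypothetical and are not ECOMPLETO's algorithm.

```python
# Minimal sketch of one rewriting step with a linear existential rule
# (illustrative only; not ECOMPLETO).
# Hypothetical rule: phdStudent(X) -> exists Y. hasAdvisor(X, Y)
# A query atom hasAdvisor(alice, Z), where Z occurs nowhere else in the
# query, can be rewritten into phdStudent(alice).

def rewrite_atom(atom, rule_head, rule_body):
    """Rewrite a query atom against a rule head; uppercase args are variables."""
    pred, args = atom
    head_pred, head_args = rule_head
    if pred != head_pred:
        return None  # rule head does not apply to this atom
    # Unify head variables with the query atom's arguments.
    subst = {ha: qa for ha, qa in zip(head_args, args) if ha.isupper()}
    body_pred, body_args = rule_body
    return (body_pred, tuple(subst.get(a, a) for a in body_args))

rule_head = ("hasAdvisor", ("X", "Y"))   # Y is existentially quantified
rule_body = ("phdStudent", ("X",))
query_atom = ("hasAdvisor", ("alice", "Z"))
print(rewrite_atom(query_atom, rule_head, rule_body))
# -> ('phdStudent', ('alice',))
```

In a full rewriting procedure this step is only sound when the existential variable unifies with a non-answer variable that is not shared with other query atoms; the sketch omits that check.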
Generating by Understanding: Neural Visual Generation with Logical Symbol Groundings
Despite the great success of neural visual generative models in recent years,
integrating them with strong symbolic knowledge reasoning systems remains a
challenging task. The main challenges are two-fold: the first is symbol
assignment, i.e., binding the latent factors of neural visual generators to
meaningful symbols from knowledge reasoning systems; the second is rule
learning, i.e., learning new rules that govern the generative process of the
data, to augment the
knowledge reasoning systems. To deal with these symbol grounding problems, we
propose a neural-symbolic learning approach, Abductive Visual Generation
(AbdGen), for integrating logic programming systems with neural visual
generative models based on the abductive learning framework. To achieve
reliable and efficient symbol assignment, the quantized abduction method is
introduced for generating abduction proposals by the nearest-neighbor lookups
within semantic codebooks. To achieve precise rule learning, the contrastive
meta-abduction method is proposed to eliminate wrong rules with positive cases
and avoid less-informative rules with negative cases simultaneously.
Experimental results on various benchmark datasets show that, compared to the
baselines, AbdGen requires significantly less instance-level labeling
information for symbol assignment. Furthermore, our approach can effectively
learn underlying logical generative rules from data, which is beyond the
capability of existing approaches.
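The quantized abduction step mentioned above can be pictured as a nearest-neighbor lookup: a latent vector is mapped to the symbol whose codebook entry lies closest to it. The sketch below is illustrative only (the codebook entries and symbol names are hypothetical, and this is not AbdGen's implementation):

```python
import math

# Illustrative sketch of quantized abduction: propose a symbol for a
# latent vector by nearest-neighbor lookup in a semantic codebook.
# Codebook vectors and symbol names are hypothetical.

codebook = {
    "circle":   (1.0, 0.0),
    "square":   (0.0, 1.0),
    "triangle": (1.0, 1.0),
}

def abduce_symbol(latent):
    """Return the symbol whose code vector is nearest to the latent vector."""
    return min(codebook, key=lambda s: math.dist(codebook[s], latent))

print(abduce_symbol((0.9, 0.1)))  # nearest code vector belongs to "circle"
```

In the actual framework such proposals would then be checked against the logic program, which is what makes the assignment reliable rather than merely geometric.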
RDF graph validation using rule-based reasoning
The correct functioning of Semantic Web applications requires that given RDF graphs adhere to an expected shape. This shape depends on the RDF graph and on the entailments of that graph which the application supports. During validation, RDF graphs are assessed against sets of constraints, and the found violations help refine the RDF graphs. However, existing validation approaches cannot always explain the root causes of violations (inhibiting refinement), and cannot fully match the entailments supported during validation with those supported by the application. As a result, these approaches either cannot accurately validate RDF graphs or must combine multiple systems, deteriorating the validator's performance. In this paper, we present an alternative validation approach using rule-based reasoning, capable of fully customizing the inferencing steps used. We compare it to existing approaches, and present a formal grounding and a practical implementation, "Validatrr", based on N3Logic and the EYE reasoner. Our approach - which supports an equivalent number of constraint types compared to the state of the art - better explains the root cause of violations, thanks to the reasoner's generated logical proof, and returns an accurate number of violations, thanks to the customizable inferencing rule set. Performance evaluation shows that Validatrr is performant for smaller datasets, and scales linearly w.r.t. the RDF graph size. The detailed root-cause explanations can guide future validation report description specifications, and the fine-grained level of configuration can be employed to support different constraint languages. This foundation allows further research into handling recursion, validating RDF graphs based on their generation description, and providing automatic refinement suggestions.
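The idea of combining a customizable inferencing step with constraint checking, while keeping the derivation available to explain violations, can be sketched in a few lines. This toy sketch is not Validatrr (which uses N3Logic rules and the EYE reasoner); the triples, predicates, and constraint below are hypothetical:

```python
# Toy sketch of rule-based RDF validation (illustrative only, not
# Validatrr). Triples are (subject, predicate, object); all data and
# predicates are hypothetical.

graph = {("alice", "worksFor", "acme")}

def infer(triples):
    """Customizable inferencing step: worksFor(X, Y) entails Employee(X)."""
    derived = {(s, "a", "Employee") for (s, p, o) in triples if p == "worksFor"}
    return triples | derived

def validate(triples):
    """Constraint: every Employee must have an email; report a root cause."""
    violations = []
    for (s, p, o) in triples:
        if p == "a" and o == "Employee":
            if not any(t[0] == s and t[1] == "email" for t in triples):
                violations.append(
                    (s, "missing email",
                     "Employee status was derived from a worksFor triple"))
    return violations

print(validate(infer(graph)))
```

Because the inference rule is explicit, the violation report can point back to the derivation that introduced the Employee fact, which is the kind of root-cause explanation a proof-producing reasoner provides.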
Constructive approaches to Program Induction
Search is a key technique in artificial intelligence, machine learning and Program Induction. No
matter how efficient a search procedure is, there exist spaces that are too large to search effectively,
and these include the search space of programs. In this dissertation we show that in the context
of logic-program induction (Inductive Logic Programming, or ILP) it is not necessary to search
for a correct program, because if one exists, there also exists a unique object that is the most
general correct program, and that can be constructed directly, without a search, in polynomial
time and from a polynomial number of examples. The existence of this unique object, that we
term the Top Program because of its maximal generality, does not so much solve the problem
of searching a large program search space, as it completely sidesteps it, thus improving the
efficiency of the learning task by orders of magnitude commensurate with the complexity of a
program space search.
The existence of a unique Top Program and the ability to construct it given finite resources
relies on the imposition, on the language of hypotheses, from which programs are constructed,
of a strong inductive bias with relevance to the learning task. In common practice, in machine
learning, Program Induction and ILP, such relevant inductive bias is selected, or created,
manually, by the human user of a learning system, with intuition or knowledge of the problem
domain, and in the form of various kinds of program templates. In this dissertation we show
that by abandoning the reliance on such extra-logical devices as program templates, and instead
defining inductive bias exclusively as First- and Higher-Order Logic formulae, it is possible to
learn inductive bias itself from examples, automatically, and efficiently, by Higher-Order Top
Program construction.
In Chapter 4 we describe the Top Program in the context of the Meta-Interpretive Learning
approach to ILP (MIL) and describe an algorithm for its construction, the Top Program
Construction algorithm (TPC). We prove the efficiency and accuracy of TPC and describe
its implementation in a new MIL system called Louise. We support theoretical results with
experiments comparing Louise to the state-of-the-art, search-based MIL system, Metagol, and
find that Louise improves Metagol’s efficiency and accuracy. In Chapter 5 we re-frame MIL as
specialisation of metarules, Second-Order clauses used as inductive bias in MIL, and prove that
problem-specific metarules can be derived by specialisation of maximally general metarules, by
MIL. We describe a sub-system of Louise, called TOIL, that learns new metarules by MIL and
demonstrate empirically that the metarules learned by TOIL match those selected manually,
while maintaining the accuracy and efficiency of learning.
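As a rough intuition for Top Program Construction: for a fixed metarule, keep every instance that covers at least one positive example and no negative example; the union of the kept clauses is the Top Program. The sketch below is illustrative only, using the identity metarule P(x, y) :- Q(x, y); it is not Louise's TPC algorithm, and the predicates and examples are hypothetical.

```python
# Illustrative sketch of Top Program Construction for a single metarule,
# the identity metarule P(x, y) :- Q(x, y). Not Louise's implementation;
# all predicates and examples are hypothetical.

background = {
    "parent":  {("tom", "bob"), ("ann", "sue")},
    "sibling": {("bob", "tom")},
}
pos = {("tom", "bob"), ("ann", "sue")}   # ancestor/2 positive examples
neg = {("bob", "tom")}                   # ancestor/2 negative examples

def covers(q, example):
    """Does the clause ancestor(X, Y) :- q(X, Y) entail the example?"""
    return example in background[q]

# Keep each clause that covers at least one positive example and no
# negative examples; the union of the kept clauses is the Top Program.
top_program = {
    ("ancestor", q)
    for q in background
    if any(covers(q, e) for e in pos) and not any(covers(q, e) for e in neg)
}

print(sorted(top_program))
# the 'parent' clause survives; the 'sibling' clause covers a negative example
```

The point of the construction is that no search over candidate programs takes place: each clause is tested once against the examples, which is where the polynomial-time claim comes from.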